Working with large Data Warehouses and Cubes

In this post I share some of the best practices I have identified while working with large Data Warehouses and Cubes.
  • Use table partitioning on a daily or monthly basis, depending on how frequently data arrives and in what volume.
  • Merge old partitions once their data has stabilized, taking into account how far back historical data can still change.
  • Design and implement table partitioning before loading the data; partitioning a table that is already loaded forces an expensive rebuild of its indexes.
  • To delete a particular set of records, truncate the relevant partition; deleting a large set of records with a DELETE statement is fully logged and takes far longer.
  • Design and implement cube partitions, for example one partition per month for each measure group.
  • Process individual partitions to load data rather than processing the whole measure group or cube; loading everything at once can consume almost all of the server's memory and CPU.
  • Use the Process Data option first and then the Process Index option; a single Process Full does both in one pass and can take longer while holding more resources.
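The table-partitioning points above can be sketched in T-SQL. This is a minimal example, assuming a hypothetical fact table `dbo.FactSales` partitioned by day on a `SaleDate` column; the names and boundary dates are illustrative only.

```sql
-- Hypothetical daily partitioning for a fact table dbo.FactSales.
-- RANGE RIGHT places each boundary date in the partition to its right.
CREATE PARTITION FUNCTION pf_FactSalesDaily (date)
AS RANGE RIGHT FOR VALUES ('2016-01-01', '2016-01-02', '2016-01-03');

-- Map every partition to the same filegroup here for simplicity;
-- in practice partitions are often spread across filegroups.
CREATE PARTITION SCHEME ps_FactSalesDaily
AS PARTITION pf_FactSalesDaily ALL TO ([PRIMARY]);

-- Create the table on the partition scheme BEFORE loading any data,
-- so the clustered index is built partitioned from the start.
CREATE TABLE dbo.FactSales
(
    SaleDate   date  NOT NULL,
    ProductKey int   NOT NULL,
    Amount     money NOT NULL
) ON ps_FactSalesDaily (SaleDate);
```

Because the table is created on the partition scheme up front, every subsequent load lands in the correct partition without any post-load index work.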
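Truncating a partition instead of deleting rows can be sketched as follows, again assuming a hypothetical `dbo.FactSales` table partitioned by a daily partition function `pf_FactSalesDaily`. `TRUNCATE TABLE ... WITH (PARTITIONS ...)` requires SQL Server 2016 or later.

```sql
-- Find the partition number that holds the rows to remove
-- ($PARTITION maps a value to its partition number).
SELECT $PARTITION.pf_FactSalesDaily('2016-01-02');

-- Truncate just that partition (here assumed to be partition 3):
-- a metadata-only operation, far faster than a fully logged DELETE.
TRUNCATE TABLE dbo.FactSales WITH (PARTITIONS (3));

-- Once old data is stable, merge a boundary to collapse two adjacent
-- partitions into one and keep the partition count manageable.
ALTER PARTITION FUNCTION pf_FactSalesDaily()
    MERGE RANGE ('2016-01-01');
```

On older versions of SQL Server, the common workaround is to switch the partition out to a staging table and truncate that table instead.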
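The last two points can be sketched as XMLA sent to Analysis Services: first a ProcessData on a single partition, then a separate ProcessIndexes step. The database, cube, measure group, and partition IDs below are hypothetical placeholders.

```xml
<!-- Step 1: load data into one partition only (hypothetical IDs). -->
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>SalesDW</DatabaseID>
      <CubeID>SalesCube</CubeID>
      <MeasureGroupID>FactSales</MeasureGroupID>
      <PartitionID>FactSales_201601</PartitionID>
    </Object>
    <Type>ProcessData</Type>
  </Process>
</Batch>

<!-- Step 2: build indexes and aggregations afterwards, as a separate step. -->
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>SalesDW</DatabaseID>
      <CubeID>SalesCube</CubeID>
      <MeasureGroupID>FactSales</MeasureGroupID>
      <PartitionID>FactSales_201601</PartitionID>
    </Object>
    <Type>ProcessIndexes</Type>
  </Process>
</Batch>
```

Splitting the work this way keeps each transaction small, so the server is not forced to hold the data load and the index build in memory at the same time.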
