Working with large Data Warehouses and Cubes

In this post I share some of the best practices identified while working with large Data Warehouses and Cubes.
  • Use table partitioning on a daily or monthly basis, depending on how frequently data arrives and how large each load is (the first sketch after this list shows a monthly setup).
  • Merge old partitions once their data is purely historical and no longer changes, so the number of partitions stays manageable (second sketch below).
  • Design and implement table partitioning before loading the data; otherwise you will run into problems when trying to create indexes on data that has already been loaded.
  • If you need to delete a particular set of records, truncate the relevant partition rather than using DELETE, which takes far longer on a large set of rows (third sketch below).
  • Design and implement cube partitions.
  • Process the individual partitions to load data, rather than processing the whole measure group or cube; trying to load everything at once can consume almost all of the available memory and CPU.
  • Also, use the Process Data option first and then the Process Index option; the Process Full option can sometimes take longer because it clears and fully rebuilds the object in one pass.
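
As a rough illustration of the table partitioning points above, the following T-SQL sketch creates a monthly partition function and scheme and builds a fact table on them before any data is loaded, so the indexes are partition-aligned from the start. All object names (pfMonthly, psMonthly, dbo.FactSales) and the boundary dates are made up for this example; adjust them to your own warehouse.

-- Monthly partition function over a date key (RANGE RIGHT keeps each
-- boundary value in the partition to its right).
CREATE PARTITION FUNCTION pfMonthly (date)
AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

-- Map every partition to a filegroup (ALL TO PRIMARY keeps the sketch simple;
-- large warehouses typically spread partitions across several filegroups).
CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);

-- Create the fact table on the partition scheme BEFORE loading any data.
CREATE TABLE dbo.FactSales
(
    SalesDateKey  date          NOT NULL,
    ProductKey    int           NOT NULL,
    SalesAmount   decimal(18,2) NOT NULL
) ON psMonthly (SalesDateKey);

-- Partition-aligned clustered index, also created before the load.
CREATE CLUSTERED INDEX CIX_FactSales_Date
    ON dbo.FactSales (SalesDateKey)
    ON psMonthly (SalesDateKey);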
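
Merging an old partition is then a single statement. This sketch assumes the pfMonthly function above and that the January 2024 data is now purely historical:

-- Remove the 2024-01-01 boundary so its partition merges into the neighbouring one
-- and the total partition count stays manageable.
ALTER PARTITION FUNCTION pfMonthly ()
    MERGE RANGE ('2024-01-01');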
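
To remove a specific slice of data, truncating the partition is far cheaper than a DELETE. The partition number below is only illustrative; always look it up first with $PARTITION, because merges and splits change the numbering. Note that TRUNCATE TABLE ... WITH (PARTITIONS ...) requires SQL Server 2016 or later; on older versions the usual approach is to SWITCH the partition out to an empty staging table and truncate that instead.

-- Find the partition that holds the rows to be removed.
SELECT $PARTITION.pfMonthly('2024-02-15') AS PartitionNumber;

-- Truncate just that partition: a metadata-only operation,
-- far faster than deleting millions of rows.
TRUNCATE TABLE dbo.FactSales
    WITH (PARTITIONS (3));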
