May 15, 2023, 10:30 AM
Hi everyone,
I'm working with a large dataset in Azure Table Storage (billions of entities) and hitting performance problems when querying. I'm using PartitionKey and RowKey for filtering, but some queries can't be satisfied within a single partition and fall back to cross-partition scans, which are very slow.
Are there any best practices or advanced techniques for efficiently querying large datasets in Table Storage? I'm considering:
- Using a different strategy for PartitionKey/RowKey design.
- Leveraging indexes or materialized views (if available for Table Storage).
- Exploring alternative Azure data services for this scenario.
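On the first bullet: the usual approach is to design PartitionKey/RowKey so that your hottest queries become point queries (one partition, one row) instead of scans. Here's a minimal sketch of that idea in Python. It assumes a hypothetical IoT-style scenario with a `device_id` and a timestamp in ticks; the names, bucket count, and the `device-NNN` prefix are all illustrative, not from any Azure SDK:

```python
import hashlib

def partition_key(device_id: str, buckets: int = 100) -> str:
    """Hash the device id into a fixed number of partition buckets,
    spreading write load while keeping lookups deterministic."""
    h = int(hashlib.md5(device_id.encode()).hexdigest(), 16)
    return f"device-{h % buckets:03d}"

def row_key(timestamp_ticks: int) -> str:
    """Zero-padded *inverted* ticks: Table Storage sorts RowKey
    lexically ascending, so inverting makes the newest entity
    sort first and 'latest N' queries cheap."""
    max_ticks = 10**19 - 1
    return f"{max_ticks - timestamp_ticks:019d}"

def point_query_filter(device_id: str, timestamp_ticks: int) -> str:
    """OData filter string hitting exactly one partition + row,
    which the service can serve without scanning."""
    return (f"PartitionKey eq '{partition_key(device_id)}' "
            f"and RowKey eq '{row_key(timestamp_ticks)}'")
```

A filter like this can be passed to the SDK's query call (e.g. `query_entities` in the `azure-data-tables` Python package). The key point is that both PartitionKey and RowKey are fully derivable from the query's inputs, so no scan is ever needed for that access pattern; queries you can't express this way are candidates for a second "index" table with the keys inverted, i.e. a hand-rolled materialized view.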