Published on October 14, 2017

Optimizing SQL Server Performance: Avoiding Data Starvation

Welcome to our blog series on optimizing SQL Server performance! In our previous article, we discussed the importance of ensuring that your SQL Server is processing data efficiently. Today, we will delve deeper into the concept of data starvation and explore how it can impact the performance of your SQL Server.

Identifying Data Starvation

One of the key indicators of data starvation is when the CPUs of your SQL Server are consistently underutilized, operating at less than 90% capacity. This can be caused by various factors, including application or hardware issues. In this article, we will focus on addressing hardware-related concerns.

To determine whether your CPUs are being held back, monitor the “% User Time” counter in Performance Monitor. SQL Server runs in user mode, so a low percentage of time spent there suggests the CPUs are waiting on data rather than doing SQL Server work. For example, if only 14% of the CPU’s total processing time is dedicated to SQL Server work, the CPUs are largely sitting idle.
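Alongside Performance Monitor, you can ask SQL Server itself how busy it thinks the CPUs are. A minimal sketch, using the scheduler-monitor ring buffer (the XML layout can vary between SQL Server versions, so treat the XPath paths as assumptions to verify on your build):

```sql
-- Recent CPU utilization as recorded by SQL Server's scheduler monitor.
-- sql_cpu_pct is the portion used by SQL Server; idle_pct is unused capacity.
SELECT TOP (10)
    record.value('(./Record/@id)[1]', 'int') AS record_id,
    record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
    record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS idle_pct
FROM (
    SELECT CONVERT(xml, record) AS record
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE N'%<SystemHealth>%'
) AS rb
ORDER BY record_id DESC;
```

A consistently low `sql_cpu_pct` with a high `idle_pct` during peak workload is the same starvation signal as a low “% User Time”.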

Verifying Server Hardware

When it comes to hardware, the transport between your CPUs and storage plays a crucial role. It should be capable of handling the workload efficiently. Consider the following guidelines for optimal performance:

  • 10Gb NIC (iSCSI): ~550MB/s
  • 8Gb HBA (Fibre Channel): ~800MB/s
  • 16Gb HBA (Fibre Channel): ~1,600MB/s
  • 32Gb HBA (Fibre Channel): ~3,200MB/s

Understanding your workloads is also essential. Determine whether your mission-critical applications are OLAP (Online Analytical Processing) or OLTP (Online Transaction Processing). For OLAP workloads, focus on factors such as read block size, write block size, read latency, write latency, read bandwidth, and write bandwidth. Higher bandwidth allows for faster report generation, increased concurrent operations, and quicker data loading. Latency should be low, preferably below 2ms.
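To put numbers on those read/write latency and bandwidth factors, SQL Server exposes cumulative per-file I/O statistics. A minimal sketch using `sys.dm_io_virtual_file_stats` (the averages are since instance start, so sample twice and diff the counters if you want a peak-window view):

```sql
-- Per-file read/write latency and bytes moved since instance start.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.physical_name,
    vfs.num_of_reads,
    vfs.num_of_bytes_read,
    CASE WHEN vfs.num_of_reads = 0 THEN 0
         ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
    CASE WHEN vfs.num_of_writes = 0 THEN 0
         ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;
```

Files averaging well above the 2ms target are the first candidates for a faster storage tier.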

On the other hand, for OLTP workloads, prioritize read and write latencies. The faster the CPUs can resume processing, the fewer time slices will be missed. Additionally, consider IOPS (Input/Output Operations Per Second) as a measure of scale: a higher IOPS ceiling means a greater number of operations can be executed concurrently.

Testing and Analysis

To evaluate the performance of your SQL Server, you can conduct tests and analyze the results. For OLAP workloads, run a table scan on a large table and monitor the bandwidth using PerfMon. Limit the rows returned by applying a filter that matches nothing, such as “SELECT * FROM CUSTOMERS WHERE AGE < 0”, so the scan reads every page without the result transfer throttling the test. The read bandwidth should sustain close to your theoretical peak based on the number of logical cores.
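The scan test above can be sketched as follows, using the article's hypothetical CUSTOMERS table and AGE column (this assumes AGE is not indexed, so the optimizer has to scan rather than seek):

```sql
-- Show elapsed time and physical/logical reads for the test.
SET STATISTICS IO, TIME ON;

-- Full scan that returns no rows: every page is read from storage,
-- but no result set competes with the scan for bandwidth.
SELECT * FROM dbo.CUSTOMERS WHERE AGE < 0;

SET STATISTICS IO, TIME OFF;
```

While it runs, watch “Disk Read Bytes/sec” in PerfMon and compare against the transport limits listed earlier.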

For OLTP workloads, use PerfMon to record CPU utilization, IOPs, and latency for the data, log, and tempdb volumes. During peak user workloads, ensure that CPU utilization consistently exceeds 90%. If not, investigate the latencies and aim to lower them. Lower latencies allow the CPU to resume processing faster, reducing wasted time slices and increasing CPU utilization.
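If CPU utilization stays low, SQL Server's wait statistics can confirm that storage latency is the culprit. A minimal sketch (cumulative since instance start; clear or snapshot the counters around your peak window for a cleaner picture):

```sql
-- Top I/O-related waits. High PAGEIOLATCH_* averages point at data-file
-- read latency; WRITELOG points at log-volume write latency.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    CASE WHEN waiting_tasks_count = 0 THEN 0
         ELSE wait_time_ms / waiting_tasks_count END AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'PAGEIOLATCH%'
   OR wait_type IN (N'WRITELOG', N'IO_COMPLETION')
ORDER BY wait_time_ms DESC;
```

An average PAGEIOLATCH wait of several milliseconds on an OLTP system is exactly the kind of latency that leaves CPUs starved between time slices.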

Conclusion

SQL Server licenses can be a significant investment for any database platform. Therefore, it is crucial to ensure that your CPUs are consistently operating at optimal capacity. Data starvation, often caused by underpowered storage tiers, can lead to underutilized CPUs. By addressing hardware-related issues and optimizing performance, you can maximize the value of your SQL Server investment.

Call to Action

Are you experiencing data starvation issues with your SQL Server? Consider exploring Vexata solutions, which can help enhance the performance and efficiency of your SQL Server. Visit our website to learn more about how Vexata can optimize your SQL Server environment.

