SQL Server on Physical Machine Best Practices

This post is a quick reference guide based on the recommendations in the Microsoft Platform Guide on the Pure Storage Support site. It is aimed at SQL Server DBAs, introducing the most impactful configurations and settings for running SQL Server on physical machines backed by Pure Storage.

Physical Host Configuration

Check with your hardware vendor to see if they publish a guide for SQL Server-specific configurations for their server platforms. Here are my thoughts on general guidance around CPU configuration and power management.

  • Enable Turbo Boost
  • Enable Hyper-Threading
  • Enable NUMA
  • Set power management to OS Controlled or High Performance
  • Disable all processor C-states (including the C1E halt state)

Disk Configuration

Connecting a Physical Server to your FlashArray

For general best practices for connecting Windows hosts to Pure Storage, please see the Microsoft Platform Guide on the Pure Storage Support site. That guide covers important configuration details based on your connectivity method.

Volume Layout

FlashArray is a data-reducing array, which has some interesting implications for how you lay out your volumes.

  • You don’t have to dedicate volumes or RAID groups to specific performance profiles - each of your volumes gets the full performance of the array. You could literally put all your databases and files on a single volume, and they would perform well without contending with one another.

  • There are other SQL Server-specific reasons you will want to separate data and log files onto separate volumes.

    • First is recovery; separating data and log files gives you the ability to perform a tail-of-the-log backup in the event of a disaster.
    • Second is backup and restore performance. SQL Server allocates a reader thread per volume that database files reside on, so more volumes mean more reader threads, which can lead to higher backup throughput.
  • You can worry less about empty space management - Since FlashArray is data reducing, you can worry less about how much free space is available in each volume and if you’re wasting that space. So all that space that you used to keep as headroom on each volume doesn’t consume space on the array.

  • Granularity of snapshots and recovery - FlashArray provides snapshot capabilities that enable nearly instantaneous data copying between volumes. When mapping database files to volumes, consider whether you’re going to use snapshots, isolating database files onto volumes that are snapshotted and then presented to other instances of SQL Server. When database files span multiple volumes, use Protection Groups so the snapshot is taken consistently across all volumes.
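
The backup-throughput point above can be sketched in T-SQL. This is a minimal example, not the guide’s prescribed method; the database name SalesDB and the X:\ and Y:\ backup paths are placeholders for volumes on separate FlashArray LUNs:

```sql
-- Striping the backup across files on two separate volumes gives
-- SQL Server additional reader/writer threads, which can raise throughput.
-- SalesDB, X:\, and Y:\ are illustrative names; substitute your own.
BACKUP DATABASE SalesDB
TO DISK = N'X:\Backups\SalesDB_1.bak',
   DISK = N'Y:\Backups\SalesDB_2.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
```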

Windows Configuration

  • Power Plan - High Performance
  • Page file size - configure as a fixed 8GB allocation
  • Windows Updates - before installation of SQL Server, ensure all updates are applied
  • Instant file initialization - in Local Security Policy, grant “Perform volume maintenance tasks” to the SQL Server service account
  • Lock pages in memory - in Local Security Policy, grant “Lock pages in memory” to the SQL Server service account
  • Volume format - format all SQL Server volumes with 64KB NTFS Allocation Units
  • Block alignment - partitions have been aligned by default since Windows Server 2008, but it is still worth checking that they’re configured properly. See the Windows File Systems for FlashArray guide for more details
  • SQL Server folder structure - create the following folder structure for SQL Server. This layout supports the snapshot granularity discussed above. Additional volumes exposed as mount points can be attached if needed.
    • D:\DATA
    • L:\LOGS
    • S:\SYSTEM
    • T:\TempDB
  • Anti-virus - ensure your AV system excludes SQL Server file types such as MDF, NDF, LDF, SDF, and BAK
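
Once the instance is installed, the two security-policy grants above can be verified from T-SQL. A quick sketch; the instant_file_initialization_enabled column assumes SQL Server 2016 SP1 or later:

```sql
-- Is instant file initialization in effect for the engine service?
SELECT servicename, service_account, instant_file_initialization_enabled
FROM sys.dm_server_services;

-- Non-zero locked page allocations indicate Lock Pages in Memory is working
SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;
```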

SQL Server Instance Configuration

Here is a link to Pure Storage Support’s general SQL Server best practices recommendations.

  • Max memory - tier 1 workloads should have both MIN and MAX server memory values set explicitly
  • MAXDOP - set to no more than the number of cores in a NUMA node, capped at 8
  • Cost threshold for parallelism - start at 50, then measure again once you have a representative workload running
  • Enable Optimize for Ad-hoc Workloads
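
The instance settings above can be applied with sp_configure. This is a sketch, not a prescription: the memory values and the MAXDOP of 8 are placeholders, assuming a host with at least 8 cores per NUMA node; size them for your hardware.

```sql
-- Expose the advanced options used below
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Placeholder MIN/MAX memory values; size these for your host's RAM
EXEC sp_configure 'min server memory (MB)', 8000;
EXEC sp_configure 'max server memory (MB)', 16000;

-- MAXDOP: cores per NUMA node, capped at 8 (8 assumed here)
EXEC sp_configure 'max degree of parallelism', 8;

-- Starting point of 50; re-measure under a real workload
EXEC sp_configure 'cost threshold for parallelism', 50;

EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```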

Database Maintenance and Configuration

  • Backups - Yes, please have backups. And try to stagger them throughout your backup window if possible. You can use a layered approach combining array-based snapshots and SQL Server native backups.
  • Index and Statistics Maintenance - Also consider staggering maintenance jobs and using an intelligent maintenance framework like Ola Hallengren’s. Index maintenance still matters.
  • Data File Layout - For large and demanding databases, spread data out into file groups and data files that map back to the disk topology above. Add additional disks if needed for the workload.
  • TempDB - create one TempDB data file per core, up to 8 files.
  • Monitoring - have a SQL Server-specific monitoring strategy to capture workload and system metrics.
  • Enable Query Store - this can be your flight data recorder when you need to find performance issues. Consider looking at the QDS Toolbox from ChannelAdvisor for an effective way to extract performance data.
  • Leverage community tools - such as sp_whoisactive for monitoring and dbatools for automation
  • Invest in Automation and Configuration Management - Check out this session I did at PASS Summit 2020 on Deploying and Managing SQL Server with dbatools!
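
The TempDB guidance above can be sketched as follows, assuming a 4-core host and the T:\TempDB path from the folder layout earlier; sizes and growth increments are placeholders:

```sql
-- One TempDB data file per core (up to 8), all equally sized so
-- proportional fill spreads allocations evenly across the files.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 8192MB, FILEGROWTH = 1024MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 8192MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = N'T:\TempDB\tempdev3.ndf',
          SIZE = 8192MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = N'T:\TempDB\tempdev4.ndf',
          SIZE = 8192MB, FILEGROWTH = 1024MB);
```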

Queue Depth Settings

  • On Pure Storage, if you’re seeing queuing in your HBA, consider increasing the HBA’s queue depth.