Standard SQL Server Build

Often I’m asked what the best practice is for a single SQL Server installation. Well, that is a tricky question and the answer is: it always depends. Let’s discuss what a “standard” SQL Server build looks like if you had to start somewhere. Here let’s focus on Standard edition on a physical server. Enterprise edition and virtualization are topics that can stand on their own.
 
Processors – The higher the clock frequency the better
 
SQL Server is licensed by the core. Generally speaking, higher clock rates yield higher data throughput, meaning your queries get done faster. It’s a balance between clock speed and the number of cores (and therefore SQL license cost). To see the impact of clock speed on your performance, try adjusting your Windows power plan and timing a workload…you will see the impact. Check out Glenn Berry’s post on this here and most recently here. Again, since we’re likely licensing SQL by the core, we really want to feed the processor as quickly as possible, so next we discuss the need for a lot of memory and fast disks to do so.
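
If you want to see that effect yourself, here’s a minimal CPU-bound timing sketch: run it under the Balanced power plan, switch to High Performance, and run it again. The iteration count is arbitrary; size it so the batch runs for a few seconds on your hardware.

```sql
-- A simple CPU-bound workload to compare timings across Windows power plans.
-- Run under "Balanced", note the elapsed time, switch to "High Performance",
-- and run it again. The 10 million iteration count is arbitrary; adjust it
-- so the batch runs for a few seconds on your hardware.
SET NOCOUNT ON;

DECLARE @start DATETIME2 = SYSDATETIME();
DECLARE @i BIGINT = 0, @x FLOAT = 0;

WHILE @i < 10000000
BEGIN
    SET @x = SQRT(@i) * PI();  -- burn CPU cycles, nothing else
    SET @i += 1;
END;

SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS elapsed_ms;
```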
 
Memory – At the moment memory is cheap; buy as much as you can afford. But not too much if you’re on Standard edition.

The maximum memory allowed on SQL Server Standard 2014 is 128GB per instance, 64GB if you’re on SQL Server Standard 2012. The more memory your system has, the larger your memory allocations can be for things like the buffer pool. With a larger buffer pool, it’s more likely your query will find its data cached in memory and not have to retrieve it from disk. Memory operates in nanoseconds, SSDs in microseconds, and spinning disks in milliseconds. I know where I would want my data waiting for me.
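
However much memory you install, cap what SQL Server can take so the OS keeps some for itself. Here’s a sketch using sp_configure; the numbers are illustrative (leaving roughly 4–8GB for Windows is a common starting point), not a rule.

```sql
-- Cap SQL Server's memory so the OS (and anything else on the box) keeps
-- some for itself. On a 128GB server, 122880MB (120GB) leaves 8GB for
-- Windows; that value is an example, not a rule. Remember the Standard
-- edition caps above apply per instance regardless of this setting.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 122880;
RECONFIGURE;
```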
 
Storage – Isolate your major functions
 
This is where things depend highly on workload, specifically capacity and performance. This article is about the general case, so here’s where I’d start. Also, these basics apply to direct attached storage rather than SAN based storage.
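
One way to see whether your current storage is keeping up, and which files are doing the work, is the sys.dm_io_virtual_file_stats DMV. A sketch:

```sql
-- Per-file IO latency since instance startup: a quick way to see which
-- files (data, log, tempdb) are busiest and whether storage keeps up.
-- These are cumulative averages, so watch them over time too.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.physical_name,
    vfs.num_of_reads,
    vfs.num_of_writes,
    CASE WHEN vfs.num_of_reads = 0 THEN 0
         ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
    CASE WHEN vfs.num_of_writes = 0 THEN 0
         ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY avg_write_latency_ms DESC;
```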
 
The disk controller should have a battery backed write cache. With a battery backed write cache, your controller can tell the OS that its IO has completed while caching the IO in its local memory. If you’re into it, add a hot spare. If you do, try to make all your drives the same size and type so it can act as a global hot spare for any disk in the system; this is simply a safety blanket in the event of a disk failure.
 
If you isolate your major functions as described below, you’re already ahead of the crowd. The main reasons for this are differing IO patterns and the potential for recovery in the event of a disk failure (see the T-SQL sketch after this list for what the file placement looks like):
Operating System – 2 disks, RAID 1 – 100GB – you should have just your OS and SQL Server installation here. SSD if you can afford it, otherwise 15K RPM.

Databases – 4+ disks, RAID 5 is OK, RAID 10 is better – the number and capacity of these disks depend on your data size requirements. If you have the budget for SSD, go for it and get MLC.

Transaction Logs – 2+ disks, RAID 1; if you need more capacity, 4+ disks, RAID 10 – if you have the budget for SSD, go for it and get SLC.

TempDB – 2+ disks, RAID 1; if you need more capacity, 4+ disks, RAID 10 – if you have the budget for SSD, go for it and get SLC.
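
Here’s a sketch of what that isolation looks like in T-SQL, with a hypothetical SalesDB and placeholder drive letters (D: for data, L: for logs, T: for tempdb); substitute your own volumes and sizes. Note that tempdb file moves take effect at the next instance restart.

```sql
-- Place data, log, and tempdb on the isolated volumes described above.
-- Drive letters (D:, L:, T:), database name, and sizes are placeholders.
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data,
     FILENAME = 'D:\SQLData\SalesDB.mdf',
     SIZE = 10GB, FILEGROWTH = 1GB)
LOG ON
    (NAME = SalesDB_log,
     FILENAME = 'L:\SQLLogs\SalesDB.ldf',
     SIZE = 2GB, FILEGROWTH = 512MB);

-- Move tempdb to its own volume; this takes effect on the next
-- instance restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
```
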
Backups
This 100% needs to be addressed in the first conversation about building a SQL Server. Where are you going to put them? How long are you going to keep them? How large are they going to be? Offsite replication? We could (and should) spend a whole day discussing backups.
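
As a starting point, here’s a minimal sketch of a full backup of the hypothetical SalesDB from earlier to a placeholder backup volume; scheduling, log backups, and retention are where the real conversation happens.

```sql
-- A minimal full backup to a dedicated backup volume. The E:\ path is a
-- placeholder; in practice this also needs a schedule, log backups, and
-- a retention/offsite strategy, which are conversations of their own.
BACKUP DATABASE SalesDB
TO DISK = 'E:\SQLBackups\SalesDB_Full.bak'
WITH CHECKSUM,          -- validate page checksums as pages are read
     COMPRESSION,       -- available in Standard edition from 2008 R2 on
     STATS = 10;        -- progress messages every 10 percent

-- Verify the backup is readable (not a substitute for a restore test,
-- but better than nothing).
RESTORE VERIFYONLY
FROM DISK = 'E:\SQLBackups\SalesDB_Full.bak'
WITH CHECKSUM;
```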
 
Here we describe a very generalized SQL Server hardware configuration; if you start here you’ll be in a good place. There are a ton of decision points that can drive a SQL Server hardware configuration in any direction. In future articles we’ll discuss how to configure Windows, install SQL Server, and cover instance-level configuration basics for SQL Server.
 
If you need some assistance with your SQL Server installation, design, and backup planning, please feel free to contact me.
 
Follow me on Twitter: @nocentino