SQL Server Configuration Recommendations
QEST Platform Documentation 4.94
Applies to All
This article discusses options for configuring a SQL Server instance for use with QEST Platform. Guidance is provided for selecting an appropriate recovery model and designing a backup and data recovery plan.
Overview
QEST Platform applications require an SQL Server database. Correct configuration of the database server and SQL Server instance is essential for good QEST Platform performance.
The QEST Platform database contains business-critical data. Design and implementation of an appropriate backup and recovery strategy is a key component of a successful QEST Platform implementation.
Hardware and Software Requirements
QEST Platform minimum requirements are sized for 30-50 simultaneous users, and assume that the Database Server is dedicated to QEST Platform databases.
Software Versions
As of version 4.62, QEST Platform supports SQL Server 2008 R2 and newer, and requires the database to be set to a compatibility level of 100 or higher.
QEST Platform only requires SQL Server Standard Edition, but business requirements such as high-availability or scaling to extremely large systems may dictate that Enterprise Edition is necessary.
Except in the case of a known incompatibility, the recommended version of SQL Server is the latest version. As of version 4.62, all supported releases of QEST Platform are verified for compatibility with SQL Server 2017. Use of a recent SQL Server version is strongly recommended, especially if purchasing SQL Server licenses for a new installation.
The Database Server may use any version of Windows Server that supports the target version of SQL Server. SQL Server 2016 and later require Windows Server 2012 R2 or later. Note that a 64-bit operating system is required to make use of the minimum RAM specification.
Minimum Hardware Specifications
4 CPU Cores (or vCPUs) @ 2.0GHz or faster
16GB RAM
300GB free hard drive space
Latency
It is vital that latency between the Database Server and Application Server(s) is minimized. As a guideline, latency between these servers must be under 1ms; above this threshold, performance of the QEST Platform desktop applications (QESTLab, QEST Crushing Station, QEST Weigh and Measure Station, QESTMix) will be unacceptable. This requires that the servers are co-located in close proximity and, where possible, connected to the same network switch.
Care should be taken when designing a deployment in which one or more QEST Platform servers are virtualized. Please ensure that the physical hosts will remain proximate and that virtual switching does not add unacceptable latency. If the Application Server or Database Server is migrated to a host such that latency between those servers exceeds 1ms, performance of the QEST Platform desktop applications will be unacceptable.
Scaling
Disk space requirements vary depending on the volume of data. As a guide, an operation that has an annual output of 100,000 Work Orders, 150,000 Samples, 500,000 Tests/Inspections and 200,000 Test Reports may grow approximately 40GB per annum. Typically 50% of this space is used for storing the binary images (PDFs) of Test Reports. If these have been published to Construction Hive they can be purged from the database (please contact Spectra QEST for details of the necessary configuration).
The QEST Platform desktop applications are extremely latency sensitive, and will benefit from higher CPU clockspeeds on the SQL Server.
For small deployments with fewer than 15 simultaneous users, contact Spectra QEST for advice on a reduced minimum specification. As an indication, in these cases the minimum specification for the SQL Server is usually sufficient to host both the SQL Server and the Application or Web server components.
SQL Server Licensing
This section provides general guidance on licensing for new SQL Server installations. For more details, see Microsoft's SQL Server 2017 Licensing Guide or consult a Microsoft sales vendor.
QEST Platform only requires SQL Server Standard Edition, but business requirements such as high-availability or scaling to extremely large systems may dictate that Enterprise Edition is necessary.
As of SQL Server 2017, Standard Edition may be licensed using either Core-based licensing or Server + CAL licensing. Enterprise Edition is only available with core-based licensing.
Server + CAL Licensing
With Server + CAL Licensing, a single server license is purchased for each database server. Additionally, Client Access Licenses (CALs) are purchased for each named user or allocated device that accesses the database server.
Clients are counted regardless of whether they access the database directly or via another server or application. It is not permitted to only purchase a single Device CAL for the Application Server - end users must be licensed, either with a User CAL allocated to themselves, or a Device CAL allocated to their terminal.
For QESTLab, Device CALs may be a useful option, since it is typical for multiple named users to share a smaller number of terminals. If User CALs are employed, an organization with 10 QESTLab users needs 10 User CALs, even if only 3 will work simultaneously. Conversely, if QESTLab will only be accessed from the 3 computers in the lab, only 3 Device CALs are needed.
For QESTField it is typical that every user has their own device, thus User CALs are more appropriate except in niche applications. For example, Device CALs might be appropriate if field work were entered using a small fleet of ruggedized tablets.
If a user may use multiple QEST Platform applications (e.g. QESTLab and QESTField) they only require one user CAL. It is permitted to use a mix of User and Device CALs on a single SQL Server.
For small deployments it is expected that Server + CAL licensing will be cheaper, but this should be weighed against potential future growth of the application userbase. For large numbers of users or devices, Core-based licensing may be less expensive or more appropriate in the long term.
Core-based Licensing
With Core-based Licensing, a license is purchased for each core present in the Database Server. This server may then service unlimited users. Since User CALs are purchased per named user, this model is necessary if the organization does not control the users that directly or indirectly access its Database Server, or the number of users cannot easily be counted. This is not a concern for QEST Platform usage, but must be considered if the same SQL Server will also serve externally-facing products such as a website.
Core licenses are purchased in pairs, and each SQL Server requires at least 4 core licenses per physical processor, even if the server has fewer than 4 cores. When running SQL Server directly on a physical server, all physical cores must be licensed, regardless of the number of cores SQL Server is permitted to run on. Hyper-threading is not counted as contributing to physical cores for licensing purposes. When running SQL Server in a virtualized environment, core licenses are required for the number of virtual cores allocated to the Database Server virtual machine.
Core-based licensing is appropriate when the server is shared with externally-facing workloads, or when the total cost of Server + CAL licensing exceeds the cost of Core-based licensing. Core based licensing is the only option for Enterprise Edition.
Best Practices
SQL Server Settings
Maximum Server Memory
By default, SQL Server will allocate up to 2 petabytes of RAM to each instance. If physical RAM is exhausted, this will result in heavy swapping and seriously degraded performance. This value should be tuned to provide as much RAM as possible to SQL Server while still leaving sufficient RAM for other Windows processes.
As a rule of thumb, reserve the larger of 10% of RAM or 4GB, and set the Maximum Server Memory to the remainder. For example, for a server with 32GB RAM, 10% is 3.2GB, which is less than 4GB. Reserving 4GB for Windows, maximum server memory should be set to (32 - 4) * 1024MB = 28672MB.
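The worked calculation above can be applied with `sp_configure` — a sketch for a 32GB server, assuming advanced options are not yet visible on the instance:

```sql
-- Enable advanced options so 'max server memory (MB)' is visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- 32GB total, reserving 4GB for Windows: (32 - 4) * 1024 = 28672MB
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```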
RAM usage should be carefully monitored to ensure this reservation is sufficient. When the SQL Server shares resources with complex services such as Software Defined Storage it may be necessary to reserve a larger portion of memory.
Database-Level Settings
The following settings are crucial for consistent performance:
Auto Close: Off
Auto Shrink: Off
The database Compatibility Level should be set to the highest level supported by the current version of SQL Server.
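These settings can be applied with `ALTER DATABASE` — a sketch using a placeholder database name, and compatibility level 140 (SQL Server 2017) as an example of the highest supported level:

```sql
-- Replace [QESTPlatform] with the actual database name
ALTER DATABASE [QESTPlatform] SET AUTO_CLOSE OFF;
ALTER DATABASE [QESTPlatform] SET AUTO_SHRINK OFF;

-- 140 corresponds to SQL Server 2017; use the highest level
-- supported by the installed SQL Server version
ALTER DATABASE [QESTPlatform] SET COMPATIBILITY_LEVEL = 140;
```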
Windows Server Settings
Power Plan
To maximize CPU burst performance and minimize application latency, the SQL Server power plan should be configured to "High Performance". Windows Server 2008 R2 and later default to the "Balanced" power plan, which reduces CPU clockspeeds under low-load conditions and leads to inconsistent SQL Server performance.
For systems virtualized with Hyper-V, this setting should be changed on both the hypervisor and the Database Server guest.
Use of the "High Performance" plan is strongly recommended but will result in increased power consumption, especially on Intel CPUs which predate the introduction of "Speed Shift" technology with Skylake-SP. Administrators may wish to explore scheduling power plan changes to reduce power consumption outside business hours.
Perform Volume Maintenance Tasks Permission
The Windows User running the SQL Server process should be granted the “Perform volume maintenance tasks” permission to speed up data file allocations. For more information see the section on Instant File Initialization.
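On SQL Server 2016 SP1 and later, whether the permission has taken effect can be confirmed from within SQL Server; on older versions, check the Local Security Policy or the SQL Server error log instead:

```sql
-- 'Y' indicates Instant File Initialisation is enabled
-- (requires SQL Server 2016 SP1 or later)
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```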
Storage
File Placement
Prioritise minimising write contention by separating data files and transaction log files onto different hard drives. Additionally, aim to place TempDB files on an additional drive. No database files should be placed on the Windows install drive (C:\) because of the risk that auto-growth may fill the drive and render the server inoperable.
If a storage array is used, ensure the array stripes data across enough spindles that performance is equivalent or superior to the separate-drive layout described above, i.e. write speeds at least 3-4x that of a single drive.
An SSD may be considered equivalent to a very fast storage array, and may thus be used for several of the above files - prioritise transaction log files, then TempDB files. Redundancy should still be employed to mitigate the risk of hardware failure.
For a virtual server, place different types of files on different virtual hard disks to maximise flexibility - those virtual hard drives may be moved between different underlying storage devices to optimise performance characteristics.
Auto Growth
Auto-growth is necessary to avoid disruption to users, but has significant performance impacts.
When an auto-growth event occurs, the newly allocated space is initialised by filling the file with zeros. For data files, this will halt the transaction which caused the growth until initialisation is complete, and may halt other transactions which require space. For log files, this will halt all transactions until initialisation is complete. If default settings are used, this may take several minutes.
Instant Database File Initialisation allows data files to grow without the time-consuming initialisation process, but does not change the behaviour of log file growth. As such, Spectra QEST recommend that data and log files have different auto-growth settings.
Data Files
Ensure Instant File Initialisation is permitted, and set a large auto-growth increment of several gigabytes. Do not use percentage-based growth.
Do not rely on auto-growth for regular operation. As part of your maintenance plan, monitor usage of your data files and manually grow them outside of business hours by a large increment (10GB or more) when required.
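A fixed data file growth increment can be set with `ALTER DATABASE ... MODIFY FILE` — a sketch using placeholder database and logical file names:

```sql
-- Fixed 4GB growth increment for the data file; names are placeholders
ALTER DATABASE [QESTPlatform]
MODIFY FILE (NAME = N'QESTPlatform_Data', FILEGROWTH = 4GB);
```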
Transaction Log Files
For log files, Instant File Initialisation does not apply. Set a small auto-growth increment so that transactions are not stalled for an excessively long time. Start with a value of 64MB and tune for your storage. Do not use percentage-based growth.
Do not rely on auto-growth for regular operation. Pre-allocate enough transaction log that auto-growth is not required between Transaction Log backups. The required size will vary depending on volume of work and frequency of Transaction Log backups. Note that more transaction log space may be required during QEST Platform database upgrades, as upgrades generate a very large volume of database activity. An initial allocation of 10GB is reasonable, but up to 100GB should be available for growth during upgrades and other periods of unusual database activity.
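The recommended starting values above can be applied in one statement — a sketch, again with placeholder names:

```sql
-- Pre-allocate 10GB of log and set a small 64MB growth increment
ALTER DATABASE [QESTPlatform]
MODIFY FILE (NAME = N'QESTPlatform_Log', SIZE = 10GB, FILEGROWTH = 64MB);
```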
TempDB
Create as many TempDB data files as you have CPU cores, up to a maximum of 8. Where possible, distribute these files between multiple physical drives, but do not use the same physical drive as the QESTLab data or log files unless there is only one drive available to the SQL server.
Preallocate space for the TempDB files so as not to rely on auto-growth. A total size of 1GB between the multiple files should be sufficient for typical use. Configure auto-growth to allow for unplanned exceptional TempDB use. If your individual TempDB files are less than 200MB in size, use a growth increment of 20MB. If they are larger than 200MB each, use a growth increment of 10%.
Ensure that the recovery model of TempDB is set to "Simple". This is the default and should never be changed.
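As an illustration, the following sketch configures TempDB for a 4-core server: four equally sized files totalling 1GB, each above 200MB and therefore using 10% growth per the guidance above. File paths are placeholders; adjust them for the local drive layout (the changes take effect after an instance restart):

```sql
-- Resize the existing primary TempDB file
ALTER DATABASE tempdb
MODIFY FILE (NAME = N'tempdev', SIZE = 256MB, FILEGROWTH = 10%);

-- Add three more equally sized files (placeholder paths)
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\tempdb2.ndf', SIZE = 256MB, FILEGROWTH = 10%);
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev3', FILENAME = N'T:\tempdb3.ndf', SIZE = 256MB, FILEGROWTH = 10%);
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev4', FILENAME = N'T:\tempdb4.ndf', SIZE = 256MB, FILEGROWTH = 10%);
```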
Backups and Recovery Models
Types of Backup in SQL Server
Full Backup
A Full Backup provides enough data to restore the database to the instant the Full Backup began.
Despite its name, a Full Backup does not back up the transaction log, and thus does not truncate it. To manage the size of the transaction log, it is necessary to perform a Transaction Log Backup.
Differential Backup
A Differential Backup stores only the data that has changed since the last Full Backup. To successfully restore a Differential Backup, you must have a copy of the last Full Backup.
If Differential Backups are used, it is important to avoid taking Full Backups outside of the backup schedule, as the following Differential Backups will be based on this unexpected Full Backup which may not be available at restore time. To avoid this issue, be sure to perform a Copy-Only Backup when taking backups outside the backup schedule, e.g. when refreshing a Development environment.
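A Copy-Only Backup is taken by adding the `COPY_ONLY` option, which leaves the differential base untouched — a sketch with placeholder names and paths:

```sql
-- COPY_ONLY preserves the existing differential backup chain
BACKUP DATABASE [QESTPlatform]
TO DISK = N'D:\Backups\QESTPlatform_copyonly.bak'
WITH COPY_ONLY;
```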
Transaction Log Backup
A Transaction Log backup stores the transaction log activity since the last transaction log backup.
Transaction Log Backups can be restored sequentially following the last Full or Differential backup to recover additional data, up to the instant the last Transaction Log Backup started.
When using the Full Recovery Model, a Transaction Log Backup is the only way to truncate the committed part of the log. Without a Transaction Log Backup, the Transaction Log will continue to grow indefinitely.
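A Transaction Log Backup is performed with `BACKUP LOG` — a sketch with placeholder names, typically run on a schedule:

```sql
-- Backs up and truncates the committed portion of the log
BACKUP LOG [QESTPlatform]
TO DISK = N'D:\Backups\QESTPlatform_log.trn';
```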
Backup Best Practices
- Backups must always be on different storage to the database files
- Backups, and management of past backups, should be scheduled to occur automatically
- A complete backup plan must include a documented and tested recovery process
- All SQL Server backups can be performed on a running system with minimal overhead - there is no reason to avoid running a backup during business hours
- Transaction Log backups should be run multiple times per day - on busy systems, multiple times per hour
- Use Copy-Only Backups when taking backups outside the backup schedule, to avoid disrupting the expected restore chain
Transaction Log Recovery Models
The choice of SQL Server Recovery Model allows you to trade flexibility in data restoration against complexity of managing the log file and backups. The two models appropriate for routine operation are discussed here.
Full Recovery Model
When operating under the Full Recovery Model, the log file contains the full history from the last Log Backup until the current instant. The Full Recovery Model enables point-in-time restoration to any instant prior to the start of the most recent log backup.
For improved data security, schedule frequent Log Backups to ensure maximum data can be recovered after catastrophic failure.
Under the Full Recovery Model the transaction log will grow until a Log Backup is performed. To prevent unbounded growth from filling disks and halting database activity, Log Backups must be performed regularly. To justify use of the Full Recovery Model, log backups should be taken much more frequently than full backups - depending on the customer's tolerance for data loss, this might be as much as several times per hour. Log backups must be retained at least as far back as the most recent Full Backup (or the most recent Differential Backup if this functionality is utilised), but should be retained longer in case the most recent full backup is damaged.
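Point-in-time restoration under the Full Recovery Model restores the last Full Backup without recovery, then rolls the log forward to the desired instant — a sketch with placeholder names, paths and timestamp:

```sql
-- Restore the last full backup, leaving the database ready for log restores
RESTORE DATABASE [QESTPlatform]
FROM DISK = N'D:\Backups\QESTPlatform_full.bak'
WITH NORECOVERY;

-- Roll the log forward to a specific point in time, then bring online
RESTORE LOG [QESTPlatform]
FROM DISK = N'D:\Backups\QESTPlatform_log.trn'
WITH STOPAT = '2021-06-01T14:30:00', RECOVERY;
```

Where multiple Transaction Log Backups exist, each is restored in sequence with `NORECOVERY`, with `STOPAT` and `RECOVERY` applied to the final restore.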
Log shipping, database mirroring, and Always On groups require use of the Full Recovery Model.
Simple Recovery Model
When operating under the Simple Recovery Model, the log file only contains the history of ongoing transactions. Once a transaction is complete, space in the log file is automatically reclaimed for future use.
When using the Simple Recovery Model, data can only be recovered to the point of the most recent backup. This increases the risk of data loss, but reduces the complexity of managing the log file.
The Simple Recovery Model is appropriate if snapshotting backup software such as ShadowProtect is used to provide near-continuous backup protection of the SQL Server data and log files.
The Simple Recovery Model may be appropriate for small databases where multiple full or differential data backups can be performed daily.
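The recovery model is switched with a single `ALTER DATABASE` statement — a sketch using a placeholder database name:

```sql
-- Simple: log space reclaimed automatically, restore to last backup only
ALTER DATABASE [QESTPlatform] SET RECOVERY SIMPLE;

-- Full: required for point-in-time restoration, log shipping and Always On
ALTER DATABASE [QESTPlatform] SET RECOVERY FULL;
```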
Tuning
Parallelism and MAXDOP
The Max Degree of Parallelism (MAXDOP) is tunable to suit the environment and workload. If set to the default value of "0", SQL Server may choose to use all CPU cores simultaneously on one query. This behavior may not be optimal for a many-user workload such as QESTLab, but is usually acceptable because most QESTLab queries do not generate parallel query plans.
Reasonable values for MAXDOP on the QEST Platform Database Server are between 0.25-1.0 times the number of CPU cores available to the SQL Server, with a minimum of 2 and a maximum of 8. For an SQL Server with a single NUMA node of 8 or fewer CPU cores, Spectra QEST recommend leaving the setting at the default of 0 unless addressing a specific issue. For a single NUMA node of greater than 8 cores, it is strongly recommended to set MAXDOP <= 8. For multiple NUMA node configurations, MAXDOP should be set no higher than the number of cores available to each NUMA node, with a maximum of 8. For more information, consult Microsoft's Recommendations and guidelines for the "max degree of parallelism" configuration option in SQL Server.
The Cost Threshold for Parallelism may be tuned to adjust when SQL Server will generate a parallel query plan. Lower values will result in more frequent use of parallel query plans, and depending on MAXDOP this may result in other queries being blocked while the parallel query completes. For fast modern processors, the default value of 5 is low, and you may attempt to tune for improved behavior. Perform testing at the default value of 5, and increased values of 25 and 50, then determine which value in that range provides optimal performance. For more information, see Microsoft's Configure the cost threshold for parallelism Server Configuration Option documentation.
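Both settings are instance-wide and set with `sp_configure` — a sketch for a hypothetical single NUMA node server with more than 8 cores, using a trial cost threshold of 25:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap parallelism at 8 for a single NUMA node of more than 8 cores
EXEC sp_configure 'max degree of parallelism', 8;

-- Trial value between the default of 5 and 50; tune via testing
EXEC sp_configure 'cost threshold for parallelism', 25;
RECONFIGURE;
```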
Products described on these pages, including but not limited to QESTLab®, QESTNet, QESTField, Construction Hive, and associated products are Trademarks (™) of Spectra QEST Australia Pty Ltd and/or related companies.
The content of this page is confidential. Do not share, duplicate or distribute without permission.
© 2021 Spectra QEST® Australia Pty Ltd and/or related companies. Terms of Use and Privacy Statement