If you own something that is really important to you, you need to keep it in tip-top shape. It may be your car, your house, your shoe collection, your pets, your phone or your computer. Taking responsibility for the maintenance of these possessions will prevent hassles down the road. Now imagine that you’re responsible for maintaining and running a whole network of servers and databases for a large corporation, a government office, a school, a bank or the defense department. Even a tiny glitch in the system can create problems for you as a database administrator and for the users who depend on and expect a perfectly running system. As a database administrator, it’s your job to keep the system running at 100% all the time. When a glitch occurs, performance can drop drastically, the servers can run out of disk space and databases can even become corrupted. If this happens and your team has not set up a backup plan, everything will go kaput!


The goal of implementing a database maintenance plan is to ensure that the server’s databases perform at their best, with as few problems as possible. Many experts agree that there is no one-size-fits-all solution for effective database maintenance, but there are some key areas that need to be addressed and checked. Five areas to focus on are:

  • Backups
  • Statistics
  • Data and log file management
  • Index fragmentation
  • Detection of corrupt files

A poorly maintained database can develop problems in one or more of these areas, which in turn can lead to poor application performance, downtime and data loss.

The Lowdown On Database Maintenance

1. Backups

When data corruption or disaster occurs, one of the most effective methods of recovery is to restore the database from your backups. That assumes you have backups in the first place, and that those backups are not themselves corrupt. If you are wondering how to get a badly corrupted database working again without backups, the answer is that it will not happen: without working backup files, corrupted data cannot be recovered. Regular backups are therefore essential.

Establishing a good backup strategy can be achieved in four steps:

  • First – do regular full-database backups. They give you multiple points in time to which you can restore your systems.
  • Second – always keep your backups around for a few days in case one becomes corrupted. A backup from a few days ago is better than no backup at all. Verify the integrity of your backups so that you can be confident they will restore when needed.
  • Third – if a daily full-database backup does not meet the maximum data loss your organisation can tolerate, investigate differential backups, which build on full-database backups and record the changes made since the last full backup.
  • Fourth – use log backups. They are available only in the FULL (and BULK_LOGGED) recovery models and capture all the log records generated since the previous log backup. Maintaining log backups gives DBAs an effectively unlimited number of point-in-time recovery options (see the sketch after this list).
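
As a rough illustration of the first, second and fourth steps, the sketch below takes a full backup, verifies it, and then takes a log backup through pyodbc. Everything specific in it is an assumption rather than something from this article: the ODBC driver, the localhost server, the AdventureWorks database name and the backup paths are all placeholders.

```python
# Minimal sketch: full backup, verification, then a log backup via pyodbc.
# Server, database and backup paths are placeholders.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
                      "DATABASE=master;Trusted_Connection=yes;",
                      autocommit=True)  # BACKUP cannot run inside a transaction
cur = conn.cursor()

def run(sql):
    """Execute a command and drain its informational messages."""
    cur.execute(sql)
    while cur.nextset():
        pass

# Step 1: a regular full-database backup gives you a restore point.
run("BACKUP DATABASE AdventureWorks "
    "TO DISK = N'D:\\Backups\\AdventureWorks_full.bak' WITH CHECKSUM, INIT")

# Step 2: verify the backup so you can trust it when you need it.
run("RESTORE VERIFYONLY "
    "FROM DISK = N'D:\\Backups\\AdventureWorks_full.bak' WITH CHECKSUM")

# Step 4: a log backup (FULL or BULK_LOGGED recovery model) captures all log
# records generated since the previous log backup.
run("BACKUP LOG AdventureWorks "
    "TO DISK = N'D:\\Backups\\AdventureWorks_log.trn' WITH CHECKSUM")

conn.close()
```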

2. Statistics

Statistics are usually generated by reading the index data. They describe the distribution of data in the relevant columns on the server. Statistics can be built by scanning all the data values in a column (a full scan) or from a user-specified percentage of the data (a sampled scan). If the values in a column are evenly distributed, a sampled scan is good enough and is faster to create and update than a full scan.
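
As a minimal sketch of the difference, the snippet below refreshes statistics once with a sampled scan and once with a full scan. The pyodbc connection details and the Sales.SalesOrderDetail table are assumed placeholders, not something named in this article.

```python
# Sketch: refresh statistics with a sampled scan vs. a full scan.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
                      "DATABASE=AdventureWorks;Trusted_Connection=yes;",
                      autocommit=True)
cur = conn.cursor()

# Evenly distributed column values: a sampled scan is usually sufficient
# and is quicker to build than a full scan.
cur.execute("UPDATE STATISTICS Sales.SalesOrderDetail WITH SAMPLE 25 PERCENT")

# Skewed data may justify reading every row.
cur.execute("UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN")

conn.close()
```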


3. Log And Data File Management

Experts advise that DBAs make sure instant file initialization is configured; that auto-growth is correctly configured; that data and log files are separated and isolated from each other; and that auto-shrink is not enabled and is not part of the maintenance plan.
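
As a sketch of how a DBA might check some of those settings, the snippet below lists file placement and auto-growth configuration from sys.master_files and switches auto-shrink off for one database. The connection details and the AdventureWorks database name are assumptions.

```python
# Sketch: review file placement and auto-growth, and make sure auto-shrink is off.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
                      "DATABASE=master;Trusted_Connection=yes;",
                      autocommit=True)
cur = conn.cursor()

# Data and log files should sit on separate drives, with sensible fixed-size
# auto-growth increments rather than percentage growth.
cur.execute("""
    SELECT DB_NAME(database_id) AS database_name, name, physical_name,
           type_desc, growth, is_percent_growth
    FROM sys.master_files
    ORDER BY database_id, type_desc
""")
for row in cur.fetchall():
    print(row.database_name, row.type_desc, row.physical_name,
          row.growth, row.is_percent_growth)

# Auto-shrink should not be enabled or scheduled in the maintenance plan.
cur.execute("ALTER DATABASE AdventureWorks SET AUTO_SHRINK OFF")

conn.close()
```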

4. Index Fragmentation

Fragmentation can also occur within data files, in the structures that store table and index data. This is separate from fragmentation at the file-system level and in the log file. There are two kinds of fragmentation that happen within a data file:

  • Fragmentation within the index and table structures that consist of pages
  • Fragmentation within the individual data and index pages

Wasted space on index pages means more pages are needed to hold the same amount of data, so any query requires more I/O to read that data. These unnecessary pages also take up space in the data cache and therefore consume more server memory.
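
One common way to deal with this is to read sys.dm_db_index_physical_stats and then reorganize or rebuild the affected indexes. The sketch below uses the widely quoted 5% and 30% thresholds; the connection details and database are assumed placeholders, and a production maintenance plan would usually be more selective.

```python
# Sketch: reorganize lightly fragmented indexes, rebuild heavily fragmented ones.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
                      "DATABASE=AdventureWorks;Trusted_Connection=yes;",
                      autocommit=True)
cur = conn.cursor()

cur.execute("""
    SELECT QUOTENAME(s.name) + '.' + QUOTENAME(o.name) AS table_name,
           QUOTENAME(i.name) AS index_name,
           ps.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    JOIN sys.objects AS o ON o.object_id = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE ps.avg_fragmentation_in_percent > 5 AND i.name IS NOT NULL
""")

for table_name, index_name, fragmentation in cur.fetchall():
    # Commonly quoted guidance: reorganize between 5% and 30%, rebuild above 30%.
    action = "REBUILD" if fragmentation > 30 else "REORGANIZE"
    cur.execute(f"ALTER INDEX {index_name} ON {table_name} {action}")

conn.close()
```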

5. Detection Of Corruption

How will you, as a DBA, make sure that data remains intact and recoverable in the case of downtime or a system malfunction? Having a full disaster recovery plan is the best solution. Most corruption is caused by hardware failure somewhere in the I/O subsystem: the OS, device drivers, RAID controllers, disk drives, networks and cables. Another common cause is a power outage, especially while a database page is being written to disk, which can leave an incomplete page image on the drive. SQL Server 2005 and later include a page checksum mechanism that can detect corruption on any page.
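
As an illustration, a routine corruption check might make sure page checksums are enabled and then run DBCC CHECKDB, roughly as in the sketch below; the connection details and database name are placeholders.

```python
# Sketch: enable page checksums and run a consistency check.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
                      "DATABASE=master;Trusted_Connection=yes;",
                      autocommit=True)
cur = conn.cursor()

# CHECKSUM page verification (SQL Server 2005+) lets corrupt pages be detected
# when they are read back from disk.
cur.execute("ALTER DATABASE AdventureWorks SET PAGE_VERIFY CHECKSUM")

# DBCC CHECKDB scans the whole database for corruption; run it regularly.
cur.execute("DBCC CHECKDB (AdventureWorks) WITH NO_INFOMSGS, ALL_ERRORMSGS")
while cur.nextset():
    pass

conn.close()
```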


Things To Do When Upgrading to SQL 2014

You need to consider some essential points before upgrading your SQL Server to the 2014 version. You may not need to upgrade the entire platform, but there’s a good chance you will need to upgrade your existing databases. The points to consider include:

  • Upgrading from an Evaluation edition of a previous SQL Server version is not supported.
  • Failover in WOW (32-bit on 64-bit Windows) mode is not supported, just as in previous versions.
  • You can upgrade from the Enterprise, Web, Developer, Express or Workgroup editions to the Enterprise and Business Intelligence editions. The downside is that once you have upgraded you can’t go back to the previous version, so it may be safest to rely on backup and restore.
  • Before upgrading to 2014, DBAs should review the changes in the new version and pinpoint any elements of their existing code that are no longer supported.
  • Verify that the functionality you are using is supported in the version/edition you wish to upgrade to.
  • Developers and DBAs should ensure that Windows Authentication is enabled for SQL Server Agent and that, in the default configuration, the SQL Server Agent service account is a member of the sysadmin role.
  • Check whether the existing software and hardware can support the new version. If not, the hardware team will need to recommend more memory or a higher CPU count to sustain the growth of data following the upgrade.
  • Cross-platform upgrade is not supported. If your current installation is 32-bit, you cannot upgrade to the 64-bit 2014 version using Setup. The best way around this is to detach the database from the 32-bit instance and attach it to the 64-bit 2014 instance (or back it up and restore it, as sketched after this list).
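
For that last point, the move usually comes down to a backup-and-restore (or detach/attach) onto the new 64-bit 2014 instance. The sketch below restores a backup and then raises the compatibility level; the server name, file paths and logical file names are placeholder assumptions.

```python
# Sketch: restore a backup taken on the old 32-bit instance onto a 64-bit
# SQL Server 2014 instance, then raise the compatibility level.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=new-sql2014-box;DATABASE=master;Trusted_Connection=yes;",
                      autocommit=True)
cur = conn.cursor()

cur.execute("""
    RESTORE DATABASE AdventureWorks
    FROM DISK = N'D:\\Backups\\AdventureWorks_full.bak'
    WITH MOVE N'AdventureWorks_Data' TO N'D:\\Data\\AdventureWorks.mdf',
         MOVE N'AdventureWorks_Log'  TO N'E:\\Logs\\AdventureWorks_log.ldf',
         RECOVERY
""")
while cur.nextset():
    pass

# SQL Server 2014 databases use compatibility level 120.
cur.execute("ALTER DATABASE AdventureWorks SET COMPATIBILITY_LEVEL = 120")

conn.close()
```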

Disaster can happen at any time, so we need to make sure we’re prepared for anything. Having a contingency plan is the best weapon against any SQL Server issue. It will save you time, money and a lot of headaches in the future.

About Author

Jon specialises in research and content creation for content marketing campaigns. He’s worked on campaigns for some of Australia's largest brands across technology, cloud computing, renewable energy and corporate event management. He’s an avid scooterist and musician.