5 ways to improve your backup and restore process

So how can backup performance be improved? And which factors contribute to failure? Here are some answers to those questions.

The monitoring process

A faulty monitoring process inhibits the ability to identify backup failures as they occur, and to understand why they happen. Missed failures only lead to more failures down the road, along with hours spent combing through logs, building spreadsheets, and diagnosing issues by hand. The truth is that many backup systems were designed with only a handful of servers in mind; as a company grows, these systems become overburdened.


What is the solution? Organizations that want a high-performing backup environment need a system that automatically gathers this data and offers a comprehensive view of the entire environment, along with a view of each individual server and client.
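
As a rough illustration of that kind of aggregation, the sketch below assumes, purely hypothetically, that each backup server drops a daily CSV of job results (with a status column) into a shared directory, and a monitoring job rolls them up into per-server and environment-wide summaries. The paths and file layout are placeholders, not any product's API.

    import csv
    import glob
    from collections import Counter
    from pathlib import Path

    def summarize(results_dir):
        """Roll per-server CSV job results into one environment-wide view."""
        overall = Counter()
        for path in sorted(glob.glob(f"{results_dir}/*.csv")):
            per_server = Counter()
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    per_server[row["status"]] += 1   # e.g. OK / FAILED / MISSED
            overall.update(per_server)
            print(f"{Path(path).stem}: {dict(per_server)}")
        print(f"ALL SERVERS: {dict(overall)}")

    summarize("/var/log/backup-results")  # hypothetical drop directory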

Missed alerts

The backup system sends alerts when it detects a particular risk. This works well when a department is first established. Over time, however, things change: people leave, and applications, servers, and backup devices are swapped out, all of which make this way of routing alerts error prone.

How do you ensure that alerts are not missed? Real-time alerts should go to several administrators through a command center, delivered by email, SNMP integration, and SMS. This gives the appropriate person an immediate way to respond directly, saving both time and money in the end.
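
As a minimal sketch of that fan-out, the snippet below sends one alert to several channels over SMTP; every host name and address is a placeholder, and the second recipient stands in for an email-to-SMS gateway. An SNMP trap sender would hang off the same dispatch point.

    import smtplib
    from email.message import EmailMessage

    # Placeholder recipients: an on-call mailbox and an email-to-SMS gateway.
    RECIPIENTS = ["oncall@example.com", "15551234567@sms-gateway.example.com"]

    def send_alert(subject, body):
        """Fan one alert out to every configured channel in real time."""
        for rcpt in RECIPIENTS:
            msg = EmailMessage()
            msg["From"] = "backup-monitor@example.com"
            msg["To"] = rcpt
            msg["Subject"] = subject
            msg.set_content(body)
            with smtplib.SMTP("mail.example.com") as smtp:  # placeholder relay
                smtp.send_message(msg)

    send_alert("Backup FAILED: db01", "Nightly backup of db01 failed at 02:14.")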


Errors with command line driven operations

Though many prefer command line interfaces to accomplish a task quickly, they do not lend themselves to consistent backup operations. Over time, different administrators introduce different ways of doing things, which makes for an error-prone environment.

The backup systems should be fronted with an interface that allows GUI operation of the complete backup environment. That way, the error-prone command line is no longer the standard, merely a personal preference.
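
Until a GUI front end is in place, one interim way to reduce that variance, offered here only as a sketch, is to funnel every invocation through a single shared wrapper so all administrators use the same reviewed flags. The backuptool command and its flags are hypothetical stand-ins for whatever CLI the environment actually uses.

    import subprocess

    def run_backup(client, policy="nightly"):
        """Run a backup with one agreed-on flag set, not per-admin variants."""
        cmd = ["backuptool", "backup",      # hypothetical CLI name
               "--client", client,
               "--policy", policy,
               "--verify"]                  # flags reviewed once, used by all
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"backup of {client} failed: {result.stderr.strip()}")
        return result.returncode

    run_backup("fileserver01")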

Reports and planning are not allotted enough time

Very often, administrators focus on a single report: the one from the system that sent out an alert. That report is important, but it is not enough to keep a backup environment running well. Issues also arise when distributed servers flush data to make room for new data. The collected data is typically retained for only around 14 days, so information pertinent to a failed backup can be flushed before anyone looks at it. When that happens, it is next to impossible to understand the failure.

Best practice is to store data from primary and distributed servers independently, in their own databases. That way, it does not interfere with daily backup operations, and the data remains available to view and use at any time.
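
As one way to picture such independent retention, the sketch below copies job records into a standalone SQLite database that is never flushed; the table layout and fields are illustrative, not any product's schema.

    import sqlite3

    def archive_jobs(rows):
        """Copy job records into a standalone database that is never flushed."""
        con = sqlite3.connect("backup_history.db")
        con.execute("""CREATE TABLE IF NOT EXISTS jobs (
                           job_id   TEXT PRIMARY KEY,
                           client   TEXT,
                           status   TEXT,
                           ended_at TEXT)""")
        con.executemany("INSERT OR IGNORE INTO jobs VALUES (?, ?, ?, ?)", rows)
        con.commit()
        con.close()

    # Illustrative record; a real job would pull these rows from each server
    # before its roughly 14-day flush window closes.
    archive_jobs([("j-1042", "db01", "FAILED", "2024-05-01T02:14:00")])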

Misconfigurations

If backup and recovery systems are misconfigured, operations will run into difficulties. This often happens in fast-growing, changing environments. Some typical difficulties are:

  • Recovery logs are improperly sized – Moving from the recovery log to the database is not always a smooth transition. After the database is backed up, the recovery log is flushed. If the log fills before the database backup completes, new information stops being recorded. The recovery log's file space must be increased manually and the system restarted to avoid an unrecoverable emergency.
  • Disk-to-tape mismatch – Where tape is still in use, backups are written first to disk and then from disk to tape. With a small disk pool, backup delays and missed windows are likely. Note that only one thread can write from a disk pool to tape, so the tape must keep pace with the data arriving in the disk pool, or the pool will stop accepting backup data (a back-of-the-envelope check follows this list).
  • Overload during simultaneous backup sessions – In new environments where clients are steadily added, client numbers often exceed what the backup system can support, resulting in the dreaded missed backup window.
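
To make the disk-to-tape mismatch concrete, here is the back-of-the-envelope sanity check mentioned above, with purely illustrative numbers: if clients write into the disk pool faster than the single tape thread can drain it, the pool's remaining free space sets a hard deadline.

    def hours_until_pool_full(pool_free_gb, ingest_gb_per_hr, tape_drain_gb_per_hr):
        """How long until the disk pool fills, with one tape thread draining it."""
        net_growth = ingest_gb_per_hr - tape_drain_gb_per_hr
        if net_growth <= 0:
            return None  # tape keeps up; the pool never fills
        return pool_free_gb / net_growth

    t = hours_until_pool_full(pool_free_gb=500, ingest_gb_per_hr=120,
                              tape_drain_gb_per_hr=80)
    print(f"disk pool fills in {t:.1f} hours" if t else "tape keeps up with ingest")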

How do you prevent such situations?

Backup administrators must use monitoring systems that watch their environments closely. That way, even when mistakes happen, the errors can be found quickly.

An efficient backup program does not let teams walk away and forget about the health of their environment. On the contrary, backup software must be paired with an excellent monitoring system to give a clear picture of the whole backup sphere.

Many wonder whether backup operations are an art or a science. The truth is that backup environments are both. They are scientifically wired to succeed based on failure rates, capacity, speed, and time, and they are continuously changing and under pressure. That said, administrators must also be skilled in the art of using backup software properly, with the knowledge to judge when a given tool will be most effective.

Tools that help forecast difficulties and visualize trends must also be acquired. The art of managing a backup environment should be passed on to future administrators with great care, as it carries a steep learning curve.
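
As one illustration of the kind of forecasting such tools perform, the sketch below fits a least-squares trend line to a week of made-up daily backup volumes and projects when a fixed capacity would be reached; real tooling would pull these figures from the retention database sketched earlier.

    def linear_fit(ys):
        """Least-squares line through equally spaced samples: (slope, intercept)."""
        n = len(ys)
        mean_x, mean_y = (n - 1) / 2, sum(ys) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
                 / sum((x - mean_x) ** 2 for x in range(n)))
        return slope, mean_y - slope * mean_x

    daily_tb = [4.1, 4.3, 4.2, 4.6, 4.8, 5.0, 5.1]  # illustrative daily volumes
    slope, _ = linear_fit(daily_tb)
    capacity_tb = 8.0
    if slope > 0:
        days_left = (capacity_tb - daily_tb[-1]) / slope
        print(f"growing {slope:.2f} TB/day; ~{days_left:.0f} days to capacity")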

Mike Johnson is a technical writer for Rocket Software. He writes on topics such as data protection, backup monitoring, and reporting software. He holds a Bachelor of Science in Management from DeVry University.
