Archive | Data Backups

How to back up Microsoft Outlook settings files

If you have customized settings, such as toolbar settings and Favorites, that you want to replicate on another computer or restore to your computer, you might want to include the following files in your backup:

  • Outcmd.dat: This file stores toolbar and menu settings.
  • ProfileName.fav: This is your Favorites file, which includes the settings for the Outlook bar (only applies to Outlook 2002 and older versions).
  • ProfileName.xml: This file stores the Navigation Pane preferences (only applies to Outlook 2003 and newer versions).
  • ProfileName.nk2: This file stores the Nicknames for AutoComplete.
  • Signature files: Each signature has its own file and uses the same name as the signature that you used when you created it. For example, if you create a signature named MySig, the following files are created in the Signatures folder:
    • MySig.htm: This file stores the HTML Auto signature.
    • MySig.rtf: This file stores the Microsoft Outlook Rich Text Format (RTF) Auto signature.
    • MySig.txt: This file stores the plain text format Auto signature.

    The location of the signature files depends on the version of Windows that you are running. Use this list to find the appropriate location:

    • Windows Vista or Windows 7: Drive\Users\Username\AppData\Roaming\Microsoft\Signatures, where Drive represents the drive that Outlook was installed to and Username represents the user name that Outlook was installed under.
    • Windows XP or Windows 2000: Drive\Documents and Settings\Username\Local Settings\Application Data\Microsoft\Outlook, where Drive represents the drive that Outlook was installed to and Username represents the user name that Outlook was installed under.
    • Windows 98 or Windows Me: Drive\Windows\Local Settings\Application Data, where Drive represents the drive that Outlook was installed to.

Note If you use Microsoft Word as your e-mail editor, signatures are stored in the Normal.dot file as AutoText entries. You should also back up this file.
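
If you script your backups, the file copies above are easy to automate. The following is a minimal Python sketch, not an official tool; the folder locations and the D:\OutlookSettingsBackup destination are assumptions you should adjust for your own version of Windows and Outlook:

    import os
    import shutil
    from pathlib import Path

    # Assumed locations; adjust for your Windows/Outlook version.
    APPDATA = Path(os.environ.get("APPDATA", ""))            # e.g. C:\Users\<name>\AppData\Roaming
    LOCALAPPDATA = Path(os.environ.get("LOCALAPPDATA", ""))  # e.g. C:\Users\<name>\AppData\Local
    BACKUP_DIR = Path(r"D:\OutlookSettingsBackup")           # hypothetical destination

    candidates = {
        "Roaming_Outlook": APPDATA / "Microsoft" / "Outlook",      # Outcmd.dat, ProfileName.xml, .nk2 commonly here
        "Local_Outlook": LOCALAPPDATA / "Microsoft" / "Outlook",   # some versions keep settings files here instead
        "Signatures": APPDATA / "Microsoft" / "Signatures",        # MySig.htm / MySig.rtf / MySig.txt
    }

    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    for label, src in candidates.items():
        if src.is_dir():
            dest = BACKUP_DIR / label
            # dirs_exist_ok lets a later run refresh an earlier backup (Python 3.8+)
            shutil.copytree(src, dest, dirs_exist_ok=True)
            print(f"Backed up {src} -> {dest}")
        else:
            print(f"Not found, skipping: {src}")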


How to back up Microsoft Outlook Personal Address Books

Your Personal Address Book might contain e-mail addresses and contact information that is not included in an Outlook Address Book or contact list. The Outlook Address Book can be kept either in an Exchange Server mailbox or in a .pst file. However, the Personal Address Book creates a separate file that is stored on your hard disk drive. To make sure that this address book is backed up, you must include any files that have the .pab extension in your backup process.

Follow these steps to locate your Personal Address Book file:

  1. If you are running Windows Vista: Click Start.
     If you are running Windows XP: Click Start, and then click Search.
     If you are running Microsoft Windows 95 or Microsoft Windows 98: Click Start, point to Find, and then click Files or Folders.
     If you are running Microsoft Windows 2000 or Microsoft Windows Millennium Edition (Me): Click Start, point to Search, and then click For Files or Folders.
  2. Type *.pab, and then press ENTER or click Find Now.
  3. Note the location of the .pab file. Use My Computer or Windows Explorer to copy the .pab file to the same folder or storage medium that contains the backup of the .pst file.
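
The same search-and-copy can be scripted. Here is a minimal Python sketch; both paths are illustrative and should be adjusted to your own drives:

    import os
    import shutil
    from pathlib import Path

    SEARCH_ROOT = "C:/"                     # drive to search; adjust as needed
    BACKUP_DIR = Path(r"D:\OutlookBackup")  # hypothetical backup destination

    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    # os.walk skips folders it cannot read, so a whole-drive scan won't crash.
    for dirpath, _dirnames, filenames in os.walk(SEARCH_ROOT):
        for name in filenames:
            if name.lower().endswith(".pab"):
                src = Path(dirpath) / name
                shutil.copy2(src, BACKUP_DIR / name)  # copy2 preserves timestamps
                print(f"Found {src} -> copied to {BACKUP_DIR / name}")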

You can use this backup to restore your Personal Address Book to your computer or transfer it to another computer. Follow these steps to restore the Personal Address Book:

  1. Close any messaging programs such as Outlook, Microsoft Exchange, or Windows Messaging.
  2. Click Start, and then click Run. Copy and paste (or type) the following command in the Open box, and then press ENTER:
    control panel

    Control Panel opens.

    Note If you see the Pick a category screen, click User Accounts.

  3. Double-click the Mail icon.
  4. Click Show Profiles.
  5. Click the appropriate profile, and then click Properties.
  6. Click Email Accounts.
  7. Click Add a New Directory or Address Book, and then click Next.
  8. Click Additional Address Books, and then click Next.
  9. Click Personal Address Book, and then click Next.
  10. Type the path and the name of the Personal Address Book file that you want to restore, click Apply, and then click OK.
  11. Click Close, and then click OK.

Note The Outlook Address Book is a service that the profile uses to make it easier to use a Contacts folder in a Mailbox, Personal Folder File, or Public Folder as an e-mail address book. The Outlook Address Book itself contains no data that has to be saved.


How to transfer Outlook data from one computer to another computer

You cannot share or synchronize .pst files between computers. However, you can still transfer Outlook data from one computer to another.

You might also want to create a new, secondary .pst file that is intended for transferring data only. Save the data that you want to transfer in this new .pst file and omit any data that you do not want to transfer. If you need to make a secondary .pst file to store data for transfer between two different computers, or for backup purposes, use the following steps:

  1. On the File menu, point to New, and then click Outlook Data File.
  2. Type a unique name for the new .pst file, for example, type Transfer.pst, and then click OK.
  3. Type a display name for the Personal Folders file, and then click OK.
  4. Close Outlook.

Follow these steps to copy an existing .pst file:

  1. Use the instructions in the “How to make a backup copy of a .pst file” section to make a backup copy of the .pst file that you want to transfer. Make sure that you copy the backup .pst file to a CD-ROM or other kind of removable media.
  2. Copy the backup .pst file from the removable media to the second computer.
  3. Follow the steps in the “How to import .pst file data into Outlook” section to import the .pst file data into Outlook on the second computer.
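
Because a .pst file can be large, it is worth verifying the copy before trusting it. A minimal Python sketch of a copy-and-verify step, with illustrative paths:

    import hashlib
    import shutil
    from pathlib import Path

    def sha256(path: Path, chunk: int = 1 << 20) -> str:
        """Hash a file in 1 MB chunks so large .pst files don't exhaust memory."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            while block := f.read(chunk):   # walrus operator: Python 3.8+
                h.update(block)
        return h.hexdigest()

    src = Path(r"C:\Users\Me\Documents\Outlook Files\Transfer.pst")  # illustrative source
    dst = Path(r"E:\Transfer.pst")                                   # removable media

    shutil.copy2(src, dst)
    # Compare digests so you know the backup is byte-for-byte identical.
    assert sha256(src) == sha256(dst), "Copy verification failed!"
    print("Copy verified OK")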


4 steps to preventing server downtime

Eliminating potential single points of failure is a time-tested strategy for reducing the
risk of downtime and data loss. Typically, network administrators or computer consultants do this by
introducing redundancy into the application delivery infrastructure and by automating fault
monitoring and correction to ensure a rapid response to problems as they arise. Most leading
companies adopting best practices for protecting critical applications and data also
consider the potential failure of an entire site, establishing redundant systems at
an alternative site to protect against site-wide disasters.

STEP #1 – PROTECT AGAINST SERVER FAILURES WITH
HARDWARE AND COMPONENT REDUNDANCY

Don't be a cheapskate with your own business by using low-quality server and network hardware. Use high-quality hardware.
Unplanned downtime can be caused by a number of different events, including:

• Catastrophic server failures caused by memory, processor, or motherboard failures

• Server component failures, including power supplies, fans, internal disks, disk controllers, host bus adapters, and network adapters

Server core components include power supplies, fans, memory, CPUs, and main logic
boards. Purchasing robust, name-brand servers, performing recommended
preventative maintenance, and monitoring server errors for signs of future problems
can all help reduce the chances of downtime due to catastrophic server
failure.

You can reduce downtime caused by server component failures by adding
redundancy at the component level. Examples include redundant power supplies and cooling,
ECC memory with the ability to correct single-bit memory errors, teamed
network adapters, and RAID disk arrays.
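
Monitoring for early warning signs can also be scripted. Here is a minimal sketch, assuming the smartmontools package (smartctl) is installed and that the device names below match your system:

    import subprocess

    # Disks to check; device names are examples and vary by system.
    DISKS = ["/dev/sda", "/dev/sdb"]

    for disk in DISKS:
        # 'smartctl -H' prints the drive's overall SMART health verdict.
        result = subprocess.run(["smartctl", "-H", disk],
                                capture_output=True, text=True)
        status = "PASSED" if "PASSED" in result.stdout else "CHECK DRIVE"
        print(f"{disk}: {status}")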

STEP #2 – PROTECT AGAINST STORAGE FAILURES WITH
STORAGE DEVICE REDUNDANCY AND RAID

Storage protection relies on device redundancy combined with RAID storage
algorithms to protect data access and data integrity from hardware failures. There are
distinct issues for both local disk storage and for shared, network storage.

For local storage, it is quite easy to add extra disks configured with RAID protection.
A second disk controller is also required to prevent the controller itself from being a
single point of failure.

Access to shared storage relies on either a Fibre Channel or Ethernet storage network.
To assure uninterrupted access to shared storage, these networks must be designed
to eliminate all single points of failure. This requires redundancy of network paths,
network switches, and network connections to each storage array.

STEP #3 – PROTECT AGAINST NETWORK FAILURES WITH
REDUNDANT NETWORK PATHS, SWITCHES AND ROUTERS

The network infrastructure itself must be fault-tolerant, consisting of redundant
network paths, switches, routers and other network elements. Server connections can
also be duplicated to eliminate fail-overs caused by the failure of a single server or
network component.

Take care to ensure that the physical network hardware does not share common
components. For example, dual-ported network cards share common hardware logic,
and a single card failure can disable both ports. Full redundancy requires either two separate adapters or the combination of a built-in network port along with a separate network adapter.

STEP #4 – PROTECT AGAINST SITE FAILURES WITH DATA
REPLICATION TO ANOTHER SITE

The causes of site failures range from an air conditioning failure or leaking roof
that affects a single building, to a power failure that affects a limited local area, to a
major hurricane that affects a large geographic area. Site disruptions can last
anywhere from a few hours to days or even weeks.

There are two methods for dealing with site disasters. One method is to tightly couple
redundant servers across high-speed, low-latency links to provide zero data loss and
zero downtime. The other method is to loosely couple redundant servers over
medium-speed, higher-latency, longer-distance lines to provide a disaster recovery
(DR) capability, where a remote server can be restarted with a copy of the application
database missing only the last few updates. In the latter case, asynchronous data
replication is used to keep a backup copy of the data.

Combining data replication with error detection and failover tools can help get a
disaster recovery site up and running in minutes or hours, rather than days.
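
To make the asynchronous option concrete, here is a minimal Python sketch that periodically copies files changed since the last pass to a standby location. Every path and the interval are illustrative; real deployments would use replication features built into the storage, database, or backup software:

    import shutil
    import time
    from pathlib import Path

    PRIMARY = Path(r"D:\data\primary")    # production data (illustrative)
    REPLICA = Path(r"\\dr-site\replica")  # standby copy at the DR site (illustrative)
    INTERVAL = 300                        # replicate every 5 minutes

    def replicate_changed(src: Path, dst: Path) -> int:
        """Copy files that are new or newer than the replica's copy."""
        copied = 0
        for f in src.rglob("*"):
            if not f.is_file():
                continue
            target = dst / f.relative_to(src)
            if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)
                copied += 1
        return copied

    while True:
        print(f"Replicated {replicate_changed(PRIMARY, REPLICA)} changed file(s)")
        time.sleep(INTERVAL)  # asynchronous: the primary never waits on the copy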


SOURCES OF SERVER AND NETWORK DOWNTIME

Unplanned server and network downtime can be caused by a number of different events:

• Catastrophic server failures caused by memory, processor, or motherboard failures

• Server component failures, including power supplies, fans, internal disks, disk controllers, host bus adapters, and network adapters

• Software failures of the operating system, middleware, or applications

• Site problems such as power failures, network disruptions, fire, flooding, or natural disasters

To protect critical applications from downtime, you need to take steps to protect
against each potential source of downtime.

Eliminating potential single points of failure is a time-tested technical strategy for reducing the
risk of downtime and data loss. Typically, network administrators do this by introducing redundancy in
the application delivery infrastructure, and automating the process of monitoring and
correcting faults to ensure rapid response to problems as they arise. Most leading
companies adopting best practices for protecting critical applications and data also
look at the potential for the failure of an entire site, establishing redundant systems at
an alternative site to protect against site-wide disasters.


THE IMPACT OF NETWORK AND/OR SERVER DOWNTIME

A failure of a critical Microsoft Windows application can lead to two types of losses:

• Loss of the application service – the impact of downtime varies with the
application and the business. For example, for some businesses, email can
be an absolutely business-critical service that costs thousands of dollars a
minute when unavailable.

• Loss of data – the potential loss of data due to an outage can have
significant legal and financial impact, again depending on the specific type of
application.

In determining the impact of downtime, you must understand the cost to your
business in downtime per minute or hour. In some cases, you can determine a
quantifiable cost (orders not taken). Other, less direct costs may include loss of
reputation and customer churn.
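
A back-of-the-envelope calculation makes this concrete. Every figure below is an illustrative assumption, not a benchmark:

    # Illustrative downtime-cost estimate; every figure is an assumption.
    revenue_per_hour = 5_000.0   # revenue normally earned per hour ($)
    employees_idle   = 12        # staff who cannot work during the outage
    loaded_wage      = 40.0      # average loaded hourly wage ($)
    outage_hours     = 3.5

    direct_loss = revenue_per_hour * outage_hours
    labor_loss  = employees_idle * loaded_wage * outage_hours

    print(f"Direct revenue loss: ${direct_loss:,.2f}")
    print(f"Idle labor cost:     ${labor_loss:,.2f}")
    print(f"Total (excluding reputation and churn): ${direct_loss + labor_loss:,.2f}")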

The loss of production data can also be very costly, for a variety of reasons. In the
manufacturing environment, the loss of data could affect compliance with regulations,
leading to wasted product, fines, and potentially hazardous situations. For example, if
a pharmaceutical company that is manufacturing drugs does not show all of the
records of its collected data from the manufacturing process, the FDA could force the
company to throw away its entire batch of drugs. Because it is critical to know the
value for every variable when manufacturing drugs, the company could face fines for
not complying with FDA regulations.

Publicly-traded companies may need to ensure the integrity of financial data, while
financial institutions must adhere to SEC regulations for maintaining and protecting
data. For monitoring and control software, data loss and downtime interrupts your
ability to react to events, alarms, or changes that require immediate corrective action.

The bottom line: downtime is very expensive, and preventing it should be a top priority in any business operation.


The Art of High Availability

All organizations are becoming increasingly reliant upon their computer systems. The
availability of those systems can be the difference between the organization succeeding
and failing. A commercial organization that fails is out of business with the consequences
rippling out to suppliers, customers, and the community.

This series will examine how we can configure our Windows Server 2008 environments to
provide the level of availability our organizations need. The topics we will cover include:

• The Art of High Availability—What do we mean by high availability? Why do we
need it, and how do we achieve it?

• Windows Server 2008 Native Technologies—What does Windows Server 2008
bring to the high-availability game, and how can we best use it?

• Non-Native Options for High Availability—Are there other ways of achieving high
availability, and how can we integrate these solutions into our environments?

The first question we need to consider is why we need highly available systems.

Why Do We Need It?
This question can be turned on its head by asking “Do all of our systems need to be highly
available?” The answer for many, if not most, organizations is no. The art of high
availability comes in deciding which systems need to be made highly available and how this
is going to be achieved. When thinking about these systems, we need to consider the effects
of the systems not being available.

Downtime Hurts
Downtime is when the computer system is unavailable to the user or customer and the business
process cannot be completed. If the server is up and the database is online but a network
problem prevents access, the system is suffering downtime. Availability is an end-to-end
activity. Downtime hurts in two ways. First, if a system is unavailable, the business process it
supports cannot be completed and there is an immediate loss of revenue. This could be due
to:

  • Customer orders not being placed or being lost
  • Staff not working
  • Orders not being processed

The second way that downtime hurts is loss of reputation. This loss can be even more
damaging in the long term if customers decide that your organization cannot be trusted to
deliver and they turn to a competitor. The ability to gain business increases with ease of
communication and access. The converse is that the ability to lose business increases just
as fast, if not faster.

Mission Critical Systems on Microsoft Windows
Many critical business systems are hosted on the Microsoft Windows platform. These can be
customer-facing or internal, but without them, the business grinds to a halt. Email may not seem to be
a critical system, but it is essential to the modern business: in most businesses, more than
60% of person-to-person communication is via email, both internal and external. A company
that is unresponsive to communications is judged, perhaps harshly, as being out of business.
That perception can become reality if the outage goes on too long.

24 × 7 Business Culture
The “Global Village” concept has been accelerated by the adoption of the Internet for
business purposes. Globalization in this case means that business can come from anywhere
in the world—not necessarily your own time zone. If your business competes at this level,
high availability isn’t an option, it’s a necessity.

Legislation
Industries such as financial services and the health sector are required to protect
the data they store. This requirement can involve the availability of the data. In other cases,
the systems must be highly available to meet safety requirements.

Once you know why you need it, you need to define what is meant by high availability.

What Is High Availability?
High availability is usually expressed in terms of a number of “9”s. Four nines is 99.99%
availability. The ultimate goal is often expressed as 5 “9”s availability (99.999%), which
equates to five and a quarter minutes of downtime per year. The more nines we need, the
greater the cost to achieve that level of protection.
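
The arithmetic behind the nines is simple; a quick Python sketch converts each availability target into allowed downtime per year:

    # Convert an availability target ("number of nines") into allowed downtime.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for availability in (99.0, 99.9, 99.99, 99.999):
        allowed = MINUTES_PER_YEAR * (1 - availability / 100)
        print(f"{availability:>7}% -> {allowed:8.1f} minutes of downtime per year")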

One common argument concerns scheduled downtime. If downtime is scheduled, for example, for
application of a service pack, does that mean the system is unavailable? If the system is
counted as unavailable, any Service Level Agreements (SLAs) on downtime will probably
be broken. In hosting or outsourcing scenarios, this could lead to financial penalties.
However, if scheduled downtime doesn’t mean the system is counted as unavailable,
impressive availability figures can be achieved—but are they a true reflection of
availability to the users? There is no simple answer to these questions, but all systems
require preventative maintenance or they will fail. The disruption to service can be
minimized (for example, by patching the nodes of a cluster in sequence) but cannot be
completely eliminated. Probably the best that can be achieved is to ensure that
maintenance windows are negotiated into the SLA.

These measurements are normally taken against the servers hosting the system. As we
have seen, the server being available doesn’t necessarily mean the system is available. We
have to extend our definition of highly available from protecting the server to also include
protecting the data.

The Server Clustering service built into Microsoft Windows is often our first thought for protecting the
server. In the event of failure, the service automatically fails over to a standby server, and
the business system remains available. However, this doesn't protect the data: a failure in
the disk system, or even a network failure, can still make the system unavailable.

Do We Still Need to Back Up Our Servers and Data?
One common question is "Do I still need to take a backup?" The only possible answer is
YES! High availability is not, and never can be, a substitute for a well-planned backup
regimen. Backup is your ultimate "get out of jail" card: when all else fails, you can always
restore from backup. However, this presupposes a few points.

  • Test restores have been performed against the backup media. The last place you
    want to be is explaining why a business-critical system cannot be restored because
    the tapes cannot be read.
  • A plan exists to perform the restore, and it has been tested and practiced. Again, you
    don't want to be performing recoveries where the systems and steps necessary for
    recovery are not understood.

Backup also forms an essential part of your disaster recovery planning.

Disaster Recovery vs. High Availability
These two topics, high availability and disaster recovery, are often thought of as being the
same thing. They are related but separate topics. High availability can be best summed up
as "keeping the lights on": it is concerned with keeping our business processes working and
dealing with day-to-day issues. Disaster recovery is the set of processes and procedures
required to recover critical infrastructure after a natural or man-made disaster. The important
point of disaster recovery planning is restoring the systems that are critical to the business
in the shortest possible time.

Traditionally, these are two separate subjects, but the technologies are converging. One
common disaster recovery technique is replicating the data to a standby data center. In the
event of a disaster, this center is brought online and business continues. There are some
applications, such as relational database systems and email systems, that can manage the
data replication to another location. At one end of the scale, we have a simple data
replication technique with a manual procedure required to bring the standby data online in
place of the primary data source. At the other end is full database mirroring, where
transactions are committed to both the primary and mirror databases and failover to the
mirror can be triggered automatically if applications lose access to the primary. In a
geographically dispersed organization where systems are accessed over the WAN, these
techniques can supply both high availability and disaster recovery.

We have seen why we need high availability and what it is. We will now consider how we
are going to achieve the required level of high availability.

Achieving High Availability
When high availability is discussed, the usual assumption is that we are talking about
clustering Windows systems. In fact, technology is one of three areas that need to be in
place before high availability works properly:

  • People
  • Processes
  • Technology

People and Processes
These are the two points that are commonly overlooked. I have often heard people say that
clustering is hard or that they had a cluster for the application but still had a failure. More
often than not, these issues come down to a failure of the people and processes rather than
the technology.

The first question that should be asked is “Who owns the system?” The simple answer is
that IT owns the system. This is incorrect. There should be an established business owner
for all critical systems. They are the people who make decisions regarding the system from
a business perspective—especially decisions concerning potential downtime. A technical
owner may also be established. If there is no technical owner, multiple people try to make
decisions that are often conflicting. This can have a serious impact on availability.
Ownership implies responsibility and accountability. With these in place, it becomes
someone’s job to ensure the system remains available.

A second major issue is the skills and knowledge of the people administering highly
available systems. Do they really understand the technologies they are administering?
Unfortunately, the answer is often that they don’t. We wouldn’t make an untrained or
unskilled administrator responsible for a mainframe or a large UNIX system. We should
ensure the same standards are applied to our highly available Windows systems. I once
worked on a large Exchange 5.5 to Exchange 2003 migration. This involved a number of
multi-node server clusters, each running several instances of Microsoft Exchange. One of the Exchange
administrators asked me, "Why do I need to know anything about Active Directory?" Given
the tight integration between Exchange and Active Directory (AD), I found this an
incredible question. This was definitely a case of an untrained and unskilled administrator.

Last, but very definitely not least, we need to consider the processes around our high availability
systems. In particular, two questions need to be answered:

  • Do we have a change control system?
  • Do we follow it?

If the answer to either of these is no, our system won’t be highly available for very long. In
addition, all procedures we perform on our systems should be documented and tested.
They should always be performed as documented.

Technology
Technology will be the major focus of the next two articles, but for now, we need to
consider the wider implications of high availability. We normally concentrate on the
servers and ensure that the hardware has the maximum levels of resiliency. On top of this,
we need to consider other factors:

  • Network—Do we have redundant paths from client to server? Does this include
    LAN, WAN, and Internet access?
  • Does the storage introduce a single point of failure?
  • Has the operating system (OS) been hardened to the correct levels? Is there a
    procedure to ensure it remains hardened?
  • Does our infrastructure in terms of AD, DNS, and DHCP support high availability?
  • Does the application function in a high-availability environment?

Costs
Highly available systems inevitably mean higher costs due to the technology and people we
need to utilize. The more availability we want, the higher the costs rise. A business
decision must be made weighing the cost of implementing the highly available system
against the risk to the business of the system not being available.

This calculation should include the cost of internal downtime together with the potential loss
of business and reputation. When a system is unavailable and people can't work, the final
costs can be huge, leading to the question, "We lost how much?"

Summary
You need high availability data solutions to ensure your business processes keep functioning. This ensures
your revenue streams and your business reputation are protected. We help you achieve high availability
through the correct mixture of people, processes, and technology.


What if you accidentally delete a file or your data gets corrupted? How can you get your data back fast?

This sort of thing happens to small and even large businesses in Orlando, Florida more often than many people believe. Data gets deleted by accident, or a file or files get corrupted.
Hard drives fail, computers crash, and servers get too hot and overheat. These things do happen, and being able to restore data quickly is very important.

One of the nice advantages of disk-based backup is that it can be done quickly, with little disruption to your business's applications and operations. One of the many ways to protect files is with automated daily backups to DAS or a NAS. Most backup solutions allow you to make full backups to disk and frequent incremental backups of data that has changed since the last full backup, daily or even several times a day.

Thanks to fast disk-based incremental backups, if you accidentally delete a file, or if a file, a directory, or an entire system becomes corrupted due to a virus, user error, or hardware failure, you can recover that file or data from a prior state quickly and easily so you can get right back to work.
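
A minimal Python sketch of that full-plus-incremental scheme, with illustrative paths (real backup products add cataloging, retention, and open-file handling):

    import shutil
    import time
    from pathlib import Path

    SOURCE = Path(r"C:\CompanyData")    # data to protect (illustrative)
    BACKUPS = Path(r"\\nas01\backups")  # DAS/NAS target (illustrative UNC path)

    def incremental_backup(src: Path, root: Path) -> Path:
        """Copy only files modified since the newest existing backup set."""
        sets = sorted(root.glob("backup_*"))
        # Simplification: use the newest set's folder mtime as the last-run marker.
        last_run = sets[-1].stat().st_mtime if sets else 0.0  # 0 -> first run is full
        dest = root / time.strftime("backup_%Y%m%d_%H%M%S")
        dest.mkdir(parents=True, exist_ok=True)
        for f in src.rglob("*"):
            if f.is_file() and f.stat().st_mtime > last_run:
                target = dest / f.relative_to(src)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)
        return dest

    print(f"Backup set written to {incremental_backup(SOURCE, BACKUPS)}")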

If you're using a NAS solution to store backups, check to see whether it comes with, or works easily with, software that offers point-in-time backup and recovery. Then you can rest assured that you'll never really lose a precious file or data that you really need for your business.

If you accidentally deleted a file, or if your server or computer crashed,
do you know for sure you could easily and quickly restore your data?

When was the last time you actually tested your backup system?

Far too often we have seen small businesses in and around Orlando, Florida using obsolete backup systems; when something goes wrong, they learn that all along their backup system has not been backing up any data!

TEST YOUR BACKUP SYSTEM and test it often.
Purposely remove a file or files and restore them from your backup.
This is the only way to know for sure your backup system really works.
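
That drill can even be scripted. Here is a minimal Python sketch, where restore_from_backup is a stand-in for whatever restore command your backup software provides, and the paths are illustrative:

    import filecmp
    import shutil
    from pathlib import Path

    TEST_FILE = Path(r"C:\CompanyData\restore_test.txt")  # illustrative file
    SAFE_COPY = Path(r"C:\Temp\restore_test.golden")      # kept aside for comparison

    def restore_from_backup(path: Path) -> None:
        """Stand-in: invoke your backup software's restore for `path` here.
        For illustration, this restores from a plain mirror directory."""
        mirror = Path(r"\\nas01\backups\mirror")          # hypothetical backup location
        shutil.copy2(mirror / path.name, path)

    # 1. Keep a known-good copy, then delete the original on purpose.
    shutil.copy2(TEST_FILE, SAFE_COPY)
    TEST_FILE.unlink()

    # 2. Restore it from backup and verify it matches byte-for-byte.
    restore_from_backup(TEST_FILE)
    assert filecmp.cmp(TEST_FILE, SAFE_COPY, shallow=False), "Restore test FAILED"
    print("Restore test passed: the backup system really works")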


You know you should be backing up your desktop computers and servers, but you don't

You know you should be backing up your desktop computers and laptops, which frequently store the most recent information at your company. Unfortunately, the sad truth is that you and your colleagues probably don’t. Why? Backing up a PC is time consuming and not as easy as it should be, so you put it off, and then you put it off again.

If some of your staff actually take the trouble to back up their PCs, they're probably doing it infrequently, and may in fact be doing it incorrectly. When it comes time to actually recover files and data, you may be unpleasantly surprised.

Operating without a backup strategy is risky behavior if your company is highly dependent on applications and information. If your company falls under federal regulations such as HIPAA or the Sarbanes-Oxley Act, you may be in the unsavory position of having to swallow a fairly steep fine. You don't have to be a large hospital to fall under HIPAA; you could just be a small doctor's office.

That's why just about any business needs to devise a workable strategy for backing up its desktop and laptop PCs and, even more important, for restoring that information when a file is corrupted or lost, or when a power failure or natural disaster takes computer systems down.

For most small- and medium-sized businesses, there are four basic ways to do backups:

  • Backup to Tape (by the way, this is now obsolete technology)
  • Backup to Disk–DAS
  • Backup to Disk–NAS
  • Backup to Disk–SAN

Backup to Tape

Tape was the chosen medium for backup for many years, thanks to its low cost and high reliability. Tape also has the advantage of portability, which means it can be taken off-site easily.

Tape is barely a viable backup medium today, and tape drives have major drawbacks in comparison with today's other backup solutions:

It's slow. Compared to disk storage, tape performance is poor. While tape was viable for backing up the volumes of business data typical in the past, data storage has grown so enormously, and backup windows have shrunk so much in most organizations, that there is often not enough time in the day or night to execute a full tape backup.

It is difficult and time consuming. Somebody must be routinely responsible for loading, rotating, and changing tapes, typically on a daily or weekly basis, and many small businesses don't have the staff time and expertise to take on that responsibility.

It's not easily accessible. Tape is not a random-access medium. Restoring data from tape requires considerable staff time to find, load, and access a file from the tape.

It's not always reliable. Tape backup devices such as autoloaders and tape libraries have mechanical parts that will fail. And if tape backup is not handled the right way, you may never find out about a mechanical failure or user error until you need to restore data from tape.

Given these drawbacks, it is fortunate that there are much better backup solutions today.

Backup to Disk

Hard disk storage used to be expensive and unreliable, but over the years prices have come down and reliability has gone up so much that disk is now a very viable backup medium, as long as you are backing up to more than one hard drive per backup. Backing up to a single cheap external hard drive is cutting corners and is not considered a professional backup solution.

The advantages of disk-based backup are many:

It's fast. There's no comparison between the performance of disk-based backup and restore and that of tape. What might take hours when you're backing up to tape could take minutes when you're backing up to, or restoring data from, a hard disk.

In addition to traditional backup, there are other useful disk-based data protection methods. For example, replication copies data from one disk to a second disk at a separate location. For companies that have little or no backup window, there's little alternative to the performance of disk-based data protection.

It's easy. Once the disk storage is installed, there's no need to load, rotate, or change anything for a long time. You can configure an automatic backup strategy and then let it run on its own.

It's easily accessible. Hard disks are random-access devices, so retrieving a file from a hard disk is almost instantaneous and can usually be done by the user. With tape, you often have to wait several minutes while someone loads the tape and the backup software winds it to the correct spot to retrieve the file.

Disk-based backup can be accomplished using DAS, NAS, or a SAN.

DAS backup can be either PC- or server-based:

PC-based – You can attach an external hard drive to each PC and configure PC-based backup software to do regular backups. This can be practical for one or two PCs, but it quickly becomes impractical for a rapidly growing small business with lots of PCs. You usually have to depend on the PC users to let backups take place, which is risky, particularly if users are on the road frequently.

Server-based – You can install a backup server with its own DAS and back up all your PCs over the LAN. This is a great way to have centralized control over the backup process. However, it does require setting up and maintaining a server, a server operating system, and software, with all the requisite tuning and updating. Servers can also become a network bottleneck if they're pulling data off several PCs over the LAN.

Nevertheless, DAS-based backup can be a viable solution for many small businesses as a speedier alternative to tape. Some organizations back up PCs to DAS for performance and then back up the server-based DAS to tape as a secondary measure for portability, taking the tapes off-site for storage where they can be retrieved in the event of a local disaster.

NAS

NAS makes a great backup solution for many small businesses because it's easy to set up and maintain. Like network-based DAS backup, it lets you push all your PC backups over the network to a single storage device, but unlike DAS, which has to be attached to a server, NAS can be located anywhere on the LAN.

Some NAS products come with their own tightly integrated backup and replication software, tuned and preconfigured to work with that device. That can make setting up and implementing your backup strategy quick and easy. And backups to NAS can be automated, so there's no need for a staff person who has other things to do to take on the daily task of backup, as is required with tape.

If you're looking for extra protection from natural disasters, look for a NAS backup solution that can also replicate over a wide area network to another storage device. You get the off-site advantages of tape without the tape-handling issues.

SAN

With their fast, block-based disk architecture, Storage Area Networks are great solutions for high performance backups. By placing storage on a specialized storage network, SANs take the burden of backup off your regular corporate LAN so the performance of other network applications doesn’t get bogged down.

You don’t have to know Fibre Channel technology to operate a SAN. iSCSI is simple to use, offers very good SAN performance, and runs over typical Ethernet switches.

Even simpler, however, is taking advantage of the iSCSI capabilities offered by some of today's NAS products. Many NAS units can partition off some storage as fast, block-based iSCSI SAN storage. Plug your PC or backup server into the storage with an Ethernet cable, do some simple configuration on the storage device and the host server or PC, and you can run high-speed, SAN-style backups on a portion of your NAS while the rest of the device serves files over the LAN.

The bottom line: if you do not take backing up your data seriously now, you will when your server or computer crashes and you lose all your data. Will you care then?


