Archive | Data Storage

Cloud Computing Provider

Cloud computing has been a buzzword for a while now, and it just keeps getting hotter.

I have 3 important suggestions for anyone considering storing their data in the cloud.

#1 Before you spend a single penny with a cloud service provider, first and foremost call their tech support. Verify you are not stuck in an endless loop of being transferred around, and make sure your call is not transferred overseas to India or elsewhere outside of the United States.

#2 Verify the strength of the company and how long they have been in business.

#3 If the provider you choose passes the above tests to your own satisfaction, go ahead and sign up and pay for one month of service. Upload some “test data,” wait a few days, then log back in and delete your test data. Now call their tech support, tell them you accidentally deleted your data, and see for yourself how fast they are able to restore your data from a backup.

You may be surprised at how good the cloud provider you are testing really is or isn’t, or you may be disgusted. Either way, it’s your data, and you owe it to yourself to test any cloud hosting provider you are considering before you store real data on a server located who knows where that you cannot physically touch.

I will be testing some cloud hosting providers and will provide a complete rating on my personal experiences later on for my blog readers to see themselves.

In the meantime, I wrote an article in 2010 that explains exactly what cloud computing really is.

CLICK HERE to read that article.

Posted in CLOUD Computing, Data Storage

Hard Drive Performance Comparison

I have had many people ask me about the difference between the Seagate ST31000528AS hard drive and the Seagate ST31000524AS hard drive.

The only real difference is performance.
They are both considered high quality hard drives.

Here is a simple comparison:

Seagate ST31000528AS
Series: Barracuda
Interface: SATA 3.0Gb/s
Capacity: 1TB
RPM: 7200
Cache: 32MB
Average Latency: 4.16ms
Form Factor: 3.5 inches

Seagate ST31000524AS
Series: Barracuda
Interface: SATA 6.0Gb/s
Capacity: 1TB
RPM: 7200
Cache: 32MB
Average Latency: 4.16ms
Form Factor: 3.5 inches

So the only real difference between the two Seagate Barracuda hard drives is the interface: the Seagate ST31000528AS uses SATA 3.0Gb/s while the Seagate ST31000524AS uses SATA 6.0Gb/s.

Obviously I would prefer the faster interface, but both are high quality, reliable hard drives.
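As a sanity check on the 4.16ms latency figure in both spec lists above: average rotational latency is simply half of one platter revolution, so it follows directly from the spindle speed. A minimal sketch (the function name is my own):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: on average the head waits half a
    revolution for the target sector, and one revolution takes 60/rpm
    seconds."""
    seconds_per_revolution = 60.0 / rpm
    return 0.5 * seconds_per_revolution * 1000  # convert to milliseconds

# A 7200 RPM drive, like both Barracudas above, averages about 4.17 ms,
# matching the 4.16 ms spec figure (the rounding just differs slightly).
latency = avg_rotational_latency_ms(7200)
```

This is also why both drives show identical latency despite different interfaces: latency is set by the spinning platters, not by the SATA link.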



Posted in Computers, Data Storage, Hard Drives, Hardware

4 Steps to Preventing Server Downtime

Eliminating potential single points of failure is a time-tested strategy for reducing the
risk of downtime and data loss. Typically, network administrators or computer consultants do this by
introducing redundancy in the application delivery infrastructure, and automating the process of
monitoring and correcting faults to ensure rapid response to problems as they arise. Most leading
companies adopting best practices for protecting critical applications and data also
look at the potential for the failure of an entire site, establishing redundant systems at
an alternative site to protect against site-wide disasters.

STEP #1 – PROTECT AGAINST SERVER FAILURES WITH QUALITY HARDWARE. Don’t be a cheapskate with your own business by using low-quality CHEAPO server and network hardware. Use HIGH QUALITY hardware.

Unplanned downtime can be caused by a number of different events, including:

• Catastrophic server failures caused by memory, processor or motherboard faults

• Server component failures including power supplies, fans, internal disks,
disk controllers, host bus adapters and network adapters

• Software failures of the operating system, middleware or application

• Site problems such as power failures, network disruptions, fire, flooding or
natural disasters

Server core components include power supplies, fans, memory, CPUs and main logic
boards. Purchasing robust, name-brand servers, performing recommended
preventative maintenance, and monitoring server errors for signs of future problems
can all help reduce the chances of downtime due to catastrophic server failures.

You can reduce downtime caused by server component failures by adding
redundancy at the component level. Examples are: redundant power and cooling,
ECC memory with the ability to correct single-bit memory errors, teamed
Ethernet cards, and RAID disk storage.


STEP #2 – PROTECT YOUR STORAGE. Storage protection relies on device redundancy combined with RAID storage
algorithms to protect data access and data integrity from hardware failures. There are
distinct issues for both local disk storage and for shared, network storage.

For local storage, it is quite easy to add extra disks configured with RAID protection.
A second disk controller is also required to prevent the controller itself from being a
single point of failure.

Access to shared storage relies on either a Fibre Channel or Ethernet storage network.
To assure uninterrupted access to shared storage, these networks must be designed
to eliminate all single points of failure. This requires redundancy of network paths,
network switches, and network connections to each storage array.


STEP #3 – ELIMINATE NETWORK SINGLE POINTS OF FAILURE. The network infrastructure itself must be fault-tolerant, consisting of redundant
network paths, switches, routers and other network elements. Server connections can
also be duplicated to prevent outages caused by the failure of a single server or
network component.

Take care to ensure that the physical network hardware does not share common
components. For example, dual-ported network cards share common hardware logic,
and a single card failure can disable both ports. Full redundancy requires either two separate adapters or the combination of a built-in network port along with a separate network adapter.


STEP #4 – PLAN FOR SITE DISASTERS. The reasons for site failures range from an air conditioning failure or a leaking roof
that affects a single building, to a power failure that affects a limited local area, to a
major hurricane that affects a large geographic area. Site disruptions can last
anywhere from a few hours to days or even weeks.

There are two methods for dealing with site disasters. One method is to tightly couple
redundant servers across high-speed, low-latency links, to provide zero data loss and
zero downtime. The other method is to loosely couple redundant servers over
medium-speed, higher-latency, longer-distance lines, to provide a disaster recovery
(DR) capability where a remote server can be restarted with a copy of the application
database missing only the last few updates. In the latter case, asynchronous data
replication is used to keep a backup copy of the data.

Combining data replication with error detection and failover tools can help to get a
disaster recovery site up and running in minutes or hours, rather than days.
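The asynchronous trade-off described above can be made concrete with a toy sketch (all class and method names are my own invention, not any real replication product): writes are acknowledged on the primary immediately and shipped to the replica later, so losing the primary site can lose the last few unshipped updates.

```python
from collections import deque

class AsyncReplicatedStore:
    """Toy asynchronous replication: the primary acknowledges writes at
    once, and a background shipper copies them to the replica with lag."""
    def __init__(self):
        self.primary = {}
        self.replica = {}
        self._pending = deque()  # updates not yet shipped to the replica

    def write(self, key, value):
        self.primary[key] = value        # acknowledged immediately
        self._pending.append((key, value))

    def ship(self, batch=1):
        """Ship up to `batch` queued updates to the replica."""
        for _ in range(min(batch, len(self._pending))):
            key, value = self._pending.popleft()
            self.replica[key] = value

store = AsyncReplicatedStore()
store.write("order-1", "placed")
store.write("order-2", "placed")
store.ship(batch=1)   # only order-1 has reached the replica so far
# If the primary site is lost at this moment, the DR copy is missing
# order-2: exactly the "last few updates" exposure of asynchronous DR.
```

A tightly coupled, synchronous design would instead wait for the replica to confirm each write before acknowledging it, trading latency for zero data loss.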

Posted in Computer Repair, Computers, Data Backups, Data Storage, Hard Drives, Hardware, High Availability, How To's, RAID Levels, Servers



The Art of High Availability

All organizations are becoming increasingly reliant upon their computer systems. The
availability of those systems can be the difference between the organization succeeding
and failing. A commercial organization that fails is out of business with the consequences
rippling out to suppliers, customers, and the community.

This series will examine how we can configure our Windows Server 2008 environments to
provide the level of availability our organizations need. The topics we cover will include:

• The Art of High Availability—What do we mean by high availability? Why do we
need it, and how do we achieve it?

• Windows Server 2008 Native Technologies—What does Windows Server 2008
bring to the high-availability game, and how can we best use it?

• Non-Native Options for High Availability—Are there other ways of achieving high
availability, and how can we integrate these solutions into our environments?

The first question we need to consider is why we need highly available systems.

Why Do We Need It?
This question can be turned on its head by asking “Do all of our systems need to be highly
available?” The answer for many, if not most, organizations is no. The art of high
availability comes in deciding which systems need to be made highly available and how this
is going to be achieved. When thinking about these systems, we need to consider the effects
of the systems not being available.

Downtime Hurts
Downtime is when the computer system is unavailable to the user or customer and the business
process cannot be completed. If the server is up and the database is online but a network
problem prevents access, the system is suffering downtime. Availability is an end-to-end
activity. Downtime hurts in two ways. First, if a system is unavailable, the business process it
supports cannot be completed and there is an immediate loss of revenue. This could be due to:

  • Customer orders not being placed or being lost
  • Staff not working
  • Orders not being processed

The second way that downtime hurts is loss of reputation. This loss can be even more
damaging in the long term if customers decide that your organization cannot be trusted to
deliver and they turn to a competitor. The ability to gain business increases with ease of
communication and access. The converse is that the ability to lose business increases just
as fast if not faster.

Mission Critical Systems on Microsoft Windows
Critical business systems are hosted on the Microsoft Windows platform. These can be customer-facing
or internal, but without them, the business grinds to a halt. Email may not seem to be
a critical system, but it is essential to the modern business. More than 60% of person-to-person
communication is via email in most businesses. This includes internal and external
communications. If a company is non-responsive to communications, it is judged, perhaps
harshly, as being out of business. This can become reality if the outage goes on too long.

24 × 7 Business Culture
The “Global Village” concept has been accelerated by the adoption of the Internet for
business purposes. Globalization in this case means that business can come from anywhere
in the world—not necessarily your own time zone. If your business competes at this level,
high availability isn’t an option, it’s a necessity.

Industries such as the financial services and health sector have a requirement to protect
the data they store. This requirement can involve the availability of the data. In other cases,
the systems must be highly available to meet safety requirements.

Once you know why you need it, you need to define what is meant by high availability.

What Is High Availability?
High availability is usually expressed in terms of a number of “9”s. Four nines is 99.99%
availability. The ultimate goal is often expressed as 5 “9”s availability (99.999%), which
equates to five and a quarter minutes of downtime per year. The more nines we need, the
greater the cost to achieve that level of protection.
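The arithmetic behind the “9”s is simple enough to sketch; the function below (the name is my own) converts a number of nines into allowed downtime per year:

```python
def downtime_minutes_per_year(nines):
    """Allowed downtime per year for an availability of `nines` nines,
    e.g. nines=5 means 99.999% availability."""
    unavailability = 10 ** (-nines)       # fraction of the year down
    minutes_per_year = 365.25 * 24 * 60   # about 525,960 minutes
    return unavailability * minutes_per_year

# Five nines allows roughly 5.26 minutes of downtime per year (the
# "five and a quarter minutes" above); four nines allows about 52.6.
five_nines = downtime_minutes_per_year(5)
```

Running the same function for three nines gives nearly nine hours a year, which shows how steeply the requirement (and the cost) climbs with each extra nine.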

One common point of argument is scheduled downtime. If downtime is scheduled, for example, for
the application of a service pack, does that mean the system is unavailable? If the system is
counted as unavailable, any Service Level Agreements (SLAs) on downtime will probably
be broken. In hosting or outsourcing scenarios, this could lead to financial penalties.
However, if scheduled downtime doesn’t mean the system is counted as unavailable,
impressive availability figures can be achieved—but are they a true reflection of
availability to the users? There is no simple answer to these questions, but all systems
require preventative maintenance or they will fail. The disruption to service can be
minimized (for example, by patching the nodes of a cluster in sequence) but cannot be
completely eliminated. Probably the best that can be achieved is to ensure that
maintenance windows are negotiated into the SLA.

These measurements are normally taken against the servers hosting the system. As we
have seen, the server being available doesn’t necessarily mean the system is available. We
have to extend our definition of highly available from protecting the server to also include
protecting the data.

The server clustering service built in to Microsoft Windows is often our first thought for protecting the
server. In the event of failure, the service automatically fails over to a standby server, and
the business system remains available. However, this doesn’t protect the data: a failure
in the disk system, or even a network failure, can still make the system unavailable.

Do We Still Need to Back Up Our Servers and Data?
One common question is “Do I still need to take a backup?” The only possible answer is
yes. High availability is not, and never can be, a substitute for a well-planned backup
regimen. Backup is your ultimate “get out of jail” card. When all else fails, you can always
restore from backup. However, this presupposes a few points.

  • Test restores have been performed against the backup media. The last place you
    want to be is explaining why a business-critical system cannot be restored because
    the tapes cannot be read.
  • A plan exists to perform the restore that has been tested and practiced. Again, you
    don’t want to be performing recoveries where the systems and steps necessary for
    recovery are not understood.

Backup also forms an essential part of your disaster recovery planning.

Disaster Recovery vs. High Availability
These two topics, high availability and disaster recovery, are often thought of as being the
same thing. They are related but separate topics. High availability can be best summed up
as “keeping the lights on”: it is concerned with keeping our business processes working and
dealing with day-to-day issues. Disaster recovery covers the processes and procedures required to
recover the critical infrastructure after a natural or man-made disaster. The important
point of disaster recovery planning is restoring the systems that are critical to the business
in the shortest possible time.

Traditionally, these are two separate subjects, but the technologies are converging. One
common disaster recovery technique is replicating the data to a standby data center. In the
event of a disaster, this center is brought online and business continues. There are some
applications, such as relational database systems and email systems, that can manage the
data replication to another location. At one end of the scale, we have a simple data
replication technique with a manual procedure required to bring the standby data online in
place of the primary data source. This can range up to full database mirroring, where
transactions are committed to both the primary and mirror databases and failover to the
mirror can be triggered automatically if applications lose access to the
primary. In a geographically dispersed organization where systems are accessed over the
WAN, these techniques can supply both high availability and disaster recovery.

We have seen why we need high availability and what it is. We will now consider how we
are going to achieve the required level of high availability.

Achieving High Availability
When high availability is discussed, the usual assumption is that we are talking about
clustering Windows systems. In fact, technology is one of three areas that need to be in
place before high availability works properly:

  • People
  • Processes
  • Technology

People and Processes
These are the two points that are commonly overlooked. I have often heard people say that
clustering is hard or that they had a cluster for the application but still had a failure. More
often than not, these issues come down to a failure of the people and processes rather than
the technology.

The first question that should be asked is “Who owns the system?” The simple answer is
that IT owns the system. This is incorrect. There should be an established business owner
for all critical systems. They are the people who make decisions regarding the system from
a business perspective—especially decisions concerning potential downtime. A technical
owner may also be established. If there is no technical owner, multiple people try to make
decisions that are often conflicting. This can have a serious impact on availability.
Ownership implies responsibility and accountability. With these in place, it becomes
someone’s job to ensure the system remains available.

A second major issue is the skills and knowledge of the people administering highly
available systems. Do they really understand the technologies they are administering?
Unfortunately, the answer is often that they don’t. We wouldn’t make an untrained or
unskilled administrator responsible for a mainframe or a large UNIX system. We should
ensure the same standards are applied to our highly available Windows systems. I once
worked on a large Exchange 5.5 to Exchange 2003 migration. This involved a number of
multi-node server clusters, each running several instances of Microsoft Exchange. One of the Exchange
administrators asked me “Why do I need to know anything about Active Directory?” Given
the tight integration between Exchange and Active Directory (AD), I found this an
incredible question. This was definitely a case of an untrained and unskilled administrator.

Last, but very definitely not least, we need to consider the processes around our high availability
systems. In particular, two questions need to be answered:

  • Do we have a change control system?
  • Do we follow it?

If the answer to either of these is no, our system won’t be highly available for very long. In
addition, all procedures we perform on our systems should be documented and tested.
They should always be performed as documented.

Technology will be the major focus of the next two articles, but for now, we need to
consider the wider implications of high availability. We normally concentrate on the
servers and ensure that the hardware has the maximum levels of resiliency. On top of this,
we need to consider other factors:

  • Network—Do we have redundant paths from client to server? Does this include
    LAN, WAN, and Internet access?
  • Does the storage introduce a single point of failure?
  • Has the operating system (OS) been hardened to the correct levels? Is there a
    procedure to ensure it remains hardened?
  • Does our infrastructure in terms of AD, DNS, and DHCP support high availability?
  • Does the application function in a high-availability environment?

Highly available systems explicitly mean higher costs due to the technology and people we
need to utilize. The more availability we want, the higher the costs will rise. A business
decision must be made regarding the cost of implementing the highly available system
when compared against the risk to the business of the system not being available.

This calculation should include the cost of downtime internally, together with the potential loss
of business and reputation. When a system is unavailable and people can’t work, the final
costs can be huge, leading to the question “We lost how much?”
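As a back-of-the-envelope sketch of that calculation (every figure below is a made-up placeholder, not a benchmark):

```python
def downtime_cost(hours_down, revenue_per_hour, staff_count,
                  loaded_hourly_wage, reputation_cost=0.0):
    """Rough outage cost: lost revenue, plus the cost of idle staff,
    plus an optional estimate for lost reputation and future business."""
    lost_revenue = hours_down * revenue_per_hour
    idle_staff = hours_down * staff_count * loaded_hourly_wage
    return lost_revenue + idle_staff + reputation_cost

# Hypothetical example: a 4-hour outage at a firm taking $5,000/hour
# in revenue, with 50 idle employees at a $40/hour loaded wage:
cost = downtime_cost(hours_down=4, revenue_per_hour=5000,
                     staff_count=50, loaded_hourly_wage=40)
# cost comes to $28,000 before any reputation damage is counted.
```

Even this crude model makes the business case concrete: compare that number per expected outage against the price of the redundancy that would prevent it.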

You need high availability data solutions to ensure your business processes keep functioning. This ensures
your revenue streams and your business reputation are protected. We help you achieve high availability
through the correct mixture of people, processes, and technology.

Posted in Computers, Data Backups, Data Storage, Hardware, High Availability, Servers

The Definition of RAID and the Most Common RAID Levels Explained

RAID stands for Redundant Array of Inexpensive (or Independent) Disks.

I prefer to call them Redundant Arrays of Independent Disks, because hard drives used to be very expensive.

A RAID array is a set of multiple hard drives that make up a data storage system built for redundancy and business continuity. In most, but not all, configurations a RAID storage system can tolerate the failure of a hard drive without losing data; this ultimately depends on how the RAID array is configured.

Different RAID Levels and Their Common Uses

Each RAID level has pros and cons, and it is up to a network administrator to decide which RAID level is best for a specific situation. There are many factors to take into consideration, but it mostly boils down to performance, redundancy, and budget.

Here are some examples of common RAID configurations, or RAID levels.

RAID Level 0

RAID Level 0 splits, or stripes, data across drives, resulting in higher data throughput. Since no redundant information is stored, performance is very good, but RAID 0 provides no redundancy whatsoever and is completely foolish to use for storing critical data in a business environment. With a RAID 0 configuration, if one hard drive dies the entire array dies, and you can kiss all of the data on it goodbye. RAID 0 is popular with computer gamers who only take performance into consideration, since striping across two drives can roughly double throughput. Re-read this paragraph before considering RAID 0 for your precious data; RAID Level 0 is only used to increase hard drive performance. A RAID 0 configuration uses two or more hard drives and you get the combined storage capacity of all of them: with two 100GB hard drives you get 200GB of NON-redundant storage space.

RAID Level 1

RAID Level 1 is usually referred to as hard drive mirroring, AKA a mirror. A Level 1 RAID array provides redundancy by duplicating all the data from one drive on a second drive, so that if one of the two hard drives fails, no data is lost. RAID 1 is very good for small businesses because it is affordable and reliable. A RAID 1 configuration uses 2 hard drives, so with two identical drives you get the storage capacity of one of them: a pair of 100GB hard drives gives you 100GB of redundant storage space.

RAID Level 5

RAID Level 5 stripes data at a block level across several drives and distributes parity among the drives; no single disk is devoted to parity. This can speed small writes in multiprocessing systems, but because parity must be recalculated and written on every update, write performance tends to be lower than other RAID types.

The usable storage works out to (n-1)/n of the total in the disk array, so for small arrays roughly 67% to 80% of the raw capacity; the storage penalty for redundancy is only one drive’s worth of space. If one disk fails, the complete data set can be rebuilt so that no data is lost. If more than one drive fails before the array is rebuilt, all the stored data will be lost. This gives a fairly low cost per megabyte while still retaining redundancy.
A RAID 5 configuration uses 3 or more hard drives. If you have, for the sake of an example, three 100GB hard drives, then you get approximately 200GB of actual storage capacity.
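The trick that makes the RAID 5 rebuild possible is plain XOR: the parity block is the XOR of the data blocks, and any one lost block can be recovered by XORing everything that survives. A tiny illustration (the byte values are made up):

```python
# Two data blocks (one byte each, for simplicity) on two drives,
# with parity stored on a third drive:
block_a = 0b10110100
block_b = 0b01101001
parity = block_a ^ block_b   # written to the third drive

# The drive holding block_b fails. XOR the survivors to rebuild it:
rebuilt_b = block_a ^ parity  # equals block_b, since a ^ a ^ b == b
```

This is also why a second failure is fatal: with two blocks missing, the surviving pieces no longer determine the lost data.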

RAID 1+0

RAID 1+0 is commonly known as RAID 10 and is a combination of RAID 1 (mirroring) and RAID 0 (striping). With four hard drives, the drives are grouped into two mirrored pairs, and data is then striped across the pairs, which provides both high performance and redundancy together. Any one of the hard drives can fail without data loss, as long as its mirror partner is not also damaged. A RAID 10 array offers both the high-speed data transfer advantages of striped arrays and increased data accessibility. System performance during a rebuild is also better than that of parity-based arrays, since data does not need to be regenerated from parity information; it is simply copied from the surviving mirrored hard drive to the replacement.
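The capacity rules quoted in the examples above can be summed up in a few lines (the function name is my own; it assumes all drives in the array are the same size):

```python
def usable_gb(raid_level, drive_count, drive_gb):
    """Usable capacity for the RAID levels discussed above."""
    if raid_level == 0:     # striping: all raw capacity, no redundancy
        return drive_count * drive_gb
    if raid_level == 1:     # two-drive mirror: one drive's worth
        return drive_gb
    if raid_level == 5:     # one drive's worth lost to distributed parity
        return (drive_count - 1) * drive_gb
    if raid_level == 10:    # striped mirrors: half the raw capacity
        return (drive_count // 2) * drive_gb
    raise ValueError("unsupported RAID level")

# Matches the examples above: 2x100GB RAID 0 -> 200GB, RAID 1 -> 100GB,
# 3x100GB RAID 5 -> 200GB, and 4x100GB RAID 10 -> 200GB.
```

Comparing `usable_gb(5, n, size)` and `usable_gb(10, n, size)` for the same drives also shows the classic trade: RAID 5 gives more usable space, RAID 10 gives faster rebuilds and better write performance.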

Now that you know what RAID is and which RAID levels are in common use today, never assume a RAID system is a backup solution, because it is not. An Orlando computer consultant can help you decide which RAID level is best for your business or organization. Don’t blindly purchase a server without the guidance of a professional network administrator. Without professional guidance you may go overboard and waste money on a RAID system you don’t really need, or you may wind up with a RAID system that offers no data protection at all.

Posted in Computers, Data Storage, Hard Drives, Hardware, RAID Levels, What is?

What is a Microsoft Small Business Server? And do you need one for your organization?

What is a Microsoft Small Business Server?

What is the difference between a Small Business Server and a single role server?

Here is a simple, non-technical explanation of what a Microsoft Small Business Server is and is not.

After reading this article you will have a better understanding, so let’s get started.

Larger companies, such as Fortune 500 or Fortune 100 companies, have many servers that do different things.
Examples are:

  • Multiple Domain controllers / file servers
  • Multiple SQL / database servers
  • Multiple Exchange servers
  • Multiple web servers
  • Multiple DHCP servers
  • and so forth…

Let’s pretend that “some big company” has 40 servers, and each server has its own role doing something specific for the computer network. In theory this would mean the company has 40 separate physical servers set up in a room to run the company’s network. In today’s world this would be consolidated using server virtualization, but that is getting off topic, so I’m not going to get into it in this article.

Now let’s pretend you are a small business owner and you need a file server, a SQL database server, and an Exchange server. This means you would need 3 physical servers, 3 different server operating system licenses, and many other things, and this can get expensive quickly, not to mention an experienced network administrator to design, configure, deploy, test and manage it all for you.

Now, with a Microsoft Small Business Server operating system you get 1 physical server that has multiple server roles built into 1 nice neat package. You can have that file server and that database server and that Exchange server and that web server all combined into 1 neat little package. This can save the small business owner money IF the server is properly configured and maintained.

Microsoft states that SBS (Small Business Server) will support up to 75 users / workstation computers. In theory this will work, but in the real world, if you have 75 computers connected to an SBS server you can expect very poor performance.

From my experience I will say that Microsoft SBS servers are pretty cool IF they are properly configured with the right hardware and software. I have seen many small businesses with an SBS server that was NEVER configured correctly or that is just being used as a simple file server. In such a case the SBS server isn’t necessary and is a waste of money for the business owner.

So, without getting into technical details, that is what a Microsoft Small Business Server does.

If you are thinking about purchasing a new server for your business get to know an Orlando computer consultant and find out if a Microsoft Small Business Server will benefit your organization.

Posted in Computers, Data Storage, Microsoft, Operating Systems, Servers, What is?

Orlando Computer Repair Company Talks About Hard Drive Clicking

Hard drive clicking also known as the click of death.

What does it mean when your hard drive makes a clicking sound?
A clicking hard drive is a sign of physical hard drive failure.

Without getting into a bunch of technical factors here I am going to keep this post short and to the point.

When a hard drive makes a clicking sound, it is usually physically damaged beyond repair and the data on it is not accessible through normal means.

This problem can happen to anybody anywhere and is more common than many people realize.

This also applies to people that use external hard drives as a backup system for their data.

The bottom line is anything mechanical will fail and will not last forever.
This is especially true for hard drives including external hard drives.

When you hear your hard drive making a clicking noise, unplug it and do not use it anymore.
You can attempt to copy your data off the clicking hard drive, but you will likely cause more damage to the drive, which will make it more expensive when a data recovery lab recovers your data.

When a hard drive clicks, you will not be able to retrieve your data yourself, and you will have no choice but to send the drive to a data recovery company, or an Orlando data recovery company if you live in Central Florida.

An Orlando Data Recovery company can get your data off of the damaged hard drive and onto a new hard drive.

Always be prepared for the worst and backup your data to a real backup system.
An external hard drive is not considered a professional backup solution and should only be used as a convenience rather than a backup system.

Posted in Computer Repair, Computers, Data Recovery, Data Storage, Hard Drives

Your server or servers are running out of storage space! What should you do?

So let’s say you’ve decided to take your business seriously and spend the money needed for a quality server. You may be using a file server to share files and printers, or you may use it to run Microsoft Exchange for shared calendars and email, host a database for your company, or run a CRM (Customer Relationship Management) application. Perhaps you have two or three servers running a combination of these, each with its own backup system, as each should have.

What is likely to happen over time?

Storage inefficiency – You may find that one server, perhaps your file server, is constantly running out of storage space, while another server always seems to have storage space to spare but no easy way to share it. This is a very inefficient scenario and the biggest reason why a DAS (Direct-Attached Storage) solution is ultimately inefficient for growing small businesses.

Management headaches – Most DAS solutions have their own proprietary management software and interfaces and are not easy to manage remotely. You may find yourself with multiple different DAS solutions, each with its own management quirks and annoyances.

Consolidate Your Data

As with PCs, the answer to server overload is to consolidate your storage, unchain it from the server, and place it on the network where it can be shared among multiple servers and PCs. Why?

It’s efficient – You get a shared pool of networked storage that you can slice, dice, and allocate to users, applications, and servers at will. No more overloaded servers sitting next to servers with storage to spare.

It’s easy to upgrade – You no longer have to shut down your server and its applications to upgrade your storage. You can add storage to the network and make it instantly available without affecting your applications.

When it’s time to upgrade your servers, it’s no longer necessary to throw out the storage with the server or spend the time to migrate data to another server. You simply connect the new server to the network and configure it for access to your network storage. This isn’t always the case, depending on what your server is hosting, but more often than not this is a good solution for many small and medium-sized businesses.

It’s cost effective – Storage makes up a significant portion of your server’s price and internal space. Separate storage onto the network and you can spend fewer dollars on servers, or buy more server performance and reliability for your dollar. You can also pack more servers into a smaller space, if that’s what you need to do, taking advantage of compact rack mount servers or even blade servers. But don’t forget to keep your server closet or room COOL with air conditioning.

You have two choices for network storage: a SAN and a NAS.


Storage Area Networks (SANs) separate storage from your servers and put it on its own specialized high-performance storage network where it can be pooled and allocated to servers and applications. When a server runs out of storage, you simply allocate more storage from the SAN, rather than taking down the server to add physical storage.


Nothing beats the simplicity of NAS for fulfilling the needs of a typical small business. A NAS device sits directly on the network and, like a server, serves up files, not storage blocks. There are many advantages to NAS as a small business storage solution.

Independence – NAS devices can sit anywhere on the network, completely independent of servers, serving up files to any network-connected PCs or servers. If a server or PC goes down, the NAS is still functional. If power goes down, there’s no need for complex reconfiguration. With its simplicity, a NAS can be up and running again in minutes.

Ease of Use – NAS devices typically come as preconfigured turnkey solutions. There’s no need to install a host adapter or complex server operating system. You simply plug the NAS into the network and do some very light configuration, usually with a Web browser, and your NAS is up and running and accessible to your PCs.

Easy Upgrades – To get more storage with NAS, you simply plug in another NAS device and you’re up and running with additional file storage in minutes.

Flexibility – Today some NAS solutions also come with built-in iSCSI capability, which can provide fast block-based storage to the performance-hungry server applications that need it, while still allowing you to share files and print. In some cases you don’t even need a switch or special host adapter; you simply plug your server directly into the iSCSI port on the NAS. So you get the best of both worlds in a single, easy-to-configure device.

Posted in Computer Repair, Computers, Data Storage, Hard Drives, Hardware, Servers | 0 Comments

What if you accidentally delete a file or your data gets corrupted? How can you get your data back fast?

This sort of thing happens to small businesses and even large businesses in Orlando, Florida more often than many people believe. Data gets deleted by accident, or a file or files get corrupted.
Hard drives fail, computers crash, servers get too hot and overheat. These things do happen, and being able to restore data quickly is very important.

One of the nice advantages of disk-based backup is that it can be done quickly with little disruption to your business’s applications and operations. One of the many ways to protect files is with automated daily backups to a DAS or NAS device. Most backup solutions allow you to make full backups to disk and frequent incremental backups of data that has changed since the last full backup, daily or even several times a day.

Thanks to fast disk-based incremental backups, if you accidentally delete a file, or if a file, a directory, or an entire system becomes corrupted due to a virus, user error, or hardware failure, you can recover that file or data from a prior state quickly and easily so you can get right back to work.

If you’re using a NAS solution to store backups, check to see if it comes with, or works easily with, software that offers point-in-time backup and recovery. Then you can rest assured that you’ll never really lose a precious file or data that you really need for your business.

If you accidentally deleted a file, or if your server or computer crashed, do you know for sure you could easily and quickly restore your data?

When was the last time you actually tested your backup system?

Far too often we have seen small businesses in and around Orlando, Florida using obsolete backup systems, and when something goes wrong they learn that all along their backup system has not been backing up any data!

TEST YOUR BACKUP SYSTEM and test it often.
Purposely remove a file or files and restore them from your backup.
This is the only way to know for sure your backup system really works.
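The delete-and-restore drill described above can be sketched as a small script. Everything here is hypothetical: temp directories stand in for your live data and your backup location, and a plain copy stands in for whatever your real backup software does.

```shell
#!/bin/sh
# Sketch of a backup restore drill: deliberately delete a file, restore
# it from the backup, and verify the restored copy matches byte for byte.
set -e
DATA="$(mktemp -d)"     # stands in for your live data
BACKUP="$(mktemp -d)"   # stands in for your backup location

printf 'payroll records\n' > "$DATA/payroll.txt"
cp -a "$DATA/." "$BACKUP/"          # pretend last night's backup ran

# The drill: purposely remove the file...
rm "$DATA/payroll.txt"

# ...then restore it from backup and verify the contents.
cp -a "$BACKUP/payroll.txt" "$DATA/payroll.txt"
if cmp -s "$DATA/payroll.txt" "$BACKUP/payroll.txt"; then
    echo "restore drill PASSED"
else
    echo "restore drill FAILED"
fi
```

Run a drill like this on a schedule; a backup you have never restored from is a backup you cannot trust.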

Posted in Computers, Data Backups, Data Recovery, Data Storage | 0 Comments