Archive | Hardware

Hard Drive Performance Comparison

Many people have asked me what the difference is between the Seagate ST31000528AS and the Seagate ST31000524AS hard drives.

The only real difference is the interface speed.
They are both considered high-quality hard drives.

Here is a simple comparison:

Spec               Seagate ST31000528AS    Seagate ST31000524AS
Series             Barracuda               Barracuda
Interface          SATA 3.0Gb/s            SATA 6.0Gb/s
Capacity           1TB                     1TB
RPM                7200                    7200
Cache              32MB                    32MB
Average Latency    4.16ms                  4.16ms
Form Factor        3.5 inches              3.5 inches

So the only real difference between the two Seagate Barracuda hard drives is the interface: the ST31000528AS uses SATA 3.0Gb/s while the ST31000524AS uses SATA 6.0Gb/s. Keep in mind that this is the maximum interface transfer rate, not the drive's sustained read/write speed; a 7200 RPM mechanical drive cannot saturate either interface.

Obviously I would prefer the drive with the faster interface, but both are high-quality, reliable hard drives.

Posted in Computers, Data Storage, Hard Drives, Hardware

4 steps to preventing server downtime

Eliminating potential single points of failure is a time-tested strategy for reducing the
risk of downtime and data loss. Typically, network administrators or computer consultants do this by introducing redundancy in the application delivery infrastructure, and automating the process of monitoring and
correcting faults to ensure rapid response to problems as they arise. Most leading
companies adopting best practices for protecting critical applications and data also
look at the potential for the failure of an entire site, establishing redundant systems at
an alternative site to protect against site-wide disasters.

STEP #1 – PROTECT AGAINST SERVER FAILURES WITH QUALITY HARDWARE AND COMPONENT REDUNDANCY

Don't be a cheapskate with your own business by running it on low-quality, CHEAPO server and network hardware. Use HIGH-quality hardware.
Unplanned downtime can be caused by a number of different events, including:
• Catastrophic server failures caused by memory, processor or motherboard
failures

• Server component failures involving power supplies, fans, internal disks,
disk controllers, host bus adapters and network adapters

Server core components include power supplies, fans, memory, CPUs and main logic
boards. Purchasing robust, name brand servers, performing recommended
preventative maintenance, and monitoring server errors for signs of future problems
can all help reduce the chances of unplanned downtime due to catastrophic server
failure.
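
If you want to automate that kind of early-warning monitoring, modern drives expose their health through SMART. Here is a minimal sketch in Python of a scheduled health check; it assumes the open-source smartmontools package (which provides the smartctl command) is installed, and the device paths are examples you would adjust for your own servers:

    # smart_check.py - minimal SMART health check (assumes smartmontools is installed)
    import subprocess

    DRIVES = ["/dev/sda", "/dev/sdb"]  # example device paths; adjust for your system

    def drive_is_healthy(device):
        # "smartctl -H" prints the drive's overall health self-assessment
        result = subprocess.run(["smartctl", "-H", device],
                                capture_output=True, text=True)
        return "PASSED" in result.stdout

    for drive in DRIVES:
        if not drive_is_healthy(drive):
            # hook in your own alerting (email, pager, monitoring system) here
            print(f"WARNING: {drive} is reporting SMART problems - investigate now")

Run it daily from a scheduler and SMART can often give you warning before a drive actually dies.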

You can reduce downtime caused by server component failures by adding
redundancy at the component level. Examples include redundant power supplies and
cooling fans, ECC memory that can correct single-bit errors, teamed Ethernet
adapters, and RAID disk arrays.

STEP #2 – PROTECT AGAINST STORAGE FAILURES WITH
STORAGE DEVICE REDUNDANCY AND RAID

Storage protection relies on device redundancy combined with RAID storage
algorithms to protect data access and data integrity from hardware failures. There are
distinct issues for both local disk storage and for shared, network storage.

For local storage, it is quite easy to add extra disks configured with RAID protection.
A second disk controller is also required to prevent the controller itself from being a
single point of failure.

Access to shared storage relies on either a fibre channel or Ethernet storage network.
To assure uninterrupted access to shared storage, these networks must be designed
to eliminate all single points of failure. This requires redundancy of network paths,
network switches, and network connections to each storage array.

STEP #3 – PROTECT AGAINST NETWORK FAILURES WITH
REDUNDANT NETWORK PATHS, SWITCHES AND ROUTERS

The network infrastructure itself must be fault-tolerant, consisting of redundant
network paths, switches, routers and other network elements. Server connections can
also be duplicated to prevent failovers caused by the failure of a single network link or
component.

Take care to ensure that the physical network hardware does not share common
components. For example, dual-ported network cards share common hardware logic,
and a single card failure can disable both ports. Full redundancy requires either two separate adapters or the combination of a built-in network port along with a separate network adapter.
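
Redundant paths only help if both of them actually work, so it pays to test them continuously rather than discovering a dead link during a failover. Here is a minimal sketch of such a check; the gateway addresses are placeholders for your own primary and secondary routers, and the flags shown are for the Linux ping (Windows uses -n and -w instead):

    # path_check.py - verify both redundant network paths are alive (addresses are examples)
    import subprocess

    GATEWAYS = ["192.168.1.1", "192.168.2.1"]  # primary and secondary gateways

    def path_is_up(address):
        # one ICMP echo with a 2-second timeout (Linux ping flags)
        result = subprocess.run(["ping", "-c", "1", "-W", "2", address],
                                capture_output=True)
        return result.returncode == 0

    for gateway in GATEWAYS:
        if not path_is_up(gateway):
            print(f"WARNING: path via {gateway} is down - you are one failure from an outage")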

STEP #4 – PROTECT AGAINST SITE FAILURES WITH DATA
REPLICATION TO ANOTHER SITE

The reasons for site failures range from an air conditioning failure or leaking roof
that affects a single building, to a power failure that affects a limited local area, to a
major hurricane that affects a large geographic area. Site disruptions can last
anywhere from a few hours to days or even weeks.

There are two methods for dealing with site disasters. One method is to tightly couple
redundant servers across high speed/low latency links, to provide zero data-loss and
zero downtime. The other method is to loosely couple redundant servers over
medium speed/higher latency/greater distance lines, to provide a disaster recovery
(DR) capability where a remote server can be restarted with a copy of the application
database missing only the last few updates. In the latter case, asynchronous data
replication is used to keep a backup copy of the data.
Combining data replication with error detection and failover tools can help get a
disaster recovery site up and running in minutes or hours, rather than days.
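
To make the asynchronous approach concrete, here is a heavily simplified sketch that copies new or changed files to a remote site mounted at an example path. Real replication products work at the block or transaction level with consistency guarantees that this sketch does not attempt; it only illustrates the idea that the copy lags slightly behind the primary:

    # async_replicate.py - copy files whose source is newer than the remote copy (illustration only)
    import shutil
    from pathlib import Path

    SOURCE = Path("/data/app")          # primary site data (example path)
    REPLICA = Path("/mnt/dr-site/app")  # remote site, mounted locally (example path)

    for src in SOURCE.rglob("*"):
        if src.is_file():
            dst = REPLICA / src.relative_to(SOURCE)
            # replicate only files that are new or changed since the last pass
            if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)  # copy2 preserves timestamps

Scheduled every few minutes, a loop like this keeps the remote copy current to within one pass, which is exactly the "missing only the last few updates" trade-off described above.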

Posted in Computer Repair, Computers, Data Backups, Data Storage, Hard Drives, Hardware, High Availability, How To's, RAID Levels, Servers

THE IMPACT OF NETWORK AND/OR SERVER DOWNTIME

A failure of a critical Microsoft Windows application can lead to two types of losses:

• Loss of the application service – the impact of downtime varies with the
application and the business. For example, for some businesses, email can
be an absolutely business-critical service that costs thousands of dollars a
minute when unavailable.

• Loss of data – the potential loss of data due to an outage can have
significant legal and financial impact, again depending on the specific type of
application.

In determining the impact of downtime, you must understand the cost to your
business in downtime per minute or hour. In some cases, you can determine a
quantifiable cost (orders not taken). Other, less direct costs may include loss of
reputation and customer churn.
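
As a back-of-the-envelope illustration, the direct portion of that cost is simple arithmetic. Every figure below is a made-up assumption you would replace with your own numbers:

    # downtime_cost.py - rough direct-cost estimate; all figures are example assumptions
    revenue_per_hour = 5000      # orders not taken while the system is down
    employees_idled = 20
    loaded_labor_rate = 35       # cost per idle employee per hour
    outage_hours = 4

    direct_cost = outage_hours * (revenue_per_hour + employees_idled * loaded_labor_rate)
    print(f"Estimated direct cost of a {outage_hours}-hour outage: ${direct_cost:,}")
    # prints: Estimated direct cost of a 4-hour outage: $22,800

Reputation damage and customer churn come on top of that and are much harder to quantify.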

The loss of production data can also be very costly, for a variety of reasons. In the
manufacturing environment, the loss of data could affect compliance with regulations,
leading to wasted product, fines, and potentially hazardous situations. For example, if
a pharmaceutical company cannot produce complete records of the data collected
during the manufacturing process, the FDA could force the company to throw away
the entire batch of drugs. Because regulators require the value of every process
variable to be known, the company could also face fines for not complying with FDA
regulations.

Publicly-traded companies may need to ensure the integrity of financial data, while
financial institutions must adhere to SEC regulations for maintaining and protecting
data. For monitoring and control software, data loss and downtime interrupts your
ability to react to events, alarms, or changes that require immediate corrective action.

The bottom line: downtime is very expensive, and preventing it should be a top priority in any business operation.

Posted in Computer Repair, Computers, Data Backups, Hardware, High Availability, Networking, Servers

The Art of High Availability

All organizations are becoming increasingly reliant upon their computer systems. The
availability of those systems can be the difference between the organization succeeding
and failing. A commercial organization that fails is out of business with the consequences
rippling out to suppliers, customers, and the community.

This series will examine how we can configure our Windows Server 2008 environments to
provide the level of availability our organizations need. The topics we cover will include:

• The Art of High Availability—What do we mean by high availability? Why do we
need it, and how do we achieve it?

• Windows Server 2008 Native Technologies—What does Windows Server 2008
bring to the high-availability game, and how can we best use it?

• Non-Native Options for High Availability—Are there other ways of achieving high
availability, and how can we integrate these solutions into our environments?

The first question we need to consider is why we need highly available systems.

Why Do We Need It?
This question can be turned on its head by asking “Do all of our systems need to be highly
available?” The answer for many, if not most, organizations is no. The art of high
availability comes in deciding which systems need to be made highly available and how this
is going to be achieved. When thinking about these systems, we need to consider the effects
of the systems not being available.

Downtime Hurts
Downtime is when the computer system is unavailable to the user or customer and the business
process cannot be completed. If the server is up and the database is online but a network
problem prevents access, the system is suffering downtime. Availability is an end-to-end
activity. Downtime hurts in two ways: If a system is unavailable, the business process it
supports cannot be completed and there is an immediate loss of revenue. This could be due
to:

  • Customer orders not being placed or being lost
  • Staff not working
  • Orders not being processed

The second way that downtime hurts is loss of reputation. This loss can be even more
damaging in the long term if customers decide that your organization cannot be trusted to
deliver and they turn to a competitor. The ability to gain business increases with ease of
communication and access. The converse is that the ability to lose business increases just
as fast if not faster.

Mission Critical Systems on Microsoft Windows
Critical business systems are hosted on the Microsoft Windows platform. These can be customer
facing or internal, but without them, the business grinds to a halt. Email may not seem to be
a critical system, but it is essential to the modern business. More than 60% of person-to-person
communication in most businesses is via email, including internal and external
communications. If a company is unresponsive to communications, it is judged, perhaps
harshly, as being out of business. This can become reality if the silence goes on long enough.

24 × 7 Business Culture
The “Global Village” concept has been accelerated by the adoption of the Internet for
business purposes. Globalization in this case means that business can come from anywhere
in the world—not necessarily your own time zone. If your business competes at this level,
high availability isn’t an option, it’s a necessity.

Legislation
Industries such as the financial services and health sectors have a requirement to protect
the data they store. This requirement can involve the availability of the data. In other cases,
the systems must be highly available to meet safety requirements.

Once you know why you need it, you need to define what is meant by high availability.

What Is High Availability?
High availability is usually expressed in terms of a number of “9”s. Four nines is 99.99%
availability. The ultimate goal is often expressed as 5 “9”s availability (99.999%), which
equates to five and a quarter minutes of downtime per year. The more nines we need, the
greater the cost to achieve that level of protection.
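
The arithmetic behind the nines is straightforward, as this quick sketch shows:

    # nines.py - allowed downtime per year for a given availability target
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    for availability in (0.99, 0.999, 0.9999, 0.99999):
        downtime = (1 - availability) * MINUTES_PER_YEAR
        print(f"{availability:.3%} availability allows {downtime:,.1f} minutes of downtime per year")

    # 99.000% availability allows 5,256.0 minutes (about 3.7 days)
    # 99.900% availability allows 525.6 minutes (about 8.8 hours)
    # 99.990% availability allows 52.6 minutes
    # 99.999% availability allows 5.3 minutes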

One common argument is scheduled downtime. If downtime is scheduled, for example, for
application of a service pack, does that mean the system is unavailable? If the system is
counted as unavailable, any Service Level Agreements (SLAs) on downtime will probably
be broken. In hosting or outsourcing scenarios, this could lead to financial penalties.
However, if scheduled downtime doesn’t mean the system is counted as unavailable,
impressive availability figures can be achieved—but are they a true reflection of
availability to the users? There is no simple answer to these questions, but all systems
require preventative maintenance or they will fail. The disruption to service can be
minimized (for example, by patching the nodes of a cluster in sequence) but cannot be
completely eliminated. Probably the best that can be achieved is to ensure that
maintenance windows are negotiated into the SLA.

These measurements are normally taken against the servers hosting the system. As we
have seen, the server being available doesn’t necessarily mean the system is available. We
have to extend our definition of highly available from protecting the server to also include
protecting the data.

The Server Clustering service built into Microsoft Windows is often our first thought for protecting the
server. In the event of failure, the service automatically fails over to a standby server, and
the business system remains available. However, this doesn’t protect the data in that a
failure in the disk system, or even network failures, can make the system unavailable.

Do We Still Need to Back Up Our Servers and Data?
One common question is “Do I still need to take a backup?” The only possible answer is
YES!
High availability is not, and never can be, a substitute for a well-planned backup
regimen. Backup is your ultimate "get out of jail free" card. When all else fails, you can always
restore from backup. However, this presupposes a few points:

  • Test restores have been performed against the backup media. The last place you
want to be is explaining why a business-critical system cannot be restored because
    the tapes cannot be read.
  • A plan exists to perform the restore that has been tested and practiced. Again, you
    don’t want to be performing recoveries where the systems and steps necessary for
    recovery are not understood.

Backup also forms an essential part of your disaster recovery planning.

Disaster Recovery vs. High Availability
These two topics, high availability and disaster recovery, are often thought of as being the
same thing. They are related but separate topics. High availability can be best summed up
as “keeping the lights on.” It is involved with keeping our business processes working and
dealing with day-to-day issues. Disaster recovery is the process and procedures required to
recover the critical infrastructure after a natural or man-made disaster. The important
point of disaster recovery planning is restoring the systems that are critical to the business
in the shortest possible time.

Traditionally, these are two separate subjects, but the technologies are converging. One
common disaster recovery technique is replicating the data to a standby data center. In the
event of a disaster, this center is brought online and business continues. There are some
applications, such as relational database systems and email systems, that can manage the
data replication to another location. At one end of the scale, we have a simple data
replication technique with a manual procedure required to bring the standby data online in
place of the primary data source. This can range up to full database mirroring where
transactions are committed to both the primary and mirror databases, and failover to the
mirror can be triggered automatically if applications lose access to the
primary. In a geographically dispersed organization where systems are accessed over the
WAN, these techniques can supply both high availability and disaster recovery.

We have seen why we need high availability and what it is. We will now consider how we
are going to achieve the required level of high availability.

Achieving High Availability
When high availability is discussed, the usual assumption is that we are talking about
clustering Windows systems. In fact, technology is one of three areas that need to be in
place before high availability works properly:

  • People
  • Processes
  • Technology

People and Processes
These are the two points that are commonly overlooked. I have often heard people say that
clustering is hard or that they had a cluster for the application but still had a failure. More
often than not, these issues come down to a failure of the people and processes rather than
the technology.

The first question that should be asked is “Who owns the system?” The simple answer is
that IT owns the system. This is incorrect. There should be an established business owner
for all critical systems. They are the people who make decisions regarding the system from
a business perspective—especially decisions concerning potential downtime. A technical
owner may also be established. If there is no technical owner, multiple people try to make
decisions that are often conflicting. This can have a serious impact on availability.
Ownership implies responsibility and accountability. With these in place, it becomes
someone’s job to ensure the system remains available.

A second major issue is the skills and knowledge of the people administering highly
available systems. Do they really understand the technologies they are administering?
Unfortunately, the answer is often that they don’t. We wouldn’t make an untrained or
unskilled administrator responsible for a mainframe or a large UNIX system. We should
ensure the same standards are applied to our highly available Windows systems. I once
worked on a large Exchange 5.5 to Exchange 2003 migration. This involved a number of
multi-node server clusters, each running several instances of Microsoft Exchange. One of the Exchange
administrators asked me “Why do I need to know anything about Active Directory?” Given
the tight integration between Exchange and Active Directory (AD), I found this an
incredible question. This was definitely a case of an untrained and unskilled administrator.

Last, but very definitely not least, we need to consider the processes around our high availability
systems. In particular, two questions need to be answered:

  • Do we have a change control system?
  • Do we follow it?

If the answer to either of these is no, our system won’t be highly available for very long. In
addition, all procedures we perform on our systems should be documented and tested.
They should always be performed as documented.

Technology
Technology will be the major focus of the next two articles, but for now, we need to
consider the wider implications of high availability. We normally concentrate on the
servers and ensure that the hardware has the maximum levels of resiliency. On top of this,
we need to consider other factors:

  • Network—Do we have redundant paths from client to server? Does this include
    LAN, WAN, and Internet access?
  • Does the storage introduce a single point of failure?
  • Has the operating system (OS) been hardened to the correct levels? Is there a
    procedure to ensure it remains hardened?
  • Does our infrastructure in terms of AD, DNS, and DHCP support high availability?
  • Does the application function in a high-availability environment?

Costs
Highly available systems explicitly mean higher costs due to the technology and people we
need to utilize. The more availability we want, the higher the costs will rise. A business
decision must be made weighing the cost of implementing the highly available system
against the risk to the business of the system not being available.

This calculation should include the cost of downtime internally together with potential loss
of business and reputation. When a system is unavailable and people can’t work, the final
costs can be huge, leading to the question, "We lost how much?"

Summary
You need high-availability solutions to keep your business processes functioning, which in turn
protects your revenue streams and your business reputation. We help you achieve high availability
through the correct mixture of people, processes, and technology.

Posted in Computers, Data Backups, Data Storage, Hardware, High Availability, Servers

iPhone 5 rumor roundup: what to expect from the next iPhone

Cheaper, faster, better camera among likely improvements

The farther into 2011 we get, the closer we come to a new version of the Apple iPhone. Apple has released a new iPhone each year like clockwork — so what can we expect to see in the iPhone 5?

As it always is with rumors, the iPhone 5 has been associated with just about every new technology out there. Here are the things that seem most likely to actually happen:

Summer release date

This is a no-brainer because Apple has been quite consistent in releasing a new iPhone around June or July of each year. You can expect with reasonable certainty that Apple will do the same with the iPhone 5 in 2011.

Faster, multi-core processor

One of the most widely reported and sourced rumors is that Apple is manufacturing a successor to the A4 processor in the iPhone 4 and iPad. The A5 processor will be based on a Cortex A9 design and feature multiple cores. That would mean significant increases in performance and possibly battery life. It would also keep Apple on pace with the technology curve, as several other manufacturers are introducing dual-core processors into Android smartphones.

Integrated graphics upgrade

Along with a newer, faster processor, the iPhone 5 is rumored to have an upgraded integrated graphics and video processor (also referred to as an IGPU). With Apple’s continuing emphasis on media, especially video and apps, this makes sense because a new graphics processor would boost the iPhone 5’s media capabilities. That means better video, game graphics and possibly even HDMI-out to TVs. Apple’s OpenCL technology would also be able to use the IGPU for additional tasks when it is idle.

NFC technology makes iPhone your iWallet

Near-field communication technology would make it possible to pay for goods and services by simply waving your iPhone 5 over a terminal, no cards or checks needed. Multiple sources have cited engineers working on NFC technology for the iPhone 5. Rumors even indicate that iTunes would expand to manage your debit or credit accounts, basically making the iPhone 5 a futuristic wallet.

New antenna, bye-bye deathgrip

Apple took a beating in the Antennagate scandal following the release of the iPhone 4. Holding the iPhone a certain way, dubbed the “deathgrip,” would drastically reduce the signal. It’s no surprise that Apple is rumored to be changing the antenna design, possibly moving it behind the Apple logo on back, in order to alleviate the problem.

Spec bump

As with each iteration of the iPhone, the iPhone 5 specs will likely get a refresh. Aside from the processors mentioned above, this likely means a little more memory, more storage and possibly a slight increase in screen size (evidence points to a 3.7-inch screen instead of 3.5).

Those are the rumors that are either backed up by multiple sources or obvious from Apple’s iPhone record. The following are several other rumors that are much less reliable, for one reason or another.

Better camera, 1080p video recording

This one isn’t unbelievable, but there is little evidence to back this up. The 5 megapixel, 720p video recording rear camera is already pretty impressive, and Apple may see no reason to update it on this version cycle. However, the upgraded graphics processor could certainly handle 1080p video recording, and 8 megapixel 1080p cameras are the standard for new smartphones.

LTE 4G speeds

4G is all the rage, so it seems like an obvious move for Apple, right? They certainly wouldn’t want to be left behind when almost every other manufacturer is aiming to put out a 4G phone soon, would they? Well, it seems like an obvious choice, but several reliable sources have said that Apple isn’t working on LTE compatibility. Instead, the company is said to be including a dual GSM/CDMA chip (3G technology) from Qualcomm for the iPhone 5. This wouldn’t be the first time Apple decided to hold back, either. The original iPhone was only able to download data at 2G, or EDGE, speeds even though AT&T already had a 3G network in place.

A cheaper iPhone?

Several outlets have breathlessly reported on backroom deals Apple has made that would lead to a cheaper iPhone. However, that seems pretty unlikely. The new hardware isn’t going to be that much cheaper, if it is at all, and Apple already has a pretty stable price point for the iPhone. What’s more, carriers already heavily subsidize the iPhone, so even if its retail price dropped, carriers wouldn’t necessarily lower the subsidized price, too.

Verizon and AT&T Simultaneously?

Just because Verizon finally got access to the iPhone 4 doesn’t mean they have rights to the iPhone 5. AT&T had a longstanding exclusive deal with Apple, and it’s quite possible they might be able to negotiate another exclusivity period for the iPhone 5, even if it’s only a few months to give AT&T a head start. On the other hand, Verizon will be actively lobbying for the same opportunity, and it’s in Apple’s best interest to have the iPhone on as many networks as possible.

Posted in APPLE, AT&T, Cell Phones, Hardware, IPhone, Verizon Wireless, Wireless Carriers

The Definition of RAID and the Most Common RAID Levels Explained

RAID stands for Redundant Array of Inexpensive (or Independent) Disks.

I prefer "Independent" because hard drives used to be very expensive, which made "Inexpensive" a poor description.

A RAID array is a set of multiple hard drives that make up a data storage system built for redundancy and business continuity. In most, but not all, configurations a RAID storage system can tolerate the failure of a hard drive without losing data; whether yours can ultimately depends on how the array is configured.

Different RAID Levels and Their Common Uses

Each RAID level has pros and cons, and it is up to a network administrator to decide which RAID level is best for a specific situation. There are many factors to take into consideration, but it usually boils down to speed (performance) and budget.

Here are some examples of common RAID configurations, or RAID levels.

RAID Level 0

RAID Level 0 provides no redundancy whatsoever and is completely foolish to use in a business environment for storing critical data. With a RAID 0 configuration, if one hard drive dies, the entire array dies, and you can kiss all of the data on it goodbye. RAID 0 splits, or stripes, the data across the drives, resulting in higher throughput; since no redundant information is stored, performance is very good, but the failure of any disk in the array results in total and complete data loss.

RAID 0 is popular with computer gamers who take only performance into consideration, because striping across two drives can be nearly twice as fast as a single drive; it is used only to increase hard drive performance. A two-drive RAID 0 configuration gives you the storage capacity of both drives: if you have two 100 GB hard drives, you get 200 GB of NON-redundant storage space. Re-read this paragraph before considering RAID 0 to store your precious data.

RAID Level 1

RAID Level 1 is usually referred to as hard drive mirroring, AKA a mirror. A Level 1 RAID array provides redundancy by duplicating all the data from one drive onto a second drive, so that if one of the two hard drives fails, no data is lost. RAID 1 is very good for small businesses because it is affordable and reliable. A RAID 1 configuration uses two identical hard drives, and you get the storage capacity of just one of them: a pair of 100 GB hard drives gives you 100 GB of redundant storage space.

RAID Level 5

RAID Level 5 stripes data at a block level across several drives and distributes parity among the drives; no single disk is devoted to parity. This can speed small writes in multiprocessing systems. Because parity blocks are interspersed on every drive, read performance is slightly lower than a pure striped array, and because every write requires a parity update, write performance is lower as well.

With n drives, you get the usable capacity of n-1 of them, so the storage penalty for redundancy shrinks as the array grows: one third of a three-drive array, down to 20% of a five-drive array. If one disk fails, the complete data set can be rebuilt so that no data is lost; if more than one drive fails, all the stored data will be lost. This gives a fairly low cost per megabyte while still retaining redundancy.
A RAID 5 configuration uses 3 or more hard drives. If you have, for the sake of an example, three 100 GB hard drives, then you get approximately 200 GB of actual storage capacity.

RAID 1+0

RAID 1+0 is commonly known as RAID 10 and is a combination of RAID 1 (mirroring) and RAID 0 (striping). What this means is that with four hard drives, the drives are grouped into two mirrored pairs (RAID 1), and data is then striped across the pairs (RAID 0), which provides both high performance and redundancy together. Any one of the hard drives can fail without data loss, as long as its mirror partner is not damaged. A RAID 10 array offers both the high-speed data transfer (write speed) advantages of striped arrays and increased data accessibility (read speed). System performance during a drive rebuild is also better than that of parity-based arrays, since data does not need to be regenerated from parity information but is simply copied from the surviving mirrored drive.
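
To summarize the capacity math for all four levels, here is a small Python sketch. It assumes identical drives and ignores the small amount of space real controllers reserve for metadata:

    # raid_capacity.py - usable capacity for the common RAID levels (identical drives assumed)
    def usable_gb(level, drives, drive_gb):
        if level == 0:               # striping: all capacity, no protection
            return drives * drive_gb
        if level == 1:               # mirroring: half the capacity (classically 2 drives)
            return drives * drive_gb // 2
        if level == 5:               # one drive's worth of capacity goes to parity
            return (drives - 1) * drive_gb
        if level == 10:              # striped mirrors: half the capacity (even drive count)
            return drives * drive_gb // 2
        raise ValueError("unsupported RAID level")

    print(usable_gb(0, 2, 100))   # 200 GB, survives no failures
    print(usable_gb(1, 2, 100))   # 100 GB, survives one drive failure
    print(usable_gb(5, 3, 100))   # 200 GB, survives one drive failure
    print(usable_gb(10, 4, 100))  # 200 GB, survives one drive per mirrored pair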

Now that you know what RAID is and which RAID levels are in common use today, never, ever assume a RAID system is a backup solution, because it is not. An Orlando computer consultant can help you decide which RAID level is best for your business or organization. Don't ever blindly purchase a server without the guidance of a professional network administrator; without it, you may go overboard and waste money on a RAID system you don't really need, or you may wind up with a RAID system that offers no data protection at all.

Posted in Computers, Data Storage, Hard Drives, Hardware, RAID Levels, What is?

Your server or servers are running out of storage space! What should you do?

So let’s say you’ve decided to take your business seriously and spend the money needed for a quality server. You may be using a file server to share files and printers, or you may use it to run Microsoft Exchange for shared calendars and email, host a database for your company, or run a CRM (Customer Relationship Management) application. Perhaps you have two or three servers running a combination of these, and each has its own backup system, as each should.

What is likely to happen over time?

Storage inefficiency – You may find that one server, perhaps your file server, is constantly running out of storage space, while another server always seems to have storage space to spare but no easy way to share it. This is a very inefficient scenario and the biggest reason why a DAS (direct-attached storage) solution is ultimately inefficient for growing small businesses.

Management headaches – Most DAS solutions have their own proprietary management software and interfaces and are not easy to manage remotely. You may find yourself with multiple different DAS solutions, each with its own management quirks and annoyances.
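
Both problems are easier to manage if you can see them coming. Here is a minimal sketch of a scheduled free-space check; the volume paths are examples you would adjust for your own servers:

    # space_check.py - warn when a volume drops below a free-space threshold (paths are examples)
    import shutil

    VOLUMES = ["C:\\", "D:\\"]   # use "/" and "/data" on Linux
    THRESHOLD = 0.10             # warn when less than 10% free

    for volume in VOLUMES:
        usage = shutil.disk_usage(volume)
        free_fraction = usage.free / usage.total
        if free_fraction < THRESHOLD:
            print(f"WARNING: {volume} has only {free_fraction:.0%} free "
                  f"({usage.free // 2**30} GB of {usage.total // 2**30} GB)")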

Consolidate Your Data

As with PCs, the answer to server overload is to consolidate your storage, unchain it from the server, and place it on the network where it can be shared among multiple servers and PCs. Why?

It’s efficient – You get a shared pool of networked storage that you can slice, dice, and allocate to users, applications, and servers at will. No more overloaded servers sitting next to servers with storage to spare.

It’s easy to upgrade – You no longer have to shut down your server and its applications to upgrade your storage. You can add storage to the network and make it instantly available without affecting your applications.

When it’s time to upgrade your servers, it’s no longer necessary to throw out the storage with the server or spend the time to migrate data to another server. You simply connect the new server to the network and configure it for access to your network storage. This isn’t always the case, depending on what your server is hosting, but more often than not it is a good solution for small and medium-sized businesses.

It’s cost effective – Storage makes up a significant portion of your server’s price and internal space. Separate storage on the network and you can spend fewer dollars on servers or buy more server performance and reliability for your dollar. You can also pack more servers into a smaller space, if that’s what you need to do, taking advantage of compact rack mount servers or even blade servers but don’t forget to keep your server closet or room COOL with Air Conditioning.

You have two choices for network storage: a SAN and a NAS.

SAN

Storage Area Networks (SANs) separate storage from your servers and put it on its own specialized high-performance storage network where it can be pooled and allocated to servers and applications. When a server runs out of storage, you simply allocate more storage from the SAN, rather than taking down the server to add physical storage.

NAS

Nothing beats the simplicity of NAS for fulfilling the needs of a typical small business. A NAS device sits directly on the network and, like a server, serves up files, not storage blocks. There are many advantages to NAS as a small business storage solution.

Independence – NAS devices can sit anywhere on the network, completely independent of servers, serving up files to any network-connected PCs or servers. If a server or PC goes down, the NAS is still functional. If power goes out, there’s no need for complex reconfiguration; with its simplicity, a NAS can be up and running again in minutes.

Ease of Use – NAS devices typically come as preconfigured turnkey solutions. There’s no need to install a host adapter or complex server operating system. You simply plug the NAS into the network and do some very light configuration, usually with a Web browser, and your NAS is up and running and accessible to your PCs.

Easy Upgrades – To get more storage with NAS, you simply plug in another NAS device and you’re up and running with additional file storage in minutes.

Flexibility – Today some NAS solutions also come with built-in iSCSI capability, which can provide fast block-based storage to performance-hungry server applications that need it, while still allowing you to share and print files. In some cases you don’t even need a switch or special host adapter; you simply plug your server directly into the iSCSI port on the NAS. You get the best of both worlds in a single, easy-to-configure device.

Posted in Computer Repair, Computers, Data Storage, Hard Drives, Hardware, Servers

Data storage solutions for small to medium-sized businesses in and around Orlando, Florida

Not very long ago there was a crystal-clear distinction between the data storage needs of small businesses and larger companies. Small businesses depended mostly on storage contained in PCs scattered around the office, with maybe a small Windows PC or server used to share files and printers. Most small businesses lack a network administrator or anyone with the expertise necessary to configure a server correctly or maintain even a small network, and if files are unavailable for a few hours or even a day, it is an inconvenience, not a disaster. Most large enterprises, by contrast, have complex networking infrastructures with stringent security and performance requirements, maintained by professional network administrators and computer experts.

That line between the storage needs of small and larger businesses has blurred dramatically in the past decade. Today’s small businesses and organizations find themselves tackling many of the same storage issues as their larger counterparts, including:

How to Meet Growing Storage Requirements – The storage needs of small businesses have dramatically grown thanks to the digitization of formerly paper documents; increased use of voice, video, and other rich media; the Internet; and regulations requiring years of data and file retention. Many businesses have seen their storage requirements double and triple year after year. They need efficient ways to store and share much larger volumes of data without busting their budget or hiring an IT department.

How to Protect Your Mission Critical Data – As with larger companies, many small businesses rely on being able to access data quickly and efficiently and can barely function without data access even for a few hours. Small businesses need to find a low cost way to backup and protect their data.

How to Reduce or Eliminate Computer Downtime – Small businesses increasingly partner with larger enterprises in a global environment, or work with customers across time zones. They need simple, low cost, efficient ways to keep their data accessible on a 24 by 7 basis without sacrificing backup and server maintenance.

How to Fulfill Stringent Audit and Regulatory Requirements – It’s not just large businesses that are affected by audits and federal regulations such as HIPAA and the Sarbanes-Oxley Act. Many smaller businesses, such as law and medical practices, need simple, workable strategies for storing and protecting sensitive data with a level of effectiveness and sophistication equivalent to that of their enterprise counterparts.

How to Use Storage Effectively in a Virtual Server Environment – Even as businesses take advantage of the Web and server-based applications such as email, shared calendars, and CRM (customer relationship management), not many small businesses have migrated to server virtualization yet, due to the cost of paying a professional for the design, build, configuration, deployment, data migration, and other highly technical tasks required to get the job done right. The long-term benefits of a virtualized server environment for a small to medium-sized organization are savings in electricity and physical footprint space.

While these issues can seem daunting to small organizations with a limited IT budget, the good news is that Central Florida Computer Engineering has extensive experience designing, deploying and supporting the IT and data storage needs of small and medium sized businesses all over Central Florida.

Posted in Computers, Data Storage, Hardware, Servers

How to unlock and delete hidden EISA partitions on most name-brand computers

Did you know that almost all major computer vendors, like DELL, HP, IBM, GATEWAY, ACER, ASUS, SONY, and Toshiba, ship their pre-built computer systems and laptops with a special, hidden EISA hard drive partition that contains a system recovery utility and/or diagnostic tools to restore the computer back to its factory-default, out-of-the-box configuration?

These hidden EISA partitions are not really hidden, because you can see them. They are usually formatted with the FAT or NTFS file system, but you cannot assign them a drive letter, format them, or even delete them like you can with other hard drive partitions. These hidden partitions usually take up several gigabytes, sometimes more than 10 GB, of storage space.

Here is a screen shot showing the commands I typed to delete the partitions on DISK 2.
Notice how this hard drive has a 10 GB EISA partition.
That is 10 GB of space I don’t want to waste, so as you can see below, I used the command prompt and Microsoft’s hard drive partitioning tool, diskpart, to delete this EISA partition. I will delete the other partition when I install the OS: since I am going to reinstall the operating system clean, without the garbage software major computer vendors cram onto their computers, I will destroy all the partitions and install only the specific software I choose to use on this particular computer.

Here are the written step-by-step instructions so you can compare them to the screen shot above and understand what each command means or does. I even highlighted the commands in bold so they are that much easier to identify.
1. Open the command prompt
2. Type diskpart (press Enter)
3. Now type list disk (press Enter)
4. Now type select disk 2 (press Enter; substitute the number of your disk from the list disk output)
5. Now type list part (press Enter)
6. Now type select part 1 (press Enter; substitute the number of the EISA partition from the list part output)
7. Now type delete part override (press Enter)
8. Now type exit

Now the EISA partition is gone and you can take advantage of the freed up hard drive space.
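
If you need to repeat this on several machines, diskpart can also read its commands from a script file using diskpart /s. Here is a sketch using the same commands as the steps above; the file name is just an example, and you should double-check the disk and partition numbers on each machine before running it:

    rem delete-eisa.txt - verify the numbers with list disk / list part first!
    select disk 2
    list part
    select part 1
    delete part override

Then run it from the command prompt with: diskpart /s delete-eisa.txt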

Posted in Computer Repair, Computers, Data Storage, Hard Drives, Hardware, How To's

