
4 steps to preventing server downtime

Eliminating potential single points of failure is a time-tested strategy for reducing the
risk of downtime and data loss. Typically, network administrators or computer consultants do this by introducing redundancy into the application delivery infrastructure, and by automating the monitoring and
correction of faults to ensure a rapid response to problems as they arise. Most leading
companies adopting best practices for protecting critical applications and data also
consider the potential failure of an entire site, establishing redundant systems at
an alternative site to protect against site-wide disasters.

STEP #1 – PROTECT AGAINST SERVER FAILURES WITH QUALITY HARDWARE. Don't be a cheapskate with your own business by using low-quality server and network hardware. Use HIGH-quality hardware.

Unplanned downtime can be caused by a number of different events, including:

• Catastrophic server failures caused by memory, processor or motherboard faults

• Server component failures, including power supplies, fans, internal disks,
disk controllers, host bus adapters and network adapters

Server core components include power supplies, fans, memory, CPUs and main logic
boards. Purchasing robust, name-brand servers, performing recommended
preventative maintenance, and monitoring server errors for signs of future problems
can all help reduce the chances of downtime due to catastrophic server failures.

You can reduce downtime caused by server component failures by adding
redundancy at the component level. Examples include redundant power and cooling,
ECC memory with the ability to correct single-bit memory errors, teamed Ethernet
adapters, and RAID disk arrays.


Storage protection relies on device redundancy combined with RAID storage
algorithms to protect data access and data integrity from hardware failures. There are
distinct issues for both local disk storage and for shared, network storage.

For local storage, it is quite easy to add extra disks configured with RAID protection.
A second disk controller is also required to prevent the controller itself from being a
single point of failure.
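As a rough illustration of why duplicating disks and controllers pays off, the math can be sketched in a few lines. The 99% per-component availability figure below is a made-up example, not a vendor specification, and the model assumes components fail independently:

```python
# Rough illustration of why redundancy helps: if each disk (or
# controller) is independently available 99% of the time, the chance
# that ALL copies are down at once shrinks geometrically with each
# extra copy. The 0.99 figure is purely illustrative.

def redundant_availability(component_availability: float, copies: int) -> float:
    """Availability of a set of redundant components, assuming
    independent failures: 1 - P(every copy is down at once)."""
    return 1.0 - (1.0 - component_availability) ** copies

single = redundant_availability(0.99, 1)    # one disk, no protection
mirrored = redundant_availability(0.99, 2)  # e.g. a RAID-1 mirror pair
print(f"single disk: {single:.4f}, mirrored pair: {mirrored:.4f}")
```

With these example numbers, mirroring turns a 1-in-100 outage chance into roughly 1-in-10,000, which is the intuition behind adding the second disk and second controller.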

Access to shared storage relies on either a fibre channel or Ethernet storage network.
To assure uninterrupted access to shared storage, these networks must be designed
to eliminate all single points of failure. This requires redundancy of network paths,
network switches, and network connections to each storage array.


The network infrastructure itself must be fault-tolerant, consisting of redundant
network paths, switches, routers and other network elements. Server network connections can
also be duplicated so that the failure of a single network component does not take the
server offline.

Take care to ensure that the physical network hardware does not share common
components. For example, dual-ported network cards share common hardware logic,
and a single card failure can disable both ports. Full redundancy requires either two separate adapters or the combination of a built-in network port along with a separate network adapter.


The reasons for site failures can range from an air conditioning failure or leaking roof
that affects a single building, to a power failure that affects a limited local area, to a
major hurricane that affects a large geographic area. Site disruptions can last
anywhere from a few hours to days or even weeks.

There are two methods for dealing with site disasters. One method is to tightly couple
redundant servers across high speed/low latency links, to provide zero data-loss and
zero downtime. The other method is to loosely couple redundant servers over
medium speed/higher latency/greater distance lines, to provide a disaster recovery
(DR) capability where a remote server can be restarted with a copy of the application
database missing only the last few updates. In the latter case, asynchronous data
replication is used to keep a backup copy of the data.

Combining data replication with error detection and failover tools can help get a
disaster recovery site up and running in minutes or hours, rather than days.
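A toy sketch of the asynchronous approach described above, using only the Python standard library: writes are acknowledged locally right away and shipped to the "remote" copy in the background, which is exactly why a disaster can lose the last few in-flight updates. Real replication products additionally handle ordering guarantees, batching, and conflict resolution, so this is only a conceptual model:

```python
# Toy model of asynchronous data replication: the primary copy is
# updated synchronously, while a background thread drains a queue of
# pending updates to the replica. Anything still in the queue when
# disaster strikes is lost.

import queue
import threading

class AsyncReplicator:
    def __init__(self):
        self.primary = []             # local copy: updated synchronously
        self.replica = []             # remote copy: updated in background
        self._pending = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def write(self, record):
        self.primary.append(record)   # acknowledged to the app immediately
        self._pending.put(record)     # shipped to the replica later

    def _drain(self):
        while True:
            record = self._pending.get()
            self.replica.append(record)
            self._pending.task_done()

    def flush(self):
        self._pending.join()          # wait for the replica to catch up

repl = AsyncReplicator()
for i in range(100):
    repl.write(f"update-{i}")
repl.flush()
print(len(repl.primary), len(repl.replica))
```

The gap between `primary` and `replica` before `flush()` is the potential data loss window; synchronous (tightly coupled) replication closes that window at the cost of requiring high-speed, low-latency links.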

Posted in Computer Repair, Computers, Data Backups, Data Storage, Hard Drives, Hardware, High Availability, How To's, RAID Levels, Servers


Unplanned server and network downtime can be caused by a number of different events:

• Catastrophic server failures caused by memory, processor or motherboard faults

• Server component failures including power supplies, fans, internal disks,
disk controllers, host bus adapters and network adapters

• Software failures of the operating system, middleware or application

• Site problems such as power failures, network disruptions, fire, flooding or
natural disasters

To protect critical applications from downtime, you need to take steps to protect
against each potential source of downtime.


Posted in Computer Repair, Computers, Data Backups, Data Recovery, Data Storage, Hard Drives, High Availability, Memory, Motherboards, Networking, Servers


A failure of a critical Microsoft Windows application can lead to two types of losses:

• Loss of the application service – the impact of downtime varies with the
application and the business. For example, for some businesses, email can
be an absolutely business-critical service that costs thousands of dollars a
minute when unavailable.

• Loss of data – the potential loss of data due to an outage can have
significant legal and financial impact, again depending on the specific type of
data and business involved.

In determining the impact of downtime, you must understand the cost to your
business in downtime per minute or hour. In some cases, you can determine a
quantifiable cost (orders not taken). Other, less direct costs may include loss of
reputation and customer churn.

The loss of production data can also be very costly, for a variety of reasons. In the
manufacturing environment, the loss of data could affect compliance with regulations,
leading to wasted product, fines, and potentially hazardous situations. For example, if
a pharmaceutical company that is manufacturing drugs does not show all of the
records of its collected data from the manufacturing process, the FDA could force the
company to throw away its entire batch of drugs. Because it is critical to know the
value for every variable when manufacturing drugs, the company could face fines for
not complying with FDA regulations.

Publicly-traded companies may need to ensure the integrity of financial data, while
financial institutions must adhere to SEC regulations for maintaining and protecting
data. For monitoring and control software, data loss and downtime interrupts your
ability to react to events, alarms, or changes that require immediate corrective action.

The bottom line: downtime is very expensive, and preventing downtime should be a top priority in any business operation.

Posted in Computer Repair, Computers, Data Backups, Hardware, High Availability, Networking, Servers

The Art of High Availability

All organizations are becoming increasingly reliant upon their computer systems. The
availability of those systems can be the difference between the organization succeeding
and failing. A commercial organization that fails is out of business with the consequences
rippling out to suppliers, customers, and the community.

This series will examine how we can configure our Windows Server 2008 environments to
provide the level of availability our organizations need. The topics we will cover include:

• The Art of High Availability—What do we mean by high availability? Why do we
need it, and how do we achieve it?

• Windows Server 2008 Native Technologies—What does Windows Server 2008
bring to the high-availability game, and how can we best use it?

• Non-Native Options for High Availability—Are there other ways of achieving high
availability, and how can we integrate these solutions into our environments?

The first question we need to consider is why we need highly available systems.

Why Do We Need It?
This question can be turned on its head by asking “Do all of our systems need to be highly
available?” The answer for many, if not most, organizations is no. The art of high
availability comes in deciding which systems need to be made highly available and how this
is going to be achieved. When thinking about these systems, we need to consider the effects
of the systems not being available.

Downtime Hurts
Downtime is when the computer system is unavailable to the user or customer and the business
process cannot be completed. If the server is up and the database is online but a network
problem prevents access, the system is suffering downtime. Availability is an end-to-end
activity. Downtime hurts in two ways: first, if a system is unavailable, the business process it
supports cannot be completed and there is an immediate loss of revenue. This could be due to:

  • Customer orders not being placed or being lost
  • Staff not working
  • Orders not being processed

The second way that downtime hurts is loss of reputation. This loss can be even more
damaging in the long term if customers decide that your organization cannot be trusted to
deliver and they turn to a competitor. The ability to gain business increases with ease of
communication and access. The converse is that the ability to lose business increases just
as fast if not faster.

Mission Critical Systems on Microsoft Windows
Many critical business systems are hosted on the Microsoft Windows platform. These can be customer
facing or internal, but without them, the business grinds to a halt. Email may not seem to be
a critical system, but it is essential to the modern business. More than 60% of person-to-person
communication is via email in most businesses. This includes internal and external
communications. If a company is non-responsive to communications, it is judged, perhaps
harshly, as being out of business. This can become reality if it progresses too long.

24 × 7 Business Culture
The “Global Village” concept has been accelerated by the adoption of the Internet for
business purposes. Globalization in this case means that business can come from anywhere
in the world—not necessarily your own time zone. If your business competes at this level,
high availability isn’t an option, it’s a necessity.

Industries such as the financial services and health sector have a requirement to protect
the data they store. This requirement can involve the availability of the data. In other cases,
the systems must be highly available to meet safety requirements.

Once you know why you need it, you need to define what is meant by high availability.

What Is High Availability?
High availability is usually expressed in terms of a number of “9”s. Four nines is 99.99%
availability. The ultimate goal is often expressed as 5 “9”s availability (99.999%), which
equates to five and a quarter minutes of downtime per year. The more nines we need, the
greater the cost to achieve that level of protection.
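The arithmetic behind the nines can be checked in a few lines. This sketch simply converts an availability percentage into permitted downtime per year; note it recovers the "five and a quarter minutes" figure for five nines:

```python
# Convert "a number of nines" of availability into the downtime
# budget per year. 99.999% (five nines) leaves roughly 5.26 minutes.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_minutes_per_year(nines: int) -> float:
    availability = 1.0 - 10.0 ** (-nines)
    return MINUTES_PER_YEAR * (1.0 - availability)

for n in (3, 4, 5):
    print(f"{n} nines: {downtime_minutes_per_year(n):.2f} minutes/year")
```

Each extra nine shrinks the downtime budget by a factor of ten, which is why each additional nine costs disproportionately more to achieve.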

One common argument concerns scheduled downtime. If downtime is scheduled, for example, for
application of a service pack, does that mean the system is unavailable? If the system is
counted as unavailable, any Service Level Agreements (SLAs) on downtime will probably
be broken. In hosting or outsourcing scenarios, this could lead to financial penalties.
However, if scheduled downtime doesn’t mean the system is counted as unavailable,
impressive availability figures can be achieved—but are they a true reflection of
availability to the users? There is no simple answer to these questions, but all systems
require preventative maintenance or they will fail. The disruption to service can be
minimized (for example, by patching the nodes of a cluster in sequence) but cannot be
completely eliminated. Probably the best that can be achieved is to ensure that
maintenance windows are negotiated into the SLA.

These measurements are normally taken against the servers hosting the system. As we
have seen, the server being available doesn’t necessarily mean the system is available. We
have to extend our definition of highly available from protecting the server to also include
protecting the data.

The Server Clustering Service built in to Microsoft Windows is often our first thought for protecting the
server. In the event of failure, the service automatically fails over to a standby server, and
the business system remains available. However, this doesn't protect the data: a
failure in the disk system, or even a network failure, can still make the system unavailable.

Do We Still Need to Back Up Our Server and Data?
One common question is “Do I still need to take a backup?” The only possible answer is yes.
High availability is not, and never can be, a substitute for a well-planned backup
regimen. Backup is your ultimate “get out of jail” card. When all else fails, you can always
restore from backup. However, this presupposes a few points.

  • Test restores have been performed against the backup media. The last place you
    want to be is explaining why a business?critical system cannot be restored because
    the tapes cannot be read.
  • A plan exists to perform the restore that has been tested and practiced. Again, you
    don’t want to be performing recoveries where the systems and steps necessary for
    recovery are not understood.

Backup also forms an essential part of your disaster recovery planning.

Disaster Recovery vs. High Availability
These two topics, high availability and disaster recovery, are often thought of as being the
same thing. They are related but separate topics. High availability can be best summed up
as “keeping the lights on.” It is involved with keeping our business processes working and
dealing with day-to-day issues. Disaster recovery is the process and procedures required to
recover the critical infrastructure after a natural or man-made disaster. The important
point of disaster recovery planning is restoring the systems that are critical to the business
in the shortest possible time.

Traditionally, these are two separate subjects, but the technologies are converging. One
common disaster recovery technique is replicating the data to a standby data center. In the
event of a disaster, this center is brought online and business continues. There are some
applications, such as relational database systems and email systems, that can manage the
data replication to another location. At one end of the scale, we have a simple data
replication technique with a manual procedure required to bring the standby data online in
place of the primary data source. This can range up to full database mirroring, where
transactions are committed to both the primary and mirror databases and failover to the
mirror can be automatically triggered in the event of applications losing access to the
primary. In a geographically dispersed organization where systems are accessed over the
WAN, these techniques can supply both high availability and disaster recovery.

We have seen why we need high availability and what it is. We will now consider how we
are going to achieve the required level of high availability.

Achieving High Availability
When high availability is discussed, the usual assumption is that we are talking about
clustering Windows systems. In fact, technology is one of three areas that need to be in
place before high availability works properly:

  • People
  • Processes
  • Technology

People and Processes
These are the two points that are commonly overlooked. I have often heard people say that
clustering is hard or that they had a cluster for the application but still had a failure. More
often than not, these issues come down to a failure of the people and processes rather than
the technology.

The first question that should be asked is “Who owns the system?” The simple answer is
that IT owns the system. This is incorrect. There should be an established business owner
for all critical systems. They are the people who make decisions regarding the system from
a business perspective—especially decisions concerning potential downtime. A technical
owner may also be established. If there is no technical owner, multiple people try to make
decisions that are often conflicting. This can have a serious impact on availability.
Ownership implies responsibility and accountability. With these in place, it becomes
someone’s job to ensure the system remains available.

A second major issue is the skills and knowledge of the people administering highly
available systems. Do they really understand the technologies they are administering?
Unfortunately, the answer is often that they don’t. We wouldn’t make an untrained or
unskilled administrator responsible for a mainframe or a large UNIX system. We should
ensure the same standards are applied to our highly available Windows systems. I once
worked on a large Exchange 5.5 to Exchange 2003 migration. This involved a number of
multi-node server clusters, each running several instances of Microsoft Exchange. One of the Exchange
administrators asked me “Why do I need to know anything about Active Directory?” Given
the tight integration between Exchange and Active Directory (AD), I found this an
incredible question. This was definitely a case of an untrained and unskilled administrator.

Last, but very definitely not least, we need to consider the processes around our high availability
systems. In particular, two questions need to be answered:

  • Do we have a change control system?
  • Do we follow it?

If the answer to either of these is no, our system won’t be highly available for very long. In
addition, all procedures we perform on our systems should be documented and tested.
They should always be performed as documented.

Technology will be the major focus of the next two articles, but for now, we need to
consider the wider implications of high availability. We normally concentrate on the
servers and ensure that the hardware has the maximum levels of resiliency. On top of this,
we need to consider other factors:

  • Network—Do we have redundant paths from client to server? Does this include
    LAN, WAN, and Internet access?
  • Does the storage introduce a single point of failure?
  • Has the operating system (OS) been hardened to the correct levels? Is there a
    procedure to ensure it remains hardened?
  • Does our infrastructure in terms of AD, DNS, and DHCP support high availability?
  • Does the application function in a high-availability environment?

Highly available systems explicitly mean higher costs due to the technology and people we
need to utilize. The more availability we want, the higher the costs will rise. A business
decision must be made regarding the cost of implementing the highly available system
when compared against the risk to the business of the system not being available.

This calculation should include the cost of downtime internally, together with the potential loss
of business and reputation. When a system is unavailable and people can't work, the final
costs can be huge, leading to the question “We lost how much?”
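A back-of-the-envelope version of that calculation might look like the sketch below. Every figure in it is a placeholder for illustration, not real data; substitute your own revenue, staffing, and churn estimates:

```python
# A simple outage cost model: direct revenue loss, plus the cost of
# idle staff, plus an estimated reputation/churn penalty. All numbers
# used in the example call are placeholders.

def downtime_cost(hours: float,
                  revenue_per_hour: float,
                  idle_staff: int,
                  loaded_rate_per_hour: float,
                  reputation_penalty: float = 0.0) -> float:
    """Estimated total cost of a single outage."""
    return (hours * revenue_per_hour
            + hours * idle_staff * loaded_rate_per_hour
            + reputation_penalty)

# Example: a 4-hour outage, $2,000/hour in lost orders, 25 idle staff
# at a $40/hour loaded cost, plus an estimated $5,000 churn penalty.
print(downtime_cost(4, 2000, 25, 40, 5000))  # 8000 + 4000 + 5000 = 17000
```

Comparing this figure against the price of the redundant hardware, software, and skills needed is exactly the business decision described above.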

You need high-availability solutions to ensure your business processes keep functioning. This ensures
your revenue streams and your business reputation are protected. We help you achieve high availability
through the correct mixture of people, processes, and technology.

Posted in Computers, Data Backups, Data Storage, Hardware, High Availability, Servers

What is a Microsoft Small Business Server? and do you need one for your organization?

What is a Microsoft Small Business Server?

What is the difference between a Small Business Server and a single role server?

Here is a simple, non-technical explanation of what a Microsoft Small Business Server is and is not.

After reading this article you will have a better understanding, so let's get started.

Larger companies, such as Fortune 500 or Fortune 100 companies, have many servers that each do different things.
Examples are:

  • Multiple Domain controllers / file servers
  • Multiple SQL / database servers
  • Multiple Exchange servers
  • Multiple web servers
  • Multiple DHCP servers
  • and so forth…

Let’s pretend that “some big company” has 40 servers and each server has its own role, doing something specific for the computer network. In theory this would mean that this company has 40 separate physical servers set up in a room to control the computers for this company. In today’s world this would be consolidated using server virtualization, but that is getting off topic, so I’m not going to get into it in this article.

Now let’s pretend you are a small business owner and you need a file server + a SQL database server + an Exchange server. This means you would need 3 physical servers + 3 different server operating system licenses and many other things, and this can get expensive quickly, not to mention an experienced network administrator to design, configure, deploy, test and manage it all for you.

Now with a Microsoft Small Business Server operating system you get 1 physical server that has multiple server roles built into 1 nice neat package. So you can have that file server and that database server and an Exchange server and that web server all combined into 1 neat little package. This can save the small business owner money IF the server is properly configured and maintained.

Microsoft states that SBS – Small Business Server – will support up to 75 computer users / workstation computers. In theory this will work, but in the real world, if you have 75 computers connected to an SBS server, you can expect very poor performance.

From my experience I will say that Microsoft SBS servers are pretty cool IF they are properly configured with the right hardware and software. I have seen many small businesses with an SBS server that was NEVER configured correctly or is just being used as a simple file server. In such a case the SBS server isn’t necessary and is a waste of money for the business owner.

So, without getting into technical details, that concludes what a Microsoft Small Business Server does.

If you are thinking about purchasing a new server for your business get to know an Orlando computer consultant and find out if a Microsoft Small Business Server will benefit your organization.

Posted in Computers, Data Storage, Microsoft, Operating Systems, Servers, What is?

Your server or servers are running out of storage space! What should you do?

So let’s say you’ve decided to take your business seriously and spend the money needed for a quality server. You may be using a file server to share files and printers, or you may use it to run Microsoft Exchange for shared calendars and email, host a database for your company, or run a CRM – Customer Relationship Management – application. Perhaps you have two or three servers running a combination of these, and each has its own backup system (as each should).

What is likely to happen over time?

Storage inefficiency – You may find that one server, perhaps your file server, is constantly running out of storage space, while another server always seems to have storage space to spare but no easy way to share it. This is a very inefficient scenario and the biggest reason why a DAS (Direct Attached Storage) solution is ultimately inefficient for growing small businesses.

Management headaches – Most DAS solutions have their own proprietary management software and interfaces and are not easy to manage remotely. You may find yourself with multiple different DAS solutions, each with its own management quirks and annoyances.
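A minimal sketch of keeping an eye on the "file server keeps running out of space" problem, using only the Python standard library. The paths and the 10% threshold below are placeholders you would adjust for your own servers:

```python
# Warn when any monitored volume drops below a free-space threshold.
# Paths and the 10% default threshold are illustrative placeholders.

import shutil

def low_space_report(paths, min_free_fraction=0.10):
    """Return (path, free_fraction) pairs for volumes below the threshold."""
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)
        free_fraction = usage.free / usage.total
        if free_fraction < min_free_fraction:
            alerts.append((path, free_fraction))
    return alerts

# Example: check the root volume; in practice you'd list each server share.
for path, free in low_space_report(["/"]):
    print(f"WARNING: {path} is down to {free:.0%} free")
```

Spotting the imbalance is the easy part; the consolidation approach described next is what actually fixes it.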

Consolidate Your Data

As with PCs, the answer to server overload is to consolidate your storage, unchain it from the server, and place it on the network where it can be shared among multiple servers and PCs. Why?

It’s efficient – You get a shared pool of networked storage that you can slice, dice, and allocate to users, applications, and servers at will. No more overloaded servers sitting next to servers with storage to spare.

It’s easy to upgrade – You no longer have to shut down your server and its applications to upgrade your storage. You can add storage to the network and make it instantly available without affecting your applications.

When it’s time to upgrade your servers, it’s no longer necessary to throw out the storage with the server or spend the time migrating data to another server. You simply connect the new server to the network and configure it for access to your network storage. This isn’t always the case, depending on what your server is hosting, but more often than not this is a good solution for many small and medium-sized businesses.

It’s cost effective – Storage makes up a significant portion of your server’s price and internal space. Separate storage onto the network and you can spend fewer dollars on servers, or buy more server performance and reliability for your dollar. You can also pack more servers into a smaller space if that’s what you need to do, taking advantage of compact rack-mount servers or even blade servers. Just don’t forget to keep your server closet or room COOL with air conditioning.

You have two choices for network storage: a SAN and a NAS.

Storage Area Network (SAN)

Storage Area Networks (SANs) separate storage from your servers and put it on its own specialized high-performance storage network, where it can be pooled and allocated to servers and applications. When a server runs out of storage, you simply allocate more storage from the SAN rather than taking down the server to add physical storage.


Network Attached Storage (NAS)

Nothing beats the simplicity of NAS for fulfilling the needs of a typical small business. A NAS device sits directly on the network and, like a server, serves up files, not storage blocks. There are many advantages to NAS as a small business storage solution.

Independence – NAS devices can sit anywhere on the network, completely independent of servers, serving up files to any network-connected PCs or servers. If a server or PC goes down, the NAS is still functional. If power goes down, there’s no need for complex reconfiguration; with its simplicity, a NAS can be up and running again in minutes.

Ease of Use – NAS devices typically come as preconfigured turnkey solutions. There’s no need to install a host adapter or complex server operating system. You simply plug the NAS into the network and do some very light configuration, usually with a Web browser, and your NAS is up and running and accessible to your PCs.

Easy Upgrades – To get more storage with NAS you simply plug in another NAS device and you’re up and running with additional file storage in minutes.

Flexibility – Today some NAS solutions also come with some built-in iSCSI capability, which can provide fast block-based storage to performance-hungry server applications that need it, while still allowing you to share and print files. In some cases you don’t even need a switch or special host adapter. You simply plug your server directly into the iSCSI port on the NAS. So you get the best of both worlds in a single easy to use and configure device.

Posted in Computer Repair, Computers, Data Storage, Hard Drives, Hardware, Servers

Your office has Microsoft Windows workstations, your tech guy likes Linux, and your graphics guy uses a Mac. How can you share files easily between the 3 different computer platforms?

While most small businesses operate solely on Microsoft Windows based computers or Mac computers, it’s not uncommon for a small business to have a combination of two or three of these platforms. Perhaps your business has 5 to 10 or more Windows based PCs and one or two Macs used by your graphic or media artists. Or perhaps your boss has a Mac at home and wants to use one at work as well. You may even have one techie person in your company who loves Linux or is using a Linux based PC for a particular specialized application or project, or perhaps your tech guy absolutely hates Microsoft and wants to stick with Linux.

How can your business share its files with these various computing platforms?

Here are some common ways to do this.

  • You can configure one or more of these different PCs to share files with the others.
  • You can install a Windows or Mac OS X based server on your network and then configure it to support all three client operating systems. In fact, all of these server operating systems support client PCs running other operating systems. However, configuring them takes a good amount of technical expertise and troubleshooting time. It’s no secret that Windows servers tend to favor Windows clients and Mac servers tend to favor Mac clients.

That’s why businesses looking to share files and printers among Mac, Windows, and Linux based PCs often take a third option: Network Attached Storage (NAS).

NAS devices are a simple way to add storage to your network and share files with many different types of clients. You simply plug a NAS device directly into the network, do some simple Web based configuration, and you are up and running in minutes.

If you’re working with a mix of client operating systems on the network, you should make sure your NAS comes with built-in support for the following file sharing protocols:

Network File System (NFS), a file sharing protocol commonly used by Unix and Linux based PCs.

Common Internet File System (CIFS), also known as Server Message Block, a file sharing protocol commonly used in Windows-based networks.

Bonjour, a protocol used by Mac OS/X computers to discover printers and other computers and their services on the network.

Apple Filing Protocol (AFP), a file services protocol used by the Macintosh OS and OS X.

Many NAS products support most or all of these protocols, which makes it very easy to connect all of your Macs, Windows PCs, and Unix/Linux systems to share files and NAS attached printers. Very little configuration is needed; they all just work.
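One quick way to see which of these file-sharing services a NAS actually answers on is to probe the well-known TCP ports: 445 for SMB/CIFS, 2049 for NFS, and 548 for AFP. (Bonjour is a UDP-based discovery protocol, so it isn't covered by a simple TCP probe.) The NAS address in the commented example is a placeholder, not a real device:

```python
# Probe the well-known TCP ports for common NAS file-sharing
# protocols. A True result means the port accepted a connection,
# suggesting the service is enabled on the device.

import socket

PROTOCOL_PORTS = {"SMB/CIFS": 445, "NFS": 2049, "AFP": 548}

def probe_nas(host: str, timeout: float = 1.0) -> dict:
    """Map protocol name -> True if the TCP port accepted a connection."""
    results = {}
    for name, port in PROTOCOL_PORTS.items():
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            results[name] = sock.connect_ex((host, port)) == 0
        finally:
            sock.close()
    return results

# Example with a placeholder address for your NAS:
# print(probe_nas("192.168.1.50"))
```

An open port only shows the service is listening; share permissions and exports still have to be configured on the NAS itself.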

Posted in APPLE, Computers, Data Storage, Linux, Microsoft, Servers

What is CLOUD computing?

What exactly is CLOUD computing? What is CLOUD hosting?

Maybe you overheard someone mention CLOUD computing or CLOUD hosting and now you are curious?
Maybe somebody you know uses the CLOUD and now you want to know more about CLOUD computing or CLOUD hosting? Whatever the reason may be, you want to know more about this hot topic, and we are going to discuss the facts and the myths about CLOUD computing and CLOUD hosting.

So what exactly is CLOUD computing, and how does it work?
Are there any benefits to CLOUD computing, or is it just a bunch of hype?
These are all good questions, so let's get started.

The name “CLOUD” is just a nickname or slang for the internet.

CLOUD computing is another name for Virtual Server, Virtual Hosting, Virtual Private Server or Server Consolidation. As you can see, this one technology has many different nicknames.
Ok, so what is a virtual server or virtual web hosting anyway?

First, let's make sure you know what a server or web server / web hosting is.
A server is a special computer used to control a network, handle access control, store data and files, host email, host websites, and provide many other services or roles. A web server (web hosting) is a special computer, usually stored in a data center, which stores your website and must remain connected to the internet 24/7 in order for your website to be visible 24/7.
Most small businesses rent or lease web hosting and pay a monthly fee to host their website.

Years ago, before CLOUD computing (aka server virtualization) came along, one physical server would host one single website. So if you had 3 websites, you would need 3 physical servers. Renting 3 web servers is not cheap, especially for the small business owner.

Now let's pretend you have 3 servers in your office for your business network, and each of your servers is doing something different: server #1 is your office file server or primary domain controller, server #2 hosts your company's database, and server #3 hosts something else for you.

So now you have a basic understanding of what a server does.
Remember, this is a simple explanation of CLOUD computing, not a technical how-to guide.
Detailed information on how to design, configure, and manage virtual servers is a long and complex topic in itself.

Server virtualization allows you to combine several servers into one physical server.
So instead of having the 3 physical servers in your office, you can have everything combined on 1 physical server using server virtualization technology. Imagine 1 physical server that has several operating systems running at the same time, just as if they were different physical servers. You can reboot one virtual server without rebooting all the virtual servers running on the same physical server. This can also be described as server consolidation.
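The idea can be sketched in a few lines of toy Python. This is not a real hypervisor, just a model of the concept: several guests on one physical box, each of which can be rebooted on its own (and all of which depend on that one box staying powered on):

```python
# A toy model of server consolidation, not a real hypervisor:
# one physical host runs several independent virtual servers.

class VirtualServer:
    def __init__(self, name):
        self.name = name
        self.running = True

    def reboot(self):
        # Rebooting one guest does not touch its neighbors.
        self.running = False
        self.running = True

class PhysicalHost:
    def __init__(self):
        self.guests = []

    def add_guest(self, name):
        vm = VirtualServer(name)
        self.guests.append(vm)
        return vm

    def power_failure(self):
        # If the one physical box dies, every guest goes with it
        # (the single point of failure discussed under the cons).
        for vm in self.guests:
            vm.running = False

host = PhysicalHost()
files = host.add_guest("file-server")
db = host.add_guest("database")
web = host.add_guest("web-server")

db.reboot()                                   # only the database guest restarts
print(all(vm.running for vm in host.guests))  # True
```

The three guest names simply mirror the three example office servers above; in a real deployment each guest would be a full operating system image managed by hypervisor software.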

What are the benefits of CLOUD computing, CLOUD hosting, virtual servers, and server consolidation?
You get the idea: there are many names for this.

Here are some of the many benefits:

Cost savings
One physical server costs less than several physical servers.
It also costs less to share a physical server with others than to pay to rent an entire web server just for yourself.

Physical footprint – physical space
One physical server uses less physical space than several physical servers.

One server uses less electricity than several servers.
One server does not generate as much heat as several servers.
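To put rough numbers on the electricity savings, here is a back-of-the-envelope calculation in Python. All of the wattages and the electricity rate are assumptions for illustration, not measurements from any particular hardware:

```python
# Back-of-the-envelope power savings from consolidation.
# Every number below is an illustrative assumption.

WATTS_PER_SERVER = 300      # assumed average draw of one modest physical server
CONSOLIDATED_WATTS = 450    # assumed draw of one beefier host running 3 VMs
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.12         # assumed electricity rate in dollars per kWh

def annual_cost(watts):
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * RATE_PER_KWH

three_servers = annual_cost(3 * WATTS_PER_SERVER)
one_host = annual_cost(CONSOLIDATED_WATTS)
print(f"Yearly savings: ${three_servers - one_host:.2f}")  # Yearly savings: $473.04
```

Even with these made-up figures, the pattern holds: one consolidated host draws less power (and throws off less heat, which also lowers cooling costs) than several lightly loaded boxes.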

What are the cons? Are there any? One that comes to mind is hardware failure.
If the server physically shuts off, whether it be a blown power supply or a physical hard drive failure, you can count on all the virtual servers shutting down, because remember: the virtual servers are all running on the one physical server.

Before you choose whether or not CLOUD computing is right for you, your website, or your office environment, consult with a professional network administrator (a computer and network expert) who has extensive real-world experience with virtual servers, AKA CLOUDS. A CLOUD or virtual server may or may not be beneficial to you. There are many technical and financial factors that go into deciding whether or not a virtual server will benefit your business or organization.

We know of one local company, located in Orlando, Florida, that has been specializing in CLOUD computing and virtual hosting since 2004.

Infinitum Technologies, Inc. offers turnkey virtual web servers and custom-designed CLOUD hosting systems, dedicated servers, server clusters, and all kinds of custom server systems and configurations to suit a wide variety of needs, from hosting websites, email, and databases to remote backup solutions and much more. Infinitum Technologies is really a pioneer in virtual hosting technology and custom virtualized server systems. There are a gazillion hosting companies out there that resell or market CLOUD hosting as the new hot technology to have, when in fact this technology isn't new; the only thing new about it is the name CLOUD computing.

We provided professional support to Infinitum Technologies when they expanded and set up another network in Orlando, Florida. We were there day and night with owners Sean Faircloth and Richard Blundell, and we know firsthand that these guys specialize in CLOUD computing, know what they are doing, and are serious about it.

So there you have it.
Now you know what CLOUD computing is.

Posted in CLOUD Computing, Servers | 0 Comments

Ohh No! My business has grown, my computers and server are running out of storage space, and our computer network is a mess!

Like most small businesses, your technology investments likely started small. You invested in a PC for yourself and a few other desktop computers for staff members. Perhaps you even have a small server. Most likely, each user keeps his or her files on his or her own PC; when someone needs a file, they grab it with a USB flash drive or send it via email. Perhaps you have a shared folder set up so users can share files with each other on your network.

Suddenly your business grew, and what happened? Some of your computers are running out of storage or are not performing very well. Files are scattered across computers, and you find you can no longer keep your data organized. You have different versions of files in different places. Which is the most current, relevant version? In many cases, nobody knows.

With any small business there comes a time when slow, neglected, misconfigured desktop or laptop computers simply don't cut the mustard anymore. That's when it's time to consolidate, centralize, and share file storage across the network. This is when you need a professional network administrator, business computer expert, or consultant to step in and help you.

Why consolidate? Why seek a professional network administrator? There are lots of reasons.

It’s more efficient – PC-based file storage of business-critical data is naturally inflexible, inefficient, and dangerous. Some of your PCs may have huge amounts of storage to spare but no way to share it correctly, while others constantly run out of storage and require repeated internal storage upgrades or the addition of connected external hard drives, which are neither redundant nor a safe place to store critical data. When you centralize and share storage, you get a single storage pool that you can slice, dice, and allocate to users and applications efficiently and easily, without having to add internal or external hard drives to PCs with limited unused storage. Upgrades are less frequent, the storage you have is used much more efficiently, and, if configured correctly, it will be redundant and much safer than storing your company's data on a PC or external hard drive, which will break down and crash sooner or later.
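The "single storage pool" idea can be illustrated with a tiny Python sketch: one pool of capacity, carved up per department on demand, instead of fixed disks stranded inside individual PCs. The departments and sizes below are made up for illustration:

```python
# A sketch of pooled storage: one pool, allocated on demand,
# instead of fixed disks stranded in each PC. Sizes are made up.

class StoragePool:
    def __init__(self, total_gb):
        self.total_gb = total_gb
        self.allocations = {}   # user/department -> GB allocated

    def free_gb(self):
        return self.total_gb - sum(self.allocations.values())

    def allocate(self, user, gb):
        if gb > self.free_gb():
            raise ValueError("pool exhausted - time to add capacity")
        self.allocations[user] = self.allocations.get(user, 0) + gb

pool = StoragePool(total_gb=2000)
pool.allocate("accounting", 500)
pool.allocate("design", 800)     # the big slice goes where it is needed
print(pool.free_gb())            # 700
```

The point of the sketch is the shape of the problem: capacity is tracked and handed out from one place, so a department that needs more simply gets a bigger slice, and an upgrade means growing the one pool rather than opening up individual PCs.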

It’s more organized – When all your files are stored in one place, they're easier to find. It's easier to keep track of which file is the most current. And since you don't have to have multiple versions of the same files spread across the office network, you save on data storage space and prevent unnecessary headaches.

It’s easier to protect – You know your employees should be backing up their files but, really, who does? It's just a matter of time before files are lost with no way to get them back. Put all your storage in one redundant place, and it's easier to implement a single robust backup strategy that's efficient and effective.
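As a rough illustration of what a "single backup strategy" does, here is a minimal Python sketch of an incremental file backup: it copies a folder tree and, on later runs, skips files whose backup copy is already current. A real backup product does far more (versioning, verification, offsite copies), so treat this strictly as a sketch:

```python
import os
import shutil

def backup(src, dst):
    """Copy files from src to dst, skipping files whose backup
    copy is already up to date (a crude incremental backup)."""
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # Copy only if the backup is missing or older than the source.
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)   # copy2 preserves timestamps
                copied.append(name)
    return copied
```

Because everything lives in one place, one function call (or one scheduled job) covers the whole company, instead of hoping each employee remembers to copy their own files.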
OK, so now you know you should consolidate and share storage, but how do you do that?

There are three basic ways:

Direct-Attached Storage (DAS)

Direct-attached storage refers to storage (such as an external hard drive) attached directly to a PC or server. You can share files stored on one of your PCs' hard disks, or buy a server running Microsoft Windows Server or Microsoft Windows Small Business Server and share its internal storage. As discussed earlier, you can also add storage to an internal bay of your server or add external storage via a USB cable. This is not the preferred way and is really cutting corners.
I don't know about you, but I value my data and take protecting it very seriously.

These are viable solutions if you have a high quality backup system in place, but if you haven’t yet made the leap to the world of servers, consider your other options carefully. Why?

Complexity – You have to do some research and investigation to find the right server for your needs. Then you must purchase, install, and configure the hardware and operating system for your network of computer users. If you're new to server technology, this can take a long time, with the potential for a high level of frustration. This is the perfect time to call upon a professional network administrator or computer and network expert to do this for you.

Once your server is installed, its loosely integrated collection of hardware, operating system, and software requires ongoing tuning, troubleshooting, and maintenance. The server operating system and software are likely to require frequent patching and updates for continued security, performance, and, most importantly, business continuity.

Availability – DAS storage can only be accessed through the server or PC to which it is attached. If that server goes down or is turned off for any reason, the storage and data will not be available to the network – computer users.

Upgrades – If you run out of storage, you'll probably have to shut down the server to install a new hard disk. This requires downtime and staff resources. Some servers and external storage solutions let you swap hard disks in and out while the server is up and running, but these tend to be high-end products aimed at medium and large business use.

Performance – The typical server operating system (OS) is designed to run many different applications, provide many different types of services, and carry out many different tasks simultaneously. A full-fledged OS such as Microsoft Small Business Server can have an unnecessary impact on performance if all you really want to do is share files.
A good network administrator (a computer and networking expert) will help you choose the best hardware and software for your specific needs and budget. Avoid the high-pressure, pushy IT sales guy who tries to sell you expensive hardware and software without fully explaining the pros, cons, and different recommended options.
While high quality comes with a price, make sure you understand what's going on before you open your wallet.

Flexibility – You can run into similar inefficiencies with server-attached DAS drives, just as you did with your PC-attached DAS drives. As your business grows and you add more storage capacity to your network, heavily used servers and DAS units will run out of storage frequently, requiring upgrades, and have a higher potential to break down or crash.

Despite these concerns, DAS can be an inexpensive viable solution for many networks, particularly those that also want to run server applications like email, CRM, and other database solutions.

Storage Area Network (SAN)

An alternative to using DAS is to separate storage from your servers and put it on its own specialized, high-performance storage network called a storage area network (SAN). With a SAN, storage is no longer tied to a single server but sits independently on the SAN, where it can be shared, sliced, diced, and allocated to servers, users, and applications from a single pool.

For years, SANs ran on a complex technology called Fibre Channel that was too expensive for small businesses.
Fibre Channel SAN systems are popular in data centers, server farms, and other mission-critical server environments commonly found at Fortune 500 companies, banks, web hosting companies, and other high-end computing environments. However, a newer SAN technology called iSCSI offers very good performance, uses the same equipment as your Ethernet network, and is relatively simple to use.

Like DAS, however, SAN storage uses a low-level, block-based storage architecture that requires a server with an operating system to present files to users. Each server needs its own iSCSI host adapter or initiator software to communicate with the SAN. That's why, if you only intend to share files and printers on your network, a full-fledged SAN can be overkill. SANs are most appropriate where higher network performance is desired.
If you intend to host a database (or perhaps multiple databases), or your computer users share and access large files, then higher performance is going to benefit you.
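The block-versus-file distinction is easier to see in miniature. The toy Python below models a SAN-style block device that only understands numbered blocks, plus the thin file layer a server (or, as discussed next, a NAS) must add on top to turn those blocks into named files. The 4-byte block size and the layout are, of course, made up for illustration:

```python
# Block storage (what a SAN exports) vs. file storage (what a NAS
# serves), sketched in miniature. Block size is made up.

BLOCK_SIZE = 4

class BlockDevice:                      # what the SAN presents: raw blocks
    def __init__(self, num_blocks):
        self.blocks = [b"\x00" * BLOCK_SIZE for _ in range(num_blocks)]

    def write_block(self, n, data):
        self.blocks[n] = data.ljust(BLOCK_SIZE, b"\x00")

    def read_block(self, n):
        return self.blocks[n]

class FileLayer:                        # what a server adds on top
    def __init__(self, device):
        self.device = device
        self.table = {}                 # filename -> (block numbers, size)
        self.next_free = 0

    def write_file(self, name, data):
        nums = []
        for i in range(0, len(data), BLOCK_SIZE):
            self.device.write_block(self.next_free, data[i:i + BLOCK_SIZE])
            nums.append(self.next_free)
            self.next_free += 1
        self.table[name] = (nums, len(data))

    def read_file(self, name):
        nums, size = self.table[name]
        raw = b"".join(self.device.read_block(n) for n in nums)
        return raw[:size]

fs = FileLayer(BlockDevice(num_blocks=16))
fs.write_file("notes.txt", b"hello SAN")
print(fs.read_file("notes.txt"))        # b'hello SAN'
```

A SAN hands out only the `BlockDevice` half; every attached server must run its own `FileLayer` equivalent (a filesystem). That is exactly why pure file-and-print sharing is often simpler with a device that serves files directly.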

Network-Attached Storage (NAS)

Small businesses looking for extra storage to share files and print services should take a close look at network-attached storage (NAS). Like a server, a NAS device sits directly on the network. And like a server, a NAS device serves files, not bare blocks of storage, to users and applications. However, unlike a server, a NAS device does not require installing, configuring, tuning, and updating a server operating system. And unlike a SAN, a NAS doesn't need a separate server to serve up its blocks of data as files. Instead, a NAS comes preconfigured with just the parts of an operating system necessary to serve files to users and applications.

Most NAS devices serve files using either the Network File System (NFS), a file-sharing protocol that originated in the Unix world, or the Common Internet File System (CIFS), the protocol used by Windows to serve files to the user. Many can use both. The growing popularity of Apple desktops and laptops has pushed many network storage devices to also support the Apple Filing Protocol (AFP).

NAS devices have several advantages:

Independence – A NAS can sit anywhere on the network, independent of servers, and serve files to any network-connected computer or server. If a server or PC goes down, the NAS is still functional. If power goes down, there's no need for complex reconfiguration: with its simple architecture and setup, a NAS can be up and running again in minutes, provided there is no major damage to the unit or drives.

Ease of Use – NAS devices typically come as preconfigured, almost turnkey solutions. There's no need to install a host adapter or operating system. You simply plug the NAS into the network and, depending on the user interface, do some very light configuration using a Web browser. There may be a little more configuration to do on the PCs and servers accessing the device, but in most cases you're up and running in minutes. Compared to traditional servers, NAS units require little maintenance, few updates, and little troubleshooting. Whatever administration is necessary can usually be done via a simple Web browser interface.

Easy Upgrades – Adding storage to a server usually requires shutting down the server, replacing a drive or adding a new one, and then booting the server back up. To get more storage with NAS, you simply plug another NAS device into the network and are up and running with additional shared file storage in minutes. Some NAS devices also allow swapping hard drives or adding internal or external storage while they are in operation (commonly known as “hot swap”).

Flexibility – Many NAS devices can easily share their files among Windows, Apple Mac, Unix, and Linux based computers. Some are also flexible enough to be used as a NAS, as DAS for a single server, or as a storage device on a SAN. Many also come with capabilities for sharing printers.

Easy Backup – NAS devices can be a great storage medium for PC-based backups. Many of these devices come with backup software that is easy to configure and use, both for backing up user computers to the NAS and for backing up the NAS to another storage device, tape, or an external backup service. When all your files are in one place, backup is inherently easier than when they are spread around the office. Some NAS devices also come with easy tools for migrating data to the device and replicating data over the network from storage device to storage device.

In summary, depending on the needs of your small business and your technical expertise, you may be best off with a DAS, SAN, or NAS solution. If simple file and print sharing is your goal and your staff has little networking expertise, a NAS is often the best solution. Regardless of which solution you have or are considering, don't skip having a professional network administrator (a computer and networking expert) help you choose the best solution for YOU!

You are special, your business is special, and your data is literally priceless; preserving and protecting it should be taken seriously.

Posted in Computers, Data Storage, Servers | 0 Comments

Data storage solutions for small to medium sized business in and around Orlando Florida.

Not very long ago there was a crystal clear distinction between the data storage needs of small businesses and those of larger companies. Small businesses depended mostly on storage contained in PCs scattered around the office, with maybe a small Windows PC or server used to share files and print services. Most small businesses lack a network administrator or anyone with the expertise necessary to configure a server correctly or maintain even a small network. If files are unavailable for a few hours or even a day, it is an inconvenience, not a disaster. Most large enterprises, by contrast, have complex networking infrastructures, with stringent security and performance requirements maintained by professional network administrators and computer experts.

The line between the storage needs of small and larger businesses has blurred dramatically in the past decade. Today's small businesses and organizations find themselves tackling many of the same storage issues as their larger counterparts, including:

How to Meet Growing Storage Requirements – The storage needs of small businesses have grown dramatically thanks to the digitization of formerly paper documents; increased use of voice, video, and other rich media; the Internet; and regulations requiring years of data and file retention. Many businesses have seen their storage requirements double and triple year after year. They need efficient ways to store and share much larger volumes of data without busting their budget or hiring an IT department.
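To see how quickly "doubling year after year" compounds, here is a one-line projection in Python. The 1 TB starting point is an assumption for illustration:

```python
# If storage needs double every year, capacity planning gets out of
# hand fast. The 1 TB starting point is an illustrative assumption.

start_tb = 1.0
growth_factor = 2.0   # "double ... year after year"

needs = [start_tb * growth_factor ** year for year in range(6)]
print(needs)          # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

Five years of doubling turns 1 TB into 32 TB, which is why ad-hoc PC upgrades stop scaling and pooled, expandable storage becomes worth planning for.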

How to Protect Your Mission Critical Data – As with larger companies, many small businesses rely on being able to access data quickly and efficiently and can barely function without data access even for a few hours. Small businesses need to find a low cost way to backup and protect their data.

How to Reduce or Eliminate Computer Downtime – Small businesses increasingly partner with larger enterprises in a global environment, or work with customers across time zones. They need simple, low cost, efficient ways to keep their data accessible on a 24 by 7 basis without sacrificing backup and server maintenance.

How to Fulfill Stringent Audit and Regulatory Requirements – It's not just large businesses that are affected by audits and federal regulations such as HIPAA and the Sarbanes-Oxley Act. Many smaller businesses, such as law and medical practices, need simple, workable strategies for storing and protecting sensitive data with a level of effectiveness and sophistication equivalent to that of their enterprise counterparts.

How to Use Storage Effectively in a Virtual Server Environment – Even as small businesses take advantage of the Web and server-based applications such as email, shared calendars, and CRM (customer relationship management), not many have migrated to server virtualization yet, due to the cost of paying a professional to design, build, configure, deploy, migrate data, and handle the other highly technical tasks required to get the job done right. The long-term benefits of a virtualized server environment for a small to medium sized organization include savings in electricity and physical footprint.

While these issues can seem daunting to small organizations with a limited IT budget, the good news is that Central Florida Computer Engineering has extensive experience designing, deploying and supporting the IT and data storage needs of small and medium sized businesses all over Central Florida.

Posted in Computers, Data Storage, Hardware, Servers | 0 Comments