Server Guide Part 1: Introduction to the Server World
by Johan De Gelas on August 17, 2006 1:45 PM EST
Posted in: IT Computing
What makes a server different?
This is far from an academic or philosophical question: answering it lets you see the difference between a souped-up desktop labeled "server", sold at a higher profit margin, and a real server configuration that will offer reliable services for years.
A few years ago, the question above would have been very easy to answer for a typical hardware person. Servers used to distinguish themselves at first sight from a normal desktop PC: they had SCSI disks, RAID controllers, multiple CPUs with large amounts of cache, and Gigabit Ethernet. In a nutshell, servers had more and faster CPUs, better storage, and faster access to the LAN.
PCI-e (black) still has a long way to go before it will replace PCI-X (white) in the server world
It is clear that this is a simplistic and ultimately wrong way to understand what servers are all about. Since the introduction of new SATA features (SATA revision 2.5) such as staggered spindle spin-up, Native Command Queuing and Port Multipliers, servers are equipped with SATA drives just like desktop PCs. A high-end desktop PC has two CPU cores, 10,000 RPM SATA drives, Gigabit Ethernet and RAID-5; next year that same desktop might even have four cores.
So it is pretty clear that the hardware gap between servers and desktops is shrinking, and hardware alone is no longer a good way to judge. What makes a server a server? A server's main purpose is to make certain IT services (database, web, mail, DHCP...) available to many users at the same time, so concurrent access to these services is an important design criterion. Secondly, a server is a business tool, and it will therefore be evaluated on how much it costs to deliver those services per year or semester. The focus on Total Cost of Ownership (TCO) and concurrent access performance is what really sets a server apart from a typical desktop at home.
Basically, a server differs from a desktop on the following points:
- Hardware optimized for concurrent access
- Professional upgrade slots such as PCI-X
- RAS features
- Chassis format
- Remote management
The last three points are all part of lowering TCO. So what is TCO anyway?
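Before defining TCO, a quick back-of-the-envelope calculation helps make the idea concrete. The following minimal Python sketch uses purely hypothetical figures (server price, power draw, admin hours, downtime cost) to show how the pieces of TCO add up over a three-year lifetime:

```python
# Minimal TCO sketch. All figures are hypothetical assumptions for
# illustration, not numbers from this guide.

YEARS = 3                       # evaluation period

acquisition = 5000.0            # server purchase price ($)
power_watts = 450               # average draw, including a share of cooling
electricity_rate = 0.10         # $ per kWh
admin_hours_per_month = 4       # routine maintenance
admin_hourly_rate = 60.0        # sysadmin cost ($ per hour)
downtime_hours_per_year = 8     # expected unplanned downtime
downtime_cost_per_hour = 500.0  # lost business per hour of outage ($)

power_cost = power_watts / 1000 * 24 * 365 * YEARS * electricity_rate
admin_cost = admin_hours_per_month * 12 * YEARS * admin_hourly_rate
downtime_cost = downtime_hours_per_year * YEARS * downtime_cost_per_hour

tco = acquisition + power_cost + admin_cost + downtime_cost
print(f"3-year TCO: ${tco:,.0f} (hardware ${acquisition:,.0f}, "
      f"power ${power_cost:,.0f}, admin ${admin_cost:,.0f}, "
      f"downtime ${downtime_cost:,.0f})")
```

Even with modest assumptions like these, the purchase price ends up being a minority of the total, which is why the last three points in the list above are so important.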
32 Comments
Whohangs - Thursday, August 17, 2006
Yes, but multiply that by multiple CPUs per server, multiple servers per rack, and multiple racks per server room (not to mention the extra cooling of the server room needed for that extra heat) and your costs quickly add up.

JarredWalton - Thursday, August 17, 2006
Multiple servers all consume roughly the same power and have the same cost, so you double your servers (say, spend $10,000 for two $5,000 servers) and your power costs double as well. That doesn't mean that the power catches up to the initial server cost faster. AC costs will also add to the electricity cost, but in a large datacenter your AC costs don't fluctuate *that* much in my experience.

Just for reference, I worked in a datacenter for a large corporation for 3.5 years. Power costs for the entire building? About $40,000 to $70,000 per month (this was a 1.5 million square foot warehouse). Cost of the datacenter construction? About $10 million. Cost of the servers? Well over $2 million (thanks to IBM's eServers). I don't think the power draw from the computer room was more than $1,000 per month, but it might have been $2,000-$3,000 or so. The cost of over 100,000 500W halogen lights (not to mention the 1.5 million BTU heaters in the winter) was far more than the cost of running 20 or so servers.
Obviously, a place like Novell or another company that specifically runs servers and doesn't have tons of cubicle/storage/warehouse space will be different, but I would imagine places with $100K per month electrical bills probably hold hundreds of millions of dollars of equipment. If someone has actual numbers for electrical bills from such an environment, please feel free to enlighten us.
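Jarred's comparison is easy to sanity-check with rough arithmetic; the wattage, server price, and electricity rate in this sketch are assumed figures for illustration:

```python
# Rough comparison of server purchase price vs. its power bill over time.
# Wattage, price, and electricity rate are illustrative assumptions.

server_price = 5000.0   # $ per server
draw_watts = 500        # sustained draw per server
rate = 0.10             # $ per kWh

monthly_kwh = draw_watts / 1000 * 24 * 30
monthly_power_cost = monthly_kwh * rate
months_to_match = server_price / monthly_power_cost

print(f"{monthly_kwh:.0f} kWh/month -> ${monthly_power_cost:.0f}/month")
print(f"Power bill reaches the purchase price after {months_to_match:.0f} months")
```

At these rates a server burns roughly $36 of electricity a month and takes over a decade to match its own purchase price; doubling the number of servers doubles both figures, so the ratio stays the same, which is the point made above.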
Viditor - Friday, August 18, 2006
It's the cooling (air treatment) that is more important... not just the expense of running the equipment, but the real estate required to place the AC equipment. As datacenters expand, some quickly run out of room for all of the air treatment systems on the roof. By reducing heating and power costs inside the datacenter, you increase the value of each sq ft you pay for...

TaichiCC - Thursday, August 17, 2006
Great article. I believe the article also needs to include the impact of software when choosing hardware. If you look at some of the bleeding-edge software infrastructure employed by companies like Google, Yahoo, and Microsoft, RAID and PCI-X are no longer important. Thanks to software, a down server or even a down data center means nothing. They have disk failures every day and the service is not affected by these mishaps. Remember how one of Google's data centers caught fire and there was no impact to the service? Software has allowed cheap hardware that doesn't have RAID, SATA, and/or PCI-X, etc. to function well with no down time. That also means TCO is mad low, since the hardware is cheap, and maintenance is even lower since software has automated everything from replication to failovers.
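The client side of the software-level failover TaichiCC describes can be surprisingly simple in its most basic form. A minimal sketch, assuming hypothetical replica URLs (real infrastructures like the ones named above use far more sophisticated replication):

```python
# Minimal client-side failover sketch: try each replica in turn and
# return the first successful response. The URLs are hypothetical.
from urllib.request import urlopen
from urllib.error import URLError

REPLICAS = [
    "http://replica1.example.com/status",
    "http://replica2.example.com/status",
    "http://replica3.example.com/status",
]

def fetch_with_failover(urls, timeout=2.0):
    """Return the body from the first replica that answers."""
    for url in urls:
        try:
            with urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (URLError, OSError):
            continue  # this replica is down or unreachable; try the next
    raise RuntimeError("all replicas failed")
```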
Calin - Friday, August 18, 2006

I don't think Google or Microsoft runs their financial software on a big farm of small, inexpensive computers. While "software-based redundancy" is a great solution for some problems, other problems are totally incompatible with it.
yyrkoon - Friday, August 18, 2006
Virtualization is the way of the future. Server admins have been implementing this for years, and if you know what you're doing, it's very effective. You can in effect segregate all your different types of servers (DNS, HTTP, etc.) into separate VMs, and keep multiple snapshots in case something does get hacked or otherwise goes down (not to mention you can even have redundant servers in software to kick in when this does happen). While VMware may be very good compared to VPC, Xen is probably equally good by comparison to VMware; the performance difference, last I checked, was pretty large.

Anyhow, I'm looking forward to AnandTech's virtualization part of the article; perhaps we all will learn something :)
JohanAnandtech - Thursday, August 17, 2006
Our focus is mostly on the SMBs, not Google :-). Are you talking about cluster failover? I am still exploring that field, as it is quite expensive to build in the lab :-). I would be interested in which technique is the most interesting: a router which simply switches to another server, or a heartbeat system, where one server monitors the other.

I don't think the TCO is that low for implementing that kind of software or solution, or that hardware is incredibly cheap. You are right when you are talking about "Google datacenter scale". But for a few racks? I am not sure. Working with budgets of 20,000 Euro and less, I'll have to disagree :-).
Basically, what I am trying to do with this server guide is give beginning server administrators with tight budgets an overview of their options. Too many times, SMBs are led to believe they need a certain overhyped solution.
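For readers wondering what the heartbeat approach mentioned above looks like in its simplest form, here is a minimal sketch. The hostname, port, and thresholds are made-up values, and a production setup would use dedicated failover software rather than hand-rolled code:

```python
# Minimal heartbeat monitor sketch: the standby machine polls the primary
# and triggers a (placeholder) failover after several consecutive misses.
# Host, port, interval, and threshold are illustrative assumptions.
import socket
import time

PRIMARY = ("primary.example.com", 80)  # hypothetical service to watch
INTERVAL = 2.0   # seconds between heartbeats
MAX_MISSES = 3   # consecutive failures before declaring the primary dead

def primary_alive(timeout=1.0):
    """Return True if a TCP connection to the primary succeeds."""
    try:
        with socket.create_connection(PRIMARY, timeout=timeout):
            return True
    except OSError:
        return False

def take_over():
    # Placeholder: a real standby would claim the service IP address
    # and start the service here.
    print("Primary declared dead -- initiating failover")

misses = 0
while misses < MAX_MISSES:
    misses = 0 if primary_alive() else misses + 1
    time.sleep(INTERVAL)
take_over()
```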
yyrkoon - Friday, August 18, 2006
Well, if the server is in-house, it's no biggie, but if that server is across the country (or the world), then perhaps paying extra for that 'overhyped solution' so you can remotely access your BIOS may come in handy ;) In-house, a lot of people actually use inexpensive motherboards such as those offered by ASRock, paired with a Celeron / Sempron CPU. Now, if you're going to run more than a couple of VMs on this machine, then obviously you're going to have to spend more anyhow for multiple CPU sockets and 8-16 memory slots. Blade servers, IMO, are never an option. 4,000 seems awfully low for a blade server also.

schmidtl - Thursday, August 17, 2006
The S in RAS stands for serviceability, meaning: when the server requires maintenance, repair, or upgrades, what is the impact? Does the server need to be completely shut down (like a PC), or can you replace parts while it's running (hot-pluggable)?

JarredWalton - Thursday, August 17, 2006
Thanks for the correction - can't say I'm a server buff, so I took the definitions at face value. The text on page 3 has been updated.