Tuesday, December 17, 2013

Virtual Servers - 1

Virtual environments have intrigued me for the last five to seven years, so I decided to look into virtualizing my servers.  This turned out to be a much more difficult choice than I first anticipated.  Many factors had to be considered, including but not limited to:

  • Server provider
  • SAN provider
  • Nearline SAS (NL-SAS) drives vs. standard SAS drives
  • Number of switches to use and what type
  • and the BIG one: VMware or the new Hyper-V 2012
After much deliberation and hours of conversations with VMware and Microsoft, coupled with extensive communication with people already using those products, I ultimately chose the Hyper-V option.  I honestly just could not get past the "free" part, and the advancements made by Microsoft are so phenomenal that I just could not see VMware being worth the price tag.  
Before I go on, let me say that this is not a cheap project by any means, but I believe that in the long run I will see a great amount of savings, and my server room will be much more efficient.  After quotes from many vendors, including HP, Lenovo, and Dell, we ultimately decided on Dell.  Here are the specs:
Three PowerEdge R520s
  • 64 GB of RAM
  • Three 15,000 RPM SAS drives
  • Three licenses of Windows Server 2012 Datacenter
  • Eight NIC ports
Two PowerConnect 6224 L3 switches (you have to have switches that support jumbo frames)
One Dell EqualLogic PS4100

This is a great entry-level system, but it has a price tag of around $38,000.  I would say the weak link here is the EqualLogic.  While I am happy with the purchase and feel it will meet and exceed my needs for roughly 20 to 50 virtual servers, it uses nearline SAS (NL-SAS), which basically means industry-standard SATA drives running on a SAS controller.  This gives you the benefits of the SAS interface with standard 7200 RPM SATA drives.  While they will outperform standard SATA arrays, they will not meet the performance of true SAS, and this needs to be a primary consideration when looking into VMs.  Another option we considered was the NetApp FAS2220A, which has wonderful reviews and great performance.  

Why all the servers and switches?

When running VM servers on a hypervisor like VMware or Hyper-V, you need redundancy.  Ideally my system will use the resources of two servers and have the third step in if either one or both of the others ever go down.  I also have each server splitting its power connections between various APC battery backups.

The servers each have eight NICs: six of these ports will be used to carry data between the SAN controllers and the three servers, one port will be used for management, and the final one will provide primary network access.  I have two switches for the same reason.  All network ports on the servers and SAN controllers are split between the two switches, and each switch is on a different battery backup system.  This lets everything keep running even if one switch goes down, which is very nice.

Lastly, my SAN has two controllers built in that run in an ACTIVE/PASSIVE arrangement, which means the first one to get power becomes active and the second remains passive unless the first goes down.  Again, each of these controllers is wired to its own APC.  It is very important to have extreme redundancy when working with VMs, because you don't want a single point of failure anywhere in the system.
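To make the NIC layout above concrete, here is a minimal PowerShell sketch of how the eight ports on one host might be carved up on Windows Server 2012.  This is an illustration, not my actual build script: the adapter names ("NIC1" through "NIC8"), the team and switch names, and the portal IP address are all placeholders.

```powershell
# Jumbo frames on the six iSCSI-facing ports (NIC1-NIC6 are placeholder names).
1..6 | ForEach-Object {
    Set-NetAdapterAdvancedProperty -Name "NIC$_" `
        -RegistryKeyword "*JumboPacket" -RegistryValue 9014
}

# Bind a Hyper-V external switch for VM traffic to a team on the remaining
# data port, keeping it separate from the management OS.
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC8" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
New-VMSwitch -Name "External" -NetAdapterName "VMTeam" -AllowManagementOS $false

# NIC7 stays untouched as the dedicated management interface.

# Point the iSCSI initiator at the SAN group address (placeholder IP) with
# multipathing enabled, so a path survives losing one switch or controller.
New-IscsiTargetPortal -TargetPortalAddress "10.0.10.10"
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```

In practice the EqualLogic side also wants MPIO configured (Dell ships a Host Integration Tools kit for this), so treat the last two lines as the bare-bones version of that step.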
This should be an interesting process, and I will either add to this post or make a new one as I work through it.  Feel free to ask questions or express concerns along the way.  I hope we can all learn something through it!