VMworld 2011 Review
So for those that didn't get the opportunity to go to VMworld 2011 this year in Las Vegas, you missed a very informative and entertaining conference. The week-long event at the Venetian was one of the largest in the event's history, hosting somewhere in the range of 15,000-20,000 registered attendees.
The event housed a large lab environment spanning four data centers, with nearly 1,000 thin clients connecting to lab sessions via VMware View, plus numerous general sessions covering every corner of the virtual infrastructure: cloud computing concepts, virtual desktops, and best practices for these environments.
During my time at the event I looked at infrastructure best practices when utilizing HA and DRS, as well as some of the public sector issues we are encountering while trying to keep up with trending virtualization and cloud computing technologies. Additionally, I completed a few labs using the new VMware vSphere infrastructure, covering networking and performance configuration and optimization.
Like many IT conferences, there was a very large vendor area where companies from all over the country (and world) had set up shop, offering simulated demos, games, and looks at new technologies and product features. Some of the vendors I really enjoyed meeting with were Dell, VMware, NetApp, EMC, LG (yes, that LG; they had a really neat phone app that tied into a View environment to allow corporate phone use on a personally owned smartphone), World Wide Technology, and Wyse.
Next year's event will be in San Francisco and, like previous events, will be held the very last week of August going into early September. I highly recommend going if you have never been before. There is a lot to learn here.
VMware vSphere and View 5
VMware announced at VMworld the newest version of the vSphere and View product line. Version 5 introduces new features and optimized capabilities over its previous versions.
vSphere 5 introduces new features, most notably Storage DRS and vMotion over the WAN. Additionally, there are new configuration maximums to allow for scalability in VM provisioning, and a stateless firewall engine within the ESXi host. This release also introduces a new VM hardware version (8), which allows for 3D graphics processing (better for Aero in Windows 7) and USB 3.0 support.
A full list of details can be found here...
http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-vsphere5.pdf
On the View side of the house I've spent a lot of my time looking at these features and benefits because I'm currently engineering a VDI concept design for my customer. This has been my first look at the desktop side of virtualization but it has been very interesting nonetheless.
Looking at how the PCoIP protocol has evolved since View 4 was released, hosting VDI over a WAN link is becoming much more feasible, and a robust storage architecture with linked clones hosted on SSD lets these environments boot quickly during log-in storms and perform optimally during peak hours.
EMC ran a case study with View 5 on vSphere 5: 500 virtual desktops stored on an EMC VNX NFS mount booted to a ready state in five minutes. These are promising statistics for anyone looking at hosting clones on a SAN (or faster storage platform).
A full list of View 5 benefits is linked below. Most notable are the new optimized PCoIP controls, 3D graphics processing capabilities, and View mobile clients (like my LG booth example at VMworld).
http://blogs.vmware.com/euc/2011/09/view-5-in-the-house.html
The EMC study details other benefits of running VMware View 5 and vSphere 5 on an EMC VNX tiered storage architecture. The article can be viewed below...
http://www.emc.com/about/news/press/2011/20110830-01.htm
All in all there are some very promising features released with the VMware 5 platform of products. Next time I post something, I may discuss some findings from my possible trip to Dell World in Austin, TX in October and my experience from my upcoming VMware Advanced Fast Track training course in DC. Stay Tuned and thanks for reading!
Thursday, September 22, 2011
Friday, August 12, 2011
Update to VMware vSphere 5 Licensing Fiasco...
After a barrage of comments and negative feedback surrounding the new vRAM entitlement restriction (among other things) to vSphere 5 licenses, VMware came back and changed the model.
In a nutshell...
For Enterprise and Enterprise Plus, your entitlement has doubled. All other models are capped at 32GB.
There's also a cap on consumed vRAM counted per VM at 96GB.
Additionally, they replaced the high-watermark approach with a calculation based on a 12-month average of consumed vRAM. This eliminates any penalties incurred from short-lived vRAM usage spikes in test and development environments.
Lastly, they've addressed concerns about VDI deployments and stated that this model does not apply to VDI environments. vSphere Desktop Edition covers VDI, and there is currently no vRAM entitlement associated with that product.
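My reading of the revised counting rules, sketched in Python. This is an interpretation of the announcement, not VMware's exact billing algorithm; the per-VM cap and 12-month averaging are from the post above, the sample numbers are made up:

```python
# Sketch of the revised vRAM accounting described above (my interpretation,
# not VMware's exact billing algorithm).
PER_VM_CAP_GB = 96  # consumed vRAM counted per VM is capped at 96 GB

def counted_vram(monthly_samples):
    """monthly_samples: a list of 12 lists, each holding per-VM configured
    vRAM (GB) for one month. Returns the 12-month average of the capped,
    summed vRAM, which is what counts against the pooled entitlement."""
    monthly_totals = [sum(min(vm, PER_VM_CAP_GB) for vm in month)
                      for month in monthly_samples]
    return sum(monthly_totals) / len(monthly_totals)

# A short-lived spike in one month barely moves the 12-month average:
steady = [[8] * 100] * 11          # 11 months of one hundred 8 GB VMs (800 GB each)
spike = [[8] * 100 + [128] * 10]   # one month adds ten 128 GB VMs, counted at 96 GB each
print(counted_vram(steady + spike))  # 880.0, not the spike month's 1760 GB
```

The point of the averaging is visible in the example: one month at 1760 GB only raises the counted figure from 800 to 880 GB.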
Full details from VMware below...
http://blogs.vmware.com/rethinkit/2011/08/changes-to-the-vram-licensing-model-introduced-on-july-12-2011.html
Friday, July 15, 2011
vSphere 5 Licensing - Talk about a step in the wrong direction...
Recently, I've been involved in different discussions internally and with VMware and I've had the recent privilege to see the new roadmap for vSphere 5. Though I can't get into details on the technical front, one area that I've recently learned about is the licensing changes that will occur with vSphere 5.
Like v4, licenses will be per physical processor. The difference is that where v4 had physical limitations on memory and processor cores, v5 does not. However, v5 has adopted something that may well deter people from upgrading from v4 to v5: the vRAM entitlement tied to each licensed processor. This entitlement (24GB vRAM/proc for Standard, 32GB vRAM/proc for Enterprise, and 48GB vRAM/proc for Enterprise Plus) causes concerns for large businesses like the customer I support at my job, and here's why...
Say I have a robustly built hardware environment of rack and blade servers (10, for example), each with 512GB of physical RAM and 4 processors. Say all 10 of those servers are centrally managed via vCenter and my licensing model is Enterprise Plus (because I like vDS and Host Profiles)... Under v4 I had few limitations: a physical memory maximum of 1TB per host and a per-VM vRAM limit of 255GB. With v5 those go away, but I'm now restricted even more because of the vRAM entitlement.
10 hosts x 4 procs = 40 procs that need to be licensed at Enterprise Plus
40 licenses x 48GB vRAM entitlement = 1920GB vRAM entitlement.
However, I have purchased 5120GB of physical RAM for my environment. Therefore, I'm only able to allocate approximately 38% of my memory (1920/5120 = 0.375) before I've reached this entitlement capacity. Once reached, I have to purchase additional licenses (or upgrade my licenses if I have that option). This drives up-front costs significantly just to let us utilize RAM that we already purchased. In my example, the licensing alone (not factoring in support or vCenter) would be approximately $140K to license all 40 procs (MSRP). If I were running vSphere 4, I would be able to utilize all 5120GB of RAM on my hosts. With v5, even to allocate just 75% of that RAM, I would need roughly double the licenses:
5120GB x 0.75 = 3840GB of vRAM; 3840 / 48GB (vRAM entitlement for ENT+) = 80 licenses, i.e. 40 more than the 40 I already own.
That drives my licensing costs up by over $100K.
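The arithmetic above can be wrapped in a small Python calculator. The per-edition entitlement values are the ones quoted in this post; the 75% utilization target and the host counts are assumptions taken from my example:

```python
import math

# vRAM entitlement per licensed processor (GB), as quoted above.
ENTITLEMENT_GB = {"standard": 24, "enterprise": 32, "enterprise_plus": 48}

def licenses_needed(hosts, procs_per_host, ram_gb_per_host,
                    edition="enterprise_plus", target_utilization=0.75):
    """Return (baseline licenses, baseline vRAM entitlement in GB,
    total licenses needed to cover the target vRAM allocation)."""
    baseline = hosts * procs_per_host                 # one license per processor
    pooled_ram = hosts * ram_gb_per_host              # total physical RAM purchased
    vram_needed = pooled_ram * target_utilization     # vRAM we actually plan to allocate
    entitled = baseline * ENTITLEMENT_GB[edition]     # vRAM the baseline licenses allow
    total = max(baseline, math.ceil(vram_needed / ENTITLEMENT_GB[edition]))
    return baseline, entitled, total

baseline, entitled, total = licenses_needed(10, 4, 512)
print(baseline, entitled, total)  # 40 procs licensed, 1920 GB entitled, 80 licenses for 75% use
```

Running it with my 10-host example shows the gap: 40 processors must be licensed anyway, but allocating 75% of the RAM requires 80 Enterprise Plus licenses.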
Thus my concern... VMware has partnerships with practically everyone (Dell, HP, Cisco, EMC, etc.), and one would think a hardware vendor would object to a limitation its customers now face, since it reduces the vendor's own revenue (an ugly marketing angle, I know). We've come so far from early-2000s servers with 1-4GB of RAM to servers that can house 1TB per box. VMware has preached consolidation onto robust hardware for years, and although the vRAM capacity is supposed to be hardware-agnostic and presented as a shared pool of resources (think cloud computing, but on local infrastructure), the limits should have been increased or eliminated for Enterprise Plus licensing. Because of this limitation, concepts like memory overcommitment lose their value, and robustly built hardware (like my example) goes to waste; organizations that have already purchased this equipment have now spent money on capacity they will likely never use.
Though I'm not the only one to express concern or dislike about this new design, I know it's too late to pull it back, as vSphere 5 is being announced at VMworld in Las Vegas in about six weeks. However, I hope the feedback VMware is getting forces this idea to virtually disappear as revisions of the product are released. This can't be the right direction. I like the concept of a pooled collection of resources, but there has to be a better way of implementing it without tying it to the licensing model. Here's the White Paper from VMware on the product, as well as the comparable Configuration Maximums paper for vSphere 4.x. Enjoy, and let me know what you think...
VMware vSphere 5 Licensing
VMware vSphere 4.x Configuration Maximums
Friday, July 8, 2011
Design Workshop Review, VMworld 2011, and Understanding HA and DRS
It's been a while since I've updated the blog but things have been really busy at home and at work so I apologize for not getting this out sooner.
Design Workshop
I recently attended VMware's Design Workshop for vSphere 4.1 and I have to say I was surprised at how open the course was. I expected more structured learning, but it was very interactive and I really enjoyed it. I was also surprised at how many different interpretations the engineers in the room had when designing their environments (we did scenario-based designs with mock requirements).
Most everyone did a Cisco UCS architecture, while I (more knowledgeable of Dell) went with Dell blades due to their density and backplane capabilities with the M1000e chassis. However, as I read more about UCS and its capabilities with fabric interconnect switching (including future capabilities like FCoE), I'm starting to think that UCS really was designed with virtualization in mind from the start.
All in all, the class was great and I have some great reference materials now on the design aspect of vSphere and I can leverage that and my data center administration exam studying and experience to develop a good platform (from start to finish) on future deliveries.
VMworld 2011
So I am lucky enough to have my employer fund my trip to VMworld 2011 in Las Vegas, NV this year, and I am really excited to be able to attend. I've had the privilege to meet with VMware to discuss road maps on future virtualization/cloud computing deliveries, and let's just say there will be a lot of announcements across a lot of product lines at VMworld this year. If you haven't made plans, I would highly recommend that you try to attend. It's not cheap, but well worth the investment. More information can be found here...
http://www.vmworld.com
Understanding HA and DRS
Now on to some technical discussion about HA and DRS. When you enable these features on your cluster, you inherit default settings that can be modified, covering things like restart priority, the aggressiveness of your automation (priority levels determine the level of automation), and recommendation settings.
DRS is relatively easy: you set how conservative or aggressive you want resource balancing to be and what priority settings you want on your VMs. Higher-priority VMs may warrant a more aggressive migration threshold so that, when resource utilization rises on a given host, you don't run into performance issues on individual VMs. Distributed Power Management can be incorporated here as well if you wish to save power on underutilized hosts. Couple this with resource pools and you should have well-balanced hosts throughout your cluster(s).
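The balancing decision can be pictured as comparing the spread of host loads against a target that the migration threshold slider controls: a more aggressive setting tolerates less imbalance. This is a deliberately simplified sketch, and the threshold numbers are purely illustrative, not DRS's real internal values:

```python
import statistics

def cluster_imbalanced(host_loads, target_stddev):
    """host_loads: normalized load per host (0.0-1.0, e.g. demand/capacity).
    Returns True when the cluster's current load standard deviation exceeds
    the target, i.e. when a DRS-style balancer would recommend migrations.
    A more aggressive threshold setting corresponds to a lower target."""
    current = statistics.pstdev(host_loads)
    return current > target_stddev

# One hot host among four; stddev of these loads is about 0.156.
loads = [0.80, 0.40, 0.45, 0.50]
print(cluster_imbalanced(loads, target_stddev=0.2))   # conservative setting: False
print(cluster_imbalanced(loads, target_stddev=0.05))  # aggressive setting: True
```

The same cluster state triggers migrations or not depending purely on the threshold, which is exactly the conservative-versus-aggressive trade-off described above.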
HA is trickier, and I have a funny example that shows why some people really need to read up on how HA works before assuming it's a fully automated solution. As with anything, you have to assume there are failure modes that will defeat certain redundancies. In this example, I'm talking power... Never assume that if your data center loses power, your VMs will automatically restart just because your hosts restart. For the restart priorities defined in your HA cluster to be honored, a host must be online to facilitate those requests; specifically, a host that can detect host failures and respond by restarting VMs on available hosts. HA elects up to 5 primary nodes per cluster (in the order hosts were added), and if all hosts fail in a power event, no primary node survives to coordinate VM restart priorities. So when power is restored and the hosts boot, they simply come back up and the VMs have to be powered on manually; and if vCenter is itself virtualized, it's stuck in the same situation. There is a great read on this on Yellow Bricks, and I highly recommend it, along with Duncan Epping and Frank Denneman's HA and DRS Technical Deepdive book.
Book - http://www.amazon.com/gp/product/1456301446?ie=UTF8&tag=yellowbricks-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=1456301446
HA Deep Dive - http://www.yellow-bricks.com/vmware-high-availability-deepdiv/
DRS Deep Dive - http://www.yellow-bricks.com/2009/10/21/drs-deepdive/
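The primary-node behavior described above can be modeled in a few lines of Python. This is a simplified model of vSphere 4.x HA election (first five hosts added become primaries), with hypothetical host names:

```python
# Simplified model of vSphere 4.x HA primary-node election: the first
# five hosts added to the cluster become primaries, and VM restart
# coordination requires at least one surviving primary.
MAX_PRIMARIES = 5

def elect_primaries(hosts_in_join_order):
    """Primaries are the first five hosts added to the cluster."""
    return hosts_in_join_order[:MAX_PRIMARIES]

def can_restart_vms(primaries, failed_hosts):
    """Restart priorities are honored only if some primary survives.
    A full-datacenter power loss takes every primary down at once,
    leaving nobody to coordinate VM restarts."""
    return any(p not in failed_hosts for p in primaries)

hosts = [f"esx{i:02d}" for i in range(1, 9)]       # eight hypothetical hosts
primaries = elect_primaries(hosts)                 # esx01..esx05
print(can_restart_vms(primaries, {"esx03"}))       # True: four primaries remain
print(can_restart_vms(primaries, set(hosts)))      # False: total power loss
```

Losing any one host (even a primary) is fine, but the total power loss scenario returns False: exactly the situation where admins find their VMs waiting to be powered on by hand.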
Tuesday, April 26, 2011
VCAP-DCA Examination Feedback
So I'll start by saying that I've been a little busy: I've had some work from my master's degree program published, work has been busier than normal, and I was studying for the VCAP-DCA.
My experience was similar to what David Davis, Sean Crookston, and Damian Karlson described in their posts on the exam experience and their notes on VMware's blueprint guide.
1. You get 3.5 hours to do anywhere between 30 and 40 interactive (hands-on lab) questions; there are no multiple-choice questions.
2. About halfway through my exam I realized how much time had really flown by. In the end I had to skip a few of the harder questions and move on to save time.
3. The exam collectively was very difficult and by far the hardest I've taken in a very long time.
It took VMware 12 business days to process my scores, and I got my results today: unfortunately I didn't pass, but I didn't do as badly as I thought. With another month of studying I think I would have passed. One thing I can strongly suggest is to take the VMware courses on Performance and on Scripting with PowerCLI. There are interactive labs in those courses that I am sure would have helped me prep for this exam (I've only taken Troubleshooting and the Fast Track, which is what helped me get my VCP4).
Regardless, I felt like I really learned a lot studying for this exam (for example: MSCS clusters, iSCSI and FC storage, vSwitch vs. vDS architectures, vCenter Heartbeat, Update Manager, Orchestrator, and others). I plan on retaking the exam once my schedule frees up at the end of this year and I graduate with my master's degree. Additionally, I'll be going to VMworld and hope to pick up things that may help me prep, on top of trying to get into the aforementioned courses. For now, I've included some information from Crookston/Karlson's VCAP-DCA study guide material and the blueprint.
VDCA 410
A PDF version of Crookston's study notes can be downloaded from http://www.seancrookston.com/vcap-dca-studyguide-ver3.pdf. Additionally, there is a series of mock labs on his site here: http://www.seancrookston.com/?s=mock+lab
The below is the web version of the guide.
Section 1 – Implement and Manage Storage
Section 2 – Implement and Manage Networking
Section 3 – Deploy DRS Clusters and manage Performance
Section 4 – Manage Business Continuity and Protect Data
Section 5 – Perform Operational Maintenance
Section 6 – Perform Advanced Troubleshooting
Section 7 – Secure a vSphere Environment
Section 8 – Perform Scripting and Automation
Section 9 – Perform Advanced vSphere Installations and Configurations
Other Great Resources for studying for the VCAP-DCA exam
Download the latest blueprint from http://communities.vmware.com/docs/DOC-12751
Saturday, January 29, 2011
VMware Troubleshooting v4 Course Plug and knowing the difference between ESX and ESXi (let's put this one to bed shall we?)
Recently, I had the privilege of traveling to San Francisco to take the v4 Troubleshooting course (4 days). It's about time they offered a training program that is less lecture and more labs (about 65% lab to 35% lecture). Even more appealing, about 10 of the labs were strict troubleshooting labs (I know that may not sound surprising for a troubleshooting course). My issue with labs in many training courses is that they are very well structured and don't necessarily hit the major issues encountered in production environments. These troubleshooting labs, however, covered issues that commonly occur in production. They were developed by VMware experts who polled their own FAQ page, forums, and support calls, then wrote scripts that broke the training environment in multiple ways, and we had free rein (with a little instructor help if needed) to fix the issues.
Reading into what the requirements are on the VMware Certified Advanced Professional-Data Center Administration (VCAP-DCA), this course is a must take for those wanting to upgrade their VCP to the next level. Considering that the VCAP-DCA exam is 100% lab based, I highly recommend that you take this course before looking to build a lab in your house or workplace. This will also give you design ideas as well as you start to think down the advanced certification road… Okay enough plugging the VMware education, now to discuss something more technical.
Another issue I encountered recently (and it seems I have to bang my head against the wall with my security people every few months) is the use of ESXi over ESX. For those that follow VMware's market, the ESX hypervisor is going to be discontinued and ESXi will be the only hypervisor delivered by VMware. This isn't a huge ordeal, considering the development and evolution of the vSphere Management Assistant (vMA) and the vCLI toolset. However, I continue to have discussions with our security people because they don't seem to understand the differences between the two hypervisors, no matter how many times I explain or illustrate it. I'll try my best to explain it here and show exactly what the key differences are, and why your virtual environment should be designed (or upgraded) around ESXi.
"Improve Reliability and Security. The older architecture of VMware ESX relies on a Linux-based console operating system (OS) for serviceability and agent-based partner integration. In the new, operating-system independent ESXi architecture, the approximately 2 GB console OS has been removed and the necessary management functionality has been implemented directly in the core kernel. Eliminating the console OS drastically reduces the codebase size of ESXi to approximately 100 MB improving security and reliability by removing the security vulnerabilities associated with a general purpose operating system." (VMware, 2011).
Look at the image below regarding patch levels between ESX and ESXi (you would think this alone would illustrate to security professionals why ESXi is the better choice).
[Image: ESX vs. ESXi patch-level comparison] (VMware, 2011).
Looking at this diagram again, I still shake my head in disbelief. In small businesses, selling new products to security is easier; in larger organizations, decisions don't necessarily hinge on what's more secure, but on what has routinely been done in the past: the "if it ain't broke, don't fix it" mentality.
The other item I found very interesting, and the one I think finally got through to the security engineering team, was the following.
[Image: ESX/ESXi capability comparison] (VMware, 2011).
Additionally, this layout shows the key compatibility differences between ESX and ESXi (both 4.0 and 4.1). Looking at 4.1, which was a major release for ESX/ESXi, the only significant difference between ESX 4.1 and ESXi 4.1 is serial-cable connectivity to hosts. I find this negligible given the increased support for IP KVM and the use of fastpass or session-based authentication against a host via the vMA in ESXi 4.1. For those that want a direct cable connection to the host, get a nice KVM or invest in some form of web console connectivity on the server (iLO, DRAC, etc.).
| Capability | ESX 4.0 | ESX 4.1 | ESXi 4.0 | ESXi 4.1 |
|---|---|---|---|---|
| Service Console | Present | Present | Removed | Removed |
| Admin/config CLIs | COS + vCLI | COS + vCLI | PowerCLI + vCLI | PowerCLI + vCLI |
| Advanced Troubleshooting | COS | COS | Tech Support Mode | Tech Support Mode |
| Scripted Installation | Supported | Supported | Not Supported | Supported |
| Boot from SAN | Supported | Supported | Not Supported | Supported |
| SNMP | Supported | Supported | Supported (limited) | Supported (limited) |
| Active Directory | 3rd party in COS | Integrated | Not Supported | Integrated |
| HW Monitoring | 3rd party agents in COS | 3rd party agents in COS | CIM providers | CIM providers |
| Web Access | Supported | Not Supported | Not Supported | Not Supported |
| Serial Port Connectivity | Supported | Supported | Not Supported | Not Supported |
| Jumbo Frames | Supported | Supported | Supported | Supported |
Hopefully this puts the core ESX-versus-ESXi difference issue to bed. For those that need additional information, the links to VMware's site are below.
References:
VMware. (2011). Understanding the Difference between ESX and ESXi
VMware. (2011). Benefits of VMware ESXi Hypervisor Architecture
Tuesday, December 21, 2010
Cost versus Performance... The battle continues...
Over the past couple of weeks, I've had various meetings discussing upcoming features and roadmaps for the latest versions of VMware vSphere 4.x and vCenter, as well as Hyper-V 2008 R2 and SCVMM.
The battle continues: Microsoft argues cost savings, while VMware claims that its product is not only more cost effective (per app), but that its expertise in virtualization performance, coupled with advanced capabilities Hyper-V has yet to match, keeps it on top as an enterprise virtualization solution. The questions are: How close is Microsoft to matching VMware's capabilities (if ever)? And which matters more to your organization, cost or performance?
In June 2010, Microsoft conducted a very extensive study (though I have to assume the metrics are slightly skewed) on the cost-benefit analysis of Hyper-V versus VMware.
The key points in the cost analysis were that Microsoft has the cheaper solution when you break it down per guest operating system, as well as cheaper annual labor costs within a VI. The one thing it doesn't seem to discuss is the performance of the VI test model, nor the existing capabilities of the products that were tested. Knowing that this was a cost analysis and only a cost analysis, I think Microsoft could in fact offer the cheaper (bottom line dollar amount) product, meaning that in test lab environments it's an ideal solution (as you can see below)...
Workload Comparison – Average Cost per Guest VM by Platform
The respondents were asked to identify the workloads their company had virtualized. Across workloads (App, DB, DR, Email, Test & VDI) the Hyper-V costs were less expensive for all workloads except test – in all other workloads the Hyper-V customer saw significant savings.
(Microsoft, 2010).
The bottom line, looking at the above, is that most workloads are running on VMware rather than Hyper-V (with the exception of test/lab environments). If you look at some of the noteworthy workloads (VDI, e-mail, and app), you can see that VMware is even more heavily utilized (though it costs more). I believe this is because these environments are extensive to configure in a VI (especially a VDI solution). Also, because VMware has more redundancy measures you can configure within those environments, the labor associated with design and operations may also be higher.
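To make the per-guest cost framing concrete, here is a minimal sketch of the math behind a "cost per guest VM" comparison. The license prices echo the per-host figures quoted later in this post ($999 for Hyper-V, $6,950 for the VMware suite); the labor costs and guest counts are made-up placeholders for illustration, not numbers from the Microsoft study:

```python
# Hypothetical inputs for illustration only -- not figures from the
# Microsoft (2010) study. Cost per guest VM is simply the platform's
# license cost plus annual labor, spread across the guests it hosts.

def cost_per_guest(license_cost, annual_labor, guest_count):
    """Total platform cost divided across the guest VMs it runs."""
    return (license_cost + annual_labor) / guest_count

# License prices from later in this post; labor and guest counts invented.
platforms = {
    "Hyper-V": cost_per_guest(license_cost=999, annual_labor=8000, guest_count=20),
    "VMware":  cost_per_guest(license_cost=6950, annual_labor=10000, guest_count=20),
}

for name, per_vm in platforms.items():
    print(f"{name}: ${per_vm:,.2f} per guest VM")
```

The point of the per-guest breakdown is that the denominator matters as much as the license price: a platform that achieves a higher consolidation ratio spreads a larger up-front cost across more guests.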
Bottom line: it's the organization that decides the benefit of its VI design and layout (be it cost, features, performance, or a combination of the above, based on requirements). However, I think David Davis, a contributor to SearchVMware.com, hit the nail on the head...
Why VMware ESX beats Microsoft Hyper-V, hands-down
In the end, it is my opinion that VMware "wins the war" for several reasons. Perhaps most obvious, is that Microsoft is already incredibly behind VMware in terms of virtualization know-how and may never catch up. In 2007 alone, VMware announced ESXi, Site Recovery and Update Manager in an effort improve ESX Server. As they will continue to improve their product year after year to provide more value, Microsoft is fighting an uphill battle.
In addition, VMware will continue to see a huge surge in revenue thanks to Virtual Desktop Infrastructure (VDI). VDI, in my opinion, will be the next wave of virtualization after server consolidation. Also, VMware will be able to compete in the price war with Microsoft because of ESXi. The company has the option of giving it away for free, but even if they don't they're still offering it at $495 as opposed to Windows Server 2008 Hyper-V at $999.
Finally, and most importantly, ESXi can still win in efficiency. Compare the ESXi hypervisor at 32MB to the size of Hyper-V at about 2GB. Who will win at boot up? Who will win at security? Who will win when it comes to a lack of troubleshooting and solid reliability? My guess is that 32MB worth of code is going to be inherently faster, more secure and more reliable. In terms of reliability and uptime, it has been stated that ESX will run for 1000+ days without a reboot, whereas Windows Server 2008 Hyper-V will need to be rebooted every 30 days due to Windows Updates.
In the end, if you look at just a single license of VMware Infrastructure Suite Enterprise at $6950, and compare that to a single license of Windows Server 2008 Enterprise or Standard, the cost for the VMware solution will undoubtedly cost more. However, it still beats Microsoft's Hyper-V in terms of performance hands-down.
(Davis, D., 2008).
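The reboot claim in the quote above (roughly one reboot per 1000 days for ESX versus a monthly Windows Update reboot for Hyper-V) is easy to turn into back-of-the-envelope availability math. The ten-minute reboot duration below is an assumption purely for illustration, not a measured figure:

```python
# Rough availability comparison based on the reboot cadences quoted above.
# Assumption: each planned reboot costs ~10 minutes of host downtime.

MINUTES_PER_YEAR = 365 * 24 * 60

def availability(reboots_per_year, minutes_per_reboot):
    """Fraction of the year the host is up, given planned reboot downtime."""
    downtime = reboots_per_year * minutes_per_reboot
    return 1 - downtime / MINUTES_PER_YEAR

hyperv = availability(reboots_per_year=12, minutes_per_reboot=10)       # monthly Windows Updates
esx = availability(reboots_per_year=365 / 1000, minutes_per_reboot=10)  # ~1 reboot per 1000 days

print(f"Hyper-V host: {hyperv:.5%}")
print(f"ESX host:     {esx:.5%}")
```

In absolute terms that is roughly two hours of planned downtime per year versus a few minutes; whether that gap matters depends on how many guests each host reboot takes down with it.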
Regardless of your environment and budget, requirements are going to define whether you utilize Hyper-V or VMware. If you have no requirement for a highly available solution and you have a limited budget, then Hyper-V is the answer. If you have a large environment with a complex storage and network implementation, and you want redundant solutions within your hosts, guest OSes, and network implementation, then VMware with vCenter is the answer.
On a personal note, I hope everyone has Happy Holidays and a Happy New Year... More blogs to come in 2011. Hope you enjoy!
References:
Davis, D. (2008). How VMware ESX performance trumps Hyper-V's price. SearchVMware.com. Retrieved December 21, 2010.
Microsoft. (2010). Microsoft Hyper-V vs. VMware ESX & vSphere Operations & Management Cost Analysis. Retrieved December 21, 2010.