Friday, July 15, 2011

vSphere 5 Licensing - Talk about a step in the wrong direction...

Recently, I've been involved in various discussions internally and with VMware, and I've had the privilege of seeing the new roadmap for vSphere 5. Though I can't get into details on the technical front, one area I've learned about that I can discuss is the licensing changes that will come with vSphere 5.

Like v4, licenses will be sold per physical processor. The difference is that where v4 imposed physical limits on memory and processor cores, v5 does not. However, v5 adopts something that may very well deter people from upgrading from v4: a vRAM entitlement attached to each licensed processor. This entitlement (24GB vRAM/proc for Standard, 32GB vRAM/proc for Enterprise, and 48GB vRAM/proc for Enterprise Plus) raises concerns for large businesses like the customer I support at my job, and here's why...


Say I have a robustly built hardware environment consisting of rack and blade servers (10, for example), each with 512GB of physical RAM and 4 processors. Say all 10 of those servers are centrally managed via vCenter and my licensing model is Enterprise Plus (because I like vDS and Host Profiles)... Under v4 I had few limitations: a physical memory maximum of 1TB per host and a per-VM vRAM limit of 255GB. With v5 those go away, but I'm now restricted even more by the vRAM entitlement.


10 hosts x 4 procs = 40 procs that need to be licensed at Enterprise Plus


40 licenses x 48GB vRAM entitlement = 1920GB vRAM entitlement.


However, I have purchased 5120GB of physical RAM for my environment. Therefore, I can allocate only about 37.5% of that memory (1920/5120 = 0.375) before I've reached the entitlement capacity. Once reached, I have to purchase additional licenses (or upgrade my licenses, if I have that option). This drives up front costs significantly just to use RAM we have already purchased. In my example, licensing alone (not factoring in support or vCenter) runs approximately $140K to license all 40 procs at MSRP. If I were running vSphere 4, I could use all 5120GB of RAM on my hosts. With v5, even allocating only 75% of that physical RAM as vRAM requires roughly 40 additional licenses:

5120GB x 0.75 = 3840GB of vRAM; 3840 / 48GB (the Enterprise Plus vRAM entitlement) = 80 licenses, 40 more than the 40 I already own

At MSRP, those additional licenses drive my costs up by well over $100K.
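A quick sketch of the licensing math (the 10-host environment is this post's hypothetical example; the 48GB/proc figure is the announced Enterprise Plus entitlement):

```python
# Sketch of the vRAM entitlement math for the hypothetical environment above.
import math

def licenses_needed(vram_gb, entitlement_gb):
    """Licenses required to pool enough vRAM entitlement for vram_gb."""
    return math.ceil(vram_gb / entitlement_gb)

hosts, procs_per_host, ram_per_host_gb = 10, 4, 512
entitlement = 48  # GB vRAM per Enterprise Plus license

baseline_licenses = hosts * procs_per_host            # 40 (one per proc)
pooled_entitlement = baseline_licenses * entitlement  # 1920 GB of vRAM
physical_ram = hosts * ram_per_host_gb                # 5120 GB purchased

usable_fraction = pooled_entitlement / physical_ram   # 0.375

# Licenses needed just to allocate 75% of the purchased RAM as vRAM:
target_vram = physical_ram * 0.75                     # 3840 GB
total = licenses_needed(target_vram, entitlement)     # 80
extra = total - baseline_licenses                     # 40 more to buy

print(usable_fraction, total, extra)  # 0.375 80 40
```

The ceiling division matters: entitlement is sold per license, so any fractional shortfall still costs a whole license.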

Thus my concern... VMware has partnerships with practically everyone (Dell, HP, Cisco, EMC, etc.), and you'd think that if I were a hardware vendor I'd be voicing my dislike of a limitation my customers now face, one that also reduces revenue for my own organization (an ugly marketing angle, I know). We've come a long way from the servers of the early 2000s with 1-4GB of capacity to servers that can house 1TB of RAM each. VMware has preached consolidation onto robust hardware for years, and though vRAM capacity is supposed to be hardware-agnostic and presented as a shared pool of resources (think cloud computing, but on local infrastructure), the limits should have been increased or eliminated for Enterprise Plus licensing. Because of this limitation, concepts like memory overcommitment are effectively penalized, and robustly built hardware (like my example) goes to waste; organizations that have already purchased this equipment have now wasted money on capacity they will likely never use.


Though I'm not the only one to express concern or dislike over this new design, I know it's too late to pull it back, as vSphere 5 will be announced at VMworld in Las Vegas in about six weeks. However, I hope the feedback VMware is getting forces this idea to virtually disappear as revisions of the product are released. This can't be the right direction. I like the concept of a shared pool of resources, but there has to be a better way of delivering it without tying it to the licensing model. Here's the white paper from VMware on the licensing, along with the configuration maximums white paper for vSphere 4.x. Enjoy, and let me know what you think...


VMware vSphere 5 Licensing

VMware vSphere 4.x Configuration Maximums




Friday, July 8, 2011

Design Workshop Review, VMWorld 2011, and Understanding HA and DRS

It's been a while since I've updated the blog, but things have been really busy at home and at work, so I apologize for not getting this out sooner.

Design Workshop

I recently attended VMware's Design Workshop for vSphere 4.1 and I have to say I was surprised at how open the course was. I expected more structured learning, but it was very interactive and I really enjoyed it. I was also surprised at how many different engineers had so many different interpretations of the design of their environments (we did scenario-based designs with mock requirements).

Most everyone did a UCS (Cisco) architecture, while I (being more knowledgeable of Dell) went with Dell blades due to their density and backplane capabilities with the M1000e chassis. However, as I read more about UCS and its capabilities with fabric interconnect switching (including future capabilities like FCoE), I'm starting to think UCS really was designed with virtualization in mind from the start.

All in all, the class was great, and I now have some great reference materials on the design aspects of vSphere. I can leverage those, along with my data center administration exam studies and experience, to develop a good platform (from start to finish) for future deliveries.

VMworld 2011

So I am lucky enough to have my employer fund my trip to VMworld 2011 in Las Vegas, NV this year, and I am really excited to be able to attend. I've had the privilege of meeting with VMware to discuss road maps for future virtualization/cloud computing deliveries, and let's just say there will be a lot of announcements across a lot of product lines at VMworld this year. If you haven't made plans, I would highly recommend that you try to attend. It's not cheap, but it's well worth the investment. More information can be found here...

http://www.vmworld.com

Understanding HA and DRS

Now onto some technical discussion about HA and DRS. When you enable these features on your cluster, you accept some default settings, all of which can be modified: restart priorities, the aggressiveness of your automation (recommendation priority levels determine what DRS applies automatically), and recommendation settings.

DRS is relatively easy: you set how conservative or aggressive you want resource balancing to be and what priority settings you want on your VMs. Higher-priority VMs can be given a more aggressive migration threshold so that when resource utilization climbs on a given host, you don't run into performance issues on individual VMs. Distributed Power Management can be incorporated here as well if you wish to save power by consolidating workloads off underutilized hosts. Couple this with resource pools and you should have well-balanced hosts throughout your cluster(s).
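As a rough illustration, the conservative-to-aggressive slider can be thought of as a tolerance on cluster imbalance: the more aggressive the setting, the smaller the imbalance DRS tolerates before recommending a vMotion. This is a toy model with hypothetical tolerance values, not VMware's actual algorithm:

```python
# Toy sketch of the DRS aggressiveness slider (illustrative only; the
# tolerance values are hypothetical, not VMware's real thresholds).
# Imbalance is modeled as the standard deviation of per-host CPU use.
from statistics import pstdev

def recommend_migration(host_utilizations, threshold_level):
    """threshold_level: 1 = most conservative ... 5 = most aggressive."""
    tolerance = {1: 0.30, 2: 0.20, 3: 0.10, 4: 0.05, 5: 0.02}[threshold_level]
    imbalance = pstdev(host_utilizations)
    return imbalance > tolerance  # True => recommend a vMotion

cluster = [0.85, 0.40, 0.35]  # CPU utilization per host
print(recommend_migration(cluster, 1))  # False: conservative tolerates the skew
print(recommend_migration(cluster, 5))  # True: aggressive wants to rebalance
```

The point of the sketch is simply that the same cluster state can produce no recommendation at one setting and a migration at another.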

HA is trickier, and I have a funny example that shows why some people really need to read up on how HA works before assuming it's a fully automated solution. As with anything, you have to assume there are failure modes that will defeat certain redundancies. In this example, I'm talking power... Never assume that if your data center loses power, your VMs will automatically restart just because your hosts do. For the restart priorities defined in your HA cluster to be honored, a host has to be online to facilitate those requests; that host has to detect the host failures and respond by restarting VMs on available hosts. HA elects up to 5 primary nodes per cluster (in the order the hosts were added), and if all hosts fail in a power event, no primary node is available to coordinate VM restart priorities. So when power is restored and the hosts come back up, they simply boot, and the VMs have to be powered on manually. (Note that vCenter is only needed to configure HA; the failover itself is coordinated by the HA agents on the primary hosts.) There is a great read on this at Yellow Bricks, and I highly recommend it, along with Duncan Epping and Frank Denneman's book, the HA and DRS Technical Deepdive.

Book - http://www.amazon.com/gp/product/1456301446?ie=UTF8&tag=yellowbricks-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=1456301446

HA Deep Dive - http://www.yellow-bricks.com/vmware-high-availability-deepdiv/

DRS Deep Dive - http://www.yellow-bricks.com/2009/10/21/drs-deepdive/
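The primary-node behavior above can be sketched in a few lines. This is a simplification of the pre-5.0 HA agent behavior; the class structure and host names are hypothetical:

```python
# Simplified sketch of pre-vSphere-5 HA primary-node election (illustrative;
# the real agent is more involved). The first five hosts added to the
# cluster become primaries; only a live primary can coordinate VM restarts.
class HACluster:
    MAX_PRIMARIES = 5

    def __init__(self):
        self.hosts = []  # kept in the order hosts were added

    def add_host(self, name):
        self.hosts.append({"name": name, "online": True})

    def primaries(self):
        return self.hosts[: self.MAX_PRIMARIES]

    def can_restart_vms(self):
        # A total outage of all primaries (e.g. a data center power loss)
        # leaves nobody to coordinate restart priorities.
        return any(h["online"] for h in self.primaries())

cluster = HACluster()
for i in range(8):
    cluster.add_host(f"esx{i:02d}")

for h in cluster.hosts:            # power event takes every host down
    h["online"] = False
print(cluster.can_restart_vms())   # False: VMs must be powered on manually

cluster.hosts[6]["online"] = True  # a non-primary comes back first
print(cluster.can_restart_vms())   # still False until a primary returns
```

This is exactly the gotcha in the power-loss story: hosts coming back online isn't enough; one of the five primaries has to be among them.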

Tuesday, April 26, 2011

VCAP-DCA Examination Feedback

So I'll start by saying that I've been a little busy: I've had some work from my master's degree program get published, work has been busier than normal, and I was studying for the VCAP-DCA.

Like David Davis, Sean Crookston, and Damian Karlson with their posts regarding the exam experience or the notes associated with VMware's blueprint guide, my experience was similar.

1. You get 3.5 hours to complete anywhere between 30 and 40 interactive, hands-on lab tasks; there are no multiple-choice questions.

2. About halfway through my exam I realized how much time had really flown by. In the end I had to skip a few of the harder questions and move on to save time.

3. The exam collectively was very difficult and by far the hardest I've taken in a very long time.

It took VMware 12 business days to process my scores. I got my results today, and unfortunately I didn't pass, but I didn't do as badly as I thought. With another month of study I think I would have passed. One thing I can strongly suggest is taking the VMware courses on Performance and on Scripting with PowerCLI; there are interactive labs in those courses that I'm sure would have helped me prep for this exam. (I've only taken Troubleshooting and the Fast Track, which is what helped me get my VCP4.)

Regardless, I felt like I really learned a lot studying for this exam (for example: MSCS clusters, iSCSI and FC storage, vSwitch vs. vDS architectures, vCenter Heartbeat, Update Manager, Orchestrator, and more). I plan on retaking the exam once my schedule frees up at the end of this year and I graduate with my master's degree. Additionally, I'll be going to VMworld and hope to pick up things that will help me prep, in addition to trying to get into those aforementioned courses. For now, I've included some information from Crookston/Karlson's VCAP-DCA study guide material and the blueprint.

VDCA 410
A PDF version of Crookston's study notes can be downloaded from http://www.seancrookston.com/vcap-dca-studyguide-ver3.pdf
Additionally, there is a series of mock labs on his site: http://www.seancrookston.com/?s=mock+lab

The below is the web version of the guide.
Section 1 – Implement and Manage Storage
Section 2 – Implement and Manage Networking
Section 3 – Deploy DRS Clusters and manage Performance
Section 4 – Manage Business Continuity and Protect Data
Section 5 – Perform Operational Maintenance
Section 6 – Perform Advanced Troubleshooting
Section 7 – Secure a vSphere Environment
Section 8 – Perform Scripting and Automation
Section 9 – Perform Advanced vSphere Installations and Configurations

Other Great Resources for studying for the VCAP-DCA exam

Download the latest blueprint from http://communities.vmware.com/docs/DOC-12751

Saturday, January 29, 2011

VMware Troubleshooting v4 Course Plug and knowing the difference between ESX and ESXi (let's put this one to bed shall we?)

Recently, I had the privilege of traveling to San Francisco to take the v4 Troubleshooting course (4 days). It's about time they have a training program that is less lecture and more labs (about 65% lab to 35% lecture). Even more appealing, about 10 of the labs in this course were strict troubleshooting labs (I know that may not sound surprising for a troubleshooting course). My issue with the labs in many training courses is that they are very well structured and don't necessarily hit the major issues encountered in production environments. These troubleshooting labs, however, covered issues that commonly occur in production. The course was developed by VMware experts who polled their own FAQ page, forums, and support calls. From there they developed scripts that broke the training environment in multiple ways, and we had free rein (with a little instructor help if needed) to fix the issues.
Reading into the requirements of the VMware Certified Advanced Professional - Data Center Administration (VCAP-DCA), this course is a must-take for those wanting to upgrade their VCP to the next level. Considering that the VCAP-DCA exam is 100% lab based, I highly recommend taking this course before looking to build a lab at home or at work. It will also give you design ideas as you start down the advanced certification road... Okay, enough plugging VMware education; now to discuss something more technical.
Another issue I've encountered recently (it seems I have to bang my head against the wall with my security people every few months) is the use of ESXi over ESX. For those that follow VMware's market, the ESX hypervisor is going to be discontinued, and ESXi will be the only hypervisor delivered by VMware. This isn't a huge ordeal considering the development and evolution of the vSphere Management Assistant (vMA) and the vCLI toolset. However, I keep having these discussions with our security people because they don't seem to understand the differences between the two hypervisors, no matter how many times I explain or illustrate them. I'll try my best to explain it here and show exactly what the key differences are and why your virtual environment should be designed (or upgraded) with ESXi.
"Improve Reliability and Security. The older architecture of VMware ESX relies on a Linux-based console operating system (OS) for serviceability and agent-based partner integration. In the new, operating-system independent ESXi architecture, the approximately 2 GB console OS has been removed and the necessary management functionality has been implemented directly in the core kernel. Eliminating the console OS drastically reduces the codebase size of ESXi to approximately 100 MB improving security and reliability by removing the security vulnerabilities associated with a general purpose operating system." (VMware, 2011).

Look at the image below regarding patch levels between ESX and ESXi (you would think this alone would illustrate to security professionals why ESXi is the better choice).

                                                        (VMware, 2011).

When I look at this diagram, I still shake my head in disbelief. For those in small businesses, selling new products to security is easier; in larger organizations, the decision doesn't necessarily come down to what's more secure, but to what has routinely been done in the past. It's the "if it ain't broke, don't fix it" mentality.

The other item I found very interesting, and I think this is what finally got through to the security engineering team, was the following.


                                                                                               (VMware, 2011).

Also worth noting is this layout of the key compatibility differences between ESX and ESXi (both 4.0 and 4.1). At 4.1 (a major release for ESX/ESXi), the only significant remaining difference between ESX 4.1 and ESXi 4.1 is serial-port connectivity to hosts. I find this negligible given the increased support for IP KVM and the use of fastpass (session-based authentication against a host) via the vMA in ESXi 4.1. For those who want a direct cable connection to the host, get a nice KVM or invest in some sort of web console connectivity on the server (iLO, DRAC, etc.).

Capability               | ESX 4.0                 | ESX 4.1                 | ESXi 4.0            | ESXi 4.1
Service Console          | Present                 | Present                 | Removed             | Removed
Admin/config CLIs        | COS + vCLI              | COS + vCLI              | PowerCLI + vCLI     | PowerCLI + vCLI
Advanced Troubleshooting | COS                     | COS                     | Tech Support Mode   | Tech Support Mode
Scripted Installation    | Supported               | Supported               | Not Supported       | Supported
Boot from SAN            | Supported               | Supported               | Not Supported       | Supported
SNMP                     | Supported               | Supported               | Supported (limited) | Supported (limited)
Active Directory         | 3rd party in COS        | Integrated              | Not Supported       | Integrated
HW Monitoring            | 3rd party agents in COS | 3rd party agents in COS | CIM providers       | CIM providers
Web Access               | Supported               | Not Supported           | Not Supported       | Not Supported
Serial Port Connectivity | Supported               | Supported               | Not Supported       | Not Supported
Jumbo Frames             | Supported               | Supported               | Supported           | Supported


Hopefully this puts the ESX-versus-ESXi question to bed; for those that need additional information, links to VMware's site are below.

References:



Tuesday, December 21, 2010

Cost versus Performance... The battle continues...

Over the past couple of weeks, I've had various meetings discussing upcoming features and roadmaps for both the latest versions of VMware 4.x and vCenter, as well as Hyper-V 2008 R2 and SCVMM. 

The battle continues: Microsoft argues cost savings, while VMware claims its product is not only more cost-effective per app, but that its virtualization expertise, coupled with advanced capabilities Hyper-V has yet to match, keeps it on top as an enterprise virtualization solution. The questions are: (A) How close is Microsoft to matching the capability VMware has, if ever? And (B) which matters more to your organization, cost or performance?

In June 2010, Microsoft conducted a very extensive study (though I have to assume the metrics are slightly skewed) on the cost-benefit analysis of Hyper-V versus VMware.

The key points of the cost analysis were that Microsoft has the cheaper solution when you break it down per guest operating system, and cheaper annual labor costs within a VI. What the study doesn't discuss is the performance of the VI test model or the existing capabilities of the products tested. Taken strictly as a cost analysis, I think Microsoft could in fact be the cheaper (bottom-line dollar amount) product, meaning it's an ideal solution for test lab environments (as you can see below)...

Workload Comparison – Average Cost per Guest VM by Platform
The respondents were asked to identify the workloads their company had virtualized. Across workloads (App, DB, DR, Email, Test & VDI) the Hyper-V costs were less expensive for all workloads except test – in all other workloads the Hyper-V customer saw significant savings.

                                                                                                  (Microsoft, 2010).

The bottom line from the above is that most workloads are running on VMware rather than Hyper-V (with the exception of test/lab environments). If you look at some of the noteworthy workloads (VDI, e-mail, and app), VMware is even more heavily used, despite costing more. I believe this is because these environments are labor-intensive to configure in a VI (especially a VDI solution). Also, because VMware offers more redundancy measures to configure within those environments, the labor associated with design and operations may also be higher.

Bottom line: it's the organization that decides what matters in its VI design (cost, features, performance, or a combination of the above, based on the requirements). However, I think David Davis, a contributor to SearchVMware.com, hit the nail on the head...

Why VMware ESX beats Microsoft Hyper-V, hands-down
In the end, it is my opinion that VMware "wins the war" for several reasons. Perhaps most obvious, is that Microsoft is already incredibly behind VMware in terms of virtualization know-how and may never catch up. In 2007 alone, VMware announced ESXi, Site Recovery and Update Manager in an effort improve ESX Server. As they will continue to improve their product year after year to provide more value, Microsoft is fighting an uphill battle.

In addition, VMware will continue to see a huge surge in revenue thanks to Virtual Desktop Infrastructure (VDI). VDI, in my opinion, will be the next wave of virtualization after server consolidation. Also, VMware will be able to compete in the price war with Microsoft because of ESXi. The company has the option of giving it away for free, but even if they don't they're still offering it at $495 as opposed to Windows Server 2008 Hyper-V at $999.
 
Finally, and most importantly, ESXi can still win in efficiency. Compare the ESXi hypervisor at 32MB to the size of Hyper-V at about 2GB. Who will win at boot up? Who will win at security? Who will win when it comes to a lack of troubleshooting and solid reliability? My guess is that 32MB worth of code is going to be inherently faster, more secure and more reliable. In terms of reliability and uptime, it has been stated that ESX will run for 1000+ days without a reboot, whereas Windows Server 2008 Hyper-V will need to be rebooted every 30 days due to Windows Updates.
 
In the end, if you look at just a single license of VMware Infrastructure Suite Enterprise at $6950, and compare that to a single license of Windows Server 2008 Enterprise or Standard, the cost for the VMware solution will undoubtedly cost more. However, it still beats Microsoft's Hyper-V in terms of performance hands-down.

                                                                                             (Davis, D., 2008).

Regardless of your environment and budget, requirements are going to define whether you use Hyper-V or VMware. If you have no requirement for a highly available solution and you have a limited budget, Hyper-V is the answer. If you have a large environment with complex storage and network implementations and you want redundancy within your hosts, guest OSes, and network, then VMware with vCenter is the answer.

On a personal note, I hope everyone has a Happy Holiday and a Happy New Year... More blogs to come in 2011. Hope you enjoy!

References:

Davis, D. (2008). How VMware ESX performance trumps Hyper-V's price. SearchVMware.com. Retrieved December 21, 2010.

Microsoft. (2010). Microsoft Hyper-V vs. VMware ESX & vSphere Operations & Management Cost Analysis. Retrieved December 21, 2010.

Wednesday, December 1, 2010

Finally nice to see some serious competition in the virtualization marketplace...

Watch out VMware, because Microsoft is going to catch up and provide some serious competition in the virtualization and cloud computing marketplace!!!

I've recently had the privilege of playing with System Center Virtual Machine Manager (SCVMM) from Microsoft (part of the System Center suite) and got to manage some physical Hyper-V hosts. I have to say that with R2, the functionality of SCVMM is comparable to vCenter, and Microsoft is working on matching VMware capabilities like Storage vMotion, vMotion, DRS, and HA.

What's even crazier is that SCVMM can manage vCenter instances (with limited capabilities compared to using vCenter directly) and ESX/ESXi hosts.

Now, almost all of these features are new to R2, meaning that six months ago VMware was the only decent virtualization product out there that supported serious clustering models for a larger enterprise. With the release of SCVMM 2008 R2, there is definitely some competition brewing between Microsoft and VMware.

One area that is difficult to negotiate (for many businesses) is the cost of licensing, and Microsoft has VMware beat (to a certain degree) with its Datacenter licensing model. Server Management Suite Datacenter (SMSD) costs about $1,500 per 2 processors, with a 2-proc minimum purchase; whereas vCenter alone is about triple that cost, and Enterprise Plus licenses run about $3,900 MSRP per proc.
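Using this post's rough prices (approximations, not official quotes; the vCenter figure assumes the "about triple SMSD" estimate above), the per-host gap looks something like:

```python
# Back-of-the-envelope licensing comparison using the approximate prices
# quoted above (this post's rough numbers, not official pricing).
def hyper_v_cost(procs):
    # SMSD is sold per 2 processors with a 2-proc minimum (ceil division).
    smsd_per_2_procs = 1500
    packs = max(1, -(-procs // 2))
    return packs * smsd_per_2_procs

def vmware_cost(procs, vcenter=4500):
    # vcenter=4500 is an assumption: roughly triple the SMSD pack price.
    ent_plus_per_proc = 3900
    return procs * ent_plus_per_proc + vcenter

procs = 2  # a typical two-socket host
print(hyper_v_cost(procs))  # 1500
print(vmware_cost(procs))   # 12300
```

The gap narrows per host as vCenter's fixed cost amortizes over more hosts, but the per-proc difference remains large.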

However, as it stands right now, VMware still offers more networking and advanced storage options. That said, with R2's included migration, networking, and storage options, I believe Microsoft isn't far behind as a viable competitor in the virtualization and cloud computing market, especially given SCVMM's ability to integrate with the rest of the System Center productivity suite (Operations Manager and Configuration Manager). I hope to receive more formal training on both SCVMM and VMware in the next few months so I can provide a more educated comparison of the two products in later blogs. For now, check out the new features of SCVMM 2008 R2.

What's New in VMM 2008 R2

System Center Virtual Machine Manager 2008 (VMM 2008) is a comprehensive management solution for managing virtualized infrastructure running on Windows Server 2008 with Hyper-V, Virtual Server 2005 R2 and VMware ESX through Virtual Center. Recently, Windows Server 2008 R2 RC was released, which includes significant feature improvements to Hyper-V—the underlying hypervisor platform. VMM 2008 R2 – the next version of VMM – has recently been released. VMM 2008 R2 leverages the new platform enhancements and extends the feature set of VMM 2008. This overview highlights the most important new and significantly enhanced features in the VMM 2008 R2:

Support for new features of Windows Server 2008 R2

  • Live Migration: Seen through the VMM console, this enables administrators to move a virtual machine between clustered hosts in a way that is completely transparent to the users connected to the virtual machine. This allows administrators greater flexibility in responding to planned downtime and provides higher machine availability. The basic requirements for Live Migration are that all hosts must be part of a Windows Server 2008 R2 failover cluster and host processors must be from the same manufacturer. Additionally all hosts in the cluster must have access to shared storage. No changes are required to existing virtual machines, network, or storage devices in moving from Quick Migration to Live Migration other than upgrading to Windows Server 2008 R2 and VMM 2008 R2.
  • Hot addition/removal of Storage: Allows the addition and removal of storage to virtualized infrastructure without interruption. Additionally, "live" management of virtual hard disks (VHDs) or iSCSI pass-through disks allows administrators to take advantage of additional backup scenarios and readily use mission-critical and storage-intensive applications.
  • New optimized networking technologies: VMM 2008 R2 supports two new networking technologies – Virtual Machine Queue (VMQ) and TCP Chimney – providing increased network performance while creating less of a CPU burden. NICs that support VMQ, create a unique virtual network queue for each virtual machine on a host that can pass network packets directly from the hypervisor to the virtual machine. This increases throughput as it bypasses much of the processing normally required by the virtualization stack. With TCP Chimney, TCP/IP traffic can be offloaded to a physical NIC on the host computer reducing CPU load and improving network performance.

Enhanced storage and cluster support

  • Clustered Shared Volumes (CSV): Provides a single, consistent storage space that allows hosts in a cluster to concurrently access virtual machine files on a single shared logical unit number (LUN). CSV eliminates the previous one virtual machine per LUN restriction and coordinates the use of storage with much greater efficiency and higher performance. CSV enables the Live Migration of virtual machines without impacting other virtual machines sharing the same LUN. Enabling CSV on failover clusters is straightforward; many storage configuration complexities prior to CSV have now been eliminated.
  • SAN migration into and out of clustered hosts: This allows virtual machines to migrate into and out of clusters using a SAN transfer, which saves the time required for copying the virtual machine file over the network.
  • Expanded Support for iSCSI SANs: Previously, only one LUN could be bound to a single iSCSI target whereas now – with support now built into VMM 2008 R2 – multiple LUNS can be mapped to a single iSCSI target. This provides broader industry support for iSCSI SANs allowing customers more flexibility in choosing storage providers and iSCSI SAN options.
  • Storage Migration: Quick Storage Migration enables migration of a VM’s storage both within the same host and across hosts while the VM is running with a minimum of downtime, typically less than 2 minutes. VMM 2008 R2 also supports VMware storage vMotion which allows the storage of a VMware VM to be transferred while the VM remains on the same host with no downtime.
  • Rapid Provisioning: Allows administrators to take advantage of SAN provider technologies to clone a LUN containing a VHD and present it to the host while still utilizing the VMM template so the OS customization and IC installation can be applied.
  • Support for third party CFS: For users requiring a true clustered file system, VMM 2008 R2 supports third party file systems by detecting CFS disks and allows for deploying multiple VMs per LUN.
  • Support for Veritas Volume Manager: VMM 2008 R2 recognizes Veritas Volume Manager disks as a cluster disk resource.

Streamlined process for managing host upgrades

  • Maintenance Mode: Allows administrators to apply updates or perform maintenance on a host server by safely evacuating all virtual machines to other hosts on a cluster. Maintenance mode can be configured to use Live Migration to move the virtual machines or can put the workloads into a saved state to be safely reactivated when maintenance or upgrades are complete. Maintenance mode is enabled for all supported hypervisor platforms on Windows Server 2008 R2.

Other VMM 2008 R2 enhancements

  • Support of disjoint domains: Reduces the complexity of reconciling host servers with differing domain names in Active Directory and DNS. In these situations, VMM 2008 R2 automatically creates a custom service principal name (SPN) configured in both AD and DNS allowing for successful authentication.
  • Use of defined port groups with VMware Virtual Center: On installation, VMM 2008 R2 will present available port groups for VMM’s use with VMware vCenter thus allowing administrators to maintain control over which port groups are used.
  • Queuing of Live migrations: This feature enables users to do multiple Live Migrations without needing to keep track of other Live Migrations that are happening within the cluster. Detects when a Live Migration will fail due to another Live Migration already in progress and queues the request for later.
  • Host compatibility checks: VM migration requires host hardware to be compatible; this feature provides a deep check for compatibility using Hyper-V and VMware compatibility check APIs. Administrators can check if the source host is compatible with the destination host before performing a migration and finding out the VM cannot start on the new host. A related feature makes a VM compatible by turning off certain CPU features which makes the VM compatible with the hosts in the cluster.
                                                                                      (Microsoft, 2010).


Reference

Microsoft. (2010). What's New in VMM 2008 R2. Retrieved December 1, 2010.

Friday, November 12, 2010

A very in depth comparison when choosing virtualization platforms...

I've been getting into discussions with the Microsoft Premier Field Engineers at work about VMware and Hyper-V. I won't argue that Hyper-V is cheaper (it requires just the OS license from Microsoft, and then you can enable Hyper-V). However, I get caught up in features, and in the old saying that Microsoft never gets anything right until the third try.

Regardless, I always try to give any vendor the benefit of the doubt and approach a new delivery not just from a technical perspective (comparison of features, expertise, etc.) but also a business one (cost versus time). I won't disagree that VMware is expensive: Enterprise Plus is about $3,800 per processor, not including the cost of vCenter. However, I stumbled across the ultimate comparison guide between the four major vendors (VMware, Microsoft, Citrix, and Red Hat). If this doesn't give you enough information to choose a virtualization vendor, I'm not sure what will... Enjoy.



Now, if there was ever a reason to purchase VMware over the others, the information below shows features and options offered by VMware that aren't offered (at least not fully featured) by the other vendors. Of course, this comparison was conducted by VMware, so take it with a grain of salt; then again, no vendor-produced comparison would be complete without a selling point for its own product.



As with anything you are building into your infrastructure, you need to outline business and technical requirements and feasibility before engineering a vendor based solution, especially when it comes to virtualization.

Here is a link to the full PDF file with the above information...

VMware vSphere—The Best Platform for Building Cloud Infrastructures