Calling Out the Phony War: vSphere & Hyper-V
Paul Meehan, VCAP-DCA, CCNA
  • On October 22, 2013

With the release of vSphere 5.5, it’s inevitable there will be a lot of blogging by vendors comparing their products to vSphere. That is expected and not surprising. It’s the job of vendor “Evangelists” to do this.

Apologies in advance for the length of this post.

I am a freelance consultant – I have done quite a lot of work with vSphere and hope to do more. I also fully expect to be working with, and coming across Hyper-V a lot in the future. So the opinion expressed here is personal and not tied to any vendor – I do not rely on VMware directly for my work. In fact the last project I worked on was a new Hyper-V implementation where a customer decided to go that route.

I have used Hyper-V 2012 in my lab, and while I was impressed with many of its features and the improvements therein, I found installing and configuring it more difficult and time-consuming than vSphere. I have one or two customers who have had similar experiences.

Part of this is probably the learning curve, and part is down to design. But I do assume I will be working with it again soon.

Scrutinizing Vendor-Provided Data Points

However, it annoys me when people who represent vendors directly, or who have very close affiliations with them, present incorrect information. A lot of customers take information from Internet powerhouses at face value, so it should be accurate, even allowing for a little “bias”.

In my experience, by and large, VMware does not attack Microsoft – their product evangelists focus on providing information on their own products to support their customers. The VMware community is massive and it’s all about sharing information, with everyone, all the time. It’s one of the most impressive things I have seen. I recently attended the Dublin VMware User Group (VMUG) where a senior VMware Technical Marketing Architect provided a comparison of Hyper-V and vSphere. He was fair and honest about where both products were strong. It was refreshing to hear someone so well known (Top-10 vSphere blogger) being so up front.

Case in Point: Microsoft’s vSphere 5.5 vs. 2012 R2 Hyper-V Comparison

Let’s take a look at the following blog post from Keith Mayer. Keith is a Senior Microsoft Evangelist. I just had to write a post on this, having engaged with Keith, who is a nice guy BTW; I’ve been in a back-and-forth conversation with him on Twitter. Keith has some very useful information on his site which I suggest you check out. However, I wish to comment on this blog entry, entitled “VMware or Microsoft? Comparing vSphere 5.5 and Windows Server 2012 R2 Hyper-V At-A-Glance”.

I want to call out a few elements of this blog which I believe are not correct. Ultimately, make up your own mind and do your own homework rather than take my word for it. The blog compares Hyper-V 2012 R2 and vSphere 5.5.

In the screenshots, Hyper-V is the second column from the left and vSphere the third.

Monitoring & Management

First inaccuracy:

[Screenshot from the original comparison table]

This neglects to mention one thing: vCenter (used by most customers running HA/clustering) is itself an “Operations Monitoring” and “Management” platform.

In fact most people manage ALL of their vSphere environment through vCenter – not vCOPs. Well, not yet anyway.

vCenter has all the performance and monitoring data, since installation, for the entire vSphere environment being managed. The separately mentioned product (vCOPs) extracts data from vCenter to build its performance and inventory schema/view. For advanced event correlation/analytics you can use vCenter Operations Management (vCOPs), but this is also offered as part of a bundled offering – vSphere with vCenter Operations Management – so it is not, strictly speaking, always a “separate” license.

Secondly, is Microsoft giving away System Center 2012 (VMM) for free? I tried to download it just now, and it only offered me an “Evaluation”. See below.

Memory Management

Next one:

[Screenshot from the original comparison table]

I don’t agree with this. To say TPS is useful on “legacy” server hardware platforms is nonsense. TPS is very effective and results in very significant memory savings, for customers. It also helps to reduce resource duplication. Large page support needs to be manually set up in vSphere, which shows it is not a “standard” feature. However, vSphere supports both scenarios, and VMware guidance is to use whichever suits the application.
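
To make the scale of the claimed savings concrete, here is a minimal back-of-the-envelope sketch in Python. The VM count, memory size and shared-page fraction below are purely illustrative assumptions of mine, not VMware figures:

    # Illustrative back-of-the-envelope model of transparent page sharing (TPS).
    # All inputs are assumptions for the sake of the example, not VMware data.

    def tps_savings_gb(num_vms, mem_per_vm_gb, shared_fraction):
        """Estimate memory reclaimed when a fraction of each VM's pages is
        identical across VMs and can be collapsed to one physical copy."""
        total_gb = num_vms * mem_per_vm_gb
        # Every VM after the first contributes its duplicate pages as savings.
        savings_gb = (num_vms - 1) * mem_per_vm_gb * shared_fraction
        return total_gb, savings_gb

    if __name__ == "__main__":
        # 20 VMs of 4 GB each; assume 15% of pages (guest OS/runtime) are identical.
        total, saved = tps_savings_gb(num_vms=20, mem_per_vm_gb=4, shared_fraction=0.15)
        print(f"Provisioned: {total} GB, estimated TPS savings: {saved:.1f} GB "
              f"({saved / total:.0%})")

Even with a modest shared fraction, the savings add up quickly once many similar VMs share a host.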

Application Load Balancing

Next One:

[Screenshot from the original comparison table]

Is System Center 2012 R2 VMM free? This seems to indicate that you have to pay for vCNS but not for SCVMM. It would certainly appear from Microsoft’s website (just now) that this is not the case. Note the term “Evaluation”. If it is free, why is it only available as an evaluation?

[Screenshot: Microsoft’s download page offering System Center 2012 R2 only as an “Evaluation”]

Migration Options

Next one: This is a particular hobby horse of mine:

[Screenshot from the original comparison table]

Now people need to be very careful when reading things like this. I have engaged in conversation with Keith on this topic and he has admitted that this feature is designed to allow customers to create their own “cap” on the maximum number of concurrent operations. (To be fair, he did update his blog.)

However, you should know that if you use DRS, vSphere performs a cost-benefit-risk analysis before migrations, and “allows” a cost of 30% of a CPU core for a single vMotion operation on a 1Gb/s adapter, and 100% of a CPU core for a vMotion on a 10Gb/s adapter.

That shows what DRS’s algorithm expects in terms of CPU cycles used by a vMotion operation. This is conservative and based on engineering know-how and experience. Keith feels this is not so much of an issue due to the scalability of modern hardware, and did accept that this is an “engineering” limit, i.e. VMware has taken a more conservative approach whereas Microsoft lets you run as many as you like. VMware could take the handbrake off but chooses to bake in limits like this to protect the stability of customers’ environments. Otherwise why don’t they say “unlimited” too? Make your own mind up.
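
To illustrate roughly what a fixed per-operation CPU cost implies, here is a small sketch. The 0.3-core and 1.0-core figures are the ones quoted above; the 25% “migration budget” and the 16-core host are purely my own assumptions for the example, not anything VMware publishes:

    # Rough sketch of how a per-vMotion CPU cost translates into a practical
    # concurrency budget on a host. The per-operation costs (0.3 core on 1 GbE,
    # 1.0 core on 10 GbE) are the figures quoted above; the fraction of host CPU
    # set aside for migrations is an assumption chosen for illustration only.

    VMOTION_CORE_COST = {"1GbE": 0.3, "10GbE": 1.0}

    def concurrent_vmotions(host_cores, nic, migration_budget_fraction=0.25):
        """How many simultaneous vMotions fit inside a given share of host CPU."""
        budget_cores = host_cores * migration_budget_fraction
        return int(budget_cores // VMOTION_CORE_COST[nic])

    if __name__ == "__main__":
        for nic in ("1GbE", "10GbE"):
            n = concurrent_vmotions(host_cores=16, nic=nic)
            print(f"{nic}: ~{n} concurrent migrations within a 25% CPU budget")

The exact numbers are not the point; the point is that a fixed per-operation cost naturally produces a conservative concurrency cap rather than an “unlimited” claim.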

There is quite a bit of commentary on Microsoft Cluster support on vSphere, and on Hyper-V’s superiority in that regard. That’s true, and it is what you would expect: same-vendor interoperability is always greater than interoperability BETWEEN vendors.

However… remember that 99.99% of customers don’t use MSCS on vSphere or Hyper-V, and most of them want to move away from the complexity of MSCS, Veritas Cluster and other clustering frameworks to platforms like vSphere (and Hyper-V) for the features inherent in the platform, i.e. HA. That was one of the reasons for the success of ESX in the first place, as well as vMotion.

A customer I worked with had a functional requirement in their virtualization project to decommission all existing physical Microsoft Clusters due to their management complexity. Not because they weren’t reliable, or because the customer wasn’t happy – they were – but MSCS did introduce complexity, like any clustering solution.

So this one is a little bit irrelevant in my view. Even products like Exchange don’t use MSCS (Exchange uses DAGs).

Make up your own mind on this one too.

New VM Workload Placement & Load Balancing

Next one:

[Screenshot from the original comparison table]

This is a 0.01% use case. Pretty uncommon and not really that relevant for most customers.

Next one:

[Screenshot from the original comparison table]

Again that Microsoft Cluster qualification: not that relevant for a lot of customers.

Fault Tolerance

Now we move to Fault Tolerance:

[Screenshot from the original comparison table]

Why is this not just a ‘No’ for Hyper-V and a ‘Yes’ for vSphere? Achieving this functionality is a difficult engineering challenge, but here only the limitations are pointed out. There are plenty of “qualifications” with Fault Tolerance and, mainly due to the single-vCPU limitation, not a huge number of people use it – but better to stick to the Yes/No.

Next one:

[Screenshot from the original comparison table]

VSAN is an experimental product? VSAN is in beta and will be launched soon. Calling it “experimental” is a bit cheeky. See VMware’s website just now:

[Screenshot: VMware’s website description of VSAN]

Next one:

[Screenshot from the original comparison table]

It looks like Storage DRS-like functionality is not available for regular block/NFS storage on Hyper-V, only for SMB 3.0 with Storage Spaces. So if you don’t use Storage Spaces, is it supported? It’s not clear to me, but make up your own mind.

I think commenting on the frequency of Storage DRS runs is disingenuous. This is a vSphere design decision to ensure minimal impact on customer environments and more accurate calculations; it is not an engineering “limit”. I think it makes a lot of sense that it bases recommendations on data gathered over an extended period of time.

As per Frank Denneman’s blog below.

http://frankdenneman.nl/2012/05/07/storage-drs-load-balance-frequency/
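
To illustrate why basing recommendations on a long observation window is sensible, here is a toy example. The latency numbers are synthetic and this is in no way the actual Storage DRS algorithm – it simply shows how a single noisy sample can suggest an imbalance that the longer window says is not there:

    # Toy illustration of why load-balancing decisions based on a long observation
    # window are more stable than decisions based on an instantaneous sample.
    # The latency samples are made up; this is not the Storage DRS algorithm.

    from statistics import mean

    # Hourly average I/O latency (ms) observed on two datastores over 8 hours.
    datastore_a = [12, 14, 35, 13, 12, 15, 14, 13]   # one transient spike
    datastore_b = [16, 15, 14, 16, 15, 14, 16, 15]

    latest_a, latest_b = datastore_a[2], datastore_b[2]        # a single sample
    window_a, window_b = mean(datastore_a), mean(datastore_b)  # the full window

    print(f"Single sample:  A={latest_a} ms vs B={latest_b} ms -> looks like A is overloaded")
    print(f"8-hour average: A={window_a:.1f} ms vs B={window_b:.1f} ms -> roughly balanced")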

Storage Provisioning & Network Virtualization

Next one:

[Screenshot from the original comparison table]

There are plenty of storage provisioning “operations” supported in vCenter, and also plenty of fully integrated plugins (Hitachi/NetApp/EMC) which allow array-based activities like LUN provisioning and appear as a tab within vCenter.

Next one:

[Screenshot from the original comparison table]

It depends on your definition of “network virtualization”. This is a complex topic, and this is a rather simplistic commentary on what is and is not available.

The List Could Go On…

The reason I felt compelled to write this post is that so-called Evangelists are painting a picture which in some cases is not the whole picture.

I think this is a fairer comparison which does call out some benefits of Hyper-V and some benefits of VMware.

Ultimately, like all solution design and procurement, it always depends on business requirements.

The big selling point regarding Hyper-V being “free” is not real in my opinion. VMware engages in enterprise license agreements too. Don’t just believe the list price.

Ed. Note:  Be sure to check out Scott & David’s comments and side-by-side comparison on vSphere vs. Hyper-V (Free and paid versions).

It is only of late that Microsoft has stopped replicating vSphere features (to achieve feature parity) and has started to create its own. From this point forward, all of us benefit from this competition and increased innovation. Competition is good for the entire IT industry. I have no problem with it.

My take is that hypervisors are becoming a commodity, and articles like this are simplistic and biased and don’t help the customer. It’s not really that important that one vendor supports 4000 of something and the other supports 8000, as typically those limits are not hit. It would be folly for a virtualization architect to create a design so dense that 8000 VMs disappear when one thing falls over.

Update:

Ed. Note: This post has generated a lot of productive discussion, including feedback from both sides. Paul has posted a follow-up response to the feedback received – don’t miss it!

What Say You?

Make your own mind up and share your take in the comments!

Comments

  1. Thank you. I think it’s better for us engineers/architects to present the full picture, and let customers decide. Presenting only the marketing truth drops the credibility of the technical community. Perhaps an independent party can start a “site qualification standard”, and only sites that present purely technical material can be “certified” as fair.
    ’till then, have a great day in the virtual world :-)

  2. Scott D. Lowe, vExpert, MVP Virtual Machine, MCSE

    I agree with you wholeheartedly. The “skew” that is inherent in so many sources does no service to the customer and only succeeds in muddying the waters, which are already confusing enough!

  3. Paul Meehan, VCAP-DCA, CCNA

    Thanks Scott. As architects, or whatever our title is today, our duty should be to qualify our statements so they are accurate, whether verbally in front of customers or in the written word on the web. If we are deploying a solution or advising a customer, we would (and should) be careful to always assess the impact of what we say, and ultimately ensure that when we walk away we can do so with peace of mind.

  4. Paul Meehan, VCAP-DCA, CCNA

    Hi Iwan,
    Thanks for your positive comments. I agree with you – I think if a vendor really believes their product is the best they shouldn’t need to present marketing “spin” – let it stand on its own feet. I’m a bit disappointed, as Microsoft is doing some really fantastic things right now, in many areas, and should focus on their USPs. We have the “Blog with Integrity” stamp that a lot of blogs use, but I’m not sure whether it is entirely convincing or properly policed. I suggest you look at Scott and David’s comparison as it is much more black and white. However, beware of list price, as no customer ever pays list price – do they? In my view, it’s probably best for customers to carry out their own stringent proof of concept, and see who wins…

    Having spoken to some vendors/implementers at VMworld last week in Barcelona, it’s the OPEX costs that seem to be making the difference right now, with vSphere having lower TCO/management overhead. I spoke to two independent vendors who are Microsoft and VMware partners, and both told me that the support effort for Hyper-V is much greater in terms of the resources required to support their customers. I think this will definitely change, but it is inevitable for the moment, while Microsoft rushes features to market to achieve parity. It will take some time for the gremlins to be ironed out. That’s just my view, so I suggest you talk to your peers and see what they have to say.

  5. I saw the post before and it’s not the first one. It was so completely wrong that just pointing to one mistake wasn’t even worth my while. It is completely written in a biased way to make vSphere look bad. This is not written to give an objective comparison. For that reason alone I refused to go in there.

    I can get truly mad about this tasteless vendor behaviour. So well done, Paul! You managed to take the subjectivity out without making it a fight.

  6. Jesper Jensen

    While I do agree that Microsoft is pushing this to the edge, and sometimes further, your comments about pricing of the System Center products are incorrect. At the very top of each table it states that the comparison is between “Windows Server 2012 R2 + System Center 2012 R2” vs “vSphere 5.5 Enterprise Plus + vCenter Server 5.5”.

    I don’t know much about VMware pricing or the included features in the different VMware products, so I can’t figure out if it’s fair to compare these bundles :-)

    Btw – VMware has been comparing vSphere with Hyper-V 2008 R2, although 2012 was out, and I’ve seen quite a lot of comparisons like that from vExperts and the like. They’re not that innocent :-)

  7. Scott D. Lowe, vExpert, MVP Virtual Machine, MCSE

    Jesper – I agree with you 100%. Neither side is innocent in this ridiculous battle of the fringes (oh, yeah, well I can get 10,000 VMs per host and you can only get 9,000!) I’m both a vExpert and MVP Virtual Machine and have seen both sides and cringe every single time.

    Scott

  8. Paul Meehan, VCAP-DCA, CCNA

    Hi Hans,
    thanks for your comments. I think there is a lot in the post which is fine, and Microsoft did not invent this approach. I could name another company who employ teams and teams of people to make this stuff up. That battle has now moved to the web, though, so I feel the obligation to argue the case. You could argue whether I am always right, but I dislike the “grey” area. My interest is fairness, and I genuinely believe a lot of customers – at least in Ireland – believe articles like this, and similar ones from other vendors. So while a lot of the content is fine, I’m just pointing out a few small points of interest.

  9. Paul Meehan, VCAP-DCA, CCNA

    Hi Jesper,
    I have seen comparisons like you mention from VMware against Hyper-V 2008 R2, which is also unfair, and I would fully agree with you. We had the same thing for years in storage between Hitachi and EMC due to delays in release cycles. The key point for me is that none of this stuff is as simple as it’s being made out, and list pricing is often used – how real is that? Bottom line – let’s have a fair discussion. A personal goal is the MCSE Private Cloud when I have the time, and I recognise many of the great features MSFT is bringing to the table, so let’s get past this stage.

  10. Great article, always good to put marketing speak in perspective.
    Btw, you slipped up with the copy & paste on your VSAN picture – there are two FT comparisons in there.

  11. Paul Meehan, VCAP-DCA, CCNA

    Thanks for the feedback Wil. See what you mean about the VSAN paste. Thanks for pointing that one out.

  12. admin

    All fixed – thanks for the heads up.

  13. NVGRE is a full SDN and not (I believe) managed as a separate SKU like “NSX for vSphere” or “NSX for everything else”. Don’t take my word for it: http://blog.ipspace.net/2012/12/hyper-v-network-virtualization-wnvnvgre.html <- Note this post is dated with regard to Nicira/NSX features.

    Totally on point for everything else.

  14. Paul Meehan, VCAP-DCA, CCNA

    Hi Andrea, many thanks for the comment. You are correct in stating that NSX is a separate product SKU, and not part of vSphere. The reason I called it out is that I’m not sure they can be compared simplistically in a table. NSX is a full-blown network virtualization suite that allows complete decoupling of all aspects of the network from the hardware using a scale-out architecture. I read some of those NVGRE posts – excellent – thanks. Here’s a nice one too: http://technet.microsoft.com/en-us/library/jj134174.aspx. But am I wrong in thinking NVGRE is an overlay network and an “alternative” to VXLAN/STT, whereas NSX is a full-blown suite including a management stack for heterogeneous hypervisors & cloud management platforms, and also includes an L2/L3 vSwitch, gateway services, VPN, firewall, load balancing, a controller cluster, RESTful APIs, as well as full support for VTEP on OEM switch hardware like Arista, HP etc.? What’s your view?

  15. Ben Conrad

    Re: “I don’t agree with this. To say TPS is useful on “legacy” server hardware platforms is nonsense. TPS is very effective and results in very significant memory savings, for customers”

    This statement is true if you are running on hardware pre-Nehalem. Anything Nehalem and newer is using large pages and by -default- vSphere uses large pages which means that TPS is pretty useless unless the host is maxing out memory (92-95%). I have a host with 144GB of RAM, using 125GB of RAM and the server is only saving 3.1GB via TPS, that’s not efficient. So the TPS vs non-TPS battle has no purpose for most cases.

    Ben

  16. Hey Paul,

    Thanks very much for all the feedback and input above on my original technical comparison article! Truly one of the great things about social collaboration is the benefits we derive by expanding the scope of our own experience by leveraging the experiences of others as well. As such, I have a few pieces of feedback of my own to share on the above statements that you may wish to consider:

    Bias – While you may not agree with all of my points, I clearly state in my original article that my conclusions are based on my own field experiences – experiences as an infrastructure consultant in designing and implementing datacenter solutions for hundreds of customer organizations and, most recently, experiences as a Microsoft employee collaborating with thousands of the best IT Pros in the industry on their virtualization scenarios across broad customer segments. Of course, you may have different experiences with your customer organizations, and I certainly respect your point-of-view. Hopefully, you can respect my point-of-view as well, without characterizing them as incorrect or unfair bias.

    For each comparison in my original article, I’ve provided my conclusions based on my experience in the field and in the lab, with comments and linked resources where I’ve seen significant advantages, disadvantages or additional considerations impact the projects on which I’ve been assigned. Throughout my career, I’ve probably implemented an equal dose of VMware and Windows network environments although certainly over the last few years, my production work has been primarily oriented towards environments with enterprise Windows and Linux workloads.

    Management and Monitoring – This topic relates to the common features of enterprise-grade operations monitoring and management of both the virtualization hosts and the guest VMs provided by vCOPS and System Center 2012 R2. While both hypervisor platforms, of course, include basic management and monitoring infrastructure, the focus of this comparison area was looking beyond the basic capabilities. I’ll update the description on this comparison topic to attempt to clarify that point.

    System Center 2012 R2 Licensing – You are correct that System Center 2012 R2 is not a free-of-charge product. However, the comparison article clearly states that I am comparing Windows Server 2012 R2 Datacenter + System Center 2012 R2 Datacenter vs vSphere 5.5 Enterprise Plus + vCenter Server. I selected these comparisons because they represent the most common configurations that I’ve seen in the field in customer environments. My comments call out increased licensing costs on the VMware side for those topics because the areas in question are not included in the compared product configurations and require additional licensed products. In terms of System Center 2012 R2 – a single System Center 2012 R2 Datacenter edition license enables management of each virtualization host of up to two physical processors for ALL of the System Center 2012 R2 capabilities I’ve called out, and thus does not require additional licensing over and above the compared product configurations. This may have confused you, because System Center licensing changed in the 2012 product release to roll all management capabilities under a single product SKU.

    Memory Management – Ben has done a great job in the comments above of further elaborating on the point I was making in my original comparison article around TPS. In addition, you may find the following article helpful to better understand the considerations around Large Pages and TPS for modern server hardware and software – http://www.boche.net/blog/index.php/2013/03/19/large-memory-pages-and-shrinking-consolidation-ratios/

    Unlimited Live Migration – As we discussed on Twitter – with modern enterprise hardware, we’re seeing the ability to concurrently live migrate significantly more VMs than the 4-to-8 that VMware is capped at. You call out some good points in terms of CPU overhead when performing Live Migrations using traditional solutions. However, Windows Server 2012 R2 Hyper-V provides the ability to perform Live Migration over RDMA with 10GbE and faster NIC hardware that supports RDMA. This provides the ability to offload much of the CPU overhead onto the NIC and blast VM memory state transfers quickly between hosts. As an extreme test, we’re seeing the internal memory bus on servers actually being the bottleneck now – we’re able to max that out when using 3 InfiniBand adapters per server for Live Migration. Clearly in configurations using modern high-speed server NICs with RDMA support, CPU is not the bottleneck, and the ability to set Hyper-V Live Migrations and Live Storage Migrations to higher caps can be advantageous – particularly in scale-up virtualization host environments.

    Microsoft Clustering – I agree that in the “old days” clustering could be quite complex, but significant improvements have been made in the 2012-era to simplify clustering configurations considerably. In my field experiences, enterprise customers leverage Microsoft Clustering extensively for their mission-critical Windows Server workloads – such as Exchange and SQL Server. In your article, you incorrectly characterize Exchange DAGs as not using Microsoft Clustering. Quite the contrary – as the Exchange DAG concept is built on-top of Microsoft Clustering. You may be confusing Microsoft Clustering with the need to provide shared cluster disk resources – they are separate concepts, as some applications require shared disks while others such as Exchange and SQL Server AlwaysOn clusters do not. By leveraging Microsoft Clustering in a virtualized environment, customers benefit from the ability to scale-out application workloads and provide application-aware resiliency. As a result, I view the limitations imposed by vSphere on Microsoft Clustering workloads as significant for enterprise customers to consider if they are virtualizing cluster-aware applications.

    VMware FT – I may be somewhat critical of VMware FT, because in my experience, I have never had a customer productively use it after they understood the limitations it imposes. Normally for customers, the level of availability delivered by FT is something they consider for their most important mission critical apps – however, the limitations imposed also significantly limit their ability to scale-up, scale-out and manage those applications. For these reasons, all of my customers that had considered VMware FT had chosen not to leverage it. I’d be interested to hear the scenarios where you’ve had first-hand experience productively leveraging VMware FT for a customer. Don’t get me wrong – FT is certainly an admirable engineering feat, but the limitations have made the practical use of FT improbable for my customers.

    VSAN – Actually, I was quoting VMware’s official product documentation when I called VSAN an “experimental feature” – I noticed that you didn’t include that link in your article, so here it is from my original article: http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.storage.doc%2FGUID-18F531E9-FF08-49F5-9879-8E46583D4C70.html

    Storage DRS – Actually, Storage DRS was a great solution in the old days before automated storage tiering was a feature commonly available on enterprise storage solutions. The most common use case that I saw for Storage DRS was optimizing VM placement across different classes of storage based on IO needs. However, in my conversations with customers, they’re looking for a more granular level of storage optimization nowadays – at the block level, rather than the VM level. So, they’re either already investing in SANs that provide automated storage tiering between SSD and HDD, or they are leveraging software capabilities that provide automated storage tiering at a block level across SSD and HDD – such as the automated tiering built into Windows Server 2012 R2 Storage Spaces. BTW – in R2, we’ve also optimized Storage Spaces for many different IO block sizes and IO patterns, and with high-speed 10GbE or faster adapters with RDMA support we are seeing disk performance that rivals more expensive fibre channel solutions. I’ll send the results along for your review when they are published. So, it’s not that Storage DRS is a bad solution – it just doesn’t meet the storage load balancing needs of the customers I’m talking with today.

    Network Virtualization – Windows Server 2012 R2 + System Center 2012 R2 implements complementary capabilities around network virtualization to what you’ve described in your comments above – arguably NSX has the potential to support multi-hypervisor environments, whereas today the Microsoft solution is integrated into the Hyper-V hypervisor, making Hyper-V simpler to deploy without additional network virtualization components to license and deploy, but not yet extensible to mixed hypervisor environments. Windows Network Virtualization is much more than an NVGRE overlay technology, although that is the core overlay protocol, and I’d encourage you to investigate more deeply.

    All-in-all – these are very exciting times for infrastructure professionals! Understanding the advantages and considerations for each solution is important to us all to ensure that we are presenting and designing the best solutions for customers. I wish you the best of luck on your continued path through VMware certification and look forward to hearing you consider pursuit of the MCSE Private Cloud certification when you are done! :-)

    Best regards,

    Keith

  17. irsan

    Nice articles (both Keith’s and this one). I learned and got information on both VMware and Hyper-V :)

    For me, I think it’s OK to express and say that each product has its own advantages, because each person has a different background.

    I think writing something based on experience can’t be classified as a biased view; it’s just pure experience.

    As for the critique and comments by Paul, I also think they are valid, and his goodwill in clarifying and providing his own point of view is much appreciated.

    For myself, I was once a VMware fanboy, especially when Hyper-V was still in its early days and performed behind VMware.

    Many of my friends here also use VMware, because of the first impression they got of Hyper-V back then. Now, reading these articles, I think they need to start opening their eyes to an alternative – Hyper-V + System Center 2012.

    Many thanks – both of your writings have taught me a lot!

  18. Paul Meehan, VCAP-DCA, CCNA

    Hi,
    Thanks so much for your comments. I would agree it is a vendor’s job to express where they feel their product is superior. As per my second article, read anything a vendor writes with care. Take the good bits and look for where you think they are not being genuine. My personal point of view is that I take with a pinch of salt the “experience” that a Microsoft Evangelist has working with VMware. I think it is a way of claiming “domain” knowledge of an area where we cannot validate what the person’s actual expertise is. It’s easy to say something is better “in my experience”, without any evidence to back it up. It’s just more FUD if you ask me. One thing I would like to say, seeing as we’re talking about experience, is that I purposely omitted my recent “experience” of talking, of late, to customers and partners of both Microsoft and VMware. The feedback I have received from 2 independent partners is that the support overhead for Hyper-V customers is 20:1 compared to that required for VMware. I did not really discuss TCO/management overhead at all in my posts, as I feel this is “subjective” and just based on my “experience”. This is obviously a different approach to Microsoft’s, but rather than include information I can’t prove, I thought it better to leave it out for now.

  19. Scott D. Lowe, vExpert, MVP Virtual Machine, MCSE

    Keith –

    I’ve been intrigued by Storage Spaces for a while and look forward to seeing the information you mentioned.

    Scott

  20. Daniel

    Hi, I have been working with VMware for 7 years, and now I have to work with Hyper-V 2012. Everyday work is very hard in this environment: three different consoles to do what takes 2 clicks in VMware.
    New features like the virtual SAN in Hyper-V are so new that we have had a lot of problems trying to implement them, and Microsoft support has no idea about their product – there is no best-practices manual like VMware’s.

    If you have worked with VMware and suddenly you work with Hyper-V, you wonder why you have to spend so much time to do a simple thing. It is very hard if you suffer it every day…

  21. Very interesting post Paul! I’ve read so many articles on Hyper-V vs vSphere and each and every one of them offers an interesting perspective. Of course this is not counting the articles that are completely biased; those are easy to spot.

    I’ve always believed in trying out the product for yourself rather than just regurgitating what everyone else says. For that reason alone, I pretty much ran a full deployment of Hyper-V 2012 R2 with SCVMM and SCOM for 3 months. I’ve deployed Hyper-V in an HA configuration with clustering and performed/tested the same features that I would normally use in vSphere. With my own experience and conclusion, I can say that both products have their advantages and disadvantages. For that reason alone, at the end of the day, we cannot say that one is better than the other. There are so many factors to consider that it’s just not practical to always recommend one hypervisor, whether it’s Hyper-V or vSphere.

    Also, like you mentioned earlier in your post, these are two completely different products from an architectural perspective. This also implies that a Hyper-V administrator would have a bit of a different skill set than a vSphere administrator, and that the management aspects of these hypervisors differ to some extent. It’s no coincidence – I’ve always found it interesting – that Microsoft shops tend to stick with Hyper-V, while others who are not all-Microsoft tend to have various technologies from different vendors, like VMware.

    Whatever works for the customer, is what matters at the end, so my recommendation has always been, if it meets your needs, then it is the right product for you, no matter how popular it is.

    Great write up!

  22. Paul Meehan, VCAP-DCA, CCNA

    Hi Andrey,
    many thanks for your kind comments, but more so for your practical and excellent advice. I think you’ve described it very well and I’d agree. Skill set and manageability have a lot to do with it. Some customers don’t like to use multiple vendors, some do. And you’re right … it depends.

    That’s why these ridiculous comparisons drive me crazy. In some instances where I have heard recently that customers “went with” Hyper-V (strange term), the comment has been made that it was because it was free. Firstly, I just don’t believe that would be the case, and it is highly likely you would need to use more advanced management features, which might not be free.

    But that’s not the main reason – I’ve never heard such a bad reason to implement a product that could support an entire company’s workload. I’m not suggesting it would be wrong to use Hyper-V because it’s free, just that IMHO that’s not a valid business requirement. If I was a company CIO and a colleague selected a product to run my company’s workload because it was free, I could not stand over that decision.

    If it were my money I would be very comfortable using VMware or Microsoft. Both have lived in the enterprise for many years. Since I wrote the article we’ve seen Microsoft release Office for iPad, which has been a great success. I’d love to get Internet Explorer for my Mac – why not? These types of initiatives gain all of our respect, rather than articles that in my opinion are designed to engender FUD.

    All the best,
    Paul

  23. I think both articles are inclined towards their respective products, so both are totally wrong and do not deliver a 100% transparent comparison.

    For true facts, VMware made a mistake in their licensing tactics and plainly lost lots, if not all, of their market share to Hyper-V, which at that time was just coming to life.

    This move, if I am not mistaken, cost VMware’s CEO his job?? I remember reading something about that…

    Take this fact from the most transparent IT consultant you could ever meet: I tell my customers that both products are good. But then they ask, “please help us decide…” I then say NO and instead focus on their needs and mainly on their budget. Conclusion: once I’m done presenting my case about BOTH players being good and able to deliver the end result, the deciding factor then becomes simple math for my customers… so they go for Hyper-V.

    I always do the simple math: if VMware, because of the silliest move I’ve ever seen in a company’s licensing strategy, has already lost close to 40 million in revenue (I came up with this number the last time I did the sums over my last 12 months of consulting… and it does bite me a bit that I don’t even get commission from anyone for this :(… ) just from me presenting both products as good, I’d like to know how much money VMware has really lost altogether because of such a horrible move?

    Both products are good, guys… if you want my take on this, I’ll give you another one: I think VMware got greedy and also underestimated Hyper-V… but didn’t they realize that whoever was in their rear-view mirror was from Seattle? History serves as evidence, and this is where VMware made a huge mistake… talk about business people… and VMware was a great product… just throwing in my 2 cents of common sense on business. In this field, don’t you know history as well??? Go ask what happened to Novell, Lotus, Navigator, and the list goes on… and those were great products, just like VMware.

    Another take I have: VMware is lucky that they had lots of cushions, meaning that lots of data centers/customers had already deployed VMware way too deep in their infrastructure… Otherwise, they would have gone the MS route a long time ago… I consult for SMBs, but I have been approached by a few huge companies already to perform preliminary analysis of the redesign costs of going from VMware to Hyper-V… there are smart people out there and they can see the benefits, short and obviously long term, and at this level we are talking about benefits in the millions of dollars…

    That is why I say VMware is lucky to have been such a great product (and still is), but I believe the people driving that company are either limited or just plain stupid… I also believe that VMware has the ability to continue to make their product as robust as it is, but the same goes for MS; therefore the difference is not in this department, as they are BOTH great products – instead the deciding factor becomes money…
