Max Concurrent API reached alert in SCOM 2012…

Over the last couple of months I’ve implemented a couple of Operations Manager solutions at customers, all of them using the Windows Server Management Pack version 6.0.7026.0.
This management pack includes a new monitor that watches how many concurrent secure channel calls are made to a domain controller when authenticating users via NTLM pass-through authentication.

On Windows 2008 and 2008 R2, however, the monitor can generate false positives, which can make it quite noisy. This is a confirmed bug in this version of the management pack, as described on Kevin Holman’s TechNet blog here.

First off, you need to ascertain whether this is an actual issue on the server in question or a false positive. To do this, you need to watch the Netlogon performance counters.
The default values (concurrent threads) to expect are as follows:

  • Windows Server, pre-Windows Server 2012: 2
  • Windows Server 2012: 10
  • Windows client: 1
  • Domain controllers, pre-Windows Server 2012: 1
  • Domain controllers, Windows Server 2012: 10

If you do not bump against these values, you are most likely hit by the above-mentioned bug and can turn off the monitor if you don’t want the noise. If you decide to do this, remember to check whether the problem is resolved in an upcoming update of the management pack, and then delete the overrides.
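To check where you stand before touching anything, you can sample the Netlogon semaphore counters for a little while. Below is a minimal sketch in Python that simply shells out to the built-in typeperf tool; it assumes the Netlogon performance object and its Semaphore counters are present on your OS version (they should be on Windows Server 2008 R2 and later), so treat the counter paths as something to verify first with typeperf -q Netlogon.

```python
# Sample the Netlogon semaphore counters a few times via the built-in typeperf tool.
# Counter paths are assumptions based on the Netlogon performance object; verify
# them on your server first with: typeperf -q Netlogon
import subprocess

COUNTERS = [
    r"\Netlogon(*)\Semaphore Waiters",
    r"\Netlogon(*)\Semaphore Holders",
    r"\Netlogon(*)\Semaphore Timeouts",
]

# Take 5 samples (default interval is 1 second) and print the raw CSV output.
result = subprocess.run(
    ["typeperf", *COUNTERS, "-sc", "5"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)

# Rough interpretation: if "Semaphore Holders" never reaches the default limit
# for your OS (see the list above) and "Semaphore Timeouts" stays at 0, the
# alert is most likely the false positive described in Kevin Holman's post.
```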

If, however, you do bump against these values, you can increase the limit by editing this registry value:
HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\MaxConcurrentApi

The maximum value you can configure is, however, 150. If you are already at that ceiling, you should consider scaling out instead, unless you are willing to accept the user experience degradation of slower validation and possibly additional authentication prompts.
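If you prefer to script the check and the change, here is a minimal sketch using Python’s built-in winreg module. It is an illustration only: the value may not exist yet (the OS default then applies), the script must run elevated to write, and to my knowledge the new value only takes effect after the Netlogon service has been restarted.

```python
# Minimal sketch: read (and optionally raise) MaxConcurrentApi.
# Run elevated for writes; restart the Netlogon service afterwards so the
# change takes effect. 150 is the documented ceiling.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Netlogon\Parameters"
MAX_SUPPORTED = 150

def get_max_concurrent_api():
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        try:
            value, _ = winreg.QueryValueEx(key, "MaxConcurrentApi")
            return value
        except FileNotFoundError:
            return None  # value not set; the OS default applies

def set_max_concurrent_api(new_value):
    if not 1 <= new_value <= MAX_SUPPORTED:
        raise ValueError(f"MaxConcurrentApi must be between 1 and {MAX_SUPPORTED}")
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "MaxConcurrentApi", 0, winreg.REG_DWORD, new_value)

print("Current value:", get_max_concurrent_api())
# set_max_concurrent_api(10)  # example: raise the limit, then restart Netlogon
```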

Debunking the myths of why VMware is better than Hyper-V – Disk footprint…

I’m continuing my series of posts tackling the myths I meet when talking to customers about Hyper-V. I’ve previously written about Transparent Page Sharing and memory overcommit.
For today’s post, I’m going to talk about hypervisor disk footprint.

One of the things I hear mentioned by customers when comparing ESXi to Hyper-V is disk footprint. I’ve heard it brought up numerous times at various customers, but none of them has gotten the details quite right, so I can only speculate that this argument comes from the marketing machine at a certain vendor 😉

The argument stands as this:

ESXi only consumes 144MB of disk space, while Hyper-V consumes 5GB of disk space.

So, is this true?

Well, yes it is, but only if you compare the two hypervisor architectures the way you would compare apples to bananas. While it is true that the hypervisor kernel in VMware consumes only 144MB of disk space, the 5GB figure is not true for Hyper-V: the Hyper-V kernel itself consumes only about 600KB. So why the claim that Hyper-V uses 5GB of disk space?
Well, VMware counts the Hyper-V management partition in that calculation while forgetting to do the same for their own management components.

To compare the two, we need to have some understanding of the two hypervisor structures.
VMware uses a hypervisor type called monolithic, as shown in this picture:
[Image: ESXi (monolithic) kernel type]
In this kernel type, drivers are included in the hypervisor kernel.

If we take a look at Hyper-V, it uses a kernel type called micro-kernelized:
[Image: Hyper-V (micro-kernelized) kernel type]
In this type, only the hypervisor itself runs in the kernel; all drivers, management components and so forth are located in the parent (management) partition.

So, as shown above, VMware is both right and wrong when claiming that ESXi consumes 144MB of disk space while Hyper-V uses 5GB; it depends on how you look at it. To make the comparison fair, when VMware claims that their hypervisor takes only 144MB of disk space, they should also state that the Hyper-V hypervisor uses only about 600KB.

Furthermore, when comparing these two designs there are some distinct differences that are worth mentioning.

  • As drivers are not loaded in the hypervisor kernel, the need for specialized drivers is removed in Hyper-V. All drivers that work with Windows Server will work with Hyper-V, as opposed to VMware, where they need to be written specifically for it.
  • All drivers in Hyper-V run in the parent partition, thus “isolating” them from acting directly in the hypervisor layer. With the VMware approach, where drivers run in the kernel, a malfunctioning driver could impact virtual machines, or a malicious driver could gain access to them (for example through the vShield API).
  • The amount of code running in the kernel is about 600KB, as opposed to 144MB in ESXi.

Lastly, another selling point that derives from this footprint discussion is security. VMware states that their product is more secure due to the so-called smaller disk footprint, based on the argument that a smaller code base equals a more secure product.
If that argument held up, Windows 95 would have to be considered more secure than Windows 7, as the former consumes only about 80MB of disk space while the latter uses several GB.
Today, most attackers focus on getting in the same way the admins of a given product do. This is a side effect of the products getting more and more secure, and so it is your security policies and processes that keep your infrastructure secure, not the amount of disk space (or lines of code).

Debunking the myths of why VMware is better than Hyper-V – Memory overcommit…

As I wrote previously, I’ve decided to tackle some of the myths and lies that surround Hyper-V as I hear them from either customers or VMware sales reps. Earlier, I wrote about Transparent Page Sharing and why it isn’t useful anymore.

For this article, I’m going to talk about the memory overcommit feature of the vSphere Hypervisor.
VMware’s description of it:

VMware’s ESX® Server is a hypervisor that enables competitive memory and CPU consolidation ratios. ESX allows users to power on virtual machines (VMs) with a total configured memory that exceeds the memory available on the physical machine. This is called memory overcommitment.

So, a really cool feature by its description.
Microsoft of course has a feature that accomplishes the same goal but in a very different way, called Dynamic Memory. More on that in a bit.

To go a little more in depth on this feature, I’ll snip some from the VMware documentation:

For each running virtual machine, the system reserves physical memory for the virtual machine’s reservation (if any) and for its virtualization overhead.

Because of the memory management techniques the ESXi host uses, your virtual machines can use more memory than the physical machine (the host) has available. For example, you can have a host with 2GB memory and run four virtual machines with 1GB memory each. In that case, the memory is overcommitted.

Comparing VMware’s memory overcommit to Microsoft’s Dynamic Memory: on the surface they both work towards the same end goal of presenting more memory than is physically available, on the assumption that the virtual machines never actually use it all at once.
Dynamic Memory, however, works a bit differently. Where in VMware you simply assign the maximum amount of memory you wish to have available, in Hyper-V you define three parameters:

  • Startup Memory
  • Minimum Memory
  • Maximum Memory

The startup memory is somewhat self-explanatory: it is the amount available to the machine at boot. Minimum memory is the lowest amount you wish to have available to the virtual machine, and it will never drop below this. Maximum is again self-explanatory, as it is the most memory the virtual machine can be given. Hyper-V then assigns and removes memory from the guest OS using hot-add and memory ballooning.
The key difference, however, is not in these settings but in the fact that you CANNOT overcommit memory in Hyper-V. This requires some explaining…

Let’s take the example above. You are running a host with 2GB of memory and create 4 virtual machines with 1GB of memory each. Your environment is running, and something happens that requires the virtual machines to use all of their memory (it could also be just 3 of them; this is simply to illustrate the scenario where you need more memory than is available).
In ESX, all the machines believe they have the memory available and will use it, but the underlying ESX hypervisor cannot do magic and come up with the extra memory, so it starts swapping pages to disk, and performance goes haywire.
If this were to happen in Hyper-V, the virtual machines would be aware that they do not have all that memory, as Hyper-V will never assign more memory to the guests than is physically available. So what happens in this scenario? Well, as above, swapping will occur, but this time not at the hypervisor layer but inside the virtual machines.
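To make the difference concrete, here is a toy model of that 2GB host with four 1GB virtual machines. The even split and the numbers are mine, purely for illustration; neither ESX nor Hyper-V divides memory this crudely.

```python
# Toy model of the 2GB host / four 1GB VM example. The allocation policy here
# is deliberately simplistic and is NOT how ESX or Hyper-V actually schedules
# memory; it only shows where the shortfall ends up.

HOST_RAM_MB = 2048
vm_demand_mb = {"vm1": 1024, "vm2": 1024, "vm3": 1024, "vm4": 1024}

total_demand = sum(vm_demand_mb.values())
print(f"Demand: {total_demand} MB vs. physical: {HOST_RAM_MB} MB")

# Overcommit style: every VM believes it has its full 1GB, so the host has to
# hide the shortfall by swapping pages to disk at the hypervisor layer.
host_shortfall = max(0, total_demand - HOST_RAM_MB)
print(f"Overcommit: host swaps {host_shortfall} MB; the guests are unaware")

# No-overcommit (Dynamic Memory) style: the host only hands out what physically
# exists, so each guest sees less memory and does its own paging of the rest.
for vm, demand in vm_demand_mb.items():
    granted = HOST_RAM_MB * demand // total_demand
    print(f"No overcommit: {vm} is granted {granted} MB and pages {demand - granted} MB itself")
```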

And this is a major difference and the reason it works so much better in Hyper-V. Who knows better which data can be swapped to disk and which can’t than the machine running the workload? An example could be a SQL server, where you would prefer that data related to the SQL databases stays in memory while, for example, pages related to background processing go to the swap file.
In VMware, should hypervisor swapping occur, you run the risk of swapping out the wrong data and thereby decreasing performance even more; in Hyper-V, the virtual machine decides for itself what is best suited for swapping.

Now, as I’ve had this discussion a couple of times before, I know the answer from the VMware guys is that you can decide where to place the memory swap file in VMware and put it on SSD.
Well, that is not much of an argument: first, SSD is still slower than RAM, and second, this is also possible in Hyper-V by placing the virtual machine’s own page file on SSD (and since this is the actual page file of the virtual machine, it stays there permanently).

So to sum up, having less memory available than needed is never a desirable configuration, as it reduces performance. Should you nevertheless end up in a scenario with too little memory, Hyper-V handles the problem better for you…

Debunking the myths of why VMware is better than Hyper-V – Transparent Page Sharing

When I visit my customers and talk about Hyper-V, I get a lot of these “…but VMware is better because…” statements, and it always ticks me off when I know they aren’t true.
So, I’ve decided to go through these myths and argue why they either don’t matter, aren’t relevant any more, or are outright untrue.

For this first post about this topic, I’ve chosen to talk about the transparent page sharing (TPS) feature from vSphere Hypervisor.

VMware describes the feature like this:

When multiple virtual machines are running, some of them may have identical sets of memory content. This presents opportunities for sharing memory across virtual machines (as well as sharing within a single virtual machine).
For example, several virtual machines may be running the same guest operating system, have the same applications, or contain the same user data.
With page sharing, the hypervisor can reclaim the redundant copies and keep only one copy, which is shared by multiple virtual machines in the host physical memory. As a result, the total virtual machine host memory consumption is reduced and a higher level of memory overcommitment is possible.
So, that sounds neat, doesn’t it?
Well, in reality this feature isn’t as useful as it was back in the days of Windows 2000/2003. And that is because of a “new” feature introduced with Windows Server 2008 called Large Pages. Where a memory page in previous versions of Windows was 4KB in size, it can now be 2MB.
To describe why Large Pages are better to use, I’ll snip a bit from an article from AMD:

Why is it [Large Pages] better? Let’s say that your application is trying to read 1MB (1024KB) of contiguous data that hasn’t been accessed recently, and thus has aged out of the TLB cache. If memory pages are 4KB in size, that means you’ll need to access 256 different memory pages. That means searching and missing the cache 256 times—and then having to walk the page table 256 times. Slow, slow, slow.

By contrast, if your page size is 2MB (2048KB), then the entire block of memory will only require that you search the page table once or twice—once if the 1MB area you’re looking for is contained wholly in one page, and twice if it splits across a page boundary. After that, the TLB cache has everything you need. Fast, fast, fast.

It gets better.

For small pages, the TLB mechanism contains 32 entries in the L1 cache, and 512 entries in the L2 cache. Since each entry maps 4KB, you can see that together these cover a little over 2MB of virtual memory.

For large pages, the TLB contains eight entries. Since each entry maps 2MB, the TLBs can cover 16MB of virtual memory. If your application is accessing a lot of memory, that’s much more efficient. Imagine the benefits if your app is trying to read, say, 2GB of data. Wouldn’t you rather it process a thousand buffed-up 2MB pages instead of half a million wimpy 4KB pages?

So Large Pages are a good thing to use, as they reduce the lookups needed to get data from memory. But why is that a problem for Transparent Page Sharing?
Well, let’s assume you have a bunch of servers running on your ESX host. They all hold data in memory, which is scanned by the TPS feature (which, by the way, spends CPU resources doing so, but that’s another story). If you are running Windows 2003, the servers write 4KB pages to memory, and there is of course a reasonable chance that two pages are identical and that you thereby save memory.
But if you are running Windows 2008 or newer, then along come the 2MB pages. For TPS to be useful here, two pages would need 16,777,216 bits (that is almost 17 million bits) to be EXACTLY the same before TPS can kick in and work. And that’s not very likely to happen…
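For those who like to see the arithmetic behind the numbers in this post, here is a small worked example (the page sizes and TLB entry counts are the ones quoted from the AMD article above):

```python
# The arithmetic behind the numbers quoted above.
import math

KB, MB = 1024, 1024 * 1024
small_page = 4 * KB    # classic x86 page size
large_page = 2 * MB    # large page size used by Windows Server 2008 and newer

# Reading 1MB of cold data:
print("4KB pages needed for 1MB:", math.ceil(1 * MB / small_page))   # 256
print("2MB pages needed for 1MB:", math.ceil(1 * MB / large_page),   # 1
      "(2 if the block straddles a page boundary)")

# TLB coverage from the AMD article (32 L1 + 512 L2 small-page entries, 8 large-page entries):
print("Small-page TLB coverage:", (32 + 512) * small_page // KB, "KB")  # 2176 KB, a little over 2MB
print("Large-page TLB coverage:", 8 * large_page // MB, "MB")           # 16 MB

# Why TPS struggles with large pages: bits that must match exactly before a page can be shared.
print("Bits in one 2MB page:", large_page * 8)   # 16,777,216
```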
So to summarize, Transparent Page Sharing, which is a selling point for VMware (and one I know for a fact they use to badmouth Hyper-V), isn’t really relevant any more. You just don’t need it anymore…

VMware – possible data corruption in virtual machine…

I came across this article on the VMware support forums, and even though I haven’t encountered the error myself, I thought I’d post it anyway so that as many people as possible get this information.

Symptoms

On a Windows 2012 virtual machine using the default e1000e network adapter and running on an ESXi 5.0 or 5.1 host, you experience these symptoms:

  • Data corruption may occur when copying data over the network.
  • Data corruption may occur after a network file copy event.

Cause

The root cause of this issue is currently under investigation.

Please read this KB article from VMware on how to avoid the issue if you are running ESXi 5.0 or 5.1 and have Windows 2012 VMs.

Hyper-V server 2012 R2 – or the free hypervisor that can do it all…

You probably already know about the vSphere Hypervisor (ESXi), the free hypervisor product from VMware. Some of you may already be using it, either for testing or in smaller-scale production deployments.
Looking at the feature set compared to the paid editions, there is a distinct lack of features in this free version:

  • No live migrations (vMotion or Storage vMotion)
  • No High Availability
  • No Replication
  • No integration with management tools (e.g. vCenter)

So basically, it is meant to run a single server with non-critical virtual machines (which usually describes your typical test environment).

But what if there was another option… Well there is.

Microsoft Hyper-V Server (which has been around since Windows 2008) has just been updated and released in a new version, 2012 R2. With this you get all the hypervisor features of Windows Server 2012 R2 Datacenter FOR FREE. A quick rundown of some of the new features can be seen here on TechNet.

And all the limitations of the vSphere Hypervisor are gone, since you also get:

  • High Availability in the form of Failover Clustering
  • Live migration of virtual machines, both “normal” and storage migration
  • Shared nothing live migration, where you can migrate a virtual machine between 2 non-clustered hosts without incurring downtime
  • Replication of virtual machines using Hyper-V Replica
  • Full integration with the System Center portfolio, in case you have those products

So, if you are looking to provision some virtualization hosts but don’t like the feature set of the vSphere Hypervisor, Hyper-V Server is the product for you.

Swing by TechNet and grab your copy of it.

And for those of you virtualizing Linux VMs, a wide range of distributions is of course also supported in Hyper-V. See the list here.

Hyper-V vs. VMware – quick specs…

When talking to customers about Hyper-V, I usually get the statement that Hyper-V doesn’t even come close to matching VMware on specs or capabilities, which usually leads to a discussion where I have to pit the two against each other.
There are a lot of misconceptions out there about what Hyper-V is and what it can do, so even though Hyper-V matches and even surpasses VMware in some areas, there is still some way to go in getting VMware admins to realize this.

To aid this, I’ve found a chart that compares the two at a glance:

Licensing: At-A-Glance
(Microsoft = Windows Server 2012 R2 + System Center 2012 R2 Datacenter editions; VMware = vSphere 5.5 Enterprise Plus + vCenter Server 5.5)

  • # of physical CPUs per license: Microsoft 2; VMware 1. With Microsoft, each Datacenter Edition license covers up to 2 physical CPUs per host, and additional licenses can be “stacked” if more than 2 physical CPUs are present. With VMware, a vSphere 5.5 Enterprise Plus license must be purchased for each physical CPU; this difference in CPU licensing is one of the factors that can contribute to increased licensing costs. In addition, a minimum of one vCenter Server 5.5 license is required for vSphere deployments.
  • # of managed OSEs per license: Microsoft Unlimited; VMware Unlimited. Both solutions can manage an unlimited number of Operating System Environments per licensed host.
  • # of Windows Server VM licenses per host: Microsoft Unlimited; VMware 0. With VMware, Windows Server VM licenses must still be purchased separately; in environments virtualizing Windows Server workloads, this can contribute to a higher overall cost. VMware does include licenses for an unlimited number of VMs running SUSE Linux Enterprise Server per host.
  • Includes anti-virus / anti-malware protection: Microsoft Yes (System Center Endpoint Protection agents included for both host and VMs with System Center 2012 R2); VMware Yes (includes vShield Endpoint Protection, which deploys as an EPSEC thin agent in each VM plus a separate virtual appliance).
  • Includes full SQL database server licenses for management databases: Microsoft Yes (includes all needed database server licensing to manage up to 1,000 hosts and 25,000 VMs per management server); VMware No (additional database server licenses must be purchased to scale beyond 100 hosts and 3,000 VMs with the vCenter Server Appliance). VMware licensing includes an internal vPostgres database that supports managing up to 100 hosts and 3,000 VMs via the vCenter Server Appliance; see the VMware vSphere 5.5 Configuration Maximums for details.
  • Includes licensing for operations monitoring and management of hosts and guest VMs: Microsoft Yes (included in System Center 2012 R2); VMware No (requires a separate license for vCenter Operations Manager or an upgrade to vSphere with Operations Management).
  • Includes licensing for private cloud management capabilities (pooled resources, self-service, delegation, automation, elasticity, chargeback/showback): Microsoft Yes (included in System Center 2012 R2); VMware No (requires the additional cost of the VMware vCloud Suite).
  • Includes management tools for provisioning and managing VDI solutions for virtualized Windows desktops: Microsoft Yes (included in the RDS role of Windows Server 2012); VMware No (VDI management requires the additional cost of VMware Horizon View).

Virtualization Scalability: At-a-Glance
(Microsoft = Windows Server 2012 R2 + System Center 2012 R2 Datacenter editions; VMware = vSphere 5.5 Enterprise Plus + vCenter Server 5.5)

  • Maximum # of logical processors per host: Microsoft 320; VMware 320. With vSphere 5.5 Enterprise Plus, VMware has “caught up” to Microsoft on this maximum.
  • Maximum physical RAM per host: Microsoft 4TB; VMware 4TB. With vSphere 5.5 Enterprise Plus, VMware has “caught up” to Microsoft on this maximum.
  • Maximum active VMs per host: Microsoft 1,024; VMware 512.
  • Maximum virtual CPUs per VM: Microsoft 64; VMware 64. When using VMware FT, only 1 virtual CPU per VM can be used.
  • Hot-adjust virtual CPU resources on a running VM: Microsoft Yes (Hyper-V can increase and decrease processor resource limits on running VMs); VMware Yes (can hot-add virtual CPUs to running VMs on selected guest operating systems and adjust limits/shares for CPU resources). The VMware hot-add CPU feature requires a supported guest operating system (check the VMware Compatibility Guide for details) and is not supported when using VMware FT.
  • Maximum virtual RAM per VM: Microsoft 1TB; VMware 1TB. When using VMware FT, only 64GB of virtual RAM per VM can be used.
  • Hot-add virtual RAM to a VM: Microsoft Yes (Dynamic Memory); VMware Yes. Requires a supported guest operating system.
  • Dynamic memory management: Microsoft Yes (Dynamic Memory); VMware Yes (memory ballooning). Note that memory overcommit is not supported for VMs configured as an MSCS VM guest cluster. VMware vSphere 5.5 also supports another memory technique, Transparent Page Sharing (TPS); while TPS was useful in the past on legacy server hardware platforms and operating systems, it is no longer effective in many environments because modern servers and operating systems support Large Memory Pages (LMP) for improved memory performance.
  • Guest NUMA support: Microsoft Yes; VMware Yes. NUMA = Non-Uniform Memory Access. Guest NUMA support is particularly important for scalability when virtualizing large multi-vCPU VMs on hosts with a large number of physical processors.
  • Maximum # of physical hosts per cluster: Microsoft 64; VMware 32.
  • Maximum # of VMs per cluster: Microsoft 8,000; VMware 4,000.
  • Virtual machine snapshots: Microsoft Yes (up to 50 snapshots per VM); VMware Yes (up to 32 snapshots per VM chain are supported, but VMware only recommends 2 to 3). In addition, VM snapshots are not supported for VMs using an iSCSI initiator.
  • Integrated application load balancing for scaling out application tiers: Microsoft Yes (via System Center 2012 R2 VMM); VMware No (requires the additional purchase of vCloud Network and Security (vCNS) or the vCloud Suite).
  • Bare-metal deployment of new hypervisor hosts and clusters: Microsoft Yes (via System Center 2012 R2 VMM); VMware Yes (VMware Auto Deploy and Host Profiles support bare-metal deployment of new hosts into an existing cluster, but not bare-metal deployment of new clusters).
  • Bare-metal deployment of new storage hosts and clusters: Microsoft Yes (via System Center 2012 R2 VMM); VMware No.
  • Manage GPU virtualization for advanced VDI graphics: Microsoft Yes (server GPUs can be virtualized and pooled across VDI VMs via RemoteFX and the native VDI management features in the RDS role); VMware Yes (via the vDGA and vSGA features, but requires the separate purchase of VMware Horizon View to manage VDI desktop pools).
  • Virtualization of USB devices: Microsoft Yes (client USB devices can be passed to VMs via Remote Desktop connections; direct redirection of USB storage from the host is possible with Windows To Go certified devices, and direct redirection of other USB devices is possible with third-party solutions); VMware Yes (via USB pass-through support).
  • Minimum disk footprint: Microsoft 800KB for the micro-kernelized hypervisor (Ring -1) plus 5GB for drivers and management (parent partition, Ring 0 + 3); VMware 155MB for the monolithic hypervisor with drivers (Ring -1 + 0) plus 4GB for management (vCenter Server Appliance, Ring 3). Microsoft Hyper-V uses a modern micro-kernelized hypervisor architecture, which minimizes the components needed within the hypervisor running in Ring -1 while still providing strong scalability, performance, VM security, virtual disk security and broad device driver compatibility. VMware vSphere uses a larger, classic monolithic hypervisor approach, which incorporates additional code, such as device drivers, into the hypervisor; this approach can make device driver compatibility an issue in some cases, but offers increased compatibility with legacy server hardware that does not support Intel VT / AMD-V hardware-assisted virtualization. Microsoft and VMware each use different approaches for hypervisor architecture, and each offers different advantages as noted. See “When it comes to hypervisors, does size really matter?” for a more detailed real-world comparison. Frequently, patch management comes up when discussing disk footprints; see “Orchestrating Patch Management” for more details on that area.
  • Boot from flash: Microsoft Yes (supported via Windows To Go devices); VMware Yes.
  • Boot from SAN: Microsoft Yes (can leverage the included iSCSI Target Server or third-party iSCSI/FC storage arrays using software or hardware boot providers); VMware Yes (can leverage third-party iSCSI/FC storage arrays using software or hardware boot providers).

VM Portability, High Availability and Disaster Recovery: At-a-Glance
(Microsoft = Windows Server 2012 R2 + System Center 2012 R2 Datacenter editions; VMware = vSphere 5.5 Enterprise Plus + vCenter Server 5.5)

  • Live migration of running VMs: Microsoft Yes (unlimited concurrent live VM migrations, with the flexibility to cap them at a maximum limit appropriate for your datacenter architecture); VMware Yes (but limited to 4 concurrent vMotions per host when using 1GbE network adapters and 8 concurrent vMotions per host when using 10GbE network adapters).
  • Live migration of running VMs without shared storage between hosts: Microsoft Yes (via Shared Nothing Live Migration); VMware Yes (via Enhanced vMotion).
  • Live migration using compression of the VM memory state: Microsoft Yes (via compressed live migration, providing up to a 2x increase in live migration speeds); VMware No.
  • Live migration over RDMA-enabled network adapters: Microsoft Yes (via SMB Direct live migration, providing up to a 10x increase in live migration speeds); VMware No.
  • Live migration of VMs clustered with Windows Server Failover Clustering (MSCS guest clusters): Microsoft Yes (by configuring relaxed monitoring of MSCS VM guest clusters); VMware No (based on the documented vSphere MSCS setup limitations).
  • Highly available VMs: Microsoft Yes (highly available VMs can be configured on a Hyper-V host cluster; if the application running inside the VM is cluster-aware, a VM guest cluster can also be configured via MSCS for faster application failover times); VMware Yes (supported by VMware HA, but with the limitations listed above when using MSCS VM guest clusters).
  • Failover prioritization of highly available VMs: Microsoft Yes (via clustered priority settings on each highly available VM); VMware Yes.
  • Affinity rules for highly available VMs: Microsoft Yes (via preferred cluster resource owners and anti-affinity VM placement rules); VMware Yes.
  • Cluster-aware updating for orchestrated patch management of hosts: Microsoft Yes (via the included Cluster-Aware Updating (CAU) role service); VMware Yes (via vSphere 5.5 Update Manager, but when using the vCenter Server Appliance you need a separate 64-bit Windows OS license for the Update Manager server, and a separate SQL database server if supporting more than 5 hosts and 50 VMs).
  • Guest OS application monitoring for highly available VMs: Microsoft Yes; VMware Yes (provided by vSphere App HA, but limited to the following applications: Apache Tomcat, IIS, SQL Server, Apache HTTP Server, SharePoint and SpringSource tc Runtime).
  • VM guest clustering via shared virtual hard disk files: Microsoft Yes (via native Shared VHDX support for VM guest clusters); VMware Yes (but only single-host VM guest clustering is supported via shared VMDK files; VM guest clusters that extend across multiple hosts must use RDM instead).
  • Maximum # of nodes per VM guest cluster: Microsoft 64; VMware 5 (as documented in VMware’s guidelines for supported MSCS configurations).
  • Intelligent placement of new VM workloads: Microsoft Yes (via Intelligent Placement in System Center 2012 R2); VMware Yes (via vSphere DRS, but without the ability to intelligently place fault-tolerant VMs using VMware FT).
  • Automated load balancing of VM workloads across hosts: Microsoft Yes (via Dynamic Optimization in System Center 2012 R2); VMware Yes (via vSphere DRS, but without the ability to load-balance VM guest clusters using MSCS).
  • Power optimization of hosts when load-balancing VMs: Microsoft Yes (via Power Optimization in System Center 2012 R2); VMware Yes (via vSphere DRS, with the same limitations listed above for automated load balancing).
  • Fault-tolerant VMs: Microsoft No (the vast majority of application availability needs can be met via highly available VMs and VM guest clustering on a more cost-effective and more flexible basis than software-based fault tolerance; if required for specific business applications, hardware-based fault-tolerant server solutions can be leveraged where needed); VMware Yes (via VMware FT, but with a large number of limitations, including no support for VM snapshots, Storage vMotion, VM backups via vSphere Data Protection, Virtual SAN, multi-vCPU VMs, or more than 64GB of vRAM per VM). Software-based fault tolerance solutions such as VMware FT generally have significant limitations; if applications require more comprehensive fault tolerance than highly available VMs and VM guest clustering provide, hardware-based fault-tolerant server solutions offer an alternative without the limits imposed by software-based solutions.
  • Backup of VMs and applications: Microsoft Yes (via the included System Center 2012 R2 Data Protection Manager, with support for disk-to-disk, tape and cloud backups); VMware Yes (but only disk-to-disk backup of VMs via vSphere Data Protection; application-level backup integration requires the separately purchased vSphere Data Protection Advanced).
  • Site-to-site asynchronous VM replication: Microsoft Yes (via Hyper-V Replica with 30-second, 5-minute or 15-minute replication intervals, for a minimum RPO of 30 seconds; Hyper-V Replica also supports extended replication across three sites for added protection); VMware Yes (via vSphere Replication with a minimum replication interval of 15 minutes, for a minimum RPO of 15 minutes). In the VMware solution, orchestrated failover of site-to-site replication can be provided via the separately licensed VMware SRM. In the Microsoft solution, orchestrated failover can be provided via the included PowerShell at no additional cost; alternatively, a GUI for orchestrating failover is available via the separately licensed Windows Azure HRM service.

Storage: At-a-Glance
(Microsoft = Windows Server 2012 R2 + System Center 2012 R2 Datacenter editions; VMware = vSphere 5.5 Enterprise Plus + vCenter Server 5.5)

  • Maximum # of virtual SCSI hard disks per VM: Microsoft 256 (virtual SCSI); VMware 60 (PVSCSI) or 120 (virtual SATA).
  • Maximum size per virtual hard disk: Microsoft 64TB; VMware 62TB. vSphere 5.5 support for 62TB VMDK files is limited to VMFS5 and NFS datastores only; VMFS3 datastores are still limited to 2TB VMDK files. In vSphere 5.5, Hot-Expand, VMware FT, Virtual Flash Read Cache and Virtual SAN are not supported with 62TB VMDK files.
  • Native 4K disk support: Microsoft Yes (Hyper-V supports both 512e and 4K large-sector-size disks to help ensure compatibility with emerging innovations in storage hardware); VMware No.
  • Boot a VM from virtual SCSI disks: Microsoft Yes (Generation 2 VMs); VMware Yes.
  • Hot-add virtual SCSI storage to running VMs: Microsoft Yes; VMware Yes.
  • Hot-expand virtual SCSI hard disks on running VMs: Microsoft Yes; VMware Yes (but not supported with the new 62TB VMDK files).
  • Hot-shrink virtual SCSI hard disks on running VMs: Microsoft Yes; VMware No.
  • Storage Quality of Service: Microsoft Yes (Storage QoS); VMware Yes (Storage IO Control). In VMware vSphere 5.5, Storage IO Control is not supported for RDM disks; in Windows Server 2012 R2, Storage QoS is not supported for pass-through disks.
  • Virtual Fibre Channel to VMs: Microsoft Yes (4 virtual FC NPIV ports per VM); VMware Yes (4 virtual FC NPIV ports per VM, but not supported when using VM guest clusters with MSCS). vSphere 5.5 Enterprise Plus also includes a software initiator for FCoE support for VMs; while not included inbox in Windows Server 2012 R2, a no-cost ISV solution is available to provide FCoE support for Hyper-V VMs.
  • Live-migrate virtual storage for running VMs: Microsoft Yes (unlimited concurrent live storage migrations, with the flexibility to cap them at a maximum limit appropriate for your datacenter architecture); VMware Yes (but only up to 2 concurrent Storage vMotion operations per host and up to 8 concurrent Storage vMotion operations per datastore; Storage vMotion is also not supported for MSCS VM guest clusters).
  • Flash-based read cache: Microsoft Yes (using SSDs in tiered Storage Spaces, limited to 160 physical disks and 480TB total capacity); VMware Yes (but only up to 400GB of cache per virtual disk and 2TB of cumulative cache per host for all virtual disks).
  • Flash-based write-back cache: Microsoft Yes (using SSDs in Storage Spaces for write-back cache); VMware No.
  • SAN-like storage virtualization using commodity hard disks: Microsoft Yes (included in Windows Server 2012 R2 Storage Spaces); VMware No. VMware provides Virtual SAN as an experimental feature in vSphere 5.5; you can test and experiment with it, but VMware does not expect it to be used in a production environment.
  • Automated tiered storage between SSD and HDD using commodity hard disks: Microsoft Yes (included in Windows Server 2012 R2 Storage Spaces); VMware No. Again, Virtual SAN is included only as an experimental feature in vSphere 5.5 and is not expected to be used in production.
  • Can consume storage via iSCSI, NFS, Fibre Channel and SMB 3.0: Microsoft Yes; VMware Yes, except no support for SMB 3.0.
  • Can present storage via iSCSI, NFS and SMB 3.0: Microsoft Yes (via the included iSCSI Target Server, NFS Server and Scale-Out SMB 3.0 Server support; all roles can be clustered for high availability); VMware No (VMware provides the vSphere Storage Appliance as a separately licensed product to present NFS storage).
  • Storage multipathing: Microsoft Yes (via MPIO and SMB Multichannel); VMware Yes (via VAMP).
  • SAN offload capability: Microsoft Yes (via ODX); VMware Yes (via VAAI).
  • Thin provisioning and trim of storage: Microsoft Yes (via Storage Spaces thin provisioning and NTFS trim notifications); VMware Yes (but trim operations must be processed manually by running the esxcli vmfs unmap command to reclaim disk space).
  • Storage encryption: Microsoft Yes (via BitLocker); VMware No.
  • Deduplication of storage used by running VMs: Microsoft Yes (via the included Data Deduplication role service); VMware No.
  • Provision VM storage based on storage classifications: Microsoft Yes (via storage classifications in System Center 2012 R2); VMware Yes (via Storage Policies, formerly called Storage Profiles, in vCenter Server 5.5).
  • Dynamically balance and re-balance storage load based on demand: Microsoft Yes (storage IO load balancing and re-balancing is handled automatically on demand by both the SMB 3.0 Scale-Out File Server and automated storage tiers in Storage Spaces); VMware Yes (via Storage DRS, but with limited load-balancing frequency: the default DRS load-balance interval runs only every 8 hours and can be adjusted to run at most every hour).
  • Integrated provisioning and management of shared storage: Microsoft Yes (System Center 2012 R2 VMM includes storage provisioning and management of SAN zoning, LUNs and clustered storage servers); VMware No.

Networking: At-a-Glance
(Microsoft = Windows Server 2012 R2 + System Center 2012 R2 Datacenter editions; VMware = vSphere 5.5 Enterprise Plus + vCenter Server 5.5)

  • Distributed switches across hosts: Microsoft Yes (via Logical Switches in System Center 2012 R2); VMware Yes.
  • Extensible virtual switches: Microsoft Yes (several partners offer extensions today, such as Cisco, NEC, InMon and 5nine; Windows Server 2012 R2 adds support for co-existence of network virtualization and switch extensions); VMware Replaceable rather than extensible (the VMware virtual switch can be replaced, but it is not incrementally extensible with multiple third-party solutions concurrently).
  • NIC teaming: Microsoft Yes (up to 32 NICs per NIC team; Windows Server 2012 R2 provides a new dynamic load-balancing mode using flowlets for efficient load balancing even between a small number of hosts); VMware Yes (up to 32 NICs per link aggregation group).
  • Private VLANs (PVLAN): Microsoft Yes; VMware Yes.
  • ARP spoofing protection: Microsoft Yes; VMware No (requires the additional purchase of vCloud Network and Security (vCNS) or the vCloud Suite).
  • DHCP snooping protection: Microsoft Yes; VMware No (requires the additional purchase of vCNS or the vCloud Suite).
  • Router advertisement guard protection: Microsoft Yes; VMware No (requires the additional purchase of vCNS or the vCloud Suite).
  • Virtual port ACLs: Microsoft Yes (Windows Server 2012 R2 adds support for extended ACLs that include protocol, source/destination ports, state, timeout and isolation ID); VMware Yes (via the new traffic filtering and marking policies in vSphere 5.5 distributed switches).
  • Trunk mode to VMs: Microsoft Yes; VMware Yes.
  • Port monitoring: Microsoft Yes; VMware Yes.
  • Port mirroring: Microsoft Yes; VMware Yes.
  • Dynamic Virtual Machine Queue: Microsoft Yes; VMware Yes.
  • IPsec task offload: Microsoft Yes; VMware No.
  • Single Root IO Virtualization (SR-IOV): Microsoft Yes; VMware Yes (supported by vSphere 5.5 Enterprise Plus, but without support for vMotion, highly available VMs or VMware FT when using SR-IOV).
  • Virtual Receive Side Scaling (virtual RSS): Microsoft Yes; VMware Yes (VMXNET3).
  • Network Quality of Service: Microsoft Yes; VMware Yes.
  • Network virtualization: Microsoft Yes (via Hyper-V Network Virtualization, based on the NVGRE protocol and the in-box site-to-site NVGRE gateway); VMware No (requires the additional purchase of VMware NSX).
  • Integrated network management of both virtual and physical network components: Microsoft Yes (System Center 2012 R2 VMM supports integrated management of virtual networks, top-of-rack (ToR) switches and integrated IP address management); VMware No.

Guest Operating Systems: At-a-Glance

For this section, I’m defining Supported Guest Operating Systems as operating systems that are supported by both the virtualization platform vendor and by the operating system vendor.  Below, I’ve listed the latest common versions of major Windows and Linux operating systems that I’ve seen used in business environments of all sizes over the years, including SMB, Enterprise and hosting partner organizations.  I’ve included the support status for each operating system along with relevant notes where helpful.

If you’re looking for the full list of Guest Operating Systems supported by each platform, you can find the full details at the following locations:

(Microsoft = Windows Server 2012 R2 + System Center 2012 R2 Datacenter editions; VMware = vSphere 5.5 Enterprise Plus + vCenter Server 5.5)

  • Windows Server 2012 R2: Microsoft Yes; VMware Yes
  • Windows 8.1: Microsoft Yes; VMware Yes
  • Windows Server 2012: Microsoft Yes; VMware Yes
  • Windows 8: Microsoft Yes; VMware Yes
  • Windows Server 2008 R2 SP1: Microsoft Yes; VMware Yes
  • Windows Server 2008 R2: Microsoft Yes; VMware Yes
  • Windows 7 with SP1: Microsoft Yes; VMware Yes
  • Windows 7: Microsoft Yes; VMware Yes
  • Windows Server 2008 SP2: Microsoft Yes; VMware Yes
  • Windows Home Server 2011: Microsoft Yes; VMware No
  • Windows Small Business Server 2011: Microsoft Yes; VMware No
  • Windows Vista with SP2: Microsoft Yes; VMware Yes
  • Windows Server 2003 R2 SP2: Microsoft Yes; VMware Yes
  • Windows Server 2003 SP2: Microsoft Yes; VMware Yes
  • Windows XP with SP3: Microsoft Yes; VMware Yes
  • Windows XP x64 with SP2: Microsoft Yes; VMware Yes
  • CentOS 5.7, 5.8, 6.0 – 6.4: Microsoft Yes; VMware Yes
  • CentOS Desktop 5.7, 5.8, 6.0 – 6.4: Microsoft Yes; VMware Yes
  • Red Hat Enterprise Linux 5.7, 5.8, 6.0 – 6.4: Microsoft Yes; VMware Yes
  • Red Hat Enterprise Linux Desktop 5.7, 5.8, 6.0 – 6.4: Microsoft Yes; VMware Yes
  • SUSE Linux Enterprise Server 11 SP2 & SP3: Microsoft Yes; VMware Yes
  • SUSE Linux Enterprise Desktop 11 SP2 & SP3: Microsoft Yes; VMware Yes
  • OpenSUSE 12.1: Microsoft Yes; VMware Yes
  • Ubuntu 12.04, 12.10, 13.10: Microsoft Yes; VMware Yes (currently 13.04 in the 13.x distros)
  • Ubuntu Desktop 12.04, 12.10, 13.10: Microsoft Yes; VMware Yes (currently 13.04 in the 13.x distros)
  • Oracle Linux 6.4: Microsoft Yes (Oracle has certified its supported products to run on Hyper-V and Windows Azure); VMware Yes (however, per the referenced Oracle article, Oracle has not certified any of its products to run on VMware and will only provide support for issues that are either known to occur on the native OS or can be demonstrated not to be a result of running on VMware)
  • Mac OS X 10.7.x & 10.8.x: Microsoft No; VMware Yes (however, based on the current Apple EULA this configuration may not be legally permitted in your environment: the Apple EULA for Mac OS X only permits installing it on Apple-branded hardware, so virtualizing Mac OS X on non-Apple hardware is, to my understanding, a violation of those terms)
  • Sun Solaris 10: Microsoft No; VMware Yes (with the same Oracle support caveat as above: per the referenced Oracle article, Oracle has not certified any of its products to run on VMware and will only provide support for issues that are either known to occur on the native OS or can be demonstrated not to be a result of running on VMware)

The complete article can be found on TechNet at this link.