28 November, 2014

Virtual Ethernet card "Virtual Ethernet Adapter" is of type which is not supported

Last week I was asked if I could solve an issue at a customer who was patching all of their ESXi hosts. They ran into a host which refused to go into maintenance mode; the reason was a VM that did not vMotion to another host successfully.
Basically, the vMotion task timed out. When they investigated the error details, they found the more specific error description quoted in the title of this post.
It looked like there was something wrong with the network adapter installed in this specific VM. But when the configuration of this VM was checked, it turned out that it had the customer's default network adapter installed, VMXNET3.
Having a second look at the VM configuration, the Guest OS was set to Windows 2003 Standard for some odd reason, while this VM was actually running Windows 2008 R2 Enterprise.
After shutting down the VM and correcting the Guest OS setting, the VM was powered on again. Now the VM could be vMotioned without any issue.
We already know it is important to select the proper Guest OS so that VMware Tools installs correctly and the right default virtual hardware is presented to the VM's OS. This is an extra reason to pay close attention to choosing the right Guest OS for a VM.
If you want to read more about the impact of a mismatched Guest OS selection, please read this blog post by Frank Denneman.
In that blog post you can also find a PowerCLI "one-liner" by Alan Renouf to quickly scan your environment for mismatched Guest OS settings.
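For those who prefer Python over PowerCLI, a minimal pyVmomi sketch along the same lines could look like the one below. The vCenter address and credentials are placeholders and this is only an illustration of the check, not Alan's one-liner: it compares the Guest OS a VM is configured with against the Guest OS that VMware Tools actually reports.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        configured = vm.config.guestId if vm.config else None   # Guest OS as configured
        running = vm.guest.guestId if vm.guest else None         # Guest OS reported by VMware Tools
        if configured and running and configured != running:
            print(f"{vm.name}: configured={configured}, running={running}")
finally:
    Disconnect(si)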

05 November, 2014

A personal introduction to Diablo Technologies @VMworld

A couple of weeks before VMworld Europe, I got introduced to Diablo Technologies. Before that I had heard of the company but did not really know what they did.
I got the opportunity to meet with Kevin Wagner, VP of Marketing, during VMworld to talk about Diablo Technologies, their current solutions and their yet-to-be-released solutions.
After the introduction, I wanted to find out what Diablo Technologies is all about and how they relate to virtualization. Looking at their website, I found they have a technology called Memory Channel Storage (MCS), which sounds great, but I could not really figure out how this technology is actually used. I found it was Flash storage that sits in a DRAM slot, but it was not clear to me how you could leverage this Flash storage.
This changed quickly after meeting with Kevin. I was expecting a marketing talk which would hopefully also bring along some technical details, but this was not the case at all. Of course there was a fair amount of marketing info, as I did not know the company or their technologies really well, but the meeting was for the largest part technical.
So what is Memory Channel Storage all about then? The technology is based on a combination of hardware and software (a driver). The hardware looks somewhat like a DDR3 DRAM module with an additional heatsink attached to it.

Only on this module there is no DDR3 RAM; instead there is 19 nm MLC NAND (Flash) memory accompanied by some controllers. Both the NAND and the controllers are provided by SanDisk. These controllers make it possible to present this Flash memory as block-based storage within the server's OS; additionally, a driver within the OS is needed. Currently drivers are available for Microsoft Windows, VMware ESXi and a variety of Linux distributions.
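As a small aside, if a driver presents the module as an ordinary block device, you can check its visibility on a Linux host just like any other disk. The sketch below only uses the standard sysfs layout; nothing in it is specific to Diablo's driver, which I have not used myself.

import os

SYS_BLOCK = "/sys/block"

# List every block device the kernel knows about, with size and device type,
# so a newly presented flash module can be spotted among the existing disks.
for dev in sorted(os.listdir(SYS_BLOCK)):
    with open(os.path.join(SYS_BLOCK, dev, "size")) as f:
        sectors = int(f.read().strip())                  # size in 512-byte sectors
    with open(os.path.join(SYS_BLOCK, dev, "queue", "rotational")) as f:
        non_rotational = f.read().strip() == "0"         # 0 = flash/SSD-like device
    kind = "non-rotational (flash)" if non_rotational else "rotational"
    print(f"{dev:12s} {sectors * 512 / 10**9:10.1f} GB  {kind}")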
So two questions popped into my mind: how much Flash is on such a module? And with Flash memory in a DRAM slot presented as block-based storage, how does the server (mainboard) recognise this correctly?
The MCS modules are currently available in two sizes, 200 GB and 400 GB, and there are ideas to add an 800 GB version as well. 200 GB and 400 GB Flash modules are adequately sized for use as a caching solution, but would it also be possible to use them as a real storage solution, like you can with SSDs? More on this later, because the other question was how it is presented as block-based storage.
Well, to be able to do this Diablo Technologies has incorporated controllers on the module which act as a disk controller, and a driver within the server's OS is needed to control the way the module(s) work. But you are still left with a module plugged into a DRAM slot on the mainboard; for it to be detected as an MCS module instead of a DRAM module, a minor change is needed to the mainboard's UEFI (Unified Extensible Firmware Interface), the newer version of the mainboard's BIOS.
My two questions were answered, but now other questions came up.
A change to the UEFI/BIOS of a server means you cannot just buy MCS modules and start plugging them into your existing server hardware! So what hardware do I need to buy, and which OEMs does Diablo Technologies work with? And how do I fit these MCS modules together with DRAM modules in a server?
The biggest OEM partners of Diablo Technologies are IBM and SuperMicro; there are others and the OEM base is growing. It would be great to see the possibility of MCS in HP, Dell or even Fujitsu compute hardware, if you ask me!
On MCS module placement: as previously mentioned, the modules are plugged into DDR3 DRAM slots, and the placement rules are exactly the same as for DRAM modules. This means a DRAM slot can be occupied by either a DRAM module or an MCS module; in other words, you will need a server mainboard with enough DRAM slots to accommodate both types of modules.
There are a couple of things you need to take into account, so why would you choose an MCS solution over an SSD or PCIe SSD solution? Simple: the memory bus is very close to the CPU and therefore offers higher speed and lower latency than a PCIe or SATA/SAS bus connection. On top of that, each MCS module uses only one memory channel, just like a DRAM module does. Because of this it can make use of parallelism: instead of having 800 GB of flash available when you plug in two 400 GB MCS modules, you only have 400 GB of flash, with both modules communicating through separate memory channels, guaranteeing very low response times even when servicing heavy I/O loads.
We all know SSDs result in low latencies when used in servers, in the <1 ms range in the most optimal situations. Latencies when using MCS technology are in the 4 μs to 5 μs range, but these numbers don't tell the complete story: Diablo Technologies' MCS is best at keeping ultra-low latencies even with high throughput.
When looking at the latencies of VMs running on a vSphere hypervisor, it is not only the use of the memory bus that achieves the very low latencies. When you look at how a traditional storage stack is built up compared to an MCS storage stack, you can easily see that the I/O path through the MCS storage stack is much shorter and passes through fewer components, which also reduces the total latency.
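To get a feel for such numbers on your own hardware, a quick-and-dirty probe like the one below can help. It is only a rough sketch: /dev/sdX is a placeholder for whatever block device the driver exposes, it needs root, and the Python call overhead adds a few microseconds per read, so treat the results as an upper bound; a purpose-built tool such as fio is the better choice for microsecond-precision measurements.

import mmap
import os
import random
import time

DEVICE = "/dev/sdX"   # placeholder device path
BLOCK = 4096          # read size; must be a multiple of the logical block size
SAMPLES = 10000

# O_DIRECT bypasses the page cache so the device itself is timed, not DRAM.
fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
buf = mmap.mmap(-1, BLOCK)              # anonymous mmap gives a page-aligned buffer
size = os.lseek(fd, 0, os.SEEK_END)     # device size in bytes

latencies = []
for _ in range(SAMPLES):
    offset = random.randrange(size // BLOCK) * BLOCK
    start = time.perf_counter()
    os.preadv(fd, [buf], offset)        # one aligned, uncached 4 KiB random read
    latencies.append((time.perf_counter() - start) * 1_000_000)

latencies.sort()
print(f"median: {latencies[len(latencies) // 2]:.1f} us, "
      f"p99: {latencies[int(len(latencies) * 0.99)]:.1f} us")
os.close(fd)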

Now take a new storage technology based on local storage, VMware's VSAN. It uses SSDs for caching and spinning disks for persistent storage, so with MCS being presented as block-based storage you could use MCS for the caching tier. This could possibly improve the already impressive VSAN performance.
But if we look a bit further down the road, VMware has announced it will probably support all-flash VSAN configurations in the "2.0" release. Just imagine the performance you could get if you used MCS in an all-flash VSAN.
Further down the road for Diablo Technologies themselves, Carbon2 is just around the corner: their MCS solution based on the DDR4 memory channel interface, which will bring MCS to the newest server architectures.

I must say I was really impressed with the technology and would recommend you have a look at Diablo Technologies if you are searching for a high-throughput, ultra-low-latency storage solution. Diablo Technologies offers the possibility to "Test Drive" MCS: they will set you up with a time slot to try out MCS running on their own servers at their HQ in Canada.
Both IBM and SanDisk have their own name for the MCS technology: at IBM it is called eXFlash DIMM, and SanDisk goes with ULLtraDIMM.