18 December, 2015

A Christmas present from VMware: vRealize Automation 7.0 is GA as of today!

VMware announced a new version of vRA (vRealize Automation) during the last VMworld, and as of today vRealize Automation 7.0 is available for download.

For detailed information on compatibility, the release notes, and the download link, please go to the vRealize Automation 7.0 Information Center.

Below is a highlight list of the new features and enhancements:

Streamlined and Automated Wizard-based Installation

  • Introduces management agent to automate the installation of Windows components and to collect logs
  • Automates the deployment of all vRealize Automation components
  • Installation wizards based on deployment needs: Minimal (Express) and Enterprise (Distributed) Installations


Simplified Deployment Architecture and High Availability Configuration

  • Embedded authentication service by using VMware Identity Manager
  • Converged Application Services in vRealize Automation Appliance
  • Reduced minimal number of appliances for HA configuration
  • Automated embedded PostgreSQL clustering with manual failover
  • Automated embedded vRealize Orchestrator clustering


Enhanced Authentication Service

  • Integrated user interface providing a common look and feel
  • Multiple new features enabled by the new authentication service


Simplified Blueprint Authoring for Infrastructure and Applications

  • Single unified model for machine and application blueprints, with a unified graphical canvas for designing blueprints including dependencies and network topology
  • Software component (formerly software service in Application Services) authoring on vSphere, vCloud Air, vCloud Director, and AWS endpoints
  • Extend or define external integrations in the canvas by using XaaS (formerly Advanced Service Design)
  • Enable team collaboration and role segregation by enhancing and introducing fine-grain roles
  • Blueprints as human-readable code that can be created in an editor of choice, stored in source control, and imported and exported within the same or between multiple vRealize Automation 7.0 instances
  • Customer-requested machine and application blueprints provided
  • Additional blueprints available on the VMware Solutions Exchange


Simplified and Enhanced NSX Support for Blueprint Authoring and Deployment

  • Dynamically configure an NSX network and micro-segmentation unique to each application
  • Automated connectivity to existing or on-demand networks
  • Micro-segmentation for application stack isolation
  • Automated security policy enforcement by using NSX security policies, groups, and tags
  • On-demand dedicated NSX load balancer


Enhanced Cloud Support for vCloud Air and AWS

  • Software component authoring for vCloud Air, vCloud Director, and Amazon AWS
  • Simplified blueprint authoring for vCloud Air and vCloud Director
  • Improved vCloud Air endpoint configuration
  • Optional proxy configuration


vRealize Orchestrator 7 New Features

  • Introduces vRealize Orchestrator Control Center for easy monitoring and troubleshooting
  • Significant Smart Client improvements, including a workflow tagging UI, client reconnect options, and enhanced search capabilities
  • vSphere 6.x vAPI endpoint support

09 December, 2015

NLVMUG Usercon 2016

The NLVMUG is probably the biggest thing after VMworld (Europe) if you live in The Netherlands or Belgium, and it is the biggest Usercon outside the US.


The location of this edition is the same as last year: the 1931 Event Center in Den Bosch.
Although the agenda is still to be determined, you can expect a solutions exchange, workshops and breakout sessions. Registration is already possible through this link. Once the agenda is available I will post an update to this blog post.

Or if you are up for it, submit your own presentation through this "call for papers" link.

For those who are attending this NLVMUG Usercon, see you March 27th 2016!

07 December, 2015

Updates failing on VSAN hosts

A while back one of my customers ran into an issue when they wanted to install Update 1 for ESXi 6.0. Initially the ESXi hosts updated as expected, except for five hosts.

For these five hosts, the first difference was that the hypervisor is installed on rack servers instead of blade servers. The main reason for this is that these hosts needed to accommodate local storage, which is used for VMware Virtual SAN. The five hosts were in the same VSAN cluster, and this cluster is used as the management cluster for the customer's entire vSphere environment.

So these five hosts needed to be updated with Update 1 for ESXi 6.0, but VUM (VMware Update Manager) failed to install the update with a somewhat strange error message. With the help of the VMware Knowledge Base it became clear it had something to do with staging the update before installing it on the ESXi host.

Because this customer runs ESXi from a USB flash device, the scratch location is redirected to shared storage. This configuration is the same for both the blade and rack servers. My first thought was that there was something wrong with this redirection for these specific five hosts. But after reviewing the advanced settings and verifying the time and date stamps on the various log files of the hosts within the .locker folders located on shared storage, all looked fine.

To my understanding, the update and patch staging location moves along with the log file location when the scratch location is changed. An excellent blog post on log file redirection and VSAN considerations when booting from a USB flash device was written by Cormac Hogan.
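
As a side note, checking the scratch redirection across all hosts can also be scripted instead of clicking through the advanced settings. The pyVmomi sketch below simply reads the ScratchConfig advanced settings for every host; the vCenter hostname and credentials are placeholders, and this is only a quick sanity check, not part of the actual fix.

```python
# Sketch: list the configured and current scratch location for every host,
# so a missing or wrong redirection stands out. Assumes pyVmomi is installed
# and the vCenter name/credentials below are replaced with real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVomomi import vim  # typo guard: correct module name is pyVmomi
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only: skip cert validation
si = SmartConnect(host='vcenter.example.local',  # placeholder vCenter
                  user='administrator@vsphere.local',
                  pwd='********',
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        opts = host.configManager.advancedOption
        for key in ('ScratchConfig.ConfiguredScratchLocation',
                    'ScratchConfig.CurrentScratchLocation'):
            for opt in opts.QueryOptions(key):
                print(host.name, opt.key, opt.value)
finally:
    Disconnect(si)
```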
So what do VSAN, ESXi hosts booting from a USB flash device, and scratch folder redirection have to do with failing updates? I will come to that; please bear with me as I first give you some background on the intended use and deployment method of the hosts concerned.

Because these hosts are used as the resources to run the management cluster on, they were the first ESXi hosts to be deployed within the customer's data center. No shared storage was available at the time of deployment. This poses a challenge: to be able to use VSAN you need vCenter, and to be able to deploy vCenter (as an appliance or Windows based) you need storage accessible by one or more ESXi hosts.

A solution is to bootstrap vCenter on a single VSAN node, yes, a single ESXi host that runs VSAN with only its own local storage!

If you want more information on how to bootstrap vCenter on a single VSAN node, please have a look at this 2-part blog post by William Lam on his VirtuallyGhetto blog.

By bootstrapping vCenter onto a single ESXi host, using its local storage to create the VSAN datastore, I could build the VSAN cluster and add the remaining four ESXi hosts to complete the cluster.
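
For those curious, the rough shape of that bootstrap is sketched below, run from the ESXi Shell via its bundled Python. The policy strings and device names are illustrative placeholders from memory; always take the exact, version-specific commands from William Lam's posts.

```python
# Rough sketch of the single-node VSAN bootstrap (run in the ESXi Shell).
# Policy strings and device names are placeholders; verify against William
# Lam's posts and your ESXi build before using anything like this.
import subprocess

force_provision = '(("hostFailuresToTolerate" i1) ("forceProvisioning" i1))'

# Allow objects to be provisioned on a single node (no replicas possible yet).
for policy_class in ('cluster', 'vdisk', 'vmnamespace', 'vmswap'):
    subprocess.check_call(['esxcli', 'vsan', 'policy', 'setdefault',
                           '-c', policy_class, '-p', force_provision])

# Create a one-node VSAN cluster and claim the local disks (placeholder IDs).
subprocess.check_call(['esxcli', 'vsan', 'cluster', 'new'])
subprocess.check_call(['esxcli', 'vsan', 'storage', 'add',
                       '-s', 'naa.ssd_device_id',   # flash / cache device
                       '-d', 'naa.hdd_device_id'])  # capacity device
```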
When the shared NFS storage became available later on in the project, the scratch folder was redirected, just like on all of the customer's other ESXi hosts.

And here is the catch: during the time the ESXi hosts were running without the redirection, they were logging to the local flash device. Usually this is not a big issue, other than the risks mentioned in Cormac's blog post. But when you use VSAN, additional log or trace files are written (vsantraces). And in this customer's case there were also VSAN Observer files written to the local flash device. VSAN Observer is a tool used to monitor and capture performance statistics, originally used only by the VMware VSAN engineering team. More information on VSAN Observer can be found here.

Vsantrace files can quickly grow to 500 MB, and VSAN Observer trace files even larger. As I explained previously, the scratch folder redirection was done some time (days) after the VSAN cluster became operational. When the redirection is done, the various trace files that are on the local flash device are NOT removed, and these files take up a considerable amount of space. In fact, they take up so much space that there was not enough space left for staging Update 1 for ESXi 6.0 on the host.

Manually removing the old VSAN-related trace files from the local flash devices was what solved the VUM issue. After the files were deleted, the remediation of the ESXi hosts using VSAN ran without any issue.
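
If you want to spot the offending files before deleting anything by hand, a small script along these lines will do. The path is an assumption, as the old local scratch location differs per host (typically a .locker directory on the local VFAT volume).

```python
# Sketch: report large vsantrace / VSAN Observer files left behind on the old
# local scratch location, so they can be reviewed before manual deletion.
# OLD_SCRATCH is a placeholder; point it at the host's old local scratch path.
import os

OLD_SCRATCH = '/vmfs/volumes/local-datastore/.locker'   # placeholder path
THRESHOLD_MB = 100

for root, _dirs, files in os.walk(OLD_SCRATCH):
    for name in files:
        path = os.path.join(root, name)
        size_mb = os.path.getsize(path) / (1024.0 * 1024.0)
        if size_mb >= THRESHOLD_MB and ('vsantrace' in name or 'observer' in name.lower()):
            print('%8.1f MB  %s' % (size_mb, path))
```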

Joining VMware

A while ago I got the opportunity to join VMware. I always thought that working for a vendor would be a great chance to get as close to the "fire" as possible, and I am sure that moving to a vendor will definitely have its advantages, both professional and personal.

I joined VMware as of November 2nd, 2015, and I am now working as a Senior Consultant SDDC with the VMware Professional Services Organisation for the Northern EMEA region.

Currently I am in the middle of the onboarding process, which includes a lot of (mandatory) training. Especially the training around (cloud) automation and network virtualization is very interesting, but it leaves little time for blogging at the moment. I hope to start blogging more frequently again once I am finished onboarding and used to my new role at VMware.