VCP 6.5 Study Material

This has been cross-posted from my own blog vGemba.net. Go check it out.

Introduction

At the recent Scottish VMUG vBeers event, certification seemed to be a common topic of conversation. I recommended some study material to people, so I thought it would be useful to list it here along with some other resources I have used.

VCP 6.5 DCV Overview

The full title for the exam is VMware Certified Professional 6.5 – Data Center Virtualization Exam. To gain the certification two exams are required.

The first is the vSphere 6.5 Foundations Exam. This exam needs to be completed before you can proceed to any of the actual certification exams, such as Data Center Virtualization, Network Virtualization, Cloud Management and Automation, or the Desktop and Mobility track. Passing the Foundations exam does not mean you have a certification; it means you are on the right path to your first VMware certification. This exam is available online through Pearson Vue.

Once you have completed this exam you can move onto the VCP DCV exam. The bible for your study should be the Exam Guide, which spells out exactly what you may be questioned on in the exam. Make sure you know each topic inside and out! The exam has to be completed at a Pearson Vue testing center.


VMware Fling – DRS Dump Insight H5 Plugin

This is a cross post from my own site: www.cragdoo.co.uk

I was recently reading through fellow vExpert Wouter Kurten's “The VMware Labs flings monthly for September 2018” post and one of the flings in the post piqued my curiosity.

If you’re not familiar with VMware Flings, then head over to labs.vmware.com and have a look around.

“Flings are apps and tools built by our engineers and community that are intended to be played with and explored.”

The fling in question is the “DRS Dump Insight H5 Plugin”, so I decided to get it up and running in my Ravello Cloud Lab.

Installation

a.  Head over to https://labs.vmware.com/flings/drs-dump-insight-h5-plugin

b. Check out the requirements. Note this fling is only compatible with VCSA 6.5/6.7 and not the HTML5 Client Fling.

Free and Paid Learning Resources

This has been cross-posted from my own blog vGemba.net. Go check it out.

Introduction

As IT professionals we are always learning. I thought I would highlight some free and paid-for learning resources that I use to improve my skills.

vBrownbag – Free

vBrownbag is a community-driven website and YouTube channel that provides weekly webcasts that teach new skills in under an hour. Topics have included certification tracks, automation, cloud, careers, VMware technologies, Docker, networking, and more.

They also do community sessions and recordings at VMworld which are excellent. They give community members who did not get to speak at the full conference a way to present a topic. For instance, at VMworld US 2018 there were recordings such as the vExpert Daily, PowerShell, blogging, and many others.

I have presented on Terraform and it was a great experience. If you have an idea for a talk reach out to the vBrownbag Team.

Skylines Academy – Paid

A recent newcomer to the on-demand video training model is Skylines Academy. It was started by Nick Colyer and focuses on Azure training. The courses are low cost, and once you purchase them you receive lifetime updates.

Burnout in IT

This article has been cross-posted from my own site www.cragdoo.co.uk.

At a recent Veeam User Group UK meeting I presented a session on burnout within IT. The title of my session was “And now for something completely different… with Irn Bru”. I deliberately kept the title vague and non-descriptive, so the attendees/listeners could come into the session without preconceived ideas. The idea was to give every attendee a can of Irn Bru and open the session up to them to discuss a couple of slides about the subject. I have to say it felt like the session was well received, with some discussions and general nodding of heads all round.

Disclaimer

Let me start by putting a disclaimer in place. I am not a counsellor, a psychologist, or a qualified professional. This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or another qualified health provider with any questions you may have regarding a medical condition.

Intro

About 5-6 months ago I was driving into work, very close to tears, and to be honest I was asking myself “what's the point of going there, I'm really not enjoying my work anymore”. That's when I realised that I might have a problem. I have been working for the same company for over 15 and a half years, and honestly this was the lowest I had felt in all that time. I decided to start looking for some kind of understanding or information about what could be the reason behind all this. Now you may be reading this and saying “well, 15 years in the same job will do that to you”, and you would be correct, if it wasn't for the fact my role had changed over the years and every week brings something new. So it had to be something else.

During my searches I came across two excellent videos:

vRO Composite Types

I have used vRO for quite some time, yet I have never really had a need to use Composite Types – until recently! This vRO feature is pretty cool: it allows you to create arrays which can be polled by a workflow and, what is a real benefit for me, it lets you minimise the number of workflows needed, or even the number of input parameters into a workflow.

I am a real advocate of using configuration files in vRO, and as anyone who attended my Glasgow VMUG session will (hopefully) remember, these are used for global settings, which means we don't need to update individual workflows if we point to these central configuration files. So where do Composite Types fit in here? Well, recently I had a requirement around DNS information which had the potential to impact the manageability of Network Profiles in vRA. The requirement was around how DNS information would be added to a VM during deployment depending on things like location or operating system. Using vRA's out-of-the-box IPAM meant that in order to achieve this I would have to create many profiles just to map different DNS info, and then deal with the complexity of splitting the IP ranges within the profiles. An alternative way to meet this requirement, while keeping it easy to manage and able to grow as more sites were added, was Composite Types!

Let's have a look at how it's done:


Scottish VMUG – Edinburgh October 4th 2018

Scottish VMUG – Edinburgh October 4th 2018 – Assembly Roxy

As per usual it's set to be a cracking day. We're honoured to be welcoming Chad Sakac on his inaugural visit to Scotland, and if you've never seen him speak then you're in for a treat. We've got our usual mix of awesome sponsors on the day and some fantastic first-timers and returning favourites from VMware.

Yet again we’ve got two VCDX’s attending so it’s all shaping up to be a memorable day and of course there’s a special vBeers planned for afterwards. If you haven’t registered yet…what are you waiting for?
Registration

We will update the details as we get them, so this blog page may change.

Keynote – Chad Sakac – Pivotal

“Getting into fewer, smarter bar fights: a look at the debates that drag us down, and the dialogs that lift us up”

Sponsors –

Dell Technologies – Simplify your VDI solutions planning and management with Dell Technologies – Darren Oakley

IBM – How to Overcome the Challenges of moving VMware to the Cloud – Jim Mckay, IBM Cloud Solution Architect

Learn about the range of VMware offerings available on IBM Cloud, including VCF with HCX. The demo will include live use of the IBM Cloud portal to deploy a VMware cluster, plus an overview of the many third-party offerings which can also be deployed.

Asystec – From Complex to Simple – An HCI Journey with VxRail Stretched Cluster – Victor Forde, Asystec Solution Architect, and Sandy Bryce, Baillie Gifford Lead Technical Architect

Focussing on Baillie Gifford's recent VxRail implementation, which is the first phase of their SDDC strategic goal. You will get to understand the business objectives, the VxRail Stretched Cluster deployment in all its phases, the criteria for success, and how it was rigorously tested. Also included will be tips and tricks, as well as lessons learned from deployment challenges encountered.

Zerto – Delivering IT Resilience in a Changing World – Nick Williams

– What is IT Resilience
– What changes are we seeing in the industry
– How an IT Resilience platform can help keep you on the right side of these.

SITS Group – Protecting your Office 365 data – What is your back-up strategy? – Ian Sanderson

VMware

Richard Machen – NSX SD-WAN by VeloCloud, how to simplify and improve wide area networking and access to the cloud.
Adam Bohle – VMware Cloud on AWS Update (including RDS)
Sam McGeown – Automating Micro-segmentation with vRealize Automation and NSX

vRealize Automation and NSX are both powerful tools in their own right, but together they really come alive. Based on real-world experience implementing micro-segmentation with vRealize Automation for all sorts of customers, learn the methods that work and how to avoid common mistakes. Compare out-of-the-box implementations with custom day-2 automation, and the challenges and benefits of both.

Matt Evans – To Re or not to Re(purpose) Desktops

The desktop market offers many desktop re-purposing solutions based on Windows, Linux and Chrome. In this session we will take a deep dive into those technologies, share our test results and present a comparison of the different vendor offerings to help you make an informed choice. Examples of our findings will cover costs, system requirements, performance, device management and limitations.

Robbie Jerrom – Cloud Native Apps Update

Community –

Allan McAleavy – Hunting noisy neighbours using VROPS & Grafana.

When hunting for noisy neighbour workloads from a high-level storage viewpoint, we can only drill down as far as an ESX host or VMFS volume. This initially led to the development of dashboards within VROPS to allow top-down (ESX Host to VM) and bottom-up (VMFS to VM) analysis to identify these workloads. This approach worked well; however, teams had to use different tools and different dashboards to correlate the data. In this talk I will show how we use the VROPS Python API to gather I/O data and correlate it with array data using Grafana to hunt down noisy workloads from an ESX node and VMFS view. I will also show how we can use this methodology to identify high CPU workloads and help us look at overall ESX and VM performance using this data.

Sponsors –

Dell Technologies
IBM
Asystec
Zerto
Capito / Pure Storage
Login VSI
SITS Group
Softcat

VMware Workstation Tech Preview 2018 REST API

This has been cross-posted from my own blog vGemba.net. Go check it out.

Introduction

VMware recently announced the release of VMware Workstation Tech Preview 2018. This has a number of new features:

  • DirectX 10.1 support
  • REST API
  • Support for Windows 10 High DPI
  • Host level high DPI
  • The ESXi Host / cluster view when connecting to vCenter
  • USB Auto Connect functions for a virtual machine
  • Support for Wayland architecture for Linux hosts

The feature I am most interested in is the REST API. After listening to Craig Dalrymple present a session at the last Scottish VMUG called Making Your 1st Restful API call to VMware, I wanted to try using an API. I had not tried one in my home lab or in production at work as I didn't want to screw anything up, so using Workstation is ideal.

Starting the REST API

The REST API needs to be started from the command line. The file you need to run is located in C:\Program Files (x86)\VMware\VMware Workstation. You can then run vmrest.exe --help to see some basic help:

C:\Program Files (x86)\VMware\VMware Workstation>vmrest.exe --help
VMware Workstation REST API
Copyright (C) 2018 VMware Inc.
All Rights Reserved
vmrest 1.1.0 build-8888902
Usage of vmrest.exe:
  -c, --cert-path <cert-path>
        REST API Server certificate path
  -C, --config
        Configure credential
  -d, --debug
        Enable debug logging
  -h, --help
        Print usage
  -i, --ip <ip>
        REST API Server IP binding (default 127.0.0.1)
  -k, --key-path <key-path>
        REST API Server private key path
  -p, --port <port>
        REST API Server port (default 8697)
  -v, --version
        Print version information

To start the REST API you simply type:

C:\Program Files (x86)\VMware\VMware Workstation>vmrest.exe
VMware Workstation REST API
Copyright (C) 2018 VMware Inc.
All Rights Reserved
vmrest 1.1.0 build-8888902
- Using the VMware Workstation UI while API calls are in progress is not recommended and may yield unexpected or unintended results.
- Serving HTTP on 127.0.0.1:8697
- Press Ctrl+C to stop.

You can see that there is a web-based Swagger interface for browsing the API at http://127.0.0.1:8697:

Swagger Interface

Authentication

If you now try to do anything through the API it will not work, as we are not authenticated. We need to configure some credentials. This is done using vmrest --config, but remember that if the API is running you will need to stop it first by pressing CTRL+C:

C:\Program Files (x86)\VMware\VMware Workstation>vmrest --config
VMware Workstation REST API
Copyright (C) 2018 VMware Inc.
All Rights Reserved
vmrest 1.1.0 build-8888902
Username:cwestwater
New password:
Retype new password:
Processing...
Credential updated successfully

C:\Program Files (x86)\VMware\VMware Workstation>

You simply enter a username and password for authentication to the API. Start the REST API again using vmrest.exe. Now in the web interface click the Authorize link in the top right, enter the username and password and then click the Authorize button:

Authorization

There does not seem to be any visual indication that you are logged in; the only way to check is to do something.

Swagger Interface

The Swagger interface is a great way to try the API. You don't need to use tools like Curl, PowerShell, Postman, etc.; you can simply use the web interface to perform API operations. You can see that there are four main sections, with various operations available under each:

GET VMs

There are quite a few operations available in the API. Let's try out a few.

Simple GET Operation

A quick check to see if things are working right is to use the Swagger interface and try a GET operation. GET means reading something; no changes are made, so it's safe!

Browse to VM Management...Show/Hide..GET /vms then click TRY IT OUT! The Response Body shows the two VMs I have in Workstation:

GET VMs

So in the above screenshot you can see some useful information from the interface. We can see what the response should look like, the HTTP response codes (200 means we were successful), the Curl command, and what we actually want to see, the Response Body:

[
  {
    "id": "3SFU5DH6CKR349853CVSG5T1E9TJCMEB",
    "path": "C:\\Users\\a-cwestwater\\Documents\\Virtual Machines\\Linux-01\\Linux-01.vmx"
  },
  {
    "id": "RG98SS5QSA90GAP42Q7M4IVAT1VOH2EV",
    "path": "C:\\Users\\a-cwestwater\\Documents\\Virtual Machines\\Linux-02\\Linux-02.vmx"
  }
]
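If you prefer to make the same call outside Swagger, here is a minimal sketch using Python's requests library. The /api base path, HTTP Basic authentication and the vnd.vmware.vmw.rest-v1+json media type are assumptions based on what the Swagger page shows for this Tech Preview, and the password is a placeholder for whatever you set with vmrest --config:

import requests

BASE = "http://127.0.0.1:8697/api"
AUTH = ("cwestwater", "YourPasswordHere")    # credentials configured via vmrest --config
HEADERS = {"Accept": "application/vnd.vmware.vmw.rest-v1+json"}

# GET /vms - list the VMs Workstation knows about
resp = requests.get(f"{BASE}/vms", auth=AUTH, headers=HEADERS)
resp.raise_for_status()                      # a 200 response passes straight through
for vm in resp.json():
    print(vm["id"], vm["path"])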

Now that we have a list of the VMs and their IDs, we can try something else. Let's get some VM setting information for a particular VM. To do this use GET /vms/{id}. In the web interface expand VM Management...Show/Hide..GET /vms/{id}. Under the Parameters section we need to use one of the ids from above; in this case I will use "id": "RG98SS5QSA90GAP42Q7M4IVAT1VOH2EV".

In the Parameters section it is looking for a parameter of id. Copy and paste the id of the VM into the field:

GET VM settings

Once this id is entered, click TRY IT OUT!. The Response Body section gives us the details of the VM:

Get VM settings

The VM has a single CPU with 64MB of RAM.
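The same lookup can be scripted; a short sketch with the same assumptions as above, using the id returned by GET /vms:

import requests

BASE = "http://127.0.0.1:8697/api"
AUTH = ("cwestwater", "YourPasswordHere")
HEADERS = {"Accept": "application/vnd.vmware.vmw.rest-v1+json"}

vm_id = "RG98SS5QSA90GAP42Q7M4IVAT1VOH2EV"

# GET /vms/{id} - fetch the CPU and memory settings of one VM
resp = requests.get(f"{BASE}/vms/{vm_id}", auth=AUTH, headers=HEADERS)
print(resp.json())                           # CPU count and memory (MB) for the VM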

Simple PUT Operation

A PUT operation updates something. In this case we want to add a CPU and some memory to a VM. This is under VM Management...Show/Hide..PUT /vms/{id}. We again need to define some details in the Parameters section. The first is the id of the VM, as we did above.

Next we need to add a definition of what the VM needs to be changed to. This goes in the parameters text box. There is an example shown just to the right:

PUT VM settings

Again click the TRY IT OUT! button and we get the response:

PUT VM settings

The VM now has 2 CPUs and double the amount of RAM. We can check using the API again:

PUT VM settings
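For reference, the equivalent PUT scripted in Python (same assumptions as the earlier sketches; the request body mirrors the example Swagger displays, so copy the exact field names from your own Swagger page if they differ):

import requests

BASE = "http://127.0.0.1:8697/api"
AUTH = ("cwestwater", "YourPasswordHere")
HEADERS = {
    "Accept": "application/vnd.vmware.vmw.rest-v1+json",
    "Content-Type": "application/vnd.vmware.vmw.rest-v1+json",
}

vm_id = "RG98SS5QSA90GAP42Q7M4IVAT1VOH2EV"
body = {"processors": 2, "memory": 128}      # 2 vCPUs and double the original 64MB

# PUT /vms/{id} - update the VM's hardware settings
resp = requests.put(f"{BASE}/vms/{vm_id}", json=body, auth=AUTH, headers=HEADERS)
print(resp.status_code, resp.json())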

DELETE Operation

Finally I want to delete the VM as I am done with it. This is found under VM Management...Show/Hide..DELETE /vms/{id}. There is a warning with DELETE operations: unlike the GUI, there is no confirmation or check that you actually want to delete the VM, it just does it. So be aware!

DELETE VM settings

Again we need to define the id of the VM we want to delete then click TRY IT OUT!. This time we get a Response Body and Response Code of:

DELETE VM settings

Not the usual response we have seen above. Usually we get a Response Code of 200, but this time it's 204. What is 204? Scroll up in the web interface and you see:

DELETE VM settings

So 204 means the VM was deleted. We can confirm using GET /vms:

DELETE VM settings

The VM with the id of RG98SS5QSA90GAP42Q7M4IVAT1VOH2EV is gone.
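Scripted, the delete looks like this (same assumptions again; note there is no confirmation step, so only run it against a VM you really want rid of):

import requests

BASE = "http://127.0.0.1:8697/api"
AUTH = ("cwestwater", "YourPasswordHere")
HEADERS = {"Accept": "application/vnd.vmware.vmw.rest-v1+json"}

vm_id = "RG98SS5QSA90GAP42Q7M4IVAT1VOH2EV"

# DELETE /vms/{id} - remove the VM; a successful delete returns 204 with no body
resp = requests.delete(f"{BASE}/vms/{vm_id}", auth=AUTH, headers=HEADERS)
print(resp.status_code)                      # expect 204

# Confirm it is gone by listing the remaining VMs
remaining = requests.get(f"{BASE}/vms", auth=AUTH, headers=HEADERS).json()
print([vm["id"] for vm in remaining])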

Wrap Up

When I started this blog post I had never used an API before, but within an hour I was using the Swagger interface to interact with VMware Workstation. 30 minutes later I was using Postman to do the same. I think using the ‘safety’ of Workstation to get used to the VMware API is a great way of learning how to use it.

I plan to investigate further as I use Workstation as my lab, so being able to automate operations using the API could help me a great deal. I expect further development of the API as the Tech Preview progresses.

ESXi scripted builds via PXE/kickstart

Periodically we spin up a slew of new hypervisors. If, like me, you find yourself desiring more automation (and the uniformity that comes with it) but sit somewhere between building ESXi by hand and the scale-out Auto Deploy tooling for hundreds of systems, you may find this useful, especially if you already maintain a kickstart server.

These are my experiences with scripted installations of ESXi 6.5 via kickstart and PXE booting. The beauty of this approach is that it only requires two options set in your DHCP server; the rest of the configuration is static. The net result: configure your DHCP, let the host PXE boot, and within a few minutes you have a repeatable and dependable build process.

So, how to get there?

First up you need to have the physical environment basics taken care of, e.g. rack and stack, cable up NICs, etc. Once that is in place, this process requires DNS entries to be populated for your new servers, plus the ISO exploded onto your tftpserver.

n.b. this scripted install uses the "--overwritevmfs" switch, so it's imperative that you do NOT have the host in a state where it can see any storage other than its local disks, e.g. if you are rebuilding existing hypervisors or the HBAs have been zoned into something else previously. This is imperative as this switch has the capacity to overwrite existing filesystems with the new VMFS install, therefore it must ONLY see its own local disks 😉
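To make that concrete, here is a generic sketch (in Python, and not the author's actual CGI script or kickstart file) of how a per-host kickstart containing that --overwritevmfs install line could be rendered and dropped somewhere your web/TFTP server can serve it. Every hostname, path and password below is a placeholder:

import os

# Generic ESXi kickstart body; adjust the directives to your environment.
# The root password and network line here are placeholders only.
KS_TEMPLATE = """\
vmaccepteula
install --firstdisk --overwritevmfs
rootpw {root_password}
network --bootproto=dhcp --device=vmnic0
reboot
"""

def write_kickstart(fqdn, ks_dir="/var/www/ks", root_password="ChangeMe!"):
    """Render a kickstart file named after the host's FQDN and return its path."""
    os.makedirs(ks_dir, exist_ok=True)
    path = os.path.join(ks_dir, f"{fqdn}.cfg")
    with open(path, "w") as fh:
        fh.write(KS_TEMPLATE.format(root_password=root_password))
    return path

print(write_kickstart("esxi01.example.com"))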

Overview of PXE & Kickstart boot workflow:

" <a href=https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vsphere-esxi-vcenter-server-60-pxe-boot-esxi.pdf (pay attention to “/“ and UEFI !)

[1] The boot loader files on the tftpserver. This is an exemplar showing both UEFI and non-UEFI sitting side by side on the same install base.

screenshot1

 

[2] The contents of the CGI script that gleans the FQDN details

screenshot2

[3]  An exemplar of a kickstart file

screenshot3

 

The boot itself

So with that all in place, all you need to do is determine whether the server you're booting uses legacy BIOS boot or UEFI. The only difference is the boot loader we'll point at, so insert the relevant DHCP-fu for your case:

legacy boot

66=your.tftpserver.com

67=esx-6.5/pxelinux.0

UEFI boot

66=your.tftpserver.com

67=esx-6.5/mboot.efi

Then simply set the next boot to PXE, reboot, and watch the little birdy fly!

There you have it. Effectively two DHCP settings and you can rinse and repeat, bringing up dozens of systems quickly with repeatable builds. There are of course many other options, all of which have their merits. This one is perhaps most useful if you are already invested in scripted builds/kickstarts.

 

VMware Integrated OpenStack

VMware Integrated OpenStack ‐ (Ninjas) In the Real World

At a recent VMUG I presented about our journey with VIO (VMware Integrated OpenStack) to date; here is a short write-up of that presentation.

So, firstly – what is VIO? Well, at its core it's a shrink-wrapped flavour of OpenStack, shipped and supported by VMware, running atop ESXi hypervisors. The VIO architecture connects vSphere resources to the OpenStack Compute, Networking, Block Storage, Image Service, Identity Service, and Orchestration components. You can deploy VMware Integrated OpenStack with either VDS or NSX-based networking.

Why use VIO and not a.n.other distro of OpenStack?

For us the biggest differentiator was enterprise stability. Basically VMware QA the deployment, ship you a virtual appliance, release around every 12 months, and support each release for two years. We also had a big concern about the cost of day-two Ops of vanilla OpenStack in comparison; VIO hugely reduces this, and also lets you leverage:

  • integration with existing VMware tooling/skills
  • treat it as a “shrink wrapped virtual appliance”
  • Supported product – GSS on the end of a phone (this has been invaluable as we take the first steps on our private cloud journey!)

Our Key Uses for VIO

  • IaaS (self‐service)
  • Enterprise Automation
  • Developer Cloud
  • Production Burst Capacity
  • Kubernetes Integration
  • Cloud scaling (On‐prem)

VIO Logical Design:

VIO basically looks like this;

There are two major things to be aware of in the initial design stage

  1. It’s really important here to know the line in the sand is – e.g. where VMware’s bit stops & OpenStack starts? & how do you cope with that? In a nutshell be aware VMware will support you all the way up to the Horizon UI, once inside user land – that’s largely outwith the realms of GSS. So it’s not a panacea, and you do need to know some OpenStack. But it’s well worth it!
  2. The biggest design decision once you’ve decided to deploy (both fiscal and design wise 🙂 ) …  is whether to run with NSX or VDS! Be aware that all of the L3 networking functionality in the OpenStack arena basically needs NSX to work as you would expect – it’s also important to note you cannot migrate between a VDS and an NSX deployment, if you change your mind it’s a complete re-install!

Other than that you’ll need a separate management cluster, we chose to build ours on a vSphere Metro Storage Cluster (vMSC), there is also a hard requirement on a dedicated VC for this owing to the burn rate of new VMs/instances coming online and saturating your existing VC(s). It’s worth noting that since we launched there is now VIO-in-a-box, this is an entire deployment condensed onto one node – well worth a look!

Challenges with OpenStack

VMware has gone a long way in their usual space of abstracting and simplifying the complex; however, no matter how well VIO deals with the infrastructure and deployment, it cannot help with actually running OpenStack. Be aware that the on-ramp to learning OpenStack can be complex, with a proliferation of projects and components where many come and go, and most are not enterprise ready! The rate of change in the open-source world of OpenStack can be staggering, and consequently documentation can sometimes lag behind.

Advantages of OpenStack

The biggest thing here for us is the self-service element and the empowerment of developers; the other key things were:

  • Fault Domains & Availability Zones
  • Less person resource at Ops end via Self Service to devs
  • Multi-Tenancy – containers alongside legacy stacks from a single IaaS platform
  • Standardised APIs and product – e.g. it is not custom VMware or other vendor lock-in code
  • Very large global community

Core Advantages of VIO over vanilla OpenStack

There are three arenas which have proved invaluable for us with VIO over vanilla OpenStack; they are:

  • Enterprise grade stability
  • Support direct with VMware – GSS on the end of a phone has been brilliant for us!
  • Almost zero day two Ops costs compared to vanilla OpenStack

Those three above were reason enough, but in addition to these advantages we've also gained massively simplified upgrade paths. VMware release a new candidate via a new vApp, which installs in parallel to the old; the config is ported over, then we flip Horizon over to the new installation. Once happy, we just trash the old instance. There is also all the standard ESXi goodness under the hood like DRS et al, and finally backups – not to be sniffed at – VIO provides a mechanism to back up both the install and the state.

Install Pros & Cons

First up, be aware of the requirement for a dedicated VC and PSC; this is owing to the burn rate at which new instances can be created and the capacity for that to overload an existing VC. You'll need a separate cluster for management – recommended at 3 nodes – and we spun up a small vMSC with block-replicated storage to give us DC-level resilience in the management tier. A standard-looking deployment will contain something like 16 VMs: 3x MongoDB, 3x Compute, 3x DB, 2x Controller, 2x DHCP Server, 2x Load Balancer, 1x Ceilometer.

How to use

So, OpenStack is an API-driven thing; as such, the API is by far the most complete and powerful way to interact with VIO/OpenStack, followed by the CLI and then the Horizon UI – in that order of functionality!
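As a flavour of what that API-first approach looks like, here is a minimal sketch using the standard openstacksdk Python client. The cloud name 'vio' is illustrative and assumes a matching entry in your clouds.yaml pointing at the VIO endpoints:

import openstack

# Connect using a named cloud from clouds.yaml (auth URL, credentials, project)
conn = openstack.connect(cloud="vio")

# List the compute instances visible to this project
for server in conn.compute.servers():
    print(server.name, server.status)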

We’ve also invested in some additional tooling from HashiCorp, namely Terraform (for the birthing of instances) and Vault (for the storing of secrets and retrieval via API calls). Integration with our Puppet stacks is ongoing & on the side we’ve also stood up a sizeable Swift Object Store spanning multiple DCs. All of these together makes for a fairly complete set of tooling. This is all just from the infrastructure side, our Devs are additionally using Vagrant and Packer.

Our in house modifications, stories and lessons learned

This could/should be some of the most interesting bits – so well done if you're still reading! Below are my own experiences:

  • Do make sure you understand your use cases before you begin your journey e.g. NSX or not ‐ is it just containers you want?
  • Understand the workloads you want to stand up, ephemeral vs persistent
  • Networking ‐ layer2 is doable, but you do lose significant features such as NaaS ‐ Security Groups/segmentation ‐ load balancing
  • We got it back – we’ve had a few events which may have been terminal and required full rebuilds for a vanilla OpenStack deployment. I should hasten to add that these have all been events that we’ve done to VIO, not VIO going awry or failing. Exemplars of those are:
    • do NOT delete your core services project! Itchy trigger fingers from an Asana "clean up projects" task caused every single project to be deleted, including the core services project that has all of OpenStack's internal services in it – the equivalent of "rm -rf / *" (following an RFE, this should not be possible in releases beyond Mitaka)
    • Storage violently removed from the hypervisors – a combination of a VMware ESXi bug and a SAN platform migration caused storage to be violently taken away, and this corrupted our VIO DB stacks. Again, a Jenkins re-deploy from base config brought it all back.
  • Be aware that whilst VMware cannot support many of the things you can do on top of OpenStack (only the VIO components – that's understandable), you can extend it like any other OpenStack installation; we've built, and are using, a Swift Object Store (cross-datacenter) providing us with a geographically stretched redundant filesystem.
  • Windows licensing is… tricky? Basically, in pulling together your infrastructure, if you intend to cater for Windows instances you should consider that footprint early on and make a determination as to whether to use Datacenter licensing or not.
  • VIO is a very transparent black box; there is much temptation to look, and poke, at the insides. This is not always the best idea – a little knowledge is a dangerous thing.
  • VMware PSO was invaluable to us in getting a functioning OpenStack environment in a short amount of time!
  • Ongoing GSS Support has also been very beneficial  – even in the early days whilst it was under the emerging products banner we always received great support

Futures

As per above, we’ve already integrated with Terraform and Vault, but we’re pursuing further integrations with Consul & Puppet. Automated birthing, config and plumbing of netscaler configs & devolved DNS & auto‐service discovery & an upgrade to the incoming VIO 5 based on Queens

If you’d like to know more, please reach out to me through this site.

Setting an SRM VM Priority from vRO

Recently I was given a requirement to enhance a vRO workflow which added a VM to a Disaster Recovery policy in SRM. The existing workflow by default added all VMs to the Priority 3 (Normal) start-up order. My requirement was to allow the user to specify the start-up order.

Having a quick look at the environment, I could see that the SRM plugin was used, so this felt like a good start. However, it soon became apparent that it wasn't ideal for me, given that the information we can get out of the plugin is limited, never mind having to manipulate that data. Looking online, it seemed that using PowerShell was the common answer to automating this, but I also had a constraint of not introducing any new plugins. During my online hunt I found the SRM 6.5 API guide and this became a nice resource. By browsing this API guide it became apparent that the SOAP API was my only option, and I continued to refer to it in order to find a solution – https://www.vmware.com/support/developer/srm-api/site-recovery-manager-65-api.pdf.

I decided to write this blog because there seemed a sever lack of info on using SOAP for SRM. Continue reading