I have used vRO for quite some time, yet until recently I never really had a need for Composite Types! This vRO feature is pretty cool: it allows you to create arrays that can be polled by a Workflow, and the real benefit for me is that it lets you minimise the number of Workflows needed, or even the number of input parameters into a Workflow.
I am a real advocate of using configuration files in vRO, and as anyone who attended my Glasgow VMUG session will (hopefully) remember, these hold global settings, meaning we don't need to update individual Workflows as long as they point to these central configuration files. So where do Composite Types fit in? Well, recently I had a requirement around DNS information which had the potential to impact the manageability of Network Profiles in vRA. The requirement concerned how DNS information would be added to a VM during deployment, depending on things like location or operating system. Using vRA's out-of-the-box IPAM meant that, to achieve this, I would have to create many Profiles just to map the different DNS information, and then deal with the complexity of splitting the IP ranges within those Profiles. An alternative way to meet this requirement, while keeping it easy to manage and able to grow as more sites were added, was Composite Types!
Let's have a look at how it's done:
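To sketch the idea first in plain JavaScript: a Composite Type array behaves like an array of records, so a single configuration value can map each site to its DNS servers instead of one Network Profile per combination. The site names, addresses and field names below are hypothetical placeholders; in vRO itself this data would live in a Configuration Element attribute of a composite array type rather than a hard-coded variable.

```javascript
// Hypothetical data shaped like a composite type array, e.g. an attribute of
// type Array/CompositeType(site:string, primaryDns:string, secondaryDns:string).
var dnsConfig = [
  { site: "Glasgow", primaryDns: "10.0.1.10", secondaryDns: "10.0.1.11" },
  { site: "London",  primaryDns: "10.0.2.10", secondaryDns: "10.0.2.11" }
];

// Look up the DNS servers for a given site; returns null if the site is unknown.
function getDnsForSite(config, site) {
  for (var i = 0; i < config.length; i++) {
    if (config[i].site === site) {
      return [config[i].primaryDns, config[i].secondaryDns];
    }
  }
  return null;
}
```

Adding a new site then means appending one entry to the central configuration, with no new Workflows or Network Profiles.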
Recently I was given a requirement to enhance a vRO workflow which added a VM to a Disaster Recovery policy in SRM. The existing workflow by default added all VMs to the Priority 3 (normal) start-up order. My requirement was to allow the user to specify the start-up order.
Having a quick look at the environment, I could see that the SRM plugin was in use, so I felt this was a good start – however, it soon became apparent that it wasn't ideal for me, given that the information we can get out of the plugin is limited, never mind having to manipulate that data. Looking online, it seemed that PowerShell was the common answer to automating this, but I also had a constraint of not introducing any new plugins. During my online hunt I found the SRM 6.5 API guide, which became a valuable resource. Browsing it made clear that the SOAP API was my only option, and I continued to refer to this guide while working out a solution – https://www.vmware.com/support/developer/srm-api/site-recovery-manager-65-api.pdf.
I decided to write this blog because there seemed to be a severe lack of info on using SOAP for SRM. Continue reading
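The full walkthrough continues in the post; as a generic illustration of what driving a SOAP API by hand involves, here is a minimal envelope builder in plain JavaScript. The operation name, namespace and parameter below are placeholders for illustration only, not real SRM API calls – the actual operations are listed in the SRM 6.5 API guide linked above.

```javascript
// Build a minimal SOAP 1.1 envelope for an arbitrary operation.
// "operation", "ns" and the params here are placeholders, not SRM API names.
function buildSoapEnvelope(operation, ns, params) {
  var body = "";
  for (var key in params) {
    body += "<" + key + ">" + params[key] + "</" + key + ">";
  }
  return '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">' +
         "<soapenv:Body>" +
         '<' + operation + ' xmlns="' + ns + '">' + body + "</" + operation + ">" +
         "</soapenv:Body>" +
         "</soapenv:Envelope>";
}
```

In vRO the same request would typically be posted to the SRM endpoint via the built-in SOAP plugin rather than assembled as a raw string, but seeing the envelope makes the API guide's request samples much easier to follow.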
Recently I sat the VCAP Design exam for Cloud Management and Automation, based on vRA 7.2. Previously I had sat the version 6 exam, which was based on the traditional split of Visio-style canvas scenarios and drag-and-drop questions. I learned that this version of the exam has changed significantly; in fact there are no more canvas-style questions. Most questions are multiple choice, with some drag and drop. The time allocation is also shorter than before: now only 130 minutes to answer 60 questions!
Going into study mode I felt confident having used vRA7.3 for some time now, however there are still slight differences between 7.2 and 7.3 that I had to brush up on. Additionally, due to the architecture of the product I work on, we don’t have a need to utilize all of what vRA can offer, so I also required a refresher on things like approval policies and the vRA portal.
So, where to start? I am lucky enough to have a lab at work where we develop, so I could use that for a play around. I created a new tenant and simply clicked everywhere and anywhere to get a feel for all vRA7 has to offer. I also completed some Hands-on Labs from VMware. They are an excellent resource and cater for all levels. From here you can also click around – no need to follow the guide :). I did focus, however, on the vRA/NSX integration labs. I much prefer these labs to reading, but I also brushed up on the design qualities that are always part of these types of exams. Having sat a few based on the DCV track, I always refer to Paul McSharry's official guide and also the DCD 5.5 Study Pack from Jason Grierson, which is an excellent reference. I should also point out that the official exam guide here contains some really important references. Continue reading
Recently I had the pleasure of presenting on vBrownbag with my colleague Konrad Clapa. Konrad is a double VCDX in DCV and CMA, and I am very proud that we were able to speak about our vRO and vRA best practices. We have worked together for around 3 years now, developing and architecting the Service Catalog for the Atos DDC and DPC products. See our session below and get in touch with any questions ……
<iframe src="https://www.youtube.com/embed/iY_S8h5Ryio" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe>
Recently I wanted to learn more about AWS, mainly for career progression but also because of the noise made this year with VMware and AWS joining forces and the shift towards Hybrid Cloud.
As usual, for motivation, I decided to set the exam date as a focal point to aim for. But uncharacteristically I pushed this exam back again and again and lacked the motivation to study. It was also on my work development plan and had to be achieved, yet I soon found myself at the start of December without having done it. Thankfully I was able to pass the exam and am now AWS-SAA certified.
To begin studying for this, I started where it seems everyone does and purchased the "acloudguru" course. I bought it off Udemy for around $10 in March 2017 – which shows you the level of my procrastination (albeit I also had a failed VCDX attempt to navigate this year). The course is a really good baseline for those who have never worked with AWS. The chapters are nice and short, around 20 minutes at most, which is intentional to keep the listener's attention. Be aware that you need to give a lot of time to get through this course. There are a lot of labs that you can follow, but I found I had to repeat things a few times. It's also worth noting that although the official "acloudguru" site is now subscription based, you can still get good deals from Udemy on the individual associate courses. Continue reading
Recently, we decided to test vRO 7.3 in clustered mode. In previous versions we have not had a great experience with vRO clusters, and as a result have always run single vROs in a Master and Slave setup. With the latest version seemingly more stable, we have created a POC and load balanced as much as possible. I noticed a lack of blog posts about setting this up and decided to add one here.
At this stage I am assuming some prerequisites have been met and design decisions have been made:
- Load Balancers and DNS names have been set up for vRA and vRO
- vRA has been configured and will be used as vRO authentication mode
- SQL will be used as the Database for the cluster
- vRO appliances have been deployed and powered on
Set Up Initial vRO Instance
Log on to the first of our 2 vRO appliances using the Control Center URL – https://hostnameoffvro001.domain.local:8283/vco-controlcenter
A few years ago, I was asked to deploy vCAC (as it was known then). Soon after I found myself part of a team dedicated to creating a new Service Catalog for an SDDC based Private Cloud offering. It was a huge learning curve for me and I was soon immersed in a world of Cloud development with decisions to make based on these new VMware Cloud tools. The one thing that I did learn very quickly was the importance of version control. Coming from an infrastructure background, this was alien to me but it soon became one of the most critical things I learnt about successfully developing a Private Cloud and more importantly – maintaining it!
At the heart of our Service Catalog was vRealize Orchestrator, and as the requirements for automated Catalog items grew, so did the team. This caused a lot of issues: with many developers working simultaneously on the same product, changes were made to the same Workflows and work was lost. It soon became apparent we were lacking a sensible way to ensure our final packages were bug-free and not overwritten unintentionally. Natively in vRO we can export a package containing Workflows, Actions, Configuration Files etc., but this is not an ideal format for efficiently reviewing or tracking changes. It was becoming impossible to keep tabs on what was happening.
As some of you will be aware, vRA6 support reaches end of life by the end of 2017, and as a result I was tasked with deploying a POC for vRA/vRO 7.3 to check whether our current vRO code was compatible. I expected some challenges, as we are heavily reliant on vRO for our Service Catalog, however one specific issue I did not expect caught me out.
As part of our Private Cloud offering, we use vRO to request a catalog item rather than vRA. The overall workflow also contains many post request actions such as deploying agents, resetting default passwords etc. All of these rely on the successful deployment passing us the hostname after the catalog request completes. After setting up the POC and running a test deployment I noticed that although the request was successful, the overall Workflow was failing. Looking deeper I saw some differences in the completion details of vRA7.
In vRA6, we used to get the following, where “tyler-prefix04” is the hostname of the newly created VM: