ESXi scripted builds via PXE/kickstart

Periodically we spin up a slew of new hypervisors. If, like me, you find yourself desiring more automation (and the uniformity that comes with it), but sit somewhere between building ESXi by hand and the scale-out Auto Deploy tooling for hundreds of systems, you may find this useful, especially if you already maintain a kickstart server.

These are my experiences with scripted installations of ESXi 6.5 via kickstart and PXE booting. The beauty of this approach is that it only requires two options set in your DHCP server; the rest of the configuration is static. The net result: configure your DHCP, let the host PXE boot, and within a few minutes you have a repeatable and dependable build process.

So, how to get there?

First up you need to have the physical environment basics taken care of, e.g. rack and stack, cable up the NICs, etc. Once that is in place, this process requires DNS entries populated for your new servers, plus the ISO exploded onto your TFTP server.

N.B. this scripted install uses the “--overwritevmfs” switch, so it is imperative that you do NOT have the host in a state where it can see any storage other than its local disks, e.g. if you are rebuilding existing hypervisors or the HBAs have previously been zoned to something else. This switch has the capacity to overwrite existing filesystems with the new VMFS install, therefore the host must ONLY see its own local disks 😉

Overview of PXE & Kickstart boot workflow:

" <a href=https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vsphere-esxi-vcenter-server-60-pxe-boot-esxi.pdf (pay attention to “/“ and UEFI !)

[1] The boot loader files on the TFTP server. This is an exemplar showing UEFI and non-UEFI sitting side by side on the same install base.

screenshot1

 

[2] The contents of the CGI script that gleans the FQDN details

screenshot2
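Since the screenshot doesn’t carry over here, below is a hedged sketch of the sort of CGI script involved: it reverse-resolves the requesting PXE client’s IP to an FQDN and emits a kickstart with that hostname baked in. The directives and layout are illustrative, not the original script, but they show why the DNS entries mentioned above must exist before the build.

```shell
#!/bin/sh
# Sketch of a kickstart-generating CGI handler (illustrative, not the
# original script). The web server sets REMOTE_ADDR for every CGI
# request; we reverse-resolve it to obtain the host's FQDN.
emit_kickstart() {
    printf 'Content-type: text/plain\n\n'
    # getent returns "IP  FQDN [aliases]"; take the second field
    FQDN=$(getent hosts "${REMOTE_ADDR:-127.0.0.1}" | awk '{print $2}')
    cat <<EOF
vmaccepteula
install --firstdisk --overwritevmfs
network --bootproto=static --hostname=${FQDN}
reboot
EOF
}

emit_kickstart
```

The per-host logic lives entirely in DNS, which is what keeps the rest of the configuration static.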

[3]  An exemplar of a kickstart file

screenshot3
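As the screenshot itself doesn’t survive in text form, here is a minimal sketch of what an ESXi kickstart file for this workflow typically contains; the password hash and NIC name are placeholders, not values from the original:

```
# Accept the VMware EULA
vmaccepteula
# Install to the first local disk; NOTE: --overwritevmfs will clobber
# any VMFS the installer can see - local disks only!
install --firstdisk --overwritevmfs
# Root password (placeholder hash)
rootpw --iscrypted $6$examplehash
# Management network config
network --bootproto=dhcp --device=vmnic0
reboot
```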

 

The boot itself

So with all that in place, all you need to do is determine whether the server you’re booting is legacy BIOS boot or UEFI. The difference is in the boot loader we’ll point at; insert the relevant DHCP-fu accordingly:

legacy boot

66=your.tftpserver.com

67=esx-6.5/pxelinux.0

UEFI boot

66=your.tftpserver.com

67=esx-6.5/mboot.efi
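For concreteness, if your DHCP server happens to be dnsmasq, the equivalent of those two options looks something like the following (the server name and paths match this post’s examples; the tag name is illustrative):

```
# dnsmasq.conf - DHCP option 66/67 equivalents (illustrative)
# Legacy BIOS clients:
dhcp-boot=esx-6.5/pxelinux.0,your.tftpserver.com
# UEFI clients, matched via the client architecture option (93):
dhcp-match=set:efi,option:client-arch,7
dhcp-boot=tag:efi,esx-6.5/mboot.efi,your.tftpserver.com
```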

then simply set the next boot to PXE, reboot, and watch the little birdy fly!

There you have it. Effectively two DHCP settings and you can rinse and repeat bringing up dozens of systems quickly with repeatable builds. There are of course many other options, all of which have their merits. This one is perhaps most useful if you are already invested in scripted builds/kickstarts.

 

Setting an SRM VM Priority from vRO

Recently I was given a requirement to enhance a vRO workflow which adds a VM to a Disaster Recovery policy in SRM. The existing workflow by default added all VMs to the Priority 3 (Normal) start-up order. My requirement was to allow the user to specify the start-up order.

Having a quick look at the environment, I could see that the SRM plugin was in use, which felt like a good start. However, it soon became apparent that it wasn’t ideal for me: the information we can get out of the plugin is limited, never mind having to manipulate that data. Looking online, PowerShell seemed to be the common answer to automating this, but I also had a constraint of not introducing any new plugins. During my online hunt I found the SRM 6.5 API guide and this became a nice resource. Browsing it made it apparent that the SOAP API was my only option, and I continued to refer to this guide in order to find a solution – https://www.vmware.com/support/developer/srm-api/site-recovery-manager-65-api.pdf.
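With no SOAP plugin available, the requests end up being hand-built XML POSTed over HTTPS. As a sketch only (in shell for brevity): “ExampleOperation” is a placeholder, not a real SRM method – the actual operation names and endpoint come from the API guide linked above.

```shell
# Sketch of hand-building a SOAP envelope, as you end up doing when no
# SOAP plugin is available. "ExampleOperation" is a placeholder, not a
# real SRM API method - consult the API guide for actual operations.
build_envelope() {
    op="$1"
    body="$2"
    cat <<EOF
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <${op}>${body}</${op}>
  </soapenv:Body>
</soapenv:Envelope>
EOF
}

# The envelope would then be POSTed to the SRM server, e.g.:
# curl -k -H 'Content-Type: text/xml' \
#   -d "$(build_envelope ExampleOperation '')" https://your-srm-server/...
```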

I decided to write this blog because there seemed a severe lack of info on using SOAP for SRM. Continue reading

GitHub Learning Lab

This has been cross posted from my own blog vGemba.net. Go check it out.

Introduction

At a VMUG last year, during a presentation by Chris Wahl, he recommended that all ops people like me learn a distributed version control system such as Git. I use GitHub for my blog and for storing some files, and still had not really scratched the surface of it.

Last month GitHub released a tool called GitHub Learning Lab, which is basically an app that starts a bot that leads you through some training on the use of GitHub.

Lessons

So far there are five lessons available:

  • Introduction to GitHub
  • Communicating using Markdown
  • GitHub Pages
  • Moving your project to GitHub
  • Managing merge conflicts

In the Introduction to GitHub lesson you learn about:

Introduction to GitHub

Continue reading

2018 Personal Objectives (a bit late)

It’s now six weeks into the year and I figure it’s finally time to do something I’ve been meaning to do since late last year… and that’s to publish my personal objectives for 2018. For me it’s two-fold: firstly, they’re publicly out there so I can judge (and be judged on) how the year went for me; secondly, I’m sure a lot of my objectives cross over with others in this community, so I’m hoping it may spark some conversations and debates as the year goes on.

Certifications:

  • VCAP-DCV – As posted last year (actually, oops, that’s still in draft!), spoiler alert: on the 2nd attempt I got my VCAP-DCD, so it’s high on my list to aim for the 2nd VCAP, which would of course unlock VCIX for me. This has to be my #1 goal for the year
  • About a year ago, like so many others, I decided to embark on some AWS certs. I started working through the training but then it totally stalled. As I had an expiring exam voucher I forced the issue, and it’s scheduled for 3 months’ time. I need to set aside sufficient time between now and then to give myself any chance. I’m going for AWS Certified Solutions Architect – Associate

New Technologies:

  • Ansible and SaltStack. The scale of things at my new job compared to my old one is staggering; everything is multiplied by 10x. Therefore it’s going to be essential for me to get much more familiar with configuration management
  • Continue reading

vBrownbag

Recently I had the pleasure of presenting on vBrownbag with my colleague Konrad Clapa. Konrad is a double VCDX in DCV and CMA, and I am very proud that we were able to speak about our vRO and vRA best practices. We have worked together for around 3 years now developing and architecting the Service Catalog for the Atos DDC and DPC products. See our session below and get in touch with any questions ……

<iframe src="https://www.youtube.com/embed/iY_S8h5Ryio" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe>

Terraform with vSphere – Part 4

This has been cross posted from my own blog vGemba.net. Go check it out!

Wrap up

We have barely scratched the surface with Terraform. It is a very powerful piece of software that can do much more than building a single VM from a template. I do find the documentation around the vSphere provider to be lacking and there is not much out there on the internet on using it with vSphere, so it’s been a lot of experimentation for me but very enjoyable.

I first started playing with Terraform after watching Nick Colyer’s excellent Pluralsight course Automating AWS and vSphere with Terraform. It gives good demo-driven examples similar to what I have shown in this series, but goes further by delving into Modules, local provisioners, remote provisioners, etc. If you have Pluralsight access go check it out.

I also found this series of posts from Gruntwork to be excellent. The posts have actually been converted into a book called Terraform: Up & Running so you should go check it out.

Finally I also found these posts to be helpful:

Don’t forget the official documentation on Terraform.io – sometimes you just have to RTFM.

As you have found Terraform is easy to pick up and you can see results very quickly. It’s given me the bug to dig deeper and see what I can apply at work.

Good luck in your Terraform journey!

Terraform with vSphere – Part 3

This has been cross posted from my own blog vGemba.net. Go check it out!

Introduction

In Part 1 and Part 2 we downloaded, set up, and then created a simple VM using Terraform. Let’s look at how to use variables and the files required for this.

Existing Code

Let’s look at the code from Part 2:

provider "vsphere" {
    user = "username@corp.contoso.com" # You need to use this format, not example\username
    password = "Password1"
    vsphere_server = "vcenter.corp.contoso.com"

    # If you use self-signed certificates
    allow_unverified_ssl = true
}

resource "vsphere_virtual_machine" "webserver" {
    name = "webserver1"
    vcpu = 1
    memory = 2048

network_interface {
    label = "VM Network"
}

disk {
   datastore = "datastore"
   template = "MicroCore-Linux"
}

}
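As a preview of where this is heading, the hard-coded values above can be pulled out into variable declarations. This is a sketch only, using the same legacy pre-0.12 interpolation syntax as the code above, with illustrative variable names:

```
# variables.tf - declare the inputs
variable "vsphere_user" {}
variable "vsphere_password" {}
variable "vsphere_server" {}

# main.tf - reference them via interpolation
provider "vsphere" {
    user = "${var.vsphere_user}"
    password = "${var.vsphere_password}"
    vsphere_server = "${var.vsphere_server}"
    allow_unverified_ssl = true
}
```

Keeping credentials out of main.tf also means the file is safe to commit to source control.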

Continue reading

Terraform with vSphere – Part 2

This has been cross posted from my own blog vGemba.net. Go check it out!

Introduction

In Part 1 of this series we went about installing Terraform, verifying it was working and setting up Visual Studio Code. In this part we will cover some Terraform basics.

Terraform Components

The three Terraform Constructs we are going to look at are:

  • Providers
  • Resources
  • Provisioners

Providers

Providers are the platforms and services Terraform can interact with. These include AWS, Azure, vSphere, DNS, etc. A full list is available on the Terraform website. As you can see it’s a very big list. In this series we will concentrate on the VMware vSphere Provider.

Resources

Resources are the things we are going to use in the provider. In the vSphere realm this can be a Virtual Machine, Networking, Storage, etc.

Provisioners

Terraform uses Provisioners to run scripts against your Resources once they exist, for example to bootstrap software or configuration inside a newly created VM (or to clean up before a resource is destroyed).
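As a hedged sketch (the resource block matches the legacy vsphere provider syntax shown elsewhere in this series; the command is purely illustrative), a provisioner attaches to a resource and runs after that resource is created:

```
resource "vsphere_virtual_machine" "webserver" {
    # ... name, vcpu, memory, disk, etc. ...

    # Runs inside the new VM after creation, e.g. to bootstrap it
    provisioner "remote-exec" {
        inline = [
            "echo 'webserver built by Terraform'"
        ]
    }
}
```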

Setup Terraform for vSphere

Open up Visual Studio Code and create a new file called main.tf in the folder C:\Terraform. If you have added C:\Terraform to your Path environment variable you can save main.tf anywhere you like, but of course the best place for all of your Terraform files is source control…

Continue reading

Terraform with vSphere – Part 1

This has been cross posted from my own blog vGemba.net. Go check it out!

Introduction

Terraform is one of the new products that lets you treat infrastructure as code. What does Infrastructure as Code actually mean?
According to Wikipedia:

Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.

In the case of Terraform this means using code to *declare* what we want from vSphere, AWS, Azure, OpenStack, etc., and then Terraform goes and creates the infrastructure to our declared final state. This is the opposite of procedural infrastructure, where we have to describe *how* to get our end result. Terraform does the hard work of figuring out how to create the infrastructure we have defined – we don’t have to worry about how to actually create it or the sequence of steps to get there.

Importance of Version Control in Cloud Development

A few years ago, I was asked to deploy vCAC (as it was known then). Soon after I found myself part of a team dedicated to creating a new Service Catalog for an SDDC based Private Cloud offering. It was a huge learning curve for me and I was soon immersed in a world of Cloud development with decisions to make based on these new VMware Cloud tools. The one thing that I did learn very quickly was the importance of version control. Coming from an infrastructure background, this was alien to me but it soon became one of the most critical things I learnt about successfully developing a Private Cloud and more importantly – maintaining it!

At the heart of our Service Catalog was vRealize Orchestrator, and as requirements for automated Catalog items grew, so did the team. This caused a lot of issues: with many developers working simultaneously on the same product, changes to the same Workflows collided and relevant changes were lost. It soon became apparent we were lacking a sensible way to ensure our final packages were bug-free and not overwritten unintentionally. Natively in vRO we can export a package containing Workflows, Actions, Configuration Files etc., but this is not an ideal format in which to efficiently review or track changes. It was becoming impossible to keep tabs on what was happening.

Continue reading