vRealize Suite Lifecycle Manager – Part II – Deploying Log Insight

In Part I, I showed how to deploy the Lifecycle Manager appliance.

For my first product deployment, I decided on a quick win with Log Insight.

When you log in to vRealize Suite Lifecycle Manager the first time, it takes you through a tour of the UI.

First I generated a certificate


Then I clicked to create a new environment

If you have a My VMware account with access to vRealize Suite, you can point LCM directly to My VMware as a download source, so you won’t have to manually download bits.

Changing passwords forced a logoff and login.

Enter your My VMware credentials here to allow for direct download

Click on the items you want to download


Add a new datacenter for LCM to manage


Now adding a new vCenter to the datacenter


Starting the wizard to deploy Log Insight


I’m only installing Log Insight at this time, so check the box.

Now you get a short Log Insight wizard.

I’m only deploying a standalone LI host, but note that you could do a load-balanced config as well as add worker nodes.

The job after it’s submitted.

The deployment failed because it wants to put 16 vCPUs on my LI VM, but my little lab only has 4 cores per host. The vCenter error said “No host is compatible with the virtual machine.” All I had to do was edit the deployed VM, change it to two vCPUs (I also decreased the RAM), and power it on; the LCM deployment continued without issue.

I now have a running Log Insight instance. I connected it to vCenter, and I’m done!

vRealize Suite Lifecycle Manager – Part I – Initial Deployment

vRealize Suite Lifecycle Manager is designed to let you manage deployment and upgrades of vRealize Suite. In Part I of this series, I will show you the installation process in my lab.

Here is the OVF that I downloaded from My VMware

Deploy the OVF


Standard OVF deployment options here, setting the hostname and IP address information.

The console while the appliance configures itself

Main welcome screen – default credentials are admin@localhost / vmware

You will get asked to change the appliance password

All set, this is an easy, standard OVF deployment.


In Part II, I use vLCM to deploy Log Insight.

dvSwitch Migration

I was rebuilding my lab and decided to capture the process of moving machines from the standard switch to the distributed virtual switch.

In these screenshots, I’ve already created the new distributed switch and added portgroups.


Adding my 2 lab hosts to the distributed switch

Now we need a physical uplink.

In my lab, vmnic1 and vmnic2 are carrying virtual machine traffic. Prior to this step, I disconnected vmnic2 from the standard switch. This step carries the most risk: if you have a bunch of VLANs, it’s possible that vmnic1 doesn’t have all of them trunked. This is where you can cause an outage, so it’s important to check the physical switch configuration to ensure all VLANs are trunked.

I assign vmnic2 to Uplink 2 for no reason other than to keep the “2”s together. After the migration is done, you’d come back in here and assign vmnic1 to an uplink – I would assign it to Uplink 1 for consistency’s sake, but the name of the uplink doesn’t actually matter.


Repeat the process for host #2.

You get a summary of the changes before the changes are made

This screen will detect if you’re about to make a disastrous change

My VM traffic is VLAN 203

Now, to migrate VMs to the distributed switch, I right-click and choose Migrate VMs to another network


My source network is the standard switch VLAN203 network


Destination is the DVS portgroup, still VLAN203

Here’s where it’s awesome. You could migrate every single VM on VLAN203 to the distributed switch by just selecting all here. I play it safe and start by migrating only one. You probably wouldn’t want to start with a domain controller, but I like to live dangerously 🙂

Continuous ping to the domain controller


I get a little blip but don’t drop a ping


VM is migrated. I can now migrate all of the VMs on VLAN203, then remove vmnic1 from the standard switch, then come back and add vmnic1 to the distributed switch so I have redundant uplinks.

ASUS stock firmware routing problem?

I have a very simple setup: an ASUS as my edge router on a /24, a routed connection to my homelab Cisco layer 3 switch, and a few /24 SVIs on the Cisco. I have static routes on the ASUS pointing to the Cisco SVIs, and a default route on the Cisco pointing to the ASUS.

A few months back, lightning struck near the house and fried my cable modem, ASUS, and Cisco switch. I replaced all of them, but I could never correctly communicate with the homelab. When I was directly connected to the Cisco switch (3750), I had no problems and could communicate with all SVIs. I could ping back and forth between the 3750 and the ASUS (RT-AC66U_B1). But I could never SSH (or drive any other traffic) from the 3750 to the RT-AC66U, or from the RT-AC66U to the 3750. This baffled me for some time, but I was bypassing it by directly connecting to the lab with an ethernet cable. I finally sat down to solve it today.

Even though my ethernet cable between the ASUS and Cisco was able to carry successful ping traffic, and tested OK with a cable tester, I decided to replace it. I apparently can still make my own ethernet cables successfully 🙂  The problem persisted after replacement.

Thinking maybe my laptop was the culprit, I tried other devices, but they all exhibited the same behavior. Then I started looking at the ASUS. I had always used the Merlin firmware for my ASUS because the stock firmware was severely lacking in features. However, the newest stock firmware looked OK when I bought the new ASUS, so I kept it. And there was my mistake. I saw a couple of posts saying that static routing wasn’t working correctly on ASUS routers.

Stock ASUS firmware running on my RT-AC66U_B1 does not seem to correctly handle static routes. As soon as I flashed the router to Merlin-RT-AC68U_380.68_4, all of my routing problems disappeared. I didn’t even lose my config.


A reflection on the VMworld Hackathon

Many others have written posts summarizing VMworld; I won’t do that here. If you’d like a live-Tweet archive of the keynotes, you can look at my Twitter timeline starting on August 28, 2017. For a full blog post, please check out Paul Woodward Jr.’s recap, as well as Sheng Sheen’s detailed VMware announcements post.

I had a great opportunity to participate in the VMworld Hackathon and I believe it was a career-changing experience. Back to that in a minute. First, let’s explore why I was part of the Hackathon at all. I’m not a developer. I’m a presales engineer. Although I wrote code for a living a while back, I haven’t developed anything professionally in almost 10 years. Most of what I did was classic ASP and VBA, and a few monster T-SQL stored procs. It wasn’t what I considered “real” programming at the time – that was for folks who wrote object-oriented code, used big fancy source control systems, worked on large team projects, etc.

Paul did a vBrownBag tech talk at VMworld – see the replay of From CNC to VCP: A Journey of Professional Growth. One of the things Paul talked about is building your personal brand and the power of social media. To help build his brand, Paul decided to start the ExploreVM Podcast. Without Twitter, I wouldn’t have known that he was starting a podcast. Without Twitter, I wouldn’t have seen him offering guest slots on the podcast, and I wouldn’t have made Episode 7 – Making the Move to a Pre-Sales Role with him.

Without Twitter, Nick Korte wouldn’t have found the podcast, listened to it, and reached out to me via Twitter DM to ask questions.

Without Twitter, I wouldn’t have known Nick’s name as I scrolled through the list of Hackathon leaders when I was considering a team. And I probably wouldn’t have joined a team because I was intimidated – I’m not a programmer.  But I knew Nick, and he’s not a programmer either, he’s a sysadmin. It’s not scary to join a team with a sysadmin, right? So I joined. Nick did a great post-Hackathon writeup, check that out here.

Without Twitter, I wouldn’t have met Chris Dye, one of the professional developers on our team. He kindly spent his time filling in some of my knowledge gaps as I struggled to understand how software development works today.

A number of people spent considerable time running pre-Hackathon training sessions. I went to Jeeyun Lim’s excellent “Getting started with Clarity” session. I learned that I still have a lot to learn – but I understood what Jeeyun was doing. I understood how things like Node.js and Angular make my life much simpler. I understood how the frameworks take what I used to do in hundreds of lines of classic ASP and turn it into a few configuration options. And thankfully, VMware has invested in a Pluralsight account, allowing me to learn what I’ve missed in the last decade.

I’ll never become a world-class developer. I won’t write any earth-shattering algorithms or contribute to the Linux kernel. There’s a reason I moved out of development and into the infrastructure side. But in this world of automation and devops, being able to write and understand code is a necessity. Hackathon rekindled my interest in programming. It made me realize that I don’t have to be somebody who builds APIs, or builds PowerShell libraries, or writes kernel code. Being able to programmatically consume what others have already made for me is enough. I took my first step towards understanding last week, and I will continue this week and future weeks. I hope I get to go to VMworld next year, and if there’s a Hackathon, you can bet that I’ll participate. I might even contribute some code this time.

I will close by saying that you do NOT need to be a developer to participate in Hackathon. In fact, the best teams have a mix of infrastructure folks and developers, as there is always plenty for the infra folks to do. If you get the opportunity next year, sign up. It’s worth it!

Invoking the vRealize Automation API – Part II

In Part I, I talked about why I wanted to learn API calls in vRA and how I got my lab environment working. In Part II, I will talk about how I learned to make an API call.

I relied heavily on Grant Orchard’s getting started guides. I have linked to Parts I, II, and III below, with my explanations of how I used his blog to achieve my goal.

Part I – Getting Started [grantorchard.com]

I couldn’t figure out how to browse through the API calls because I wasn’t seeing what Grant was showing. It took me forever to realize that at the very bottom of the page, you can click on Show/Hide – then the API calls appear and you can drill into each one for full details.

Show/Hide API calls

Part II – Building Your First API Call [grantorchard.com]

Grant wrote:
Before we start, perform the following steps.
1. Download Postman.
2. Import this Postman collection of the vRA 7.2 API.
3. Import this Postman environment variables file.
4. Open up the API docs at https://{{vra-fqdn}}/component-registry/services/docs

Postman? What’s Postman?  It’s a GUI tool to issue API calls.

What’s a Postman collection? It’s a group of API calls that you can easily click on in the GUI.

I can easily search a Collection for the API I want. In this case, I know I want to get the Bearer token (Grant explains this, it’s how the API requests are authenticated), so I search for ‘token’. I click on the “returns a token associated with the provided credentials” and it opens up the request complete with the proper URL. It saves me from having to manually piece together the API calls and paste them into Postman.

You’ve probably noticed {{vra-fqdn}} in the URL. It’s not just a placeholder. It’s an environment variable.

Grant provided a bunch of environment variables in his post – you can import the environment variables and change them to match your lab environment. You can reference these variables inside Postman.

Following Grant’s example, I opened the token API in Postman. The ‘Tests’ section saves the Bearer token in a variable named “token”.
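Outside Postman, the same token call can be scripted. Here’s a minimal Python sketch of the vRA 7.x identity token call; the hostname, credentials, and helper names are illustrative, and I haven’t run this against a live vRA, so treat it as a starting point rather than a finished tool:

```python
import json
import urllib.request


def build_token_request(vra_fqdn, username, password, tenant):
    """Build the POST to the vRA 7.x identity token API
    (the same call the Postman collection issues)."""
    body = json.dumps({"username": username,
                       "password": password,
                       "tenant": tenant}).encode()
    return urllib.request.Request(
        f"https://{vra_fqdn}/identity/api/tokens",
        data=body,
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"})


def get_bearer_token(vra_fqdn, username, password, tenant="vsphere.local"):
    # Send the request; the token comes back in the "id" field.
    req = build_token_request(vra_fqdn, username, password, tenant)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]
```

Splitting the request builder from the sender makes the interesting part (URL and payload shape) easy to inspect without touching the network.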

Part III – Requesting a Catalog Item [grantorchard.com]

Grant’s post said the API I needed was ‘entitledCatalogItemViews’. You can see that I’m using the {{vra-fqdn}} variable in the URL as well as passing the Bearer {{token}} value. One problem I ran into is that you must have a space between Bearer and {{token}}.
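If you’re scripting this instead of clicking in Postman, a tiny helper (my own, purely illustrative) keeps the header consistent – and bakes in that mandatory space after Bearer so you can’t forget it:

```python
def auth_headers(token):
    """Headers for authenticated vRA calls. Note the single required
    space between 'Bearer' and the token value."""
    return {"Authorization": "Bearer " + token,
            "Accept": "application/json"}
```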

Hit Send and my results come back. I have only one blueprint, a Linked Clone blueprint with Photon Linux in it. You can see two links – one for the GET: Request Template, and the other for the POST: Submit Request. The Request Template will return an example set of JSON showing you how to make the POST call to start the Blueprint.

Now I open another Postman tab and paste in the Request Template URL. Add the proper header for Authorization, and hit Send.

This is just a subset of the JSON I got back. I left the tab open and launched a new tab.

In the new tab, I used the Submit Request URL from the response above. I added the same Authorization header as previously and pasted the Template JSON from above into the Body field.
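Scripted, the submit step might look like the sketch below. The function names are mine, the submit URL is the POST: Submit Request link from the entitledCatalogItemViews response, and the template JSON is posted back unmodified – again, untested against a live vRA, so hedge accordingly:

```python
import json
import urllib.request


def build_submit_request(submit_url, token, template):
    """Build the POST that submits the (unmodified) request template
    back to vRA to kick off the catalog request."""
    return urllib.request.Request(
        submit_url,
        data=json.dumps(template).encode(),
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json",
                 "Accept": "application/json"})


def submit_catalog_request(submit_url, token, template):
    # Send the POST; the response body includes the request id and
    # an initial state of "Submitted".
    req = build_submit_request(submit_url, token, template)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```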


After pressing Send, I got this response in the Body. You can see a Request ID as well as a state of “Submitted”.

There is also an API where you can check on the state of a request. You can see now that the state has changed from Submitted to In Progress.

You can see my request in progress inside vRA

You can also see activity in the vSphere Web Client.

You can continue checking on the provisioning status by clicking Send in Postman. You would do the same thing programmatically – periodically ping the API for this asynchronous request to determine when it has completed. We now see that the status code is Successful instead of In Progress.
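Programmatically, that periodic ping is just a polling loop. A sketch – the actual GET to the request-status API is abstracted into a callable (and the state strings are illustrative), which also keeps the loop itself easy to test:

```python
import time


def wait_for_request(fetch_state, timeout=600, interval=10):
    """Poll fetch_state() until the vRA request reaches a terminal state.

    fetch_state is a callable returning the request's current state
    string, standing in for a GET to the consumer request-status API.
    """
    terminal = {"SUCCESSFUL", "FAILED"}
    deadline = time.monotonic() + timeout
    while True:
        state = fetch_state()
        # Normalize e.g. "In Progress" -> "IN_PROGRESS" before comparing.
        if state.replace(" ", "_").upper() in terminal:
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError("vRA request did not complete in time")
        time.sleep(interval)
```

Passing the fetcher in as a callable means the same loop works unchanged whether the status comes from urllib, Postman’s Newman runner, or a mock in a unit test.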

I now have a new Item in vRA.

Now that I know the correct APIs to use, and that they work as expected in my lab environment, I can get to work calling them from PowerShell. Part III of the series will document this process.

Invoking the vRealize Automation API – Part I

This post was inspired by a desire to speed up the prep time of my demos. We use nested demo environments hosted inside vCloud Director. The nested environments have resource limitations and we sometimes have to shut down unused VMs in a demo environment to ensure that other components get enough resources to execute. I also wanted to do as little prep work inside vRA as possible – automatically launch blueprints so I have a few managed VMs to show off. My idea was to write a PowerShell script that could be easily launched from the desktop.

First, I did a simple install of vRA in my home lab (this was back in May, vRA 7.2). I’d like to thank my friend Eric Shanks for his fantastic vRA7 guide available at The IT Hollow. His posts have been extremely valuable in helping get my lab environment working. When I built my environment, I used the same Windows 2012 template machine for both my IAAS box and my SQL Server. This ended up being a major source of trouble for me, which I will detail later.

This week, I started following Eric’s guide to configure vRA. I got it to the point of creating a new tenant and got AD authentication working. Then I tried using the vCenter endpoint that had been created, but the logs were throwing SSL errors. I deleted it and recreated it, which was successful, but then I saw logs in Infrastructure > Monitoring > Logs that said it was looking for something named ‘vCenter’. So I deleted the endpoint again and named it vCenter. After more rounds of deleting and recreating, and a few other errors, I eventually got it working and saw my compute resources under the vCenter endpoint.

I moved on to try making a Fabric Group, but when I selected my lab cluster, it didn’t have any resources in it – I couldn’t assign any compute or storage. I went back to the logs and found “DataBaseStatsService: ignoring exception:  Error executing query usp_SelectAgent  Inner Exception: Error executing query usp_SelectAgentCapabilities”

I googled the error and came up with this Communities page as well as KB543238. They both pointed me to MSDTC being a problem, but the KB seemingly only applied to vRA 6.x. I followed the communities post and tried uninstalling and reinstalling MSDTC, but had no success.

At this point I wondered if I was hitting some 7.2 bug. Since 7.3 was out, I ran an upgrade. The vRA appliance and IAAS box upgraded without issue.  As soon as I logged back in, the vCenter Endpoint wasn’t working at all. The log was full of errors saying “Failed to connect to the endpoint. To validate that a secure connection can be established to this endpoint, go to the vSphere endpoint on the Endpoints page and click the Test Connection button. Inner Exception: Certificate is not trusted (RemoteCertificateChainErrors).”

Per the vRA 7.3 Release Notes, certificate validation is turned on. Not wanting to mess around with signed certificate replacement in the lab,  I got around this problem by downloading the root CA certificate from the homepage of my VCSA, and installing it in the Trusted Root Certification Authorities bucket on the IAAS box. Making this change brought me back to the usp_SelectAgent error. I logged into SQL and tried to see if I could execute the usp_SelectAgent stored procedure, which worked fine.

Having debugged the problems for the better part of two days at this point, I went for help, which thankfully came quickly in our internal message board. My problem was definitely the MSDTC – even if you Sysprep a box, it doesn’t reset the MSDTC unique CID – so the IAAS box was unable to communicate with the SQL server.

I followed this procedure to reset the CID on both SQL and IAAS:

1. Stop the Manager Service.
2. Stop the SQL Server service.
3. Open a command prompt on the machine with the Manager Service and issue the following command:
msdtc -uninstall
4. Open a registry editor on the Manager Service and delete the following keys if they exist:


5. Reboot the machine with the Manager Service.
6. Open a command prompt on the machine with the Manager Service and issue the following command:
msdtc -install
7. Perform steps 3-6 on the machine running the SQL Server.
This procedure generates new CID values for MSDTC on both servers.

After this procedure was completed, everything worked and I was able to continue my vRA configuration without issue.

In Part II, I will cover how I learned some basic vRA API operations.