vSphere with Tanzu Supervisor Services Part IV - Virtual Machine Service to support Hybrid Application Architectures
Hybrid Application Architectures
As technology advances at a rapid pace, the landscape of application development continues to evolve. The demand for agility, scalability, and cost-effectiveness has given rise to a new breed of architectures that seamlessly integrate modern cloud-native principles with established traditional workloads. One such paradigm that has gained significant traction is the hybrid application architecture, which combines the power of microservice architectures with the reliability and versatility of virtual machines (VMs).
Microservice architectures have revolutionized the way applications are designed and deployed. By breaking down the complexity of an application into smaller, loosely coupled services, this architectural style enables organizations to achieve better modularity, reusability, and interoperability. These services communicate with each other through well-defined APIs.
Traditional workloads, often hosted on virtual machines (VMs), are typically designed to run in on-premises or legacy environments. These workloads might include databases, ERP systems, or similar specialized applications which are not easily containerized or refactored.
Besides an application being incompatible with containerization, other reasons include:
- your application requires specific hardware
- your application is designed for a custom kernel or a custom operating system
- or, quite simply, your application is better suited to run in a virtual machine.
The hybridization of an application, through the combined power of microservices and VMs, allows enterprises to benefit from the best of both worlds.
In this blog post, I will dive into the Virtual Machine Service of vSphere with Tanzu, exploring its capabilities and how it could be the missing wrench in your toolbox to support the rollout of virtual machines in a DevOps fashion.
Furthermore, to support the idea and message of this article, I’ve created an example application which runs on a Tanzu Kubernetes Cluster and writes example data into a PostgreSQL DB deployed as an appliance using the Virtual Machine Service.
Figure I below provides a high-level overview of a hybrid application whose components are deployed and managed using the Supervisor Services in vSphere with Tanzu.
vSphere with Tanzu Supervisor Services Throwback
As described in previous articles like HERE, HERE, and HERE, vSphere with Tanzu supercharges the vSphere platform with Kubernetes power.
Supervisor Services in vSphere with Tanzu enable modern platform/DevOps teams to deploy and manage Tanzu Kubernetes Clusters (Tanzu Kubernetes Grid Service), vSphere Pods (vSphere Pod Service, pods running natively on the ESXi hosts), and also virtual machines (VM Service).
All mentioned resources can be deployed in a declarative way using the exposed Kubernetes API of the Supervisor Cluster.
Virtual Machine Service to deploy Virtual Appliances
Shipping virtual machines in a ready-to-use, single-file fashion is done via the Open Virtual Appliance (OVA) format, which is well known and widely adopted by customers. The purpose of an OVA is to provide a single-file distribution which contains the packaged files a virtual appliance is made of, like `.ovf`, `.vmdk` or `.nvram`. A virtual appliance could be the vSphere control-plane vCenter Server or e.g. the VMware Fling VMware Event Broker Appliance (VEBA).
The Virtual Machine Service was initially introduced with vSphere 7 U2a. It allows users to deploy and manage virtual machines using standard Kubernetes APIs. In order to provide these functionalities, the Virtual Machine Operator, open-sourced by VMware, was developed and integrated into vSphere.
If you’re touching this exciting technology for the first time, you might like to start reading one of the posts below, since I will focus only on the latest feature additions of the VM Service in this post.
Level 100:
- The new modern Workload on vSphere by Navneet Verma
- Introducing the vSphere Virtual Machine Service by Glen Simon
Level 200/300:
- The future of VM’s is Kubernetes by Oren Penso
- Exploring the new vSphere with Tanzu VM Service with Nested ESXi by William Lam
- Installing Harbor using VM Operator on vSphere by Navneet Verma
- Introducing Virtual Machine Provisioning, via Kubernetes with VM Service by Myles Gray.
Virtual Machine Service Enhanced
Initially, it was only possible to use the virtual machine images provided by VMware for Ubuntu and CentOS. Both are downloadable via the VMware Marketplace HERE. This limitation is documented on VMware Docs.
vSphere 8 U1 unleashed the full potential of the Virtual Machine Service.
The advanced support of CloudInit to customize any Linux image is already awesome, but even more exciting is the capability to now utilize OVF templating through the `vAppConfig` section within the `VirtualMachine` deployment manifest 😃 We will take a look at it in a bit.
VMware Marketplace to accelerate your Business
For the sake of this blog post, I’ve downloaded (sign-in required) the PostgreSQL OVA from the VMware Marketplace. The VMware Marketplace offers certified and validated ecosystem solutions to VMware’s customers, community and partners. The number of solutions which can be discovered and tested is huge. One reason for this is the addition of the extensive Bitnami (by VMware) application catalog to the VMware Marketplace.
⬇️ VMware Bitnami PostgreSQL DB virtual appliance (OVA)
Deploying the Bitnami PostgreSQL OVA using kubectl
Over the next chapters, I’m going to briefly explain what needs to be done in terms of prerequisites and requirements in order to deploy the VMware Bitnami PostgreSQL DB virtual appliance (OVA) using a DevOps approach. I’m going to describe the desired state of the virtual machine in a `yaml` manifest and will define OS-relevant data and parameters via the new `vAppConfig` section.
Ultimately, I will hand the manifest over to the exposed Kubernetes API of the Supervisor Cluster using `kubectl`. The Virtual Machine Service (VM Operator) will then take care of the desired state for us.
vSphere Content Library for VM Images
For the next steps I presume that you are familiar with the Workload Management feature of vSphere with Tanzu. This includes the creation and configuration of vSphere Namespaces, including RBAC, Storage Policies, VM Classes, etc. as well as the deployment and management of Tanzu Kubernetes Clusters (TKC).
Since I wanted to have the virtual machine (appliance) images separated from the content library which contains the Kubernetes node images (`TKR`) for the TKCs, I’ve created a dedicated content library named `vm-images`.
In terms of the configuration for the new Local vSphere Content Library, I kept everything default (next-next-finish) and started the upload of the Bitnami PostgreSQL OVA.
Consequently, the next step is to add the `Virtual Machine Images Content Library` to my vSphere Namespace, which I named `ns-hybrid-app-1`…
…and in which my Tanzu Kubernetes Cluster is already running.
After adding the new content library to the vSphere Namespace, it will be available on the Kubernetes layer as well:
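A quick sketch of how this can be verified (resource names and image IDs will differ in your environment; `vmi-06d0a6ab5664f7c88` is the image used later in this post):

```shell
# list the content libraries and VM images visible in the namespace
kubectl -n ns-hybrid-app-1 get contentlibraries
kubectl -n ns-hybrid-app-1 get vmimage

# illustrative output (columns abbreviated):
# NAME                    DISPLAY NAME
# vmi-06d0a6ab5664f7c88   bitnami-postgresql-11.20.0-r1
```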
Prior to vSphere 8 Update 2, the Kubernetes CR for the vSphere Content Library was not `contentlibraries.imageregistry.vmware.com`; instead, it was `contentsourcebindings.vmoperator.vmware.com`. Furthermore, the content library was displayed only with its UUID, and unless you are Neo 😎 the UUID won’t tell you right away which content library it actually is. However, you can figure it out using the vSphere Client.
By selecting a content library in the vSphere Client, you’ll notice a UUID in the URL which can be used to establish the relation, as Figure VI below shows.
Before I head over to the next step, let me briefly summarize what has been done up to this point:
- OVA downloaded ✅
- Created a Local vSphere Content Library ✅
- Uploaded the OVA ✅
- Associated the new Content Library with a vSphere Namespace ✅
Virtual Machine Manifest Preparations
In this next step, I’m finally going to cover the creation of the virtual machine manifest: what’s required and where to get the required data from.
Again, the goal is to deploy an OVA using the VM Service and, most importantly, to describe the values and data using the new `vAppConfig`.
Normally, if you deploy an OVA, you’ll enter values (`integer`, `string` or `boolean`) for keys like IP Address, Default Gateway, Password, etc. via the Customize template step.
These configurable fields are specified in the OVF file as `ovf:userConfigurable="true"` and are of high interest for the creation of a virtual machine manifest.
Resource: VMware Docs - Deploy VMs with Configurable OVF Properties
In the following, I will explain two ways to get the data.
Extract Configurable Properties using the ovftool
Option one is admittedly associated with a little more overhead but, in comparison to option two, it requires no access to the Supervisor cluster (for which a highly privileged user is required). Download the VMware OVF Tool from VMware’s Developer Portal.
⬇️ Open Virtualization Format (OVF) Tool
The following command will extract the packaged files from the OVA.
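A sketch of the extraction, assuming the OVA sits in the current working directory (an OVA is also just a tar archive, so `tar -xvf` works as well):

```shell
# unpack the OVA into its packaged files (.ovf, .vmdk, .mf)
ovftool bitnami-postgresql-11.20.0-r1.ova bitnami-postgresql/bitnami-postgresql-11.20.0-r1.ovf
```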
Browse the created folder until you find the `*.ovf` file. In my case it’s `bitnami-postgresql-11.20.0-r1.ovf`. If you run e.g. `cat` on the file and browse to the section `<EulaSection>`, or basically to the end of the OVF, you will find the wanted `ovfEnv` keys (`ovf:userConfigurable="true"`) for our manifest file.
Here’s my example:
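Shortened and illustrative, not the literal Bitnami file; the relevant pattern is the `ovf:userConfigurable="true"` attribute on each property:

```xml
<ProductSection>
  <Property ovf:key="network.ip0" ovf:type="string" ovf:userConfigurable="true">
    <Label>IP Address</Label>
    <Description>IP address of the appliance. Leave blank for DHCP.</Description>
  </Property>
  <Property ovf:key="network.gateway" ovf:type="string" ovf:userConfigurable="true">
    <Label>Default Gateway</Label>
  </Property>
</ProductSection>
```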
Extract Configurable Properties using kubectl
Option two provides the desired data quicker but, as already mentioned, you’ll need access to the Supervisor cluster. Since the content library is associated with our vSphere Namespace, we only have to execute the following `kubectl` command:
```shell
kubectl -n ns-hybrid-app-1 get vmimage vmi-06d0a6ab5664f7c88 -o jsonpath='{.spec}' | jq
```
The output on the terminal immediately provides the necessary `ovfEnv` keys:
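An abbreviated and illustrative excerpt of what the `.spec` output can look like; the exact schema depends on the vSphere/VM Operator version:

```json
{
  "imageSourceType": "Content Library",
  "ovfEnv": {
    "network.ip0": { "key": "network.ip0", "type": "string" },
    "network.gateway": { "key": "network.gateway", "type": "string" }
  }
}
```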
Create the Virtual Machine Manifest
With all the required data for our manifest file at hand, we can finally start describing the desired state of our virtual machine. The manifest basically contains two Kubernetes objects. The first object is `kind: VirtualMachine`. Within this section, values are described for the virtual machine configuration in terms of e.g. name, deployment zone, virtual machine class, storage policy, network, etc.
The second section, `kind: ConfigMap`, is where the extracted OVF data goes.
This is what my PostgreSQL VM manifest looks like. I’ve added comments (`#`) to the manifest in order to help with understanding the keys.
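A minimal sketch of such a manifest, assuming NSX-based Supervisor networking; the VM class, storage policy and OVF keys are placeholders and must match your environment and the keys extracted earlier:

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: postgres-vm-1                  # name of the virtual machine
  namespace: ns-hybrid-app-1           # vSphere Namespace
spec:
  className: best-effort-small         # placeholder: VM Class assigned to the namespace
  imageName: vmi-06d0a6ab5664f7c88     # VM image name from 'kubectl get vmimage'
  storageClass: my-storage-policy      # placeholder: Storage Policy exposed as StorageClass
  powerState: poweredOn
  networkInterfaces:
    - networkType: nsx-t               # assumption: NSX networking
  vmMetadata:
    configMapName: postgres-vm-1-cm    # references the ConfigMap below
    transport: vAppConfig              # the new OVF templating transport (vSphere 8 U1)
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-vm-1-cm
  namespace: ns-hybrid-app-1
data:
  # illustrative OVF keys - use the userConfigurable keys extracted from the OVF
  network.ip0: "10.105.3.50"
  network.gateway: "10.105.3.1"
  network.netmask0: "255.255.255.0"
```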
kubectl apply -f vm.yaml
With the VM configuration described in a `yaml` file, everything is set for deployment. The next step is similar (familiar) to any other Kubernetes deployment using `kubectl`.
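A sketch of the deployment step (output illustrative):

```shell
kubectl -n ns-hybrid-app-1 apply -f vm.yaml
# virtualmachine.vmoperator.vmware.com/postgres-vm-1 created
# configmap/postgres-vm-1-cm created
```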
Both objects are created on the Supervisor cluster and the VM deployment can be followed in the vSphere Client.
After a successful deployment, the VM will be powered on (`powerState: poweredOn`) and you can also watch the initialisation progress via one of the consoles.
Speaking of consoles 😁 Another new feature which was introduced with vSphere 8U1 is web-console support for VM Service VMs.
Let’s check it out:
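A sketch using the vSphere plugin for kubectl; the exact syntax may vary between releases, so check `kubectl vsphere vm web-console --help`:

```shell
# request a one-time web-console URL for the VM Service VM
kubectl vsphere vm web-console postgres-vm-1 -n ns-hybrid-app-1
```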
Here we go:
PostgreSQL-VM Post Deployment Configurations
Before I can actually connect to the DB and start writing data into it, I have to make some post-deployment configurations. As Figure IX shows, the default user for the PostgreSQL DB is `postgres`. The password for the user is created randomly and is also displayed on the “welcome screen”.
The default user for the VM, on the other hand, is `bitnami`. This user will be used to `ssh` into the VM.
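For example, assuming the VM received the IP address used throughout this post:

```shell
ssh bitnami@10.105.3.50
```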
After `ssh`ing into the VM, I configured the listening IP address as well as the port for PostgreSQL.
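A sketch of the relevant settings, assuming the standard Bitnami layout (the path may differ on other images):

```shell
# open the PostgreSQL server configuration
sudo vi /opt/bitnami/postgresql/conf/postgresql.conf

# relevant settings:
# listen_addresses = '*'
# port = 5432
```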
I kept it simple for my use case and configured basically any IP (`*`) and stuck with the default port `5432`.
The next configuration file which has to be adjusted is the one for client authentication. This is done via the `pg_hba.conf` file; HBA stands for host-based authentication.
Add the `all` configuration to the end of the file:
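A sketch of the entry, again assuming the Bitnami path; `0.0.0.0/0` with `md5` allows password-authenticated connections from any address (fine for a lab, not for production):

```shell
echo "host    all    all    0.0.0.0/0    md5" | sudo tee -a /opt/bitnami/postgresql/conf/pg_hba.conf
```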
Finally, apply the configuration by restarting the `postgresql` service:
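Bitnami appliances ship a control script for this; a sketch:

```shell
sudo /opt/bitnami/ctlscript.sh restart postgresql
```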
Validate your configuration:
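For example, by checking the listening sockets:

```shell
# PostgreSQL should now listen on 0.0.0.0:5432
sudo ss -tlnp | grep 5432
```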
Well, the system settings looked promising, so I tried to connect to the DB using the PostgreSQL CLI `psql`. Unfortunately, it didn’t work out well for me. The connection always ran into a `timeout` error. It took me some effort and the help of my appreciated colleague Andreas Marqvardsen to figure out that `nftables` is used on the Debian system and that it’s blocking all incoming traffic except `ssh`.
ProTip: In order to validate that incoming traffic is allowed, you can start an `http.server` using `Python`. Executing `python3 -m http.server` on the shell starts a simple HTTP server listening on port 8000 (on all interfaces by default). Try `curl`ing the endpoint. That really comes in handy in such troubleshooting situations.
On the server:
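What this roughly looks like on the VM:

```shell
bitnami@postgresql:~$ python3 -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```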
On the client: `curl 10.105.3.50:8000`.
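Roughly what to expect:

```shell
curl 10.105.3.50:8000
# blocked traffic: the request hangs and eventually times out
# allowed traffic: the server's default HTML directory listing is returned
```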
I didn’t want to start configuring `nftables` properly in order to allow the incoming traffic. Consequently, I completely `stop`ped the service on the system level by executing `systemctl stop nftables`.
Connection check using `psql`:
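A sketch of the connection check; the server version matches the deployed OVA (11.20):

```shell
psql -h 10.105.3.50 -p 5432 -U postgres
# Password for user postgres:
# psql (11.20)
# postgres=#
```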
Fabulous! We have a configured PostgreSQL DB VM up and running, deployed via the Virtual Machine Service. Nevertheless, in order to really have the “DevOps feeling”, the post-deployment tasks need to be automated as well.
Run pgAdmin in a Container
I’m not an expert in PostgreSQL. Therefore, I was looking for the most simple and effective way to configure and read the DB. pgAdmin is the tool of the DB admin’s choice (I’m guessing again) and fortunately, a container image for pgAdmin exists.
The following `docker` command will spin up a local `pgadmin` instance on your computer/jumphost:
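A sketch using the official `dpage/pgadmin4` image; e-mail and password are placeholders for the pgAdmin login:

```shell
docker run -p 80:80 \
    -e 'PGADMIN_DEFAULT_EMAIL=user@example.com' \
    -e 'PGADMIN_DEFAULT_PASSWORD=VMware1!' \
    -d dpage/pgadmin4
```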
The pgAdmin portal is accessible via your browser: either via `http://localhost` or, if you are using a jumphost (in my case `ssh` only) like me, via the IP address of your jumphost.
I’m using the given data in order to establish the connection to my PostgreSQL instance.
Example Kubernetes Application
When creating this post as well as the associated demo, the overall idea I had in mind was to create a hybrid application architecture using the power of the Supervisor Services of vSphere with Tanzu. A Tanzu Kubernetes Cluster hosts an application which writes data into a database instance running as a virtual machine and, most importantly, declaratively deployed using the Virtual Machine Service. This time, I deployed everything manually, but ultimately it would be part of a CI/CD pipeline.
I didn’t have an example application at hand and I was not eager to look on GitHub for one. Therefore, I decided to write my own application with a little help from a new, very chatty friend of mine 🤖… 😉
You can find the `Postgresql-Writer-k8s-Job` application in my repository HERE.
Quickly explained:
The application will establish a connection to the database using user-defined data which is stored in a Kubernetes secret.
After the connection has been established, the application will write example data into a defined table.
In order to not only use `psql` or `pgAdmin` to validate the written data, I added functionality so that a `search_query` is executed and the results are sent to `stdout`. Therefore, using `kubectl logs` will give you the results as well.
Deploy the Application as a Kubernetes Job
Let’s deploy the Postgres-Writer application as a Kubernetes job. Make sure you’re in the right Kubernetes context (`kubectl config use-context`).
Create a new Kubernetes namespace:
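For example (the namespace name is my choice):

```shell
kubectl create namespace postgres-writer
```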
The next step must be the creation of the Kubernetes secret. Remember, the application will read sensitive data from the secret.
Create the secret:
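A sketch of the secret; the key names are illustrative and must match whatever the application reads, and the password is the one from the appliance’s welcome screen:

```shell
kubectl -n postgres-writer create secret generic postgres-writer-secret \
    --from-literal=DB_HOST=10.105.3.50 \
    --from-literal=DB_PORT=5432 \
    --from-literal=DB_USER=postgres \
    --from-literal=DB_PASSWORD='<password-from-welcome-screen>'
```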
Ultimately, deploy the application as a Kubernetes job and validate the results after the job’s completion.
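A sketch, assuming the job manifest from the repository was saved locally:

```shell
kubectl -n postgres-writer apply -f postgres-writer-job.yaml
```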
The job (completion `1/1`) as well as the pod (`Completed`) were deployed and executed successfully.
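Illustrative output of the status check:

```shell
kubectl -n postgres-writer get jobs,pods
# NAME                        COMPLETIONS   DURATION   AGE
# job.batch/postgres-writer   1/1           8s         1m
#
# NAME                        READY   STATUS      RESTARTS   AGE
# pod/postgres-writer-xxxxx   0/1     Completed   0          1m
```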
Validating the written Data
We have three variants at hand to validate that the data was successfully written into the DB table and columns.
1. Using `kubectl`:
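A sketch of reading the job’s logs (output illustrative; it reflects the `search_query` results the application prints to `stdout`):

```shell
kubectl -n postgres-writer logs job/postgres-writer
# Connection to 10.105.3.50:5432 established
# Example data written into table public.myappdata
# (1, 'example-data-1', ...)
```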
The output shows the written example data ✅
2. Using `psql`:
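Running the same query interactively, e.g.:

```shell
postgres=# SELECT * FROM public.myappdata;
```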
✅
3. Using `pgAdmin`:
Go to Tools and select Query Tool. Enter the following `search_query` and press the play button: `SELECT * FROM public.myappdata`
✅
Conclusion
As technology continues to evolve, modern hybrid application architectures provide a practical and pragmatic approach to bridge the gap between service-oriented architectures and traditional workloads hosted on virtual machines. By blending the scalability and agility of SOA with the reliability and versatility of VMs, organizations can modernize their applications while leveraging existing investments.
Although challenges exist, the benefits of modern hybrid architectures make them an attractive option for businesses looking to strike a balance between legacy systems and cloud-native principles.
VMware’s comprehensive solution portfolio for application and platform modernization, Tanzu, can help enterprises to achieve the aforementioned objectives and goals.
Thanks a lot for reading.
Extra
I tried the deployment of other OVAs like the VMware Event Broker Appliance 😍 as well as my Kubernetes Appliance OVA. Both work like a charm and I wanted to provide the manifests for both here as well.
VEBA VM Service Manifest
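The original manifest isn’t reproduced here; below is a hedged skeleton following the same `VirtualMachine`/`ConfigMap` pattern as the PostgreSQL example. The OVF keys shown are placeholders; extract the real `userConfigurable` keys from the VEBA OVF as described earlier:

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: veba-vm-1
  namespace: ns-hybrid-app-1
spec:
  className: best-effort-large          # placeholder VM Class
  imageName: <veba-vmimage-name>        # from 'kubectl get vmimage'
  storageClass: my-storage-policy       # placeholder Storage Policy
  powerState: poweredOn
  networkInterfaces:
    - networkType: nsx-t
  vmMetadata:
    configMapName: veba-vm-1-cm
    transport: vAppConfig
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: veba-vm-1-cm
  namespace: ns-hybrid-app-1
data:
  # placeholders - the actual OVF keys must be extracted from the VEBA OVF
  guestinfo.hostname: "veba.example.com"
  guestinfo.ipaddress: "10.105.3.60"
```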
Kubernetes Appliance VM Service Manifest
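Again only a skeleton; the structure is identical, with the `ConfigMap` carrying whatever `userConfigurable` keys the appliance’s OVF defines:

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: k8s-appliance-1
  namespace: ns-hybrid-app-1
spec:
  className: best-effort-large
  imageName: <k8s-appliance-vmimage-name>
  storageClass: my-storage-policy
  powerState: poweredOn
  vmMetadata:
    configMapName: k8s-appliance-1-cm
    transport: vAppConfig
```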