In my previous blog post I took a closer look at VMware’s Project Nautilus and went through some of the implementation details (Fusion 12). I described how it provides a single development platform on the desktop by enabling you to run, build and manage OCI-compliant containers, as well as how easy it now is to instantiate Kubernetes clusters alongside Virtual Machines.
The question
This post is intended to answer a question that came to my mind during my work on VMware’s open source project VMware Event Broker Appliance (VEBA), and when I heard of Project Nautilus for the first time.
The Question
Given these planned innovations (Container and Kubernetes integration), could they help me and others develop new functions locally with minimal effort and resources?
To be honest, this question had already been partially answered with YES before the idea was born to leverage VMware Fusion (or Workstation) for this use case. Fortunately, now seems to be the right time to write about it. Michael Gasch, the maintainer of the Event Router, greatly supported the joint idea by not only creating a vcsim container image but also by making the necessary code changes to the Event Router to support vcsim as an Event Provider (PR #231).
UPDATE: October 19, 2020
The aforementioned PR #231 was merged in PR #237. vcsim is now supported as an Event Provider for the Event Router. Adjustments to this post were made accordingly.
Thanks Michael!
Modular Architecture
A look into the internals of VEBA shows that it is built out of several pieces (a modular approach), such as VMware’s Photon OS (operating system), Kubernetes (orchestrator/runtime) and the event processor (application). The event provider part is covered by the Event Router, which is responsible for connecting to event stream sources, such as the VMware vCenter Server, and forwarding events to an event processor like OpenFaaS or AWS EventBridge.
The following picture of the high-level architecture gives you an idea of how everything works in concert.
Normally, you would deploy VEBA into a dedicated environment, be it a homelab, a development environment or even production. Since we are listening to the vCenter event stream, we also need a vCenter Server (Appliance) to connect VEBA to. Your developed functions would then be tested against those environments. Since we are talking about testing and prototyping, and given the fact that our customers are already using VEBA in production, it would be good (if not even necessary) to have a development environment that is as small and as quickly deployable as possible.
With a decent desktop machine, you could run VEBA as well as the vCenter Server locally, but from a hardware requirement point of view this would mean that your desktop should be equipped with at least 24 GB of RAM for adequate testing. And that’s not counting the (nested) ESXi host and at least one VM!
The lightweight alternative would be to leverage, for example, VMware Fusion or Workstation to spin up a Kubernetes cluster locally and to deploy the Event Router, OpenFaaS as well as your function(s) on top of it.
You might be asking yourself now: and what about the vCenter Server? Good question!
This is where vcsim comes into play. vcsim is a vSphere API mock framework which is part of the govmomi project, a Go library for interacting with VMware vSphere APIs - or in simple terms - it’s a vCenter and ESXi API based simulator.
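By the way, vcsim doesn’t necessarily need a container at all. A quick sketch of how you could run it locally as a plain Go binary, assuming a recent Go toolchain is installed (the inventory sizes are just example values):

# Install vcsim and start it with a small simulated inventory (1 datacenter, 1 cluster, 4 VMs)
go install github.com/vmware/govmomi/vcsim@latest
vcsim -dc 1 -cluster 1 -vm 4 -l 127.0.0.1:8989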
The table below serves as an overview of what we are going to deploy in the next sections and how. As I have already indicated, my intention is to use Fusion (I’m a Mac user) to instantiate the Kubernetes cluster, but ultimately you could use any tool that runs Kubernetes locally on your machine.
Using a “pimped” terminal is really helpful, efficient and FUN. Have a look at my blog post about the “High Way to Shell” to arm yourself accordingly. 😁
Just to give you an idea…
Want to see it in action? I’ve put a short recording in the Conclusion section at the end of this post.
Let’s get started
1. Create the Kubernetes cluster node
Start with the deployment of the CRX VM that hosts the Kubernetes node container by executing (skip this if you aren’t using Fusion):
vctl system start
vctl assigns 2 GB of memory and 2 CPU cores by default for the CRX VM
Depending on your needs, you can adjust the configuration, e.g. by using vctl system config --k8s-cpus 4 --k8s-mem 8192 to assign more CPUs and memory, as shown in the sketch below
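A possible sequence for resizing the CRX VM could look like the following sketch. I’m assuming here that the new settings take effect with the next vctl system start:

# Stop the CRX VM, adjust CPU/memory and start it again
vctl system stop
vctl system config --k8s-cpus 4 --k8s-mem 8192
vctl system start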
The next step is the Kubernetes cluster creation itself by leveraging KinD, which will be available after executing vctl kind
Create a new Kubernetes node with e.g. the following parameters:
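A minimal sketch of such a command. The cluster name and node image tag are only examples that match the output shown below, so adjust them to your needs:

# Make kind available through vctl, then create the cluster
vctl kind
kind create cluster --name kind-1.19.1 --image kindest/node:v1.19.1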
Validate the creation of your Kubernetes cluster by executing:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-1.19.1-control-plane Ready master 96s v1.19.1 192.168.43.153 <none> Ubuntu Groovy Gorilla (development branch) 4.19.138-7.ph3-esx containerd://1.4.0
Off to Step 2!
2. Installing OpenFaaS® (event processor)
Installing OpenFaaS® as the event processor on Kubernetes is pretty easy, straightforward and well documented. The following guidance is based on the official documentation. I’m reusing what already exists to avoid forcing you to switch back and forth between browser tabs. I am going to cover the deployment using plain YAML files. You could also use arkade or a Helm chart for the installation.
kubectl and plain YAML
Clone the official repository from Github and switch into the faas-netes folder:
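This boils down to a simple git clone of the official openfaas/faas-netes repository on GitHub:

git clone https://github.com/openfaas/faas-netes
cd faas-netes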
Validate that you are using the right Kubernetes Context:
kubectl config current-context
Your context should start with kind-, e.g. mine: kind-kind-1.19.1
Create the necessary Kubernetes Namespaces for OpenFaaS® by applying the namespaces.yml file:
kubectl apply -f namespaces.yml
Validate that the namespaces openfaas and openfaas-fn were created:
kubectl get ns
NAME STATUS AGE
default Active 18m
kube-node-lease Active 18m
kube-public Active 18m
kube-system Active 18m
local-path-storage Active 17m
openfaas Active 55s
openfaas-fn Active 55s
The next step is to create a password to access the OpenFaaS® Gateway with the user admin:
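Following the official faas-netes documentation, a random password is generated and stored as the basic-auth Secret for the gateway. Keeping it in the $PASSWORD shell variable also lets you reuse it later with faas-cli:

# Generate a random password and store it as the basic-auth Secret in the openfaas namespace
PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)
kubectl -n openfaas create secret generic basic-auth \
  --from-literal=basic-auth-user=admin \
  --from-literal=basic-auth-password="$PASSWORD"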
The basics are settled and now off to the main deployment. Every component of OpenFaaS® is described in a YAML file stored in the corresponding /yaml directory:
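Applying the whole directory at once, as described in the faas-netes README, brings up all OpenFaaS® components:

kubectl apply -f ./yaml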
Desired state fulfilled! To access the OpenFaaS® Gateway UI, we need its IP address. Copy the name of the OpenFaaS Gateway pod and insert it into the following command:
kubectl get pods/gateway-fc64ff4f6-d9bgt -n openfaas -o=jsonpath='{.status.hostIP}'
192.168.43.153
And now, by using port :31112, the portal will show up in your browser tab:
3. vcsim - run the vSphere API Simulator
Before our function(s) can be invoked based on a vSphere event, we need a stream of such events coming from a source the VMware Event Router listens to. As mentioned before, vcsim will help us out here. Interestingly, vcsim is not new and William Lam already wrote about it in 2017(!): govcsim – Neat incubation project (vCenter Server & ESXi API based simulator).
For our purposes, Michael prepared a Docker image which we are going to deploy as a Kubernetes pod, in which vcsim runs as our simulated vSphere endpoint. Furthermore, the necessary govc CLI binary is also available inside the container.
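A minimal sketch of such a deployment. The image name vmware/vcsim and port 8989 are assumptions on my side, so adjust them to the image Michael actually published:

# Create a namespace for the VEBA components and run vcsim as a pod (image name and port are placeholders)
kubectl create namespace vmware
kubectl -n vmware run vcsim --image=vmware/vcsim --port=8989
# Open a shell inside the pod to use the bundled govc CLI
kubectl -n vmware exec -it vcsim -- bash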
Check if your simulated vSphere Endpoint is working properly:
vcsim@vcsim:~$ govc ls
/DC0/vm
/DC0/host
/DC0/datastore
/DC0/network
List all available Virtual Machines:
vcsim@vcsim:~$ govc ls "/DC0/vm/*"
/DC0/vm/DC0_H0_VM0
/DC0/vm/DC0_H0_VM1
/DC0/vm/DC0_C0_RP0_VM0
/DC0/vm/DC0_C0_RP0_VM1
Power Off and On a Virtual Machine:
# By default, the VMs are in a Powered On state
# Trying to power on an already powered-on VM would throw a "govc: *types.InvalidPowerState" error
vcsim@vcsim:~$ govc vm.power -off /DC0/vm/DC0_H0_VM0
Powering off VirtualMachine:vm-57... OK
vcsim@vcsim:~$ govc vm.power -on /DC0/vm/DC0_H0_VM0
Powering on VirtualMachine:vm-57... OK
Awesome!
4. Deploy the Event Router
During my work on this post, Michael and I discussed various aspects of the overall deployment of a development environment, and usability and simplicity in particular were a main focus for us. Long story short: it ended up with the decision to include vcsim as a supported Event Provider for the Event Router 😁 👏.
Before we can deploy the Router into our recently created namespace vmware, we need to create the event-router-config.yaml file, from which we will create our Kubernetes Secret in a bit. If VS Code (code) isn’t your default editor, adjust the following command accordingly.
code event-router-config.yaml
You can copy the complete provided code and paste it into your empty event-router-config.yaml file:
Attention
This configuration is only valid if you followed my example. Otherwise, adapt it accordingly.
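Not shown here in full: from this configuration file you create the Kubernetes Secret the Event Router deployment expects, apply the Event Router manifest from the VEBA repository and deploy the example function(s) with faas-cli. A rough sketch, assuming the Secret name event-router-config and a function stack file named stack.yml; align both with the manifests you actually use:

# Create the Secret for the Event Router from the config file (the name must match the deployment manifest)
kubectl -n vmware create secret generic event-router-config --from-file=event-router-config.yaml
# Log in to the OpenFaaS gateway and deploy the example function(s)
faas-cli login -g http://192.168.43.153:31112 -u admin --password "$PASSWORD"
faas-cli deploy -f stack.yml -g http://192.168.43.153:31112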
Double-check that the function(s) are in the Running state:
via faas-cli list -g http://192.168.43.153:31112 --verbose
Function        Image                                   Invocations    Replicas
powercli-tag    vmware/veba-powercli-tagging:latest     0              1
veba-echo       vmware/veba-python-echo:latest          0              1
And by using kubectl get pods -n openfaas-fn -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
powercli-tag-58466957cc-vrfqp 1/1 Running 0 8h 10.244.0.27 kind-1.19.1-control-plane <none> <none>
veba-echo-69cd854b7c-btjhn 1/1 Running 0 9h 10.244.0.26 kind-1.19.1-control-plane <none> <none>
I have also deployed the powercli-tag function for my tests, and you can watch the result in the recording below.
Conclusion (+ Recording)
The ability to leverage VMware Fusion or Workstation to quickly create a KinD Kubernetes cluster really leads you to the “single development platform on the desktop” experience mentioned at the beginning. Of course, any tool which makes it easy to run Kubernetes locally is great and usable.
We’ve deployed the necessary components, such as OpenFaaS®, the VMware Event Router, the vCenter Simulator vcsim as well as example functions, onto the locally running (KinD) Kubernetes cluster.
To finally test our deployed functions, we had to invoke them by creating events from a source. We leveraged a container image for that, created by Michael Gasch, which includes the vCenter Simulator vcsim as well as the govc CLI binary to execute the relevant commands.