A closer look at VMware's Project Nautilus
VMware’s desktop hypervisor solutions, Fusion for Mac users and Workstation for the Windows and Linux user base, were released in their newest versions on September 15th. Lots of new features, enhancements and support around VM guest operating systems, VM scaling, GPU, containers and Kubernetes have made it into these releases. These enhancements will enrich developers’ toolkits and will provide great new capabilities for IT admins and everyone else who is keen on spinning up and down Virtual Machines, containers and NOW also Kubernetes.
A short recap on Project Nautilus
With the Fusion Pro Tech Preview 20H1, VMware introduced Project Nautilus earlier this year (January). The main goal of the project is to provide a single development platform on the desktop by enabling users to run OCI-compliant containers as well as Kubernetes besides Virtual Machines. A couple of months later (May), Fusion version 11.5.5 went GA and with it came the ability to manage containers (build & run) as well as VMware’s containerd-based runtime. Check out the corresponding blog post to learn more about it: ➡️ Fusion 11.5.5 Available Now.
In order to make use of the introduced “Container feature”, a new command-line utility named `vctl` is automatically installed with Fusion version 11.5 and higher or Workstation 16 (depending on your OS). Users who are familiar with Docker will probably feel at home very quickly when using it. Let me show you what I mean by executing `vctl` in my terminal:
What you see are pretty basic capabilities to build, run and manage containers locally with `vctl`. To get started with `vctl`, I’d like to point you directly to the official `vctl` Getting Started Guide on GitHub.
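To give you a feel for it, a first session might look like the following sketch. Note that the image and container name below are just examples, and the flags are my assumption based on `vctl`’s Docker-like syntax; the script skips gracefully on machines without `vctl`.

```shell
#!/bin/sh
# Hedged sketch of a first vctl session. The image ("nginx") and container
# name ("web") are examples; flags mirror docker's and are assumptions here.
if command -v vctl >/dev/null 2>&1; then
  vctl system start              # start the containerd-based runtime
  vctl pull nginx                # fetch an OCI image
  vctl run --name web -d nginx   # run it detached
  vctl ps                        # list running containers
  status="ran"
else
  status="vctl-missing"          # e.g. a machine without Fusion/Workstation
fi
echo "$status"
```

The guard also means you can keep such snippets in shared dotfiles or CI without breaking machines that don’t have Fusion or Workstation installed.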
New versions bring a KinD way to deploy Kubernetes Clusters
As mentioned in the previous section of this post, it has been planned from the very beginning of Project Nautilus to provide both the ability to run containers AND to instantiate Kubernetes clusters on the desktop. This has now been made possible by leveraging the open source project KinD (Kubernetes in Docker), which uses Docker container “nodes” to run local Kubernetes clusters. KinD has gained considerable popularity in the community and can easily be installed on various platforms, e.g. using Homebrew on macOS, Chocolatey on Windows, or `curl` in general. See also the KinD Quick Start guide.
So, what’s in it for you when using Fusion or Workstation instead of the “known, step-by-step, manual way”?
Advantages I see are e.g. not having to install the `docker`, `kind` and `kubectl` binaries on your platform in advance, as well as not having to care about the installation process/method(s) itself. But let me show you some of the implementation details now, to hopefully enlighten you more. `vctl` is available right after the installation of Fusion or Workstation.
One thing before I go on! I will not cover both solutions! I’m a Mac user, so from here on I’m going to concentrate just on Fusion 12.
Let’s dive in
First! The `~/.vctl` directory doesn’t exist before you have executed `vctl system start` for the first time!
Run `vctl system start` to start the containerd runtime daemon in the background and to have the aforementioned directory created.
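You can observe the before/after behavior with a small script like the one below (hedged: it only exercises `vctl` when the binary is actually present):

```shell
#!/bin/sh
# Check whether ~/.vctl exists, start the runtime, then re-check.
# Skips cleanly on machines where vctl is not installed.
if command -v vctl >/dev/null 2>&1; then
  if [ -d "$HOME/.vctl" ]; then
    echo "~/.vctl already present"
  else
    echo "~/.vctl not yet created"
  fi
  vctl system start                  # creates ~/.vctl on first run
  [ -d "$HOME/.vctl" ] && result="created" || result="missing"
else
  result="vctl-missing"
fi
echo "$result"
```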
How does it look now?
The directory has been created successfully in `$HOME` (the user’s home directory).
As you can see via the following output, the binaries for `docker`, `kind` as well as `kubectl` aren’t installed on my local system:
Asking how? `vctl kind` will `--help` us here.
There is some really important information in here!
- If `kind` wasn’t already installed before, `vctl` will download and install it for you.
- In the current terminal session only, all `docker` commands will be aliased to `vctl`.
- All configurations made apply only to the running terminal session.
As you can see in the previous terminal output, the command downloads the necessary binaries for `kind` and `kubectl`, as well as a `crx.vmdk` which will be used by the CRX VM.
The `~/.vctl/bin` folder will take precedence over other existing versions of the `kubectl`/`kind`/`docker` binaries that were installed before.

Aha! This means that in case I already had e.g. `kubectl` installed on my system, the execution of `vctl kind` will affect my currently running terminal by adjusting the `$PATH` for the corresponding binaries. Let me validate this by leveraging `which` (*) again:
*The `which` utility takes a list of command names and searches the `$PATH` for each executable file.
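The mechanism itself is plain shell behavior and can be demonstrated without `vctl` at all: whichever directory appears first in `$PATH` wins. The tool name `mytool` and the temporary directories below are made up purely for the demo.

```shell
#!/bin/sh
# Demonstrate PATH precedence: a binary in a directory prepended to $PATH
# shadows a same-named binary further down the PATH, just like a binary in
# ~/.vctl/bin shadows a pre-installed kubectl. All names here are hypothetical.
demo="$(mktemp -d)"
mkdir -p "$demo/preinstalled" "$demo/vctl-bin"
printf '#!/bin/sh\necho old\n' > "$demo/preinstalled/mytool"
printf '#!/bin/sh\necho new\n' > "$demo/vctl-bin/mytool"
chmod +x "$demo/preinstalled/mytool" "$demo/vctl-bin/mytool"

PATH="$demo/preinstalled:$PATH"   # the version that was installed before
PATH="$demo/vctl-bin:$PATH"       # prepended later, like ~/.vctl/bin

command -v mytool                 # resolves inside vctl-bin
resolved="$(mytool)"              # runs the shadowing copy, prints "new"
echo "$resolved"
rm -rf "$demo"
```

This is also why the change only affects the running terminal session: `$PATH` is a per-process environment variable, so a new terminal starts from your shell profile again.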
That looks good to me! The `~/.vctl` directory has been populated too.
KinD create Cluster
It’s time now to create a Kubernetes node. Note! `vctl system start` has to be executed before leveraging `kind` for a deployment! Otherwise you will get an error message.
`vctl` assigns 2 GB of memory and 2 CPU cores by default to the CRX VM that hosts the Kubernetes node container. You can adjust the default `vctl system` configuration with the options `--k8s-cpus` and `--k8s-mem`.
Example: `vctl system config --k8s-cpus 4 --k8s-mem 8192`

The results can be seen in the `config.yaml` file: `cat ~/.vctl/config.yaml`
To create the Kubernetes node, `kind create cluster` is basically all you need to get started. If you’d like to give the node(s) a name, or if you’d like to run a specific Kubernetes version on your node, check out the official Docker repository, pick your version and run e.g. `kind create cluster --image kindest/node:v1.19.1 --name kind-1.19.1`.
After a couple of seconds, you will have a Kubernetes node running on your desktop locally.
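The full round trip can be sketched as follows. This assumes a terminal session in which `vctl kind` has already placed `kind` and `kubectl` on the `$PATH`; the cluster name `demo` is my own example, and the script skips cleanly otherwise.

```shell
#!/bin/sh
# Hedged sketch: create a named cluster on a pinned node image, verify it,
# then clean up. Assumes `vctl kind` already put kind/kubectl on PATH.
if command -v kind >/dev/null 2>&1 && command -v kubectl >/dev/null 2>&1; then
  kind create cluster --image kindest/node:v1.19.1 --name demo
  kubectl cluster-info --context kind-demo   # kind prefixes contexts with "kind-"
  kubectl get nodes                          # the single node container shows up here
  kind delete cluster --name demo            # tear it down again
  outcome="cluster-cycled"
else
  outcome="kind-not-on-path"
fi
echo "$outcome"
```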
Again! When closing the terminal, keep in mind that all configurations made by `vctl kind` apply only to that terminal session.
Recording
The details of the last two sections can be watched via the following recording. I’ve used asciinema to record my terminal input and output.
Conclusion
Before closing this post with my conclusion, I wanted to mention the following as well. Besides the addition of `kind` to the `vctl` utility, it also got some new options which e.g. allow you to `login` to a container registry like Harbor, as well as to manage `volume`s (currently only `volume prune` is supported).
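For illustration, such a session might look like the sketch below. The registry URL and username are placeholders, and I’m assuming `vctl login` mirrors `docker login`’s flags; the script skips where `vctl` isn’t installed.

```shell
#!/bin/sh
# Hedged sketch: log in to a private registry, then prune unused volumes.
# registry.example.com and myuser are placeholders, not real endpoints.
if command -v vctl >/dev/null 2>&1; then
  vctl login -u myuser registry.example.com   # prompts for the password
  vctl volume prune                           # currently the only volume subcommand
  did="ran"
else
  did="vctl-missing"
fi
echo "$did"
```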
This is also covered in the official Documentation.
I enjoyed digging deeper into the implementation details of Project Nautilus, which is now fully integrated into Fusion 12 and Workstation 16. This is a great addition to VMware’s desktop hypervisor solutions and provides a great experience not only for those who want to get started with containers and Kubernetes but also for the more experienced users among us.
Resources
- Propose an Idea: LINK
- Join the Slack channel #fusion-workstation
- Release Notes
- Download
- FAQ
- File a bug on Github: LINK