When it comes to data persistence in Kubernetes, a Persistent Volume (PV) is the corresponding cluster resource which serves your application with the desired storage requirements. A PV “knows” all the necessary implementation details of the given storage in your infrastructure. Typically, these are either block, file, or object storage systems.
Let’s stick with file storage. Normally, that’s NFS, isn’t it? But recently a customer of mine wanted to write data from within a Pod to an SMB share. At first, that sounded odd to me.
Why’s that?
Because SMB is Windows (Microsoft) specific and is mostly known for transferring data from e.g. a Windows client to a Windows file server. Since Kubernetes nodes are (MOSTLY) running Linux as the underlying operating system, I wasn’t familiar with this requirement (Samba aside 😉).
Anyway, a solution had to be found.
Kubernetes FlexVolumes? No?!
When searching the Kubernetes documentation for SMB, the Volumes section mentions SMB under the subsection FlexVolumes. FlexVolume is an out-of-tree plugin interface which gives storage vendors the possibility to write and deploy plugins that expose new storage systems in Kubernetes without ever touching the Kubernetes codebase.
But! It is marked as deprecated as of Kubernetes version 1.23 in favor of the Container Storage Interface (CSI).
flexVolumes Deprecation Proposal
The proposal to deprecate FlexVolume was raised back in October 2021 and can be read here: issue #30180.
It’s generally recommended to use a CSI driver in order to integrate external storage systems with Kubernetes. I’m pretty sure this fact isn’t really surprising to you 😄. I’d even bet that most of us simply don’t know it any other way.
In general, however, it’s definitely worth reading about the evolution of Volume Plugins in Kubernetes.
I looked at the Kubernetes CSI repositories and found what I was looking for: an SMB CSI Driver which allows Kubernetes to access SMB servers on both Linux and Windows nodes. As of writing this article, the latest version listed on the project page is v1.9.0. This is the version I’m going to install on my VMware Tanzu Kubernetes Grid cluster.
The driver supports dynamic provisioning of Persistent Volumes via Persistent Volume Claims by creating a new subdirectory on the SMB server.
I’ll guide you through each step of the installation and will finish the post by verifying write access to an existing SMB share on my Windows file server.
Nevertheless, if you prefer an automated remote installation, an install-driver.sh script is provided on the project page.
Let’s kick-off the installation of the CSI SMB Driver/Provisioner step-by-step.
Creating a Namespace
Creating RBAC resources
Installing the CSI-SMB-Driver
Rollout of the CSI-SMB-Controller Deployment
Rollout of the CSI-SMB Provisioner DaemonSet
Creating a SMB Secret
1. Namespace
By default, every resource will be installed in the namespace kube-system. Personally, I don’t like installing into such Kubernetes system namespaces when I’m validating new implementations in my k8s environments. Therefore, I’m going to use a new namespace named csi-smb-provisioner.
Just replace the name below and export it as an environment variable. I’ll use $NAMESPACE for the next steps of the installation.
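For example:

export NAMESPACE=csi-smb-provisioner
kubectl create ns $NAMESPACE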
kubectl get ns
NAME STATUS AGE
csi-smb-provisioner Active 4m14s
default Active 2d21h
kube-node-lease Active 2d21h
kube-public Active 2d21h
kube-system Active 2d21h
vmware-system-auth Active 2d21h
vmware-system-cloud-provider Active 2d21h
vmware-system-csi Active 2d21h
2. RBAC Resources
In order to allow the csi-smb-driver to interact with other Kubernetes resources, like e.g. PersistentVolumes, PersistentVolumeClaims, or Nodes, the appropriate RBAC resources have to be created. By executing the next manifest, the necessary ServiceAccounts, ClusterRoles, and ClusterRoleBindings will be created.
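The RBAC manifest ships with the project; a sketch of fetching and applying it, assuming the consolidated file lives at deploy/rbac-csi-smb.yaml in the v1.9.0 tag (verify the path against the release you install) and re-namespacing it from kube-system to $NAMESPACE:

curl -sL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/v1.9.0/deploy/rbac-csi-smb.yaml | \
  sed "s/namespace: kube-system/namespace: $NAMESPACE/g" | \
  kubectl apply -f -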
serviceaccount/csi-smb-controller-sa created
serviceaccount/csi-smb-node-sa created
clusterrole.rbac.authorization.k8s.io/smb-external-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/smb-csi-provisioner-binding created
…as output on your terminal.
3. CSI-SMB-Driver Installation
Next up is the installation of the CSI-SMB driver itself. The CSIDriver object is cluster-scoped, so it won’t be installed in the newly created namespace ($NAMESPACE).
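A sketch, again assuming the upstream manifest path for the v1.9.0 tag:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/v1.9.0/deploy/csi-smb-driver.yaml

Afterwards, the new cluster-scoped CSIDriver object should show up: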
kubectl get csidrivers.storage.k8s.io
NAME                     ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
csi.vsphere.vmware.com   true             false            false             <unset>         false               Persistent   2d22h
smb.csi.k8s.io           false            true             false             <unset>         false               Persistent   21s
Installation from a Private Registry
If, like me, you often have to deal with fully internet-restricted (air-gapped) environments, it’s important to be well prepared for such offline installations. One preparation, for instance, is to make the appropriate container images available offline first. For the CSI-SMB-Driver, these are the images for the:
CSI-SMB-Controller (deployment)
k8s.gcr.io/sig-storage/csi-provisioner
k8s.gcr.io/sig-storage/livenessprobe
registry.k8s.io/sig-storage/smbplugin
as well as for the CSI-SMB-Node (DaemonSet)
k8s.gcr.io/sig-storage/csi-node-driver-registrar
k8s.gcr.io/sig-storage/livenessprobe
registry.k8s.io/sig-storage/smbplugin
The docker save and docker load -i options as well as the Carvel CLI tool imgpkg are very helpful and powerful for such operations.
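A sketch using the smbplugin image as an example (the tag v1.9.0 and the target registry harbor.example.com are assumptions; adjust them to your environment):

# save the image to a tarball on a connected machine...
docker pull registry.k8s.io/sig-storage/smbplugin:v1.9.0
docker save registry.k8s.io/sig-storage/smbplugin:v1.9.0 -o smbplugin-v1.9.0.tar
# ...and load it again on the offline side
docker load -i smbplugin-v1.9.0.tar

Alternatively, with imgpkg:

imgpkg copy -i registry.k8s.io/sig-storage/smbplugin:v1.9.0 --to-tar smbplugin-v1.9.0.tar
imgpkg copy --tar smbplugin-v1.9.0.tar --to-repo harbor.example.com/sig-storage/smbplugin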
I’ve written a dedicated blog post about how to make container images available offline in order to share them or upload them to your own private container registry.
Once the images are available offline, you have to create an imagePullSecret in Kubernetes which has to be referenced accordingly in the manifest files.
To be referenced in csi-smb-controller.yaml and csi-smb-node.yaml.
Snippet:

[...]
imagePullSecrets:
- name: harbor-creds
[...]
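The pull secret itself, here named harbor-creds, could be created like this (a sketch; the registry URL and the credentials are placeholders for your own):

kubectl -n $NAMESPACE create secret docker-registry harbor-creds \
  --docker-server=harbor.example.com \
  --docker-username=<username> \
  --docker-password=<password>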
I’m not going to install from a private registry, and therefore I’ll stick with the public image references.
4. Deployment - CSI-SMB-Controller
The next step is the installation of the csi-smb-controller, which will run as a Kubernetes Deployment on your cluster. Create a new .yaml file, name it e.g. csi-smb-controller.yaml, and paste in the Deployment specification from the project page.
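Alternatively, a sketch of applying the upstream controller manifest directly, assuming the v1.9.0 path (verify it against the release you install) and re-namespacing it; the csi-smb-node DaemonSet from the same deploy directory is applied the same way:

curl -sL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/v1.9.0/deploy/csi-smb-controller.yaml | \
  sed "s/namespace: kube-system/namespace: $NAMESPACE/g" | \
  kubectl apply -f -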
Since all our validations show the desired conditions, we’ll finish the installation with a simple test. In this test, we are going to write from a pod to an SMB share which is accessible through a mounted Persistent Volume.
Test namespace first!
export TESTNS=smb-test
kubectl create ns $TESTNS
1. Create SMB Access Secret
User-level authentication is required to access the share. This will be done by referencing a Kubernetes Secret in the Persistent Volume resource creation.
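A sketch of creating the Secret, assuming the key names the driver documents (username, password, and optionally domain); replace the placeholders with your actual credentials:

kubectl -n $TESTNS create secret generic smb-creds \
  --from-literal username=<username> \
  --from-literal password='<password>'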
IMPORTANT! Securing your Kubernetes Secrets is recommended when using sensitive data like your Active Directory credentials in production. Solutions like e.g. Sealed Secrets by Bitnami or Vault by HashiCorp should be considered.
2. Create PersistentVolume
The deployment, which will be created in the next step, will have the SMB share accessible through a PersistentVolume and the associated PersistentVolumeClaim.
Begin with the PersistentVolume:
Pay attention to the spec for volumeHandle and source!
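The manifest below expects both values as environment variables; for example (the UNC path is a placeholder for your file server and share):

export VOLUMEID=pv-smb
export SOURCE=//fileserver.example.com/share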
kubectl create -f - << EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-smb
  namespace: $TESTNS
spec:
  storageClassName: ""
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - vers=3.0
  csi:
    driver: smb.csi.k8s.io
    readOnly: false
    volumeHandle: $VOLUMEID  # make sure it's a unique id in the cluster
    volumeAttributes:
      source: $SOURCE
    nodeStageSecretRef:
      name: smb-creds
      namespace: smb-test
EOF
Validation:
kubectl -n smb-test get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-smb 50Gi RWX Retain Available 46m
Continue with the PersistentVolumeClaim to bind the volume:
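A sketch of the claim, matching the PV above (an empty storageClassName plus volumeName: pv-smb binds it statically):

kubectl create -f - << EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-smb
  namespace: $TESTNS
spec:
  storageClassName: ""
  volumeName: pv-smb
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
EOF

Validation: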
kubectl -n smb-test get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv-smb 50Gi RWX Retain Bound smb-test/pvc-smb 49m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pvc-smb Bound pv-smb 50Gi RWX 13s
3. Validating Write-Access
In order to validate that we have write access to the provided share on a Windows file server, we’ll create a simple deployment using the nginx image mcr.microsoft.com/oss/nginx/nginx:1.19.5 and write a file named outfile to the mounted share.
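A sketch of such a deployment, modeled on the upstream example (the name smb-writer is an assumption; the pod appends a timestamp to outfile on the mounted share every second):

kubectl -n $TESTNS create -f - << 'EOF'
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smb-writer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smb-writer
  template:
    metadata:
      labels:
        app: smb-writer
    spec:
      containers:
        - name: nginx
          image: mcr.microsoft.com/oss/nginx/nginx:1.19.5
          # writes a timestamp to the share every second
          command:
            - "/bin/bash"
            - "-c"
            - while true; do echo $(date) >> /mnt/smb/outfile; sleep 1; done
          volumeMounts:
            - name: smb
              mountPath: /mnt/smb
      volumes:
        - name: smb
          persistentVolumeClaim:
            claimName: pvc-smb
EOF

If everything works, outfile shows up on the Windows share and grows by one line per second.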