Fortanix has recently rebranded Self-Defending KMS as Data Security Manager™.
And with continued and expanded support for secrets injection across various Kubernetes platforms, the latest version of the Helm chart installer for the Fortanix DSM secrets injector makes it easier to deploy the MutatingAdmissionWebhook. Yay! 🎉
On top of the many Kubernetes platforms Fortanix Data Security Manager supports for secrets injection, Fortanix by extension also supports OpenShift for securely injecting secrets into a pod or an app. With the latest General Availability of OpenShift 4.7 (based on Kubernetes 1.20 - release notes are here), it was time to give it a quick test and document the steps involved in setting up the Fortanix sidecar injector.
We'll jump right in!
"Fortanix promotes consistency as one of their major strengths across a multi-container environment..."
RHEL Setup
It's been a while since I last logged into the Red Hat developer access program. So much has changed since the old days of running RHEL (the last time I provisioned a VM with an RHEL image was on RHEL 6.5!), and if you're signed up as a developer there are now so many more benefits you get access to. I certainly wasn't going to miss the opportunity.
Installing RHEL 8.4 (Ootpa) was a breeze. Download the ISO, make sure you have a subscription under the developer program, burn the USB, install, etc... It was still nice to see references to the good old anaconda installer during the installation phase.
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.4 (Ootpa)
Once the RHEL VM or system is installed, it's time to link the Red Hat developer subscription to the system so we can start installing the required packages:
$ sudo subscription-manager register --username <your username> --password <your password> --auto-attach
We chose to use CodeReady Containers (CRC) to simplify the OpenShift deployment (for testing purposes), and we prepared a simple Intel NUC system with the following setup:
Intel(R) Core(TM) i5-5250U CPU @ 1.60GHz
16GB of RAM
250G Samsung SSD 840
The major requirement for CRC is to have libvirt (KVM) available, as CRC downloads a qcow2 image during the installation process to run a node of the OpenShift cluster as a virtual machine. Simply type the following to get your system updated and the appropriate KVM packages and drivers installed:
$ sudo yum update
$ sudo yum install @virt
$ sudo systemctl enable --now libvirtd
Then validate that KVM is indeed enabled both at the BIOS level as well as at the OS level:
$ cat /proc/cpuinfo | egrep "vmx|svm" | head -n 1
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap intel_pt xsaveopt dtherm ida arat pln pts md_clear flush_l1d
$ lsmod | grep -i kvm
kvm_intel 315392 6
kvm 847872 1 kvm_intel
irqbypass 16384 4 kvm
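The two checks above can be rolled into a small preflight helper. This is just a sketch; the `check_virt` function name is made up for illustration:

```shell
# Sketch: check a cpuinfo-style file for hardware virtualization flags
# (vmx for Intel, svm for AMD). check_virt is a hypothetical helper,
# not part of CRC or RHEL tooling.
check_virt() {
    if grep -Eq 'vmx|svm' "$1"; then
        echo "virtualization: supported"
    else
        echo "virtualization: NOT supported (check BIOS/UEFI settings)"
    fi
}

# On a real host you would simply run:
#   check_virt /proc/cpuinfo
```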
Now we were ready to get OpenShift deployed!
OpenShift Setup
Now that the RHEL installation process is out of the way, we can get started on deploying OpenShift through CRC. First, head to the developer portal and open the Products page. You'll notice a shiny link underneath CodeReady Containers that says "Download now":
Select "Install OpenShift on your laptop", which will allow you to download the CRC package. We selected "Linux" and copied the link to the Download link as well:
Let's jump back to our RHEL setup. Here we're going to first download the CRC command and copy that somewhere accessible by the user:
$ wget <link you copied>
$ tar xvf crc-linux-amd64.tar.xz
$ cd crc-linux-<version>-amd64
$ sudo cp crc /usr/local/bin
Confirm CRC was installed correctly by checking the version:
$ crc version
CodeReady Containers version: 1.28.0+08de64bd
OpenShift version: 4.7.13 (embedded in executable)
Go ahead and run the CRC setup, which downloads the qcow2 image, among other things:
$ crc setup
You might want to grab a coffee at this point; my internet connection wasn't the best, and downloading a 10+ GB qcow2 image took a while!
Once you've brewed a nice cup of coffee, you're probably wondering what the "Download pull secret" button was all about. Red Hat limits who can and can't download the images for OpenShift testing (and rightly so), and this pull secret controls who can pull the images during the start phase. So go ahead and download it and copy it onto the RHEL system as well. We saved it as "pull-secret" in the home directory:
$ ls -la pull-secret
-rw-r--r--. 1 test-user test-user 2771 Jun 26 18:18 pull-secret
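Since a corrupted download would only surface later during `crc start`, a quick sanity check that the file parses as JSON doesn't hurt. A sketch using python3 (which ships with RHEL 8); the `check_pull_secret` helper name is made up:

```shell
# Sketch: verify a pull-secret file is valid JSON before running 'crc start'.
# check_pull_secret is a hypothetical helper; swap in 'jq .' if you prefer.
check_pull_secret() {
    if python3 -m json.tool "$1" >/dev/null 2>&1; then
        echo "$1: looks like valid JSON"
    else
        echo "$1: missing or not valid JSON"
    fi
}

# Usage on the file we saved above:
#   check_pull_secret pull-secret
```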
We can now "start" the KVM virtual machine and have CRC setup all of the prerequisites along the way:
$ crc start -p pull-secret
INFO Checking if running as non-root
INFO Checking if oc binary is cached
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt is enabled
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
.
.
.
This will take some time again. Continue sipping on that coffee ☕️.
Once the KVM virtual machine has started and all the setup is completed, you'll find a success message as such:
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
There would also have been some messages about your credentials, but who's watching? If you ever forget them (like we always do), type the following command to see the credentials again:
$ crc console --credentials
To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p <redacted> https://api.crc.testing:6443'
Now your OpenShift cluster is up and running! Let's give it a quick test to make sure it's all working. First, set up the environment so you can run the "oc" command:
$ crc oc-env
export PATH="/home/test-user/.crc/bin/oc:$PATH"
# Run this command to configure your shell:
# eval $(crc oc-env)
You'll want to add this to your shell profile so you don't have to retype the command every time you log out and log back in:
$ vim ~/.bashrc
export PATH="/home/test-user/.crc/bin/oc:$PATH"
# Run this command to configure your shell:
eval $(crc oc-env)
$ source ~/.bashrc
$ oc login -u kubeadmin -p <redacted> https://api.crc.testing:6443
Login successful.
You have access to 61 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Some basic checks to make sure all is running fine on the newly minted OpenShift cluster:
$ oc cluster-info
Kubernetes control plane is running at https://api.crc.testing:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ oc get nodes
NAME STATUS ROLES AGE VERSION
crc-pkjt4-master-0 Ready master,worker 20d v1.20.0+df9c838
$ oc config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://api.crc.testing:6443
  name: api-crc-testing:6443
contexts:
- context:
    cluster: api-crc-testing:6443
    namespace: default
    user: kubeadmin
  name: crc-admin
- context:
    cluster: api-crc-testing:6443
    namespace: default
    user: developer
  name: crc-developer
- context:
    cluster: api-crc-testing:6443
    namespace: default
    user: kubeadmin/api-crc-testing:6443
  name: default/api-crc-testing:6443/kubeadmin
current-context: fyoo/api-crc-testing:6443/kubeadmin
kind: Config
preferences: {}
users:
- name: developer
  user:
    token: REDACTED
- name: kubeadmin
  user:
    token: REDACTED
- name: kubeadmin/api-crc-testing:6443
  user:
    token: REDACTED
All good. Let's now start to do some secrets injection work!
Helm Chart Setup
Although this is not new for Fortanix, it is certainly new to this blog, so covering how you can easily deploy the Fortanix secrets injector using Helm was in order.
Installing Helm is relatively easy. Just head over to helm.sh and follow the installation guide. Once it's installed, let's make sure it's working correctly:
$ helm version
version.BuildInfo{Version:"v3.5.0+6.el8", GitCommit:"77fb4bd2415712e8bfebe943389c404893ad53ce", GitTreeState:"clean", GoVersion:"go1.14.12"}
The next thing you'll need is the Helm chart deployment package from Fortanix. If you are a Fortanix customer, a simple ticket at support.fortanix.com will suffice; otherwise, speak to your Fortanix representative about how to obtain it. Once you have the package, untar it and a folder will appear. This is where all your Helm chart installation tools are located:
$ ls -la dsm*tar.gz
-rw-r--r--. 1 test-user test-user 3838 Jun 30 19:55 dsm-secrets-injection-chart.tar.gz
$ tar xvfz dsm-secrets-injection-chart.tar.gz
$ ls -lad dsm*
drwxr-xr-x. 5 test-user test-user 4096 Jul 1 02:02 dsm-secrets-injection
-rw-r--r--. 1 test-user test-user 3838 Jun 30 19:55 dsm-secrets-injection-chart.tar.gz
Installing the Fortanix Data Security Manager secrets injection chart is simple. Begin by installing the dependencies required for the Helm install:
$ helm dep up dsm-secrets-injection
Then you can install the Fortanix Data Security Manager secrets injection chart itself (yes, Kubernetes 1.22 is coming, hence the deprecation warnings!):
$ helm install dsm-secrets-injection-chart ./dsm-secrets-injection
W0701 10:34:59.051625 60981 warnings.go:70] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W0701 10:35:00.609692 60981 warnings.go:70] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
NAME: dsm-secrets-injection-chart
LAST DEPLOYED: Thu Jul 1 10:34:58 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
kubernetes Integration with Fortanix SDKMS has been deployed successfully.
Check with Helm that it's all installed properly:
$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
dsm-secrets-injection-chart default 1 2021-07-01 10:34:58.289849006 +1000 AEST deployed fortanix-secrets-injector-1.1 1
And now you'll notice a new project has been created under the OpenShift cluster:
$ oc projects
You have access to the following projects and can switch between them with 'oc project <projectname>':
* default
fortanix
kube-node-lease
kube-public
kube-system
openshift
openshift-apiserver
.
.
.
If we switch to the "fortanix" project, we can see the relevant deployment and pods:
$ oc project fortanix
Now using project "fortanix" on server "https://api.crc.testing:6443".
$ oc status
In project fortanix on server https://api.crc.testing:6443
svc/fortanix-secrets-injector-svc - 10.217.4.34:443 -> 8443
deployment/fortanix-secrets-injector deploys fortanix/k8s-sdkms-secrets-injector:1.0
deployment #1 running for 4 minutes - 1 pod
job/fortanix-cert-setup manages fortanix/k8s-sdkms-cert-setup:1.0
created 4 minutes ago 1/1 completed 0 running
2 infos identified, use 'oc status --suggest' to see details.
The way the injector works is based on labels. If a project carries the label "fortanix-secrets-injector" set to "enabled", any new pods created within that project will automatically be selected for mutation and have Fortanix DSM secrets injected. Let's create a new project, label it, and work through this:
$ oc new-project fyoo
Now using project "fyoo" on server "https://api.crc.testing:6443".
$ oc label namespace fyoo fortanix-secrets-injector=enabled
namespace/fyoo labeled
$ oc describe project fyoo
Name: fyoo
Created: 40 seconds ago
Labels: fortanix-secrets-injector=enabled
Annotations: openshift.io/description=
openshift.io/display-name=
openshift.io/requester=kubeadmin
openshift.io/sa.scc.mcs=s0:c25,c15
openshift.io/sa.scc.supplemental-groups=1000630000/10000
openshift.io/sa.scc.uid-range=1000630000/10000
Display Name: <none>
Description: <none>
Status: Active
Node Selector: <none>
Quota: <none>
Resource limits: <none>
Let's also add some secrets within Fortanix Data Security Manager and set up the App that will access them. Note: Fortanix Data Security Manager supports both API Key and JSON Web Token (JWT) for app authorisation.
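As a rough illustration of the API Key option (an assumption based on common HTTP Basic auth conventions; check the Fortanix documentation for the exact format your DSM version expects), an app credential is presented as an Authorization header. The app ID and secret below are made up:

```shell
# Sketch (assumption): building a Basic Authorization header from an app
# credential. The app ID and secret are placeholders, not real values.
APP_ID="app-00000000-1111-2222-3333-444444444444"
APP_SECRET="not-a-real-secret"
API_KEY=$(printf '%s:%s' "$APP_ID" "$APP_SECRET" | base64 | tr -d '\n')
echo "Authorization: Basic $API_KEY"
```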
I'll configure an App within Fortanix Data Security Manager:
Now we create a secret (or use one that is already available) within Data Security Manager:
Similar to how we would do it on plain Kubernetes, we specify a simple single pod to be deployed within the project fyoo:
$ cat demo.yaml
# This example shows how to use the pod spec to render secret values
apiVersion: v1
kind: Pod
metadata:
  namespace: fyoo
  name: singleton
  annotations:
    secrets-injector.fortanix.com/inject-through-environment: "false"
    secrets-injector.fortanix.com/secrets-volume-path: /opt/myapp/credentials
    # FORMAT: inject-secret-<secret-identifier>: <SDKMS Sobject name>
    secrets-injector.fortanix.com/inject-secret-helloworld: "ocp-test-secret"
spec:
  serviceAccountName: demo-sa
  terminationGracePeriodSeconds: 0
  containers:
  - name: busybox
    imagePullPolicy: IfNotPresent
    image: ubuntu:latest
    command:
    - sh
    - "-c"
    - |
      sh << 'EOF'
      ls -la /opt/myapp/credentials/*
      sleep 10m
      EOF
$ oc project fyoo && oc apply -f demo.yaml
Already on project "fyoo" on server "https://api.crc.testing:6443".
pod/singleton created
And check that the mutation worked and the secrets were injected:
$ oc project fortanix && oc logs -f svc/fortanix-secrets-injector-svc
Now using project "fortanix" on server "https://api.crc.testing:6443".
2021/07/01 00:36:09 Configuration:
2021/07/01 00:36:09 controllerConfigFile: /opt/fortanix/controller-config.yaml
2021/07/01 00:36:09 webServerConfig.port: 8443
2021/07/01 00:36:09 webServerConfig.certFile: /opt/fortanix/certs/cert.pem
2021/07/01 00:36:09 webServerConfig.keyFile: /opt/fortanix/certs/key.pem
2021/07/01 00:36:09 AuthTokenType: api-key
2021/07/01 00:36:09 SecretAgentImage: fortanix/k8s-sdkms-secret-agent:1.0
2021/07/01 00:36:09 Server listening on port 8443
2021/07/01 00:46:49 Mutating 'fyoo/singleton'
$ oc project fyoo && oc logs pod/singleton
Already on project "fyoo" on server "https://api.crc.testing:6443".
-rw-r--r--. 1 root root 8 Jul 1 00:46 /opt/myapp/credentials/helloworld
$ oc project fyoo && oc exec pod/singleton -- cat /opt/myapp/credentials/helloworld
Already on project "fyoo" on server "https://api.crc.testing:6443".
I am it!
Within the Fortanix Data Security Manager, you'll notice the secret was recently accessed:
And there you have it. We've shared (or created) the same secret you used in your Kubernetes environment, now in your newly minted OpenShift environment. Truly multi-container-environment secrets management, with an HSM backend!
We also covered the latest version of the Helm chart installer for the Fortanix DSM secrets injector along the way. There are many other configuration variables you can set, and all of them are documented on Fortanix's support website.
I'm hoping to cover some basic CCM integration with OpenShift at some stage. Feedback is always welcome; let me know your thoughts!