SonarQube Scanner Helm Chart with AzureDisk PersistentVolume Setup
Published Date: 20-JUL-2019
As-is, this SonarQube helm chart will survive a deleted pod, maybe even a helm delete. But if your cluster is rebuilt, or you run helm delete --purge sonarqube, you're going to lose any data and reports that were living with your deployment.
Pre-requirements
- 2 x AzureDisk
- 1 x K8s cluster
- helm
- kubectl
- az-cli
Installation instructions for these tools can be found here.
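A quick sanity check that everything is installed and on your PATH (Helm 2-era flags shown):
# confirm the CLI tools are installed
az --version
kubectl version --client
helm version --client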
Quick K8s Storage Overview
K8s storage took me a bit to wrap my head around, but it all made sense once I understood the following topics:
Understanding PersistentVolumes and PersistentVolumeClaims
This video by "IBM FSS FCI and Counter Fraud Management" (no idea why they're called this?!) is probably the clearest explanation I've seen online.
The abstractions: Volumes, PVs, StorageClasses and PVCs
[Diagram: how Volumes, PVs, StorageClasses and PVCs fit together]
Essentially, in an oversimplified nutshell:
- Volumes are backed by physical disks that an admin provisions
- PersistentVolumes (static) are an abstraction over those physical disks
- StorageClasses (dynamic) are also an abstraction over the physical disks
- A PersistentVolumeClaim is what the Pod actually mounts as a "volume". The PVC determines whether you're mounting a static PV or a dynamically provisioned StorageClass volume.
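To make the abstractions concrete, here's a minimal, hypothetical static PV and matching PVC (all names here are illustrative, not part of this setup):
# hypothetical static PV backed by a pre-provisioned Azure managed disk
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  azureDisk:
    kind: Managed
    diskName: example-disk
    diskURI: /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/example-disk
---
# the claim a Pod mounts; K8s binds it to a matching PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi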
Right, so now that we've got that out of the way, let's build some things.
Provision your AzureDisks
- 1 x sonarqube data disk
- 1 x postgresql database disk
az disk create -g AKS-CLOUDRESOURCES -n sonarqube-data --size-gb 10 --sku Standard_LRS --tags application=sonarqube
az disk create -g AKS-CLOUDRESOURCES -n postgresql-data --size-gb 16 --sku Standard_LRS --tags application=postgresql
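You'll need each disk's full resource ID (the diskURI) for the values files later; az disk show can pull out just that field:
# grab each disk's resource ID to use as diskURI later
az disk show -g AKS-CLOUDRESOURCES -n sonarqube-data --query id -o tsv
az disk show -g AKS-CLOUDRESOURCES -n postgresql-data --query id -o tsv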
Example output for the sonarqube disk:
{
  "creationData": {
    "createOption": "Empty",
    "imageReference": null,
    "sourceResourceId": null,
    "sourceUri": null,
    "storageAccountId": null
  },
  "diskIopsReadWrite": 500,
  "diskMbpsReadWrite": 60,
  "diskSizeGb": 10,
  "encryptionSettings": null,
  "id": "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX/resourceGroups/AKS-CLOUDRESOURCES/providers/Microsoft.Compute/disks/sonarqube-data",
  "location": "australiaeast",
  "managedBy": null,
  "name": "sonarqube-data",
  "osType": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "AKS-CLOUDRESOURCES",
  "sku": {
    "name": "Standard_LRS",
    "tier": "Standard"
  },
  "tags": {
    "application": "sonarqube"
  },
  "timeCreated": "2019-07-23T11:01:16.893039+00:00",
  "type": "Microsoft.Compute/disks",
  "zones": null
}
storageClass.yaml
Create your storage class. Notice the Retain reclaimPolicy, so that volumes aren't deleted when pods and containers disappear. The name is lowercase-with-dashes because Kubernetes object names must be valid DNS subdomains.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: static-managed-volume-retain
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed
  storageaccounttype: Standard_LRS
reclaimPolicy: Retain
Apply it:
kubectl apply -f storageClass.yaml
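Optionally confirm it registered (output shape varies by kubectl version):
# verify the storage class exists
kubectl get storageclass static-managed-volume-retain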
Sonarqube Helm Chart
Grab a copy of the official sonarqube helm chart from GitHub.
Update sonarqube values.yaml
Add these values to the values.yaml file:
persistence:
  enabled: true
  storageClass: "static-managed-volume-retain"
  accessMode: ReadWriteOnce
  size: 5Gi

azureDisk:
  kind: Managed
  diskName: sonarqube-data
  diskURI: /subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX/resourceGroups/AKS-CLOUDRESOURCES/providers/Microsoft.Compute/disks/sonarqube-data
Create pv.yaml for sonarqube chart
Place this under /sonarqube/templates/
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-{{ template "sonarqube.name" . }}-data
  labels:
    app: {{ template "sonarqube.name" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  capacity:
    storage: {{ .Values.persistence.size }}
  storageClassName: {{ .Values.persistence.storageClass | quote }}
  azureDisk:
    kind: {{ .Values.azureDisk.kind | quote }}
    diskName: {{ .Values.azureDisk.diskName | quote }}
    diskURI: {{ .Values.azureDisk.diskURI | quote }}
  accessModes:
    - {{ .Values.persistence.accessMode | quote }}
{{- end }}
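Before deploying, it's worth rendering the chart locally and eyeballing the generated PV (assuming the chart directory is ./sonarqube):
# render templates locally and check the PersistentVolume output
helm template ./sonarqube | grep -B 2 -A 18 'kind: PersistentVolume$'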
PostgreSQL Helm Sub Chart
We are going to use this PostgreSQL helm chart as a subchart of sonarqube.
- Create a directory called /charts/ in the sonarqube root directory.
- Download the postgresql chart into /charts/ (one way to do this is sketched below this list).
- Make the following changes.
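One way to do the download step, assuming the (now deprecated) stable repo is configured and 0.8.3 is the version you're after:
# create charts/ and unpack the postgresql chart into it
mkdir -p sonarqube/charts
helm fetch stable/postgresql --version 0.8.3 --untar --untardir sonarqube/charts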
Add postgresql values.yaml
Add the following values (matching the azureDisks you provisioned earlier).
persistence:
  enabled: true
  storageClass: "static-managed-volume-retain"
  accessMode: ReadWriteOnce
  size: 5Gi

azureDisk:
  kind: Managed
  diskName: postgresql-data
  diskURI: /subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX/resourceGroups/AKS-CLOUDRESOURCES/providers/Microsoft.Compute/disks/postgresql-data
Create pv.yaml for PostgreSQL sub-chart
Place this under /postgresql/templates/ (note the PV name uses .Chart.Name so it doesn't collide with the sonarqube PV from the parent chart):
{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-{{ .Chart.Name }}-data
  labels:
    app: {{ .Chart.Name }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  capacity:
    storage: {{ .Values.persistence.size }}
  storageClassName: {{ .Values.persistence.storageClass | quote }}
  azureDisk:
    kind: {{ .Values.azureDisk.kind | quote }}
    diskName: {{ .Values.azureDisk.diskName | quote }}
    diskURI: {{ .Values.azureDisk.diskURI | quote }}
  accessModes:
    - {{ .Values.persistence.accessMode | quote }}
{{- end }}
Zip PostgreSQL (optional)
After adding the pv.yaml file to the postgresql/templates directory and adding the details to the postgresql values.yaml file, you can tar/gzip it up as postgresql-0.8.3.tgz and make sure it ends up under the sonarqube/charts folder.
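A sketch of that packaging step, assuming the subchart's Chart.yaml says version 0.8.3:
# package the modified subchart; outputs postgresql-0.8.3.tgz in the current directory
cd sonarqube/charts
helm package ./postgresql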
You can also just keep the files unzipped under sonarqube/charts/postgresql/.
Worst case scenario, none of this makes sense and is hard to follow; have a look at my git repo with the files laid out the way they worked for me: https://github.com/ronamosa/sonarqube-static-disks.git
Helm deploy chart
You can now run whatever helm deploy line you usually do to put that badboy into your k8s cluster.
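For example, with Helm 2 syntax and an assumed release name:
helm install --name sonarqube ./sonarqube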
Test the persistence by running helm delete --purge sonarqube or terraform destroy myAksCluster, then re-deploy sonarqube and watch the deployment find and mount the existing PVs into the pods: all your previous sonarqube data will still be there.
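A rough version of that round-trip, assuming the release name and chart path from above:
# blow the release away, reinstall, and confirm the PVs rebind to the same disks
helm delete --purge sonarqube
helm install --name sonarqube ./sonarqube
kubectl get pv,pvc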