Containers are a lightweight form of operating-system-level virtualization with very little overhead, because programs in virtual partitions use the operating system's normal system call interface and do not need to be emulated or run in an intermediate virtual machine. Kubernetes manages a set of containers to create a flexible execution environment for applications and services.
The Bacula Kubernetes plugin will save all the important Kubernetes resources which make up the application or service. This includes the following namespaced objects:
and the following non-namespaced objects:
All namespaced objects that belong to a particular namespace are grouped together for easy browsing and recovery of backup data.
Users and service accounts can be authorized to access the API server. This process goes through authentication, authorization, and admission control. To successfully back up Kubernetes resources, a user or service account with the correct permissions and rights is required, so that it can be successfully authenticated and authorized to access the API server and the resources to be backed up.
For resource configuration backups, the user or service account must be able to read and list the resources. For PVC Data backups, the user or service account must also be able to create and delete Pods, because the plugin needs to create and delete the Bacula Backup Proxy Pod during the backup.
Please see the Kubernetes documentation for more details.
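As an illustration only, a service account with sufficient rights could be defined along the following lines. This is a minimal sketch assuming a cluster-wide read scope; the account name, namespace, and exact rule list are placeholders and should be adapted to the resources you actually back up.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: bacula-backup
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bacula-backup
rules:
  # read and list resource configurations (illustrative; widen or narrow as needed)
  - apiGroups: ["", "apps", "batch"]
    resources: ["*"]
    verbs: ["get", "list"]
  # create and delete the Bacula Backup Proxy Pod for PVC Data backups
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bacula-backup
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: bacula-backup
subjects:
  - kind: ServiceAccount
    name: bacula-backup
    namespace: default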
All Pods in Kubernetes are ephemeral and may be destroyed manually or by controller operations. Pods do not store data locally, because locally stored data would be destroyed as part of the Pod's life cycle management, so data is saved on Persistent Volumes using Persistent Volume Claim objects to manage volume space availability, as illustrated below.
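For reference, a Persistent Volume Claim is an ordinary Kubernetes object; a minimal, hypothetical example (names and sizes are illustrative only) looks like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi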
This brings a new challenge to data backup. Fortunately, most of the challenges found here are similar to those in standard bare metal or virtualized environments. As with bare metal and virtual machine environments, data stored in databases should be protected with dedicated Bacula plugins that take advantage of the database backup routines.
Please refer to the appropriate Bacula plugin whitepapers for more details on database backups.
On the other hand, most non-database applications store data as simple flat files that we can back up as-is, without forcing complicated transactions or data consistency procedures. This use case is handled directly with the Kubernetes plugin using a dedicated Bacula Backup Proxy Pod executed in the cluster.
If the container application is more complex, it is possible to execute commands inside the container to quiesce the application.
A failure during command execution can abort the backup of the container when the run.*.failonerror annotation is set. You can find a detailed description of this feature at (here).
A Bacula Backup Proxy Pod is a service executed automatically by the Kubernetes plugin which manages secure access to Persistent Volume data for the plugin. It is executed on the Kubernetes cluster infrastructure and requires a network connection to the Kubernetes plugin for data exchange on backup and restore operations. No external cluster service like NodePort, LoadBalancer, Ingress or Host Based Networking configuration is required to use this feature.
It is also not required to permanently deploy and run this service on the cluster itself, as it is executed on demand. The Bacula Backup Proxy Pod does not consume any valuable compute resources outside of the backup window. You can even operate your Kubernetes backup solution (a Bacula Enterprise Edition service with the Kubernetes plugin) directly from your on-premise backup infrastructure to back up a public Kubernetes cluster (this requires a simple port forwarding firewall rule), or use public backup infrastructure to back up on-premise Kubernetes cluster(s). Support for these varied architecture modes is built into the Kubernetes plugin. It is designed to be a One-Click solution for Kubernetes backups.
It is possible to backup and restore any PVC data including PVCs not attached to any running Kubernetes Pod.
Support for Kubernetes CSI Snapshot functionality, together with a number of other features, has been added. Starting from this version, you can use CSI Snapshots to acquire a consistent view of the data on a selected Volume. Additionally, you can configure remote command execution on a selected Container of the Pod. Commands can be executed just before or after a Pod backup and just after snapshot creation.
The plugin uses the volume clone API when performing a volume snapshot. Note that not all CSI drivers implement the volume cloning functionality.
The CSI Snapshot Support feature described in (here) comes with configuration of Volume data backup using Kubernetes Pod annotations. This feature allows you to define which volumes (PVC Data) to back up, what commands to execute and where, and how to react to failures, to achieve the best results from the data snapshot functionality. You can select which volumes mounted in the Pod you want to back up, the preferred backup mode for the Pod, and the commands you want to execute.
The supported annotations are:
Pod annotations are an extension of the existing PVC Data backup feature available with the pvcdata=... plugin parameter, as described in (here). This is an independent function which may be used together with the functionality described above, especially since both use the same data archive stream handling with the Bacula Backup Pod.
All you need to do to use this new feature is to configure the selected Pod annotations and make sure that backup of the required Kubernetes namespace is properly configured. No plugin configuration changes are needed. The Pod's volumes will be backed up automatically.
Below you can find some examples of how to configure Bacula annotations in Kubernetes Pods.
The example below uses the simple Linux command sync to synchronize cached writes to persistent storage before the volume snapshot is taken.
apiVersion: v1
kind: Pod
metadata:
  name: app1
  namespace: default
  annotations:
    bacula/backup.mode: snapshot
    bacula/run.before.job.container.command: "*/sync -f /data1; sync -f /data2"
    bacula/run.before.job.failjobonerror: "no"
    bacula/backup.volumes: "pvc1, pvc2"
spec:
  containers:
    - image: ubuntu:latest
      name: test-container
      volumeMounts:
        - name: pvc1
          mountPath: /data1
        - name: pvc2
          mountPath: /data2
  volumes:
    - name: pvc1
      persistentVolumeClaim:
        claimName: pvc1
    - name: pvc2
      persistentVolumeClaim:
        claimName: pvc2
The example below uses PostgreSQL's ability to quiesce its database data files to perform a consistent backup with a snapshot. Note: a complete PostgreSQL backup solution requires more configuration and preparation, which has been omitted from this example for clarity.
The first command (run.before.job.container.command) freezes writes to the database files, and the second (run.after.snapshot.container.command) resumes standard database operation as soon as the PVC snapshot becomes ready.
apiVersion: v1
kind: Pod
metadata:
  name: postgresql13
  namespace: default
  annotations:
    bacula/backup.mode: standard
    bacula/run.before.job.container.command: "*//bin/startpgsqlbackup.sh"
    bacula/run.after.snapshot.container.command: "*//bin/stoppgsqlbackup.sh"
    bacula/run.after.snapshot.failjobonerror: "yes"
    bacula/backup.volumes: "pgsql"
spec:
  containers:
    - image: postgresql:13
      name: postgresql-server
      volumeMounts:
        - name: pgsql
          mountPath: /var/lib/pgsql
  volumes:
    - name: pgsql
      persistentVolumeClaim:
        claimName: pgsql-volume
All flavors of the Run Container Command parameters are executed remotely using the Kubernetes Pod remote execution API. Every command is executed with the standard Linux shell /bin/sh, which requires the container image to have that shell available. Using shell execution adds flexibility to command execution and even allows small scripts to be prepared without additional container image customization.
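Since each command string is passed to /bin/sh inside the container, an annotation value can carry a short pipeline or several commands. The fragment below is a hypothetical sketch; the paths and commands are placeholders following the annotation syntax shown in the earlier examples.

metadata:
  annotations:
    # run two commands under /bin/sh in all containers before the job (illustrative only)
    bacula/run.before.job.container.command: "*/sync -f /data; date > /data/.last-backup"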
To enable the build of the Kubernetes plugin, add the following options to your ./configure command line:
./configure --enable-kubernetes-plugin [... other options]
The Kubernetes plugin is written in Python. To ensure the stability of the Python environment, the default installation process uses Cython to convert the Python program into an executable.
The Bacula File Daemon and the Kubernetes plugin can be installed outside of the Kubernetes cluster on a server which has access to the Kubernetes API, or inside a protected cluster in a Container / Pod. The Kubernetes plugin can be installed on different operating systems and distributions, so the Bacula File Daemon for the correct operating system and platform has to be used.
There is no need, and in some deployments (when Kubernetes is a managed cloud service such as GKE or EKS) it is not even possible, to install the Bacula File Daemon and the Kubernetes plugin on a Kubernetes master server (etcd, control plane).
The --enable-kubernetes-plugin command line option of the ./configure script allows the Kubernetes plugin to be installed with make install.
The following packages are required to build and use the kubernetes plugin:
Note that it should also be possible to use the Python files directly rather than the Cython-built executable.
The Plugin Directory directive of the File Daemon resource in /opt/bacula/etc/bacula-fd.conf must point to where the kubernetes-fd.so plugin file is installed. The standard Bacula plugin directory is /opt/bacula/plugins.
FileDaemon {
  Name = bacula-fd
  Plugin Directory = /opt/bacula/plugins
  ...
}
This image should be installed manually in your local Docker image registry service, which must be available to your Kubernetes cluster as a source for application images.
Installation of the image can be performed with the following example commands:
# cd bacula-13.0.0/scripts/kubernetes-bacula-backup
# make
# docker load -i bacula-backup-<timestamp>.tar
# docker image tag bacula-backup:<timestamp> <registry>/bacula-backup:<timestamp>
# docker push <registry>/bacula-backup:<timestamp>
Where <timestamp> is the image version generated with the above package and <registry> is the location of your Docker images registry service. The exact procedure depends on your Kubernetes cluster deployment, so please verify the above steps before attempting to run the docker commands.
You can use any registry service available to your cluster, public or private, e.g. gcr.io.
Depending on your cluster configuration it may be necessary to set the baculaimage=<name> plugin parameter (described at (here)) to define which repository and container image to use. The default for this parameter is bacula-backup:<timestamp> which may not be correct for your deployment.
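For example, a FileSet pointing the plugin at an image stored in a private registry might look like the following sketch; the registry path and tag are placeholders for your own deployment.

FileSet {
  Name = FS_Kubernetes_custom_image
  Include {
    Plugin = "kubernetes: namespace=test pvcdata \
              baculaimage=registry.example.com/bacula-backup:<timestamp>"
  }
}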
Another example where you will need to modify the Bacula Backup Proxy Pod Image is in the case where your registry requires authentication. Please see the section (here) for more details.
The plugin can back up a number of Kubernetes Resources including Deployments, Pods, Services, and Persistent Volume Claims; check chapter (here) for a complete list.
The backup will create a single (.yaml) file for each Kubernetes Resource that is saved. For the PVC Data backup functionality, the Kubernetes plugin generates a data archive as a single <pvcname>.tar archive file. The resources are organized inside the Bacula catalog to facilitate browsing and restore operations. In the Bacula catalog, Kubernetes resources are represented as follows:
By default, if no filter options are set, all supported Kubernetes Resources will be saved. You may limit which resources are saved using the filtering options described in chapter (here). To see the Kubernetes Resources that may be filtered, a listing mode is available; this mode is described in chapter (here).
The Kubernetes plugin provides two targets for restore operations:
To use this restore method, the where=/ parameter of the Bacula restore command is used. You can select any supported Kubernetes Resource to restore, or batch restore a whole namespace or even multiple namespaces. If you select a single resource to restore, it will be restored as-is, without any dependent objects. In most cases, for simple objects (Config Maps, Secrets, Services, etc.) this is fine and the restore will succeed. On the other hand, compound objects (Pods, Deployments, etc.) won't become ready unless all their dependencies are resolved during the restore. In that case you should make sure that you select all required resources to restore.
In Kubernetes, a successful resource restore doesn't necessarily result in the service successfully coming online. In some cases further monitoring and investigation will be required. For example:
All example cases above must be resolved by the Kubernetes administrator. When all issues are resolved, the resource should automatically come online. If not, it may be necessary to repeat a restore to redeploy the Resource configuration.
The Kubernetes plugin does not wait for a Resource to come online during restore. It checks the Resource creation or replace operation status and reports any errors in the job log. The only exception to this is PVC Data restore, when the Kubernetes plugin will wait for a successful archive data restore. This operation is always executed at the end of the namespace recovery (when pvcdata is restored with other k8s objects) and should wait for proper PVC mount.
To use this mode, the where=/some/path Bacula restore parameter is set to a full path on a server where the Bacula File Daemon and Kubernetes plugin are installed. If the path does not exist, it will be created by the Kubernetes plugin. With this restore mode you can restore any saved Kubernetes Resource including PVC Data archive file to a location on disk.
The plugin is configured using Plugin Parameters defined in the FileSet -> Include section of the Bacula Enterprise Edition Director configuration.
The following Kubernetes plugin parameters affect any type of Job (Backup, Estimate, or Restore).
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
If none of the parameters above are specified, then all available Namespaces and Persistent Volume Configurations will be backed up. However, the plugin does not force a PVC Data archive backup in this case.
This parameter is optional if pluginhost=<IP or name> is defined.
This parameter is optional.
This parameter is required when fdaddress=<IP or name> is not defined. Otherwise it is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
This parameter is optional.
During restore, the Kubernetes plugin will use the same parameters which were set for the backup job and saved in the catalog. During restore, you may change any of the parameters described in chapters (here) and (here). In addition to the options used for backups, the outputformat option can be used during restore. This option specifies the file format when restoring to a local filesystem. You can choose either JSON or YAML; if not defined, the restored files will be saved in YAML format.
This parameter is optional.
In the example below, all Kubernetes Namespaces, Resources and Persistent Volume Configurations will be backed up using the default Kubernetes API access credentials.
FileSet {
  Name = FS_Kubernetes_All
  Include {
    Plugin = "kubernetes:"
  }
}
In this example, we will backup a single Kubernetes Namespace using the Bearer Token authorization method.
FileSet {
  Name = FS_Kubernetes_plugintest
  Include {
    Plugin = "kubernetes: host=http://10.0.0.1/k8s/clusters/test \
              token=kubeconfig-user:cbhssdxq8vv8hrcw8jdxs2 namespace=plugintest"
  }
}
The same example as above, but with a Persistent Volume:
FileSet {
  Name = FS_Kubernetes_mcache1
  Include {
    Plugin = "kubernetes: host=http://10.0.0.1/k8s/clusters/test \
              token=kubeconfig-user:cbhssdxq8vv8hrcw8jdxs2 \
              namespace=plugintest persistentvolume=myvol"
  }
}
This example backs up a single Namespace and all detected PVCs in this Namespace using a defined listening and entry point address and the default connection port:
FileSet {
  Name = FS_Kubernetes_test_namespace
  Include {
    Plugin = "kubernetes: namespace=test pvcdata fdaddress=10.0.10.10"
  }
}
The same example as above, but using different listening and entry point addresses as may be found when the service is behind a firewall using port forwarding features:
FileSet {
  Name = FS_Kubernetes_test_namespace_through_firewall
  Include {
    Plugin = "kubernetes: namespace=test pvcdata=plugin-storage fdaddress=10.0.10.10 \
              pluginhost=backup.example.com pluginport=8080"
  }
}
This example shows PVC Data archive backup with the Bacula File Daemon inside a Kubernetes cluster:
FileSet {
  Name = FS_Kubernetes_incluster
  Include {
    Plugin = "kubernetes: incluster namespace=test pvcdata \
              pluginhost=backup.bacula.svc.cluster.local"
  }
}
The configuration above is designed for use in situations where the Bacula server components are located on-premise and behind a firewall with no external ports allowed in, but must back up data on an external Kubernetes cluster.
To restore Kubernetes resources to a Kubernetes cluster, the administrator should execute the restore command and specify the where parameter as in this example:
* restore where=/
and then set any other required restore plugin parameters for the restore.
* restore where=/
...
$ cd /@kubernetes/namespaces/plugintest/configmaps/
cwd is: /@kubernetes/namespaces/plugintest/configmaps/
$ ls
plugintest-configmap.yaml
$ add *
1 file marked.
$ done
Bootstrap records written to /opt/bacula/working/bacula-devel-dir.restore.1.bsr

The Job will require the following (*=>InChanger):
   Volume(s)                 Storage(s)                SD Device(s)
===========================================================================
   Vol005                    File1                     FileChgr1

Volumes marked with "*" are in the Autochanger.

1 file selected to be restored.

Run Restore job
JobName:         RestoreFiles
Bootstrap:       /opt/bacula/working/bacula-devel-dir.restore.1.bsr
Where:           /
Replace:         Always
FileSet:         Full Set
Backup Client:   bacula-devel-fd
Restore Client:  bacula-devel-fd
Storage:         File1
When:            2019-09-30 12:39:13
Catalog:         MyCatalog
Priority:        10
Plugin Options:  *None*
OK to run? (yes/mod/no): mod
Parameters to modify:
     1: Level
     2: Storage
     3: Job
     4: FileSet
     5: Restore Client
     6: When
     7: Priority
     8: Bootstrap
     9: Where
    10: File Relocation
    11: Replace
    12: JobId
    13: Plugin Options
Select parameter to modify (1-13): 13
Automatically selected : kubernetes: config=/home/radekk/.kube/config
Plugin Restore Options
config:              radekk/.kube/config   (*None*)
host:                *None*                (*None*)
token:               *None*                (*None*)
username:            *None*                (*None*)
password:            *None*                (*None*)
verify_ssl:          *None*                (True)
ssl_ca_cert:         *None*                (*None*)
outputformat:        *None*                (RAW)
Use above plugin configuration? (yes/mod/no): mod
You have the following choices:
     1: config (K8S config file)
     2: host (K8S API server URL/Host)
     3: token (K8S Bearertoken)
     4: verify_ssl (K8S API server cert verification)
     5: ssl_ca_cert (Custom CA Certs file to use)
     6: outputformat (Output format when saving to file (JSON, YAML))
     7: fdaddress (The address for listen to incoming backup pod data)
     8: fdport (The port for opening socket for listen)
     9: pluginhost (The endpoint address for backup pod to connect)
    10: pluginport (The endpoint port to connect)
Select parameter to modify (1-8): 1
Please enter a value for config: /root/.kube/config
Plugin Restore Options
config:              /root/.kube/config    (*None*)
host:                *None*                (*None*)
token:               *None*                (*None*)
verify_ssl:          *None*                (True)
ssl_ca_cert:         *None*                (*None*)
outputformat:        *None*                (RAW)
fdaddress:           *None*                (*FDAddress*)
fdport:              *None*                (9104)
pluginhost:          *None*                (*FDAddress*)
pluginport:          *None*                (9104)
Use above plugin configuration? (yes/mod/no): yes
Job queued. JobId=1084

The plugin does not wait for Kubernetes Resources to become ready and online in the same way as the kubectl or the oc commands.

Restore to a Local Directory

It is possible to restore any Kubernetes Resource(s) to file without loading them into a cluster. To do so, the where restore option should point to the local directory:

* restore where=/tmp/bacula/restores
...
$ cd /@kubernetes/namespaces/
cwd is: /@kubernetes/namespaces/
$ ls
bacula/
cattle-system/
default/
graphite/
ingress/
plugintest/
$ add plugintest
25 files marked.
$ done
Bootstrap records written to /opt/bacula/working/bacula-devel-dir.restore.2.bsr

The Job will require the following (*=>InChanger):
   Volume(s)                 Storage(s)                SD Device(s)
===========================================================================
   Vol005                    File1                     FileChgr1

Volumes marked with "*" are in the Autochanger.

25 files selected to be restored.

Run Restore job
JobName:         RestoreFiles
Bootstrap:       /opt/bacula/working/bacula-devel-dir.restore.2.bsr
Where:           /tmp/bacula/restores
Replace:         Always
FileSet:         Full Set
Backup Client:   bacula-devel-fd
Restore Client:  bacula-devel-fd
Storage:         File1
When:            2019-09-30 12:58:16
Catalog:         MyCatalog
Priority:        10
Plugin Options:  *None*
OK to run? (yes/mod/no): mod
Parameters to modify:
     1: Level
     2: Storage
     3: Job
     4: FileSet
     5: Restore Client
     6: When
     7: Priority
     8: Bootstrap
     9: Where
    10: File Relocation
    11: Replace
    12: JobId
    13: Plugin Options
Select parameter to modify (1-13): 13
Automatically selected : kubernetes: config=/home/radekk/.kube/config debug=1
Plugin Restore Options
config:              *None*                (*None*)
host:                *None*                (*None*)
token:               *None*                (*None*)
verify_ssl:          *None*                (True)
ssl_ca_cert:         *None*                (*None*)
outputformat:        *None*                (RAW)
fdaddress:           *None*                (*FDAddress*)
fdport:              *None*                (9104)
pluginhost:          *None*                (*FDAddress*)
pluginport:          *None*                (9104)
Use above plugin configuration? (yes/mod/no): mod
You have the following choices:
     1: config (K8S config file)
     2: host (K8S API server URL/Host)
     3: token (K8S Bearertoken)
     4: verify_ssl (K8S API server cert verification)
     5: ssl_ca_cert (Custom CA Certs file to use)
     6: outputformat (Output format when saving to file (JSON, YAML))
     7: fdaddress (The address for listen to incoming backup pod data)
     8: fdport (The port for opening socket for listen)
     9: pluginhost (The endpoint address for backup pod to connect)
    10: pluginport (The endpoint port to connect)
Select parameter to modify (1-8): 8
Please enter a value for outputformat: JSON
Plugin Restore Options
config:              *None*                (*None*)
host:                *None*                (*None*)
token:               *None*                (*None*)
verify_ssl:          *None*                (True)
ssl_ca_cert:         *None*                (*None*)
outputformat:        *None*                (RAW)
fdaddress:           *None*                (*FDAddress*)
fdport:              *None*                (9104)
pluginhost:          *None*                (*FDAddress*)
pluginport:          JSON                  (9104)
Use above plugin configuration? (yes/mod/no): yes
Run Restore job
JobName:         RestoreFiles
Bootstrap:       /opt/bacula/working/bacula-devel-dir.restore.2.bsr
Where:           /tmp/bacula/restores
Replace:         Always
FileSet:         Full Set
Backup Client:   bacula-devel-fd
Restore Client:  bacula-devel-fd
Storage:         File1
When:            2019-09-30 12:58:16
Catalog:         MyCatalog
Priority:        10
Plugin Options:  User specified
OK to run? (yes/mod/no):
Job queued. JobId=1085
Output format conversion at restore time renders all data in a human-readable format. You can find an example of such a restore below.
# cat /tmp/bacula/restores/namespaces/plugintest/plugintest.json
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "annotations": {
      "field.cattle.io/projectId": "c-hb9ls:p-bm6cw",
      "lifecycle.cattle.io/create.namespace-auth": "true"
    },
    "cluster_name": null,
    "creation_timestamp": "2019-09-25T16:31:03",
    "deletion_grace_period_seconds": null,
    "deletion_timestamp": null,
    "finalizers": [
      "controller.cattle.io/namespace-auth"
    ],
    "generate_name": null,
    "generation": null,
    "initializers": null,
    "labels": {
      "field.cattle.io/projectId": "p-bm6cw"
    },
    "name": "plugintest",
    "namespace": null,
    "owner_references": null,
    "resource_version": "11622",
    "self_link": "/api/v1/namespaces/plugintest",
    "uid": "dd873930-dfb1-11e9-aad0-022014368e80"
  },
  "spec": {
    "finalizers": [
      "kubernetes"
    ]
  },
  "status": {
    "phase": "Active"
  }
}
The supported output transformations are: JSON and YAML.
This section describes the functionality and requirements related to pvcdata restore.
The procedure to restore a PVC Data archive file to a local directory is basically the same as restoring a Kubernetes Resource configuration file as described in (here). However, output transformation is unavailable and is ignored when restoring PVC Data. The restore of this data creates a tar archive file which you can inspect and use manually.
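For instance, assuming the archive was restored under /tmp/bacula/restores (the paths and the PVC name are hypothetical), it can be inspected with standard tar commands:

# ls /tmp/bacula/restores/namespaces/mysql/pvcdata/
# tar -tvf /tmp/bacula/restores/namespaces/mysql/pvcdata/mysql-mysql.tar
# tar -xvf /tmp/bacula/restores/namespaces/mysql/pvcdata/mysql-mysql.tar -C /tmp/inspect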
This procedure is similar to the one described in PVC Data backup and uses the same Bacula Backup Proxy Pod image. During restore, the plugin uses the same endpoint configuration parameters, so it is not necessary to set them up again. If your endpoint parameters have changed, you can update them using the Bacula plugin restore options modification, as in the example below:
*restore select all done where=/
(...)
OK to run? (yes/mod/no): mod
Parameters to modify:
     1: Level
     2: Storage
     3: Job
     4: FileSet
     5: Restore Client
     6: When
     7: Priority
     8: Bootstrap
     9: Where
    10: File Relocation
    11: Replace
    12: JobId
    13: Plugin Options
Select parameter to modify (1-13): 13
Automatically selected : kubernetes: namespace=plugintest pvcdata pluginhost=example.com
Plugin Restore Options
config:              *None*                (*None*)
host:                *None*                (*None*)
token:               *None*                (*None*)
verify_ssl:          *None*                (True)
ssl_ca_cert:         *None*                (*None*)
outputformat:        *None*                (RAW)
fdaddress:           *None*                (*FDAddress*)
fdport:              *None*                (9104)
pluginhost:          *None*                (*FDAddress*)
pluginport:          *None*                (9104)
Use above plugin configuration? (yes/mod/no): mod
You have the following choices:
     1: config (K8S config file)
     2: host (K8S API server URL/Host)
     3: token (K8S Bearertoken)
     4: verify_ssl (K8S API server cert verification)
     5: ssl_ca_cert (Custom CA Certs file to use)
     6: outputformat (Output format when saving to file (JSON, YAML))
     7: fdaddress (The address for listen to incoming backup pod data)
     8: fdport (The port for opening socket for listen)
     9: pluginhost (The endpoint address for backup pod to connect)
    10: pluginport (The endpoint port to connect)
Select parameter to modify (1-10): 9
Please enter a value for pluginhost: newbackup.example.com
Plugin Restore Options
config:              *None*                (*None*)
host:                *None*                (*None*)
token:               *None*                (*None*)
verify_ssl:          *None*                (True)
ssl_ca_cert:         *None*                (*None*)
outputformat:        *None*                (RAW)
fdaddress:           *None*                (*FDAddress*)
fdport:              *None*                (9104)
pluginhost:          newbackup.example.com (*FDAddress*)
pluginport:          *None*                (9104)
Use above plugin configuration? (yes/mod/no): yes
All data available in the backup archive for a selected Persistent Volume Claim will be restored, and existing data will be overwritten, ignoring the Replace job parameter. Please take note of this behavior, as it may change in the future.
The Bacula Enterprise Kubernetes plugin supports the "plugin listing" feature of Bacula Enterprise Edition 8.x or newer. This mode allows the plugin to display some useful information about available Kubernetes resources such as:
The feature uses the special .ls command with a plugin=<plugin> parameter.
The command requires the following parameters to be set:
The supported values for a path=<path> parameter are:
To display the available object types, use the following command example:
*.ls plugin=kubernetes: client=kubernetes-fd path=/
Connecting to Client kubernetes-fd at localhost:9102
drwxr-x---  1 root  root  2018-09-28 14:32:20  /namespaces
drwxr-x---  1 root  root  2018-09-28 14:32:20  /persistentvolumes
drwxr-x---  1 root  root  2018-09-28 14:32:20  /storageclass
2000 OK estimate files=2 bytes=0
To display the list of all available Kubernetes namespaces, the following command example can be used:
*.ls plugin=kubernetes: client=kubernetes-fd path=namespaces
Connecting to Client kubernetes-fd at localhost:9102
drwxr-xr-x  1 root  root  2019-09-25 16:39:56  /namespaces/default
drwxr-xr-x  1 root  root  2019-09-25 16:39:56  /namespaces/kube-public
drwxr-xr-x  1 root  root  2019-09-25 16:39:56  /namespaces/kube-system
drwxr-xr-x  1 root  root  2019-09-25 16:46:19  /namespaces/cattle-system
drwxr-xr-x  1 root  root  2019-09-27 13:04:01  /namespaces/plugintest
2000 OK estimate files=5 bytes=0
To display the list of available Persistent Volume Claims which could be used for PVC Data archive feature selection, you can use the following example command for the mysql namespace:
*.ls client=bacula-devel-fd plugin="kubernetes:" path=/namespaces/mysql/pvcdata
Connecting to Client kubernetes-fd at localhost:9102
-rw-r-----  1 root  root  2019-10-16 14:29:38  /namespaces/mysql/pvcdata/mysql-mysql
2000 OK estimate files=1 bytes=0
To display the list of all available Persistent Volumes, the following command example can be used:
*.ls plugin=kubernetes: client=kubernetes-fd path=persistentvolumes
Connecting to Client kubernetes-fd at localhost:9102
-rw-r-----  1073741824   2019-09-25  /persistentvolumes/pvc-bfaebd0d-dfad-11e9-a2cc-42010a8e0174
-rw-r-----  1073741824   2019-09-25  /persistentvolumes/pvc-b1a49497-dfad-11e9-a2cc-42010a8e0174
-rw-r-----  1073741824   2019-09-25  /persistentvolumes/pvc-949cb638-dfad-11e9-a2cc-42010a8e0174
-rw-r-----  1073741824   2019-09-25  /persistentvolumes/pvc-9313388c-dfad-11e9-a2cc-42010a8e0174
-rw-r-----  10737418240  2019-09-24  /persistentvolumes/myvolume
2000 OK estimate files=5 bytes=15,032,385,536
The volume listing displays the Volume storage (capacity) size, which does not reflect the actual size of the configuration data saved during backup.
To display the list of all defined Storage Class Resources, the following command example can be used:
*.ls plugin=kubernetes: client=kubernetes-fd path=storageclass
Connecting to Client kubernetes-fd at localhost:9102
-rw-r-----  1024  2020-07-27 13:39:48  /storageclass/local-storage
-rw-r-----  1024  2020-07-23 16:14:13  /storageclass/default-postgresql-1
-rw-r-----  1024  2020-07-24 11:47:02  /storageclass/local-storage-default
-rw-r-----  1024  2020-07-23 12:00:02  /storageclass/standard
2000 OK estimate files=4 bytes=4,096
WARNING: This is an advanced topic related to Kubernetes clusters. You should !!NOT!! try to implement or customize the Bacula Kubernetes plugin behavior unless you REALLY know what you are doing.
You can customize the service parameters used to deploy the Bacula Backup Proxy Pods dedicated to Persistent Volume Claim data backups to suit your needs. The plugin uses the following Pod deployment YAML template to execute the proxy operation Pod on the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: {podname}
  namespace: {namespace}
  labels:
    app: {podname}
spec:
  hostname: {podname}
  {nodenameparam}
  containers:
    - name: {podname}
      resources:
        limits:
          cpu: "1"
          memory: "64Mi"
        requests:
          cpu: "100m"
          memory: "16Mi"
      image: {image}
      env:
        - name: PLUGINMODE
          value: "{mode}"
        - name: PLUGINHOST
          value: "{host}"
        - name: PLUGINPORT
          value: "{port}"
        - name: PLUGINTOKEN
          value: "{token}"
      imagePullPolicy: {imagepullpolicy}
      volumeMounts:
        - name: {podname}-storage
          mountPath: /{mode}
  restartPolicy: Never
  volumes:
    - name: {podname}-storage
      persistentVolumeClaim:
        claimName: {pvcname}
The above template uses a number of predefined placeholders which will be replaced by corresponding values during Pod execution preparation. To customize the proxy Pod deployment you can change or tune the template variables or the template body, as in the sketch below. Below is also a list of all supported variables with short descriptions and requirement conditions.
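For instance, a customized template might raise the proxy Pod's memory limit and pin it to labeled nodes. The fragment below is only a sketch of such a change; it keeps the original placeholders and uses a hypothetical node label.

spec:
  nodeSelector:
    backup: "allowed"
  containers:
    - name: {podname}
      resources:
        limits:
          cpu: "1"
          memory: "256Mi"
        requests:
          cpu: "100m"
          memory: "32Mi"
      image: {image}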
In this section we describe the common problems you may encounter when you first deploy the Bacula Kubernetes plugin or when you are not very familiar with this plugin.
Error: kubernetes: incluster error: Service host/port is not set. This means that the Bacula File Daemon and the Kubernetes plugin are not running in a Pod, or that Kubernetes does not provide the default service access in your installation.
In the latter case you should use a standard Kubernetes access method in a prepared kubeconfig file.
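For example, pointing the plugin at a prepared kubeconfig file could look like the following sketch; the file path and namespace are placeholders for your environment.

FileSet {
  Name = FS_Kubernetes_kubeconfig
  Include {
    Plugin = "kubernetes: config=/opt/bacula/etc/kubeconfig namespace=test"
  }
}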