The password information that might be present in some specific plugin FileSets is now hidden from the status client console output.
New security events are generated when incorrect network connections are detected.
Restricted consoles now have better control over various bconsole commands such as list, restore, etc.
The Bacula Storage Daemon can now encrypt the data at the volume level to enhance security of data at rest. The volumes cannot be read by a system that doesn't have the correct encryption keys.
More information can be found in the Security chapter of the documentation (here).
Bacula allows you to configure your jobs to detect known Malware. The detection can be done at the end of a Backup job and/or with a Verify job. The Malware database can be downloaded from different providers; the default is abuse.ch. If a Backup job detects malware in the backup content, an error is reported and the Job status is adjusted accordingly.
20-Sep 12:26 zog8-dir JobId 9: Start Backup JobId 9, Job=backup.2022-09-20_12.26.30_13
...
20-Sep 12:26 zog8-dir JobId 9: [DI0002] Checking file metadata for Malwares
20-Sep 12:26 zog8-dir JobId 9: Error: [DE0007] Found Malware(s) on JobIds 9
  Build OS:               x86_64-pc-linux-gnu archlinux
  JobId:                  9
  Job:                    backup.2022-09-20_12.26.30_13
  Backup Level:           Full
  ...
  Last Volume Bytes:      659,912,644 (659.9 MB)
  Non-fatal FD errors:    1
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK -- with warnings
The list of the Malware detected in a given Job can be displayed with the list files type=malware command.
*list files type=malware jobid=1
+-------+-----------------------------+---------------+----------+
| jobid | filename                    | description   | source   |
+-------+-----------------------------+---------------+----------+
|     1 | /tmp/regress/build/po/fr.po | Malware found | abuse.ch |
+-------+-----------------------------+---------------+----------+
See the (here) section of this manual for more information.
The TOTP (Time-based One-Time Password) Authentication Plugin is compliant with RFC 6238. Many smartphone apps are available to store the keys and compute the TOTP code.
The standard password and encryption mechanisms will still be used to accept an incoming console connection. Once accepted, the Console will prompt for a second level of authentication with a TOTP secret key generated from a shared token.
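For reference, the TOTP code defined in RFC 6238 (an HMAC-SHA1 over a time counter, with the dynamic truncation of RFC 4226) can be sketched in Python using only the standard library. This is an illustrative sketch, not the plugin's implementation:

```python
# Illustrative TOTP computation per RFC 6238 (HMAC-SHA1, 6 digits,
# 30-second time step). Not Bacula's implementation.
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Compute a TOTP code from a shared secret key."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step                     # RFC 6238 time counter
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at T=59 gives
# "94287082" (8 digits); the 6-digit code is "287082".
print(totp(b"12345678901234567890", for_time=59))  # -> 287082
```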
To enable this feature, you need to install the bacula-totp-dir-plugin package on your Director system, then set the PluginDirectory directive of the Director resource and configure the AuthenticationPlugin directive of a given restricted Console in the Director configuration file.
# in bacula-dir.conf
Director {
  Name = myname-dir
  ...
  Plugin Directory = /opt/bacula/plugins
}
Console {
  Name = "totpconsole"
  Password = "xxx"
  Authentication Plugin = "totp"
  CommandACL = *all*
  JobACL = *all*
  ClientAcl = *all*
  PoolACL = *all*
  DirectoryACL = *all*
  WhereACL = *all*
  FileSetACL = *all*
  StorageACL = *all*
  CatalogACL = *all*
  ScheduleACL = *all*
}
The matching Console configuration in bconsole.conf has no extra settings compared to a standard restricted Console.
# in bconsole.conf
Console {
  Name = totpconsole
  Password = "xxx"       # Same as in bacula-dir.conf/Console
}
Director {
  Name = mydir-dir
  Address = localhost
  Password = notused
}
At the first console connection, if the TLS link is correctly set up (using the shared secret key), the plugin will generate a specific random key for the console and display a QR code in the console output. The user must then scan the QR code with their smartphone using an app such as Aegis (open source) or Google Authenticator. The plugin can also be configured to send the QR code via an external program.
More information can be found in the main manual on (here).
The StorageGroup feature is now compatible with Copy and Migration jobs.
This policy queries each Storage Daemon in the list for its FreeSpace (as a sum of the devices specified in the Storage Daemon configuration) and sorts the list by the FreeSpace returned.
This policy ensures that a job is backed up to the storage where the same job (of the same level, i.e. Full or Incremental) was backed up the longest time ago. The goal is to spread the jobs to improve redundancy.
This policy ensures that a job is backed up to the storage with the most free space and the fewest running jobs. Among the candidate storages, the least used one will be selected. Candidate storages are determined by the StorageGroupPolicyThreshold Job directive: if MaxFreeSpace is the largest amount of free space across all storages in the group, a storage is a candidate if its free space is at least MaxFreeSpace - StorageGroupPolicyThreshold.
For example:
with StorageGroupPolicyThreshold=100GB and the storages' free space as follows:

Storage1 = 500GB free
Storage2 = 200GB free
Storage3 = 400GB free
Storage4 = 500GB free

In this case MaxFreeSpace=500GB, so Storage1, Storage3 and Storage4 are candidates. If 5 jobs are running on Storage1, 2 on Storage4 and 3 on Storage3, then Storage4 will be the selected storage.
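The candidate selection sketched above can be expressed as follows. This is a hypothetical helper for illustration, not Bacula source code; a 100 GB threshold is used so that the 400 GB storage qualifies as a candidate:

```python
# Hypothetical sketch of the FreeSpaceLeastUsed selection: filter storages
# whose free space is within `threshold` of the group maximum, then pick
# the candidate with the fewest running jobs. Not Bacula source code.
def select_storage(storages, threshold):
    """storages: dict mapping name -> (free_bytes, running_jobs)."""
    max_free = max(free for free, _ in storages.values())
    candidates = {name: jobs for name, (free, jobs) in storages.items()
                  if free >= max_free - threshold}
    # Among the candidates, the least used storage wins.
    return min(candidates, key=candidates.get)

GB = 1024 ** 3
storages = {
    "Storage1": (500 * GB, 5),
    "Storage2": (200 * GB, 0),
    "Storage3": (400 * GB, 3),
    "Storage4": (500 * GB, 2),
}
print(select_storage(storages, 100 * GB))  # -> Storage4
```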
Storage Groups can be used as follows (as part of Job and Pool configuration):
Job {
  ...
  Storage = File1, File2, File3
  ...
}
Pool {
  ...
  Storage = File4, File5, File6
  StorageGroupPolicy = FreeSpaceLeastUsed
  StorageGroupPolicyThreshold = 200 MB
  ...
}
New FileDaemon directives let the daemon control which client directories are allowed for backup on a per-Director basis. The directives can be specified as a comma-separated list of directories. The simplest version of the AllowedBackupDirectories and ExcludedBackupDirectories directives may look as follows:
# in bacula-fd.conf
Director {
  Name = myname-dir
  ...
  AllowedBackupDirectories = "/path/to/allowed/directory"
}
Director {
  Name = my-other-dir
  ...
  ExcludedBackupDirectories = "/path/to/excluded/directory"
}
This directive works on the FD side and is fully independent of the include/exclude part of the FileSet defined in the Director's configuration file. Nothing is backed up if none of the files defined in the FileSet is inside the FD's allowed directories.
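Conceptually, the FD-side restriction behaves like a path-prefix test applied to each file selected by the FileSet. The following is a hypothetical sketch of that idea, not Bacula source code:

```python
# Conceptual sketch of an allowed-directory check: a file is only eligible
# for backup if it lies under one of the allowed directories.
# Hypothetical helper, not Bacula source code.
import os.path

def is_allowed(path, allowed_dirs):
    """Return True if `path` is inside one of `allowed_dirs`."""
    path = os.path.normpath(path)
    for base in allowed_dirs:
        base = os.path.normpath(base)
        # commonpath equals `base` exactly when `path` is under `base`.
        if os.path.commonpath([path, base]) == base:
            return True
    return False

allowed = ["/path/to/allowed/directory"]
print(is_allowed("/path/to/allowed/directory/file.txt", allowed))  # True
print(is_allowed("/etc/passwd", allowed))                          # False
```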
# in bacula-fd.conf
Director {
  Name = myname-dir
  ...
  AllowedRestoreDirectories = "/path/to/directory"
}
# in bacula-fd.conf
Director {
  Name = myname-dir
  ...
  AllowedScriptDirectories = "/path/to/directory"
}
When this directive is set, Bacula also checks the programs to be run against a set of disallowed characters. When the following resource:
FileSet {
  Name = "Fileset_1"
  Include {
    File = "\\|/path/to/binary &"
  }
}

is defined inside the Director's config file, Bacula won't back up any file for such a FileSet. This is because of the '&' character, which is not allowed when AllowedScriptDirectories is used on the Client's side. The full list of disallowed characters is:
$ ! ; \ & < > ` ( )
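The check can be sketched as a simple character-set intersection. This is a hypothetical helper for illustration, not Bacula source code:

```python
# Sketch of the disallowed-character check applied to commands when
# AllowedScriptDirectories is set. Hypothetical helper, not Bacula code.
DISALLOWED = set("$!;\\&<>`()")

def command_is_safe(command):
    """Return True if the command contains none of the disallowed characters."""
    return not (DISALLOWED & set(command))

print(command_is_safe("/path/to/binary"))    # True
print(command_is_safe("/path/to/binary &"))  # False: '&' is disallowed
```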
To disable all commands sent by the Director, it is possible to use the following configuration in your FileDaemon:
AllowedScriptDirectories = none
The FileDaemon Antivirus plugin provides integration between the ClamAV Antivirus daemon and Bacula Verify Jobs, allowing post-backup virus detection within Bacula Enterprise.
More information can be found in the Antivirus Plugin user's guide.
This feature can only be used if Bacula is run as a systemd service, because only then, with the proper capabilities set for the daemon, is it allowed to manage Volume file attributes. The `show storage` bconsole command reports whether Bacula has the needed capabilities.
For File-based volumes, Bacula will set the Append Only attribute during the first backup job that uses a new volume. Note that the flag is set when Bacula actually uses the volume for the first time, not when the volume is labeled (either automatically or using the `label` command). This prevents Volumes from losing data by being overwritten.
The Append Only file attribute is cleared when the volume is being relabeled.
Bacula is now also able to set the Immutable file attribute on a file volume which is marked as Full. Note that as of now, the Immutable flag is set only for Full volumes; setting it for other statuses (e.g. 'Used' volumes) may come in future releases.
When a volume is Full and has the Immutable flag set, it cannot be relabeled and reused until the expiration period elapses. This helps to protect volumes from being reused too early, according to the protection period set.
If the Volume's filesystem does not support the Append Only or Immutable flags, a warning message is printed in the job log and Bacula proceeds with the usual backup workflow.
There are three new directives available to set on a per-device basis to control the Volume Protection behavior:
Note: The Append Only and Immutable flags set for volumes cannot be modified using catalog-related commands; e.g. purging a volume won't clear the Immutable flag, which is only cleared when the MinimumVolumeProtection time expires.
If the administrator wants to manually remove the file attributes, chattr must be used (here on an example volume path; -a clears Append Only, -i clears Immutable):

chattr -ai /path/to/volume
In some cases, for example when the status of the Volume is changed by the Director via the update volume command, the Storage Daemon will not be able to change the permissions on the Volume. Some Volumes may then have the Full/Used status without the proper protection.
The update volumeprotect command is designed to determine the list of volumes that are not protected and to connect to the Storage Daemon to update the permissions. It can be executed in an Admin job once a day.
*update
Update choice:
     1: Volume parameters
     2: Pool from resource
     3: Slots from autochanger
     4: Long term statistics
     5: Snapshot parameters
     6: Volume protection attributes on Storage Daemon
Choose catalog item to update (1-6): 6
Found 1 volumes with status Used/Full that must be protected
Connected to Storage "File2" at zog8:8103 with TLS
3000 Marking volume "Vol-0009" as read-only.
or via update volumeprotect
*update volumeprotect
Found 1 volumes with status Used/Full that must be protected
Connected to Storage "File2" at zog8:8103 with TLS
3000 Marking volume "Vol-0009" as read-only.
The command can be scheduled in an Admin job:

Job {
  Name = adm-update-protected
  Type = Admin
  Runscript {
    Console = "update volumeprotect"
    RunsOnClient = no
    RunsWhen = Before
  }
  JobDefs = DefaultJob
}
The ZSTD compression algorithm is now available in the FileSet Options directive Compression. It is possible to configure ZSTD level 1 (zstd1), level 10 (zstd10) and level 19 (zstd19). The default ZSTD compression level is 10.
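For instance, a FileSet using ZSTD level 10 might look as follows (the resource name and path are illustrative):

```
FileSet {
  Name = "ZstdExample"
  Include {
    Options {
      Compression = zstd10
    }
    File = /home
  }
}
```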
The Bacula Kubernetes plugin saves all the important Kubernetes resources which make up the application or service. The plugin can now use the CSI Snapshot method to back up persistent volumes.
A new Amazon Cloud Driver is available for beta testing. In the long term, it will enhance and replace the existing S3 cloud driver. The aws tool provided by Amazon is needed to use this cloud driver. The Amazon cloud driver is available within the bacula-cloud-storage-s3 package and can be enabled by using “Driver = Amazon” in the Cloud Storage Daemon resource.
Bacula version 15.0 uses a new volume format named BB03. The new format adds support for Volume Encryption and replaces the previous 32-bit CRC32 checksum with the faster 64-bit XXH64.
Volumes written in the BB03 format can only be read by Bacula version 15 or later. Old BB02 volumes can still be restored, and Volumes may start with BB02 blocks and continue with BB03 blocks.
It is not possible to use the Volume Encryption=yes directive on a volume that was labeled using the BB02 format. In that case, the volume will be automatically marked as Used.
The status director command can now report the progress of Copy and Migration Jobs.
The status director bconsole command has been updated to limit the number of scheduled jobs listed by default to 50.
The keyword limit can be used to choose how many lines will be printed.
* status director limit=5
127.0.0.1-dir Version: 14.1.5 (26 October 2022) x86_64-pc-linux-gnu archlinux
Daemon started 31-Oct-22 17:27, conf reloaded 31-Oct-2022 17:27:55
 Jobs: run=2, running=0 max=4 mode=1,2010
 Crypto: fips=no crypto=OpenSSL 1.0.2u 20 Dec 2019
 Heap: heap=675,840 smbytes=774,259 max_bytes=2,058,994 bufs=2,470 max_bufs=4,024
 Res: njobs=24 nclients=1 nstores=3 npools=1 ncats=1 nfsets=25 nscheds=3
 Plugin: ldap totp

Scheduled Jobs (5/1440):
Level          Type     Pri  Scheduled          Job Name           Volume
===================================================================================
Full           Backup    10  31-Oct-22 17:29    NightlySave0       TestVolume001
Full           Backup    10  31-Oct-22 17:30    NightlySave1       TestVolume001
Full           Backup    10  31-Oct-22 17:31    NightlySave2       TestVolume001
Full           Backup    10  31-Oct-22 17:32    NightlySave3       TestVolume001
Full           Backup    10  31-Oct-22 17:33    NightlySave4       TestVolume001
5 scheduled Jobs over 1440 are displayed. Use the limit parameter to display more Jobs.
====

Running Jobs:
Console connected using TLS at 31-Oct-22 17:27
No Jobs running.
====

Terminated Jobs:
 JobId  Level     Files      Bytes   Status   Finished        Name
====================================================================
     1  Full         35    5.070 M  OK       31-Oct-22 15:50 NightlySave
     2  Full         35    5.070 M  OK       31-Oct-22 17:27 NightlySave
     3  Full         35    5.070 M  OK       31-Oct-22 17:28 NightlySave
====
APIv2 JSON output support has been added for the status dir scheduled command.
The Catalog now stores an overview of the FileSet definition in the **Content** field. If the FileSet handles files and directories, the Content field is set to **files**. If any plugins are used, each plugin is inserted into the Content field.
*sql
SELECT Content FROM FileSet;
+------------------------------+
| content                      |
+------------------------------+
| bpipe                        |
+------------------------------+
New SQL attributes have been added to the Job table such as isVirtualFull, Encrypted, LastReadStorageId, WriteStorageId, Rate, CompressRatio, StatusInfo, and so on.
New SQL attributes have been added to the Media table such as **UseProtect**, **Protected** and **VolEncrypted**.
Bconsole has a new .search command to search the catalog across clients, jobs and volumes.
* .search text=sometext
volume=
client=sometext-fd
job=
The text to be searched for must be at least 4 characters long.
The Job RunScript feature has been enhanced to control the start of a Job inside the Run Queue. When a Job is starting, the Director checks that the resources needed for the Job to start properly are available; if they are not, the Job stays in the queue, waiting to acquire them.
It is now possible to execute a script that checks any kind of external or custom resource and decides when a Job should start. For example, a script might check the load average of a server before starting a Job to find the best execution time.
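A sketch of how such a control script might be attached to the Job is shown below. The RunsWhen value and the script path are illustrative assumptions; check the Director documentation for the exact syntax:

```
Job {
  Name = "ControlledJob"
  ...
  Runscript {
    RunsWhen = Queued          # assumed keyword for the queue phase, see the documentation
    RunsOnClient = no
    Command = "/opt/bacula/scripts/check-load.sh"   # hypothetical control script
  }
}
```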
More information can be found in the Director documentation (here).
The Director PID file timestamp is now updated after a successful reload command. External tools can use this information to perform certain actions such as to clear the cache.
Job {
  ...
  Runscript {
    RunsOnClient = no
    RunsWhen = AtJobCompletion
    Command = "mail command"
    AbortJobOnError = yes
  }
}

It has been added because the RunsWhen keyword After was not designed to update the job status if the command fails.
The console has been improved to support a JSON output to list catalog objects and various daemon output. The new “.jlist” command is a shortcut of the standard “list” command and will display the results in a JSON table. All options and filters of the “list” command can be used in the “.jlist” command. Only catalog objects are listed with the “list” or “.jlist” commands. Resources such as Schedule, FileSets, etc... are not handled by the “list” command.
See the “help list” bconsole output for more information about the “list” command. The Bacula configuration can be displayed in JSON format with the standard “bdirjson”, “bsdjson”, “bfdjson” and “bbconsjson” tools.
*.jlist jobs
{"type": "jobs", "data":[{"jobid": 1,"job": "CopyJobSave.2021-10-04_18.35.55_03",...
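Since the .jlist output is regular JSON, it can be consumed directly by scripts. A minimal sketch, using a sample string shaped like the truncated example above (the sample is illustrative, not captured output):

```python
# Parse a .jlist-style JSON document and extract the job names.
# The sample string below is illustrative of the output shape only.
import json

sample = ('{"type": "jobs", "data": '
          '[{"jobid": 1, "job": "CopyJobSave.2021-10-04_18.35.55_03"}]}')

doc = json.loads(sample)
jobs = [row["job"] for row in doc["data"]]
print(jobs)  # ['CopyJobSave.2021-10-04_18.35.55_03']
```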
*.api 2 api_opts=j
*.status dir header
{"header":{"name":"127.0.0.1-dir","version":"12.8.2 (09 September 2021)"...