The picture (here) shows two Volumes (Volume0001 and Volume0002) with their parts in the cache. Below the cache, one can see that Volume0002 has been uploaded to, or synchronized with, the Cloud.
With most cloud providers uploads are free of charge, but downloads of data from the cloud are billed. By using the local cache and multiple small parts, Bacula can be configured to substantially reduce download costs.
The Maximum File Size Device directive is valid within the Storage Daemon's cloud device configuration and defines the granularity of a restore chunk. In order to minimize the number of volume parts to download during a restore (in particular when restoring single files), it is useful to set the Maximum File Size to a value smaller than or equal to the configured Maximum Part Size.
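For example, with 10 MB parts, a smaller Maximum File Size keeps the amount of data that must be downloaded for a single-file restore small (a sketch; the values are illustrative):

Maximum Part Size = 10000000   # each cloud part is at most 10 MB
Maximum File Size = 5000000    # restore granularity, smaller than or equal to the part size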
The Cache is treated much like a normal Disk based backup, so in configuring Cloud the administrator should take care to set Archive Device in the Device resource to a directory that would also be suitable for storing backup data. Obviously, unless the truncate/prune cache commands are used, the Archive Device will continue to fill.
The cache retention can be controlled per Volume with the Cache Retention attribute. The default value is 0, meaning that pruning of the cache is disabled.
The Cache Retention value for a volume can be modified with the update command, or configured via the Pool directive Cache Retention for newly created volumes.
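For example, the retention could be set on an existing Volume and for newly created volumes in the Pool (a sketch; the volume name and retention period are illustrative, and the cacheretention keyword may also be reachable through the interactive update volume menu):

* update volume=Volume0001 cacheretention=30days

Pool {
  Name = Default
  ...
  Cache Retention = 30 days
}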
Cloud choice:
1: List Cloud Volumes in the Cloud
2: Upload a Volume to the Cloud
3: Prune the Cloud Cache
4: Truncate a Volume Cache
5: Done
Select action to perform on Cloud (1-5):

The different choices should be rather obvious.
Device Type = Cloud
Cloud = S3Cloud
Device {
  Name = CloudStorage
  Device Type = Cloud
  Cloud = S3Cloud
  Archive Device = /opt/bacula/backups
  Maximum Part Size = 10000000
  Media Type = File
  LabelMedia = yes
  Random Access = Yes;
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
As can be seen above, the Cloud directive in the Device resource contains the name (S3Cloud), which references the Cloud resource that is shown below.
Note also that the Archive Device is specified in the same manner as used for a File device, i.e. by indicating a directory name. However, in place of containing regular files as Volumes, the archive device for the Cloud drivers will contain the local cache, which consists of a directory per Volume, and these directories contain the parts associated with the particular Volume. So with the above Device resource, and the two cached Volumes shown in figure (here) above, the following layout on disk would result:
/opt/bacula/backups
/opt/bacula/backups/Volume0001
/opt/bacula/backups/Volume0001/part.1
/opt/bacula/backups/Volume0001/part.2
/opt/bacula/backups/Volume0001/part.3
/opt/bacula/backups/Volume0001/part.4
/opt/bacula/backups/Volume0002
/opt/bacula/backups/Volume0002/part.1
/opt/bacula/backups/Volume0002/part.2
/opt/bacula/backups/Volume0002/part.3
Default east USA location:
Cloud {
  Name = S3Cloud
  Driver = "S3"
  HostName = "s3.amazonaws.com"
  BucketName = "BaculaVolumes"
  AccessKey = "BZIXAIS39DP9YNER5DFZ"
  SecretKey = "beesheeg7iTe0Gaexee7aedie4aWohfuewohGaa0"
  Protocol = HTTPS
  URIStyle = VirtualHost
  Truncate Cache = No
  Upload = EachPart
  Region = "us-east-1"
  Maximum Upload Bandwidth = 5MB/s
}
For central europe location:
Cloud {
  Name = S3Cloud
  Driver = "S3"
  HostName = "s3-eu-central-1.amazonaws.com"
  BucketName = "BaculaVolumes"
  AccessKey = "BZIXAIS39DP9YNER5DFZ"
  SecretKey = "beesheeg7iTe0Gaexee7aedie4aWohfuewohGaa0"
  Protocol = HTTPS
  UriStyle = VirtualHost
  Truncate Cache = No
  Upload = EachPart
  Region = "eu-central-1"
  Maximum Upload Bandwidth = 4MB/s
}
For Amazon Cloud, refer to http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region to get a complete list of regions and corresponding endpoints, and use them respectively as the Region and HostName directives.
or in the following example for CEPH S3 interface:
Cloud {
  Name = CEPH_S3
  Driver = "S3"
  HostName = ceph.mydomain.lan
  BucketName = "CEPHBucket"
  AccessKey = "xxxXXXxxxx"
  SecretKey = "xxheeg7iTe0Gaexee7aedie4aWohfuewohxx0"
  Protocol = HTTPS
  Upload = EachPart
  UriStyle = Path            # Must be set for CEPH
}
For Azure:
Cloud {
  Name = MyCloud
  Driver = "Azure"
  HostName = "MyCloud"       # not used but needs to be specified
  BucketName = "baculaAzureContainerName"
  AccessKey = "baculaaccess"
  SecretKey = "/Csw1SECRETUmZkfQ=="
  Protocol = HTTPS
  UriStyle = Path
}
The directives of the above Cloud resource for the S3 driver are defined as follows:
This defines which driver to use. At the moment, the only Cloud driver that is implemented is S3. There is also a File driver, which is used mostly for testing.
This directive specifies the bucket name that you wish to use on the Cloud service. This name is normally a unique name that identifies where you want to place your Cloud Volume parts. With Amazon S3, the bucket must be created previously on the Cloud service. With Azure Storage, it is generally referred to as a Container, and it can be created automatically by Bacula when it does not exist. The maximum bucket name size is 255 characters.
The access key is your unique user identifier given to you by your cloud service provider.
The protocol defines the communications protocol to use with the cloud service provider. The two protocols currently supported are: HTTPS and HTTP. The default is HTTPS.
This directive specifies the URI style to use to communicate with the cloud service provider. The two URI styles currently supported are: VirtualHost and Path. The default is VirtualHost.
This directive specifies when Bacula should automatically remove (truncate) the local cache parts. Local cache parts can only be removed if they have been uploaded to the cloud. The currently implemented values are:
This directive specifies when local cache parts will be uploaded to the Cloud. The options are:
The default is unlimited, but by using this directive, you may limit the upload bandwidth used globally by all devices referencing this Cloud resource.
The default is unlimited, but by using this directive, you may limit the download bandwidth used globally by all devices referencing this Cloud resource.
The following Cloud directives are ignored: Bucket Name, Access Key, Secret Key, Protocol, URI Style. The directives Truncate Cache and Upload work on the local cache in the same manner as they do for the S3 driver.
The main difference to note is that the Host Name directive specifies the destination directory for the Cloud Volume files, and this Host Name must be different from the Archive Device name, or there will be a conflict between the local cache (in the Archive Device directory) and the destination Cloud Volumes (in the Host Name directory).
As noted above, the File driver is mostly used for testing purposes, and we do not particularly recommend using it. However, if you have a particularly slow backup device you might want to stage your backup data into an SSD or disk using the local cache feature of the Cloud device, and have your Volumes transferred in the background to a slow File device.
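A minimal File-driver Cloud resource might therefore look like the following (a sketch; the destination path is illustrative and, as noted above, must differ from the Archive Device directory of the Device resource that references this Cloud resource):

Cloud {
  Name = FileCloud
  Driver = "File"
  HostName = "/mnt/slow-disk/cloud-volumes"   # destination directory for the Volume parts
  Upload = EachPart
  Truncate Cache = No
}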
To use the Progressive Virtual Full feature, the Backups To Keep directive is added to a Job resource. The value specified for the directive indicates the number of backup jobs that should not be merged into the Virtual Full. The default is zero, which behaves the same way the prior pvf script worked.
Backups To Keep = 30
where the value (30 in the figure (here)) is the number of backups to retain. When this directive is present during a Virtual Full (it is ignored for any other Job type), Bacula will check whether the latest Full backup has more subsequent backups than the value specified. In the above example, the Job would simply terminate unless there is a Full backup followed by at least 31 backups of either Differential or Incremental level.
Assuming that the latest Full backup is followed by 32 Incremental backups, a Virtual Full will be run that consolidates the Full with the first two Incrementals that were run after the Full backup. The result is a Full backup followed by 30 Incremental ones. The Job Resource in bacula-dir.conf to accomplish this would be:
Job { Name = "VFull" Type = Backup Level = VirtualFull Client = "my-fd" File Set = "FullSet" Accurate = Yes Backups To Keep = 10 }
However, it should be noted that Virtual Full jobs are not compatible with Windows backups using VSS writers (mostly plugins), nor are they compatible with a number of non-Windows Bacula Systems plugins. Please contact the Bacula Systems Support team for more details on Virtual Full compatibility.
Device {
  Name = ...
  Archive Device = /dev/nst0
  Alert Command = "/opt/bacula/scripts/tapealert %l"
  Control Device = /dev/sg1    # must be SCSI ctl for Archive Device
  ...
}
The Control Device directive in the Storage Daemon's configuration was previously used only for the SAN Shared Storage feature. With Bacula version 8.8, it is also used for the TapeAlert command to permit Bacula to detect tape alerts on a specific device (normally only tape devices).
Once the above mentioned two directives (Alert Command and Control Device) are in place in all Device resources, Bacula will check for tape alerts at two points:
At each of the above times, Bacula will call the new tapealert script, which uses the tapeinfo program. The tapeinfo utility is part of the apt sg3-utils and rpm sg3_utils packages. Then, for each tape alert that Bacula finds for that drive, it will emit a Job message that is either INFO, WARNING, or FATAL, depending on the designation in the Tape Alert specification published by the T10 SCSI Storage Interfaces committee (https://www.t10.org). For the specification, please see: http://www.t10.org/ftp/t10/document.02/02-142r0.pdf
As a somewhat extreme example, if tape alerts 3, 5, and 39 are set, you will get the following output in your backup job:
17-Nov 13:37 rufus-sd JobId 1: Error: block.c:287 Write error at 0:17 on device "tape" (/home/kern/bacula/k/regress/working/ach/drive0) Vol=TestVolume001. ERR=Input/output error.
17-Nov 13:37 rufus-sd JobId 1: Fatal error: Alert: Volume="TestVolume001" alert=3: ERR=The operation has stopped because an error has occurred while reading or writing data which the drive cannot correct. The drive had a hard read or write error
17-Nov 13:37 rufus-sd JobId 1: Fatal error: Alert: Volume="TestVolume001" alert=5: ERR=The tape is damaged or the drive is faulty. Call the tape drive supplier helpline. The drive can no longer read data from the tape
17-Nov 13:37 rufus-sd JobId 1: Warning: Disabled Device "tape" (/home/kern/bacula/k/regress/working/ach/drive0) due to tape alert=39.
17-Nov 13:37 rufus-sd JobId 1: Warning: Alert: Volume="TestVolume001" alert=39: ERR=The tape drive may have a fault. Check for availability of diagnostic information and run extended diagnostics if applicable. The drive may have had a failure which may be identified by stored diagnostic information or by running extended diagnostics (eg Send Diagnostic). Check the tape drive users manual for instructions on running extended diagnostic tests and retrieving diagnostic data.
Without the tape alert feature enabled, you would only get the first error message above, which is the error Bacula received. Notice also that in this case alert number 5 is a critical error, which causes two things to happen: first, the tape drive is disabled, and second, the Job is failed.
If you attempt to run another Job using the Device that has been disabled, you will get a message similar to the following:
17-Nov 15:08 rufus-sd JobId 2: Warning: Device "tape" requested by DIR is disabled.
and the Job may be failed if no other usable drive can be found.
Once the problem with the tape drive has been corrected, you can clear the tape alerts and re-enable the device with a Bacula Console command such as the following:
enable Storage=Tape
Note, when you enable the device, the list of prior tape alerts for that drive will be discarded.
Since it is possible to miss tape alerts, Bacula maintains a temporary list of the last 8 alerts, and each time Bacula calls the tapealert script, it will keep up to 10 alert status codes. Normally there will only be one or two alert errors for each call to the tapealert script.
Once a drive has one or more tape alerts, they can be inspected by using the Bacula Console status command as follows:
status storage=Tape

which produces the following output:
Device Vtape is "tape" (/home/kern/bacula/k/regress/working/ach/drive0) mounted with: Volume: TestVolume001 Pool: Default Media type: tape Device is disabled. User command. Total Bytes Read=0 Blocks Read=1 Bytes/block=0 Positioned at File=1 Block=0 Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001" alert=Hard Error Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001" alert=Read Failure Warning Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001" alert=Diagnostics RequiredIf you want to see the long message associated with each of the alerts, simply set the debug level to 10 or more and re-issue the status command:
setdebug storage=Tape level=10 status storage=Tape
...
Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001" flags=0x0 alert=The operation has stopped because an error has occurred while reading or writing data which the drive cannot correct. The drive had a hard read or write error
Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001" flags=0x0 alert=The tape is damaged or the drive is faulty. Call the tape drive supplier helpline. The drive can no longer read data from the tape
Warning Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001" flags=0x1 alert=The tape drive may have a fault. Check for availability of diagnostic information and run extended diagnostics if applicable. The drive may have had a failure which may be identified by stored diagnostic information or by running extended diagnostics (eg Send Diagnostic). Check the tape drive users manual for instructions on running extended diagnostic tests and retrieving diagnostic data.
...

The next time you enable the Device, by either using the Bacula Console or by restarting the Storage Daemon, all the saved alert messages will be discarded.
Tape Alerts numbered 14, 20, 29, 30, 31, 38, and 39 will cause Bacula to disable the drive.
Please note certain tape alerts such as 14 have multiple effects (disable the Volume and disable the drive).
This directive is used to specify a list of directories that can be accessed by a restore session. Without this directive, the console cannot restore any file. Multiple directory names may be specified by separating them with commas, and/or by specifying multiple DirectoryACL directives. For example, the directive may be specified as:
DirectoryACL = /home/bacula/, "/etc/", "/home/test/*"
With the above specification, the console can access the following files:
But not the following files or directories:
If a directory starts with a Windows pattern (ex: c:/), Bacula will automatically ignore the case when checking directories.
This directive is used to specify a list of UIDs/GIDs whose files can be accessed from a restore session. Without this directive, the console cannot restore any file. During the restore session, the Director will compute the restore list and will exclude files and directories that cannot be accessed. Bacula uses the LStat database field to retrieve the st_mode, st_uid and st_gid information for each file and compares them with the UserId ACL elements. If a parent directory doesn't have a proper catalog entry, access to this directory will be automatically granted.
UID/GID names are resolved with the getpwnam() function within the Director. The UID/GID mapping might be different from one system to another.
Windows systems are not compatible with the UserId ACL feature. The use of UserId ACL = *all* is required to restore Windows systems from a restricted Console.
Multiple UID/GID names may be specified by separating them with commas, and/or by specifying multiple UserId ACL directives. For example, the directive may be specified as:
UserIdACL = "bacula", "100", "100:100", ":100", "bacula:bacula"
# ls -l /home
total 28
drwx------ 45 bacula bacula 12288 Oct 24 17:05 bacula
drwx------ 45 test   test   12288 Oct 24 17:05 test
drwx--x--x 45 test2  test2  12288 Oct 24 17:05 test2
drwx------  2 root   root   16384 Aug 30 14:57 backup
-rwxr--r--  1 root   root    1024 Aug 30 14:57 afile
In the example above, if the uid of the user test is 100, the following files will be accessible:
The directory backup will not be accessible.
The Bacula Console restore command can now accept the new jobuser= and jobgroup= parameters to restrict the restore process to a given user account. Files and directories created during the restore session will be restricted.
* restore jobuser=joe jobgroup=users
The Restore Job restriction can be used on Linux and on FreeBSD. If the restore Client OS doesn't support the needed thread-level user impersonation, the restore job will be aborted.
The Bacula Console list commands can now be used safely from a restricted bconsole session. The information displayed will respect the ACL configured for the Console session. For example, if a Console has access to JobA, JobB and JobC, information about JobD will not appear in the list jobs command.
# cat /opt/bacula/etc/bacula-dir.conf
...
Console {
  Name = fd-cons                   # Name of the FD Console
  Password = yyy
  ...
  ClientACL = localhost-fd         # everything allowed
  RestoreClientACL = test-fd       # restore only
  BackupClientACL = production-fd  # backup only
}
The Client ACL directive takes precedence over the Restore Client ACL and the Backup Client ACL settings. In the Console resource above, this means that the bconsole linked to the Console named “fd-cons” will be able to run:
At restore time, jobs for client “localhost-fd”, “test-fd” and “production-fd” will be available.
If *all* is set for Client ACL, backup and restore will be allowed for all clients, despite the use of Restore Client ACL or Backup Client ACL.
A console program such as the new tray-monitor or bconsole can now be configured to connect to a File Daemon. There are many new features available (see the New Tray Monitor (here)), but probably the most important one is the ability for the user to initiate a backup of her own machine. The connection established by the FD to the Director can then be used by the Director for the backup; thus not only can clients (users) initiate backups, but a File Daemon that is NATed (cannot be reached by the Director) can now be backed up without using advanced tunneling techniques.
The flow of information is shown in the picture (here)
# cat /opt/bacula/etc/bacula-dir.conf
...
Console {
  Name = fd-cons          # Name of the FD Console
  Password = yyy

  # These commands are used by the tray-monitor, it is possible to restrict
  CommandACL = run, restore, wait, .status, .jobs, .clients
  CommandACL = .storages, .pools, .filesets, .defaults, .info

  # Adapt for your needs
  jobacl = *all*
  poolacl = *all*
  clientacl = *all*
  storageacl = *all*
  catalogacl = *all*
  filesetacl = *all*
}
# cat /opt/bacula/etc/bacula-fd.conf
...
Console {                 # Console used to connect to the Director
  Name = fd-cons
  DIRPort = 9101
  address = localhost
  Password = "yyy"
}
Director {
  Name = remote-cons      # Name of the tray monitor/bconsole
  Password = "xxx"        # Password of the tray monitor/bconsole
  Remote = yes            # Allow sending commands to the Console defined above
}
cat /opt/bacula/etc/bconsole-remote.conf
....
Director {
  Name = localhost-fd
  address = localhost        # Specify the FD address
  DIRport = 9102             # Specify the FD Port
  Password = "notused"
}
Console {
  Name = remote-cons         # Name used in the auth process
  Password = "xxx"
}
cat ~/.bacula-tray-monitor.conf
Monitor {
  Name = remote-cons
}
Client {
  Name = localhost-fd
  address = localhost        # Specify the FD address
  Port = 9102                # Specify the FD Port
  Password = "xxx"
  Remote = yes
}
A more detailed description with complete examples is available in the Tray monitor chapter of this manual.
A new tray monitor has been added to the 8.6 release, which offers the following features:
The Tray Monitor can periodically scan a specific directory configured as Command Directory and process “*.bcmd” files to find jobs to run.
The format of the “file.bcmd” command file is the following:
<component name>:<run command>
<component name>:<run command>
...

<component name> = string
<run command>    = string (bconsole command line)
For example:
localhost-fd: run job=backup-localhost-fd level=full
localhost-dir: run job=BackupCatalog
A command file should contain at least one command. The component specified in the first part of the command line should be defined in the tray monitor. Once the command file is detected by the tray monitor, a popup is displayed to the user and it is possible for the user to cancel the job.
Command files can be created with tools such as cron or the task scheduler on Windows. It is possible and recommended to verify network connectivity at that time to avoid network errors:
#!/bin/sh
if ping -c 1 director &> /dev/null
then
   echo "my-dir: run job=backup" > /path/to/commands/backup.bcmd
fi
As of Bacula version 8.4.1, it has been possible to have a Verify Job configured with level = Data that will reread all records from a job and optionally check size and checksum of all files.
Starting with 8.6, it is now possible to use the accurate option to check catalog records at the same time. Using a Verify job with level = Data and accurate = yes can replace the level = VolumeToCatalog option.
For more information on how to setup a Verify Data job, see (here).
To run a Verify Job with the accurate option, it is possible to set the option in the Job definition or to use accurate=yes on the command line.
* run job=VerifyData jobid=10 accurate=yes
Bacula version 8.6.0 can generate indexes stored in the catalog to speed up file access during a Single Item Restore session for VMWare or for Exchange. The index can be displayed in bconsole with the list filemedia command.
* list filemedia jobid=1
It is now possible to send the list of all saved files to a Messages resource with the saved message type. It is not recommended to send this flow of information to the Director and/or the Catalog when the client FileSet is large. To avoid side effects, the all keyword doesn't include the saved message type. The saved message type should be explicitly set.
# cat /opt/bacula/etc/bacula-fd.conf
...
Messages {
  Name = Standard
  director = mydirector-dir = all, !terminate, !restored, !saved
  append = /opt/bacula/working/bacula-fd.log = all, saved, restored
}
The 8.6 release adds some new BWeb features, such as:
The new .estimate command can be used to get statistics about a job to run. The command uses the database to estimate the size and the number of files of the next job. On a PostgreSQL database, the command uses regression slope to compute values. On SQLite or MySQL, where these statistical functions are not available, the command uses a simple “average” estimation. The correlation number is given for each value.
*.estimate job=backup
level=I
nbjob=0
corrbytes=0
jobbytes=0
corrfiles=0
jobfiles=0
duration=0
job=backup

*.estimate job=backup level=F
level=F
nbjob=1
corrbytes=0
jobbytes=210937774
corrfiles=0
jobfiles=2545
duration=0
job=backup
A plugin for Microsoft SQL Server (MSSQL) is now available. The plugin uses MSSQL advanced backup and restore features (like Point In Time Recovery, Log backup, Differential backup, ...).
Job {
  Name = MSSQLJob
  Type = Backup
  Client = windows1
  FileSet = MSSQL
  Pool = 1Month
  Storage = File
  Level = Incremental
}
FileSet {
  Name = MSSQL
  Enable VSS = no
  Include {
    Options {
      Signature = MD5
    }
    Plugin = "mssql"
  }
}
FileSet {
  Name = MSSQL2
  Enable VSS = no
  Include {
    Options {
      Signature = MD5
    }
    Plugin = "mssql: database=production"
  }
}
# Verify Job definition
Job {
  Name = VerifyData
  Type = Verify
  Level = Data
  Client = 127.0.0.1-fd   # Use local file daemon
  FileSet = Dummy         # Will be adapted during the job
  Storage = File          # Should be the right one
  Messages = Standard
  Pool = Default
}

# Backup Job definition
Job {
  Name = MyBackupJob
  Type = Backup
  Client = windows1
  FileSet = MyFileSet
  Pool = 1Month
  Storage = File
}

FileSet {
  Name = MyFileSet
  Include {
    Options {
      Verify = s5
      Signature = MD5
    }
    File = /
  }
}
To run the Verify job, it is possible to use the “jobid” parameter of the run command.
*run job=VerifyData jobid=10
Run Verify Job
JobName:     VerifyData
Level:       Data
Client:      127.0.0.1-fd
FileSet:     Dummy
Pool:        Default (From Job resource)
Storage:     File (From Job resource)
Verify Job:  MyBackupJob.2015-11-11_09.41.55_03
Verify List: /opt/bacula/working/working/VerifyVol.bsr
When:        2015-11-11 09:47:38
Priority:    10
OK to run? (yes/mod/no): yes
Job queued. JobId=14
...
11-Nov 09:46 my-dir JobId 13: Bacula Enterprise 8.4.1 (13Nov15):
  Build OS:               x86_64-unknown-linux-gnu archlinux
  JobId:                  14
  Job:                    VerifyData.2015-11-11_09.46.29_03
  FileSet:                MyFileSet
  Verify Level:           Data
  Client:                 127.0.0.1-fd
  Verify JobId:           10
  Verify Job:
  Start time:             11-Nov-2015 09:46:31
  End time:               11-Nov-2015 09:46:32
  Files Expected:         1,116
  Files Examined:         1,116
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  Verify differences
  SD termination status:  OK
  Termination:            Verify Differences
The current Verify Data implementation requires specifying the correct Storage resource in the Verify job. The Storage resource can be changed with the bconsole command line and with the menu.
The list jobs bconsole command now accepts new command line options:
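For example, the output can be filtered and limited (a sketch, assuming the jobstatus, client, order and limit keywords available in recent Bacula versions):

* list jobs jobstatus=T client=localhost-fd order=desc limit=10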
The “@tall” command allows logging all input and output of a console session.
*@tall /tmp/log
*st dir
...
@tall
It is now possible to specify the database name during a restore in the Plugin Option menu. It is still possible to use the “Where” parameter to specify the target database name.
We added a “timeout” option to the PostgreSQL plugin command line that is set to 60s by default. Users may want to change this value when the PostgreSQL cluster is slow to complete SQL queries used during the backup.
It is now possible to explore VMWare virtual machine backup jobs (Full, Incremental and Differential) made with the Bacula Enterprise vSphere plugin to restore individual files and directories. The Single Item Restore feature comes with both a console interface and a BWeb Management Suite specific interface. See the VMWare Single File Restore whitepaper for more information.
It is now possible to explore Microsoft Exchange databases backups made with the Bacula Enterprise VSS plugin to restore individual mailboxes. The Single Item Restore feature comes with both a console interface and a web interface. See the Exchange Single Mailbox Restore whitepaper for more information.
Job {
  Name = Migrate-Job
  Type = Migrate
  ...
  RunAfter = "echo New JobId is %I"
}
disable Job=NightlyBackup Client=Windows-fd
will disable the Job named NightlyBackup as well as the client named Windows-fd.
disable Storage=LTO-changer Drive=1
will disable the first drive in the autochanger named LTO-changer.
Please note that doing a reload command will set any values changed by the enable/disable commands back to the values in the bacula-dir.conf file.
The Client and Schedule resources in the bacula-dir.conf file now permit the directive Enabled = yes or Enabled = no.
Bacula Enterprise 8.2 is now able to handle Snapshots on Linux/Unix systems. Snapshots can be automatically created and used to backup files. It is also possible to manage Snapshots from Bacula's bconsole tool through a unique interface.
The following Snapshot backends are supported with Bacula Enterprise 8.2:
By default, Snapshots are mounted (or directly available) under .snapshots directory on the root filesystem. (On ZFS, the default is .zfs/snapshots).
The Snapshot backend program is called bsnapshot and is available in the bacula-enterprise-snapshot package. In order to use the Snapshot Management feature, the package must be installed on the Client.
The bsnapshot program can be configured using /opt/bacula/etc/bsnapshot.conf file. The following parameters can be adjusted in the configuration file:
# cat /opt/bacula/etc/bsnapshot.conf
trace=/tmp/snap.log
debug=10
lvm_snapshot_size=/dev/ubuntu-vg/root:5%
mountopts=nouuid
mountopts=/dev/ubuntu-vg/root:nouuid,nosuid
When using Snapshots, it is very important to quiesce applications that are running on the system. The simplest way to quiesce an application is to stop it. Usually, taking the Snapshot is very fast, and the downtime is only about a couple of seconds. If downtime is not possible and/or the application provides a way to quiesce, a more advanced script can be used. An example is described on (here).
The use of the Snapshot Engine on the FileDaemon is determined by the new Enable Snapshot FileSet directive. The default is no.
FileSet {
  Name = LinuxHome
  Enable Snapshot = yes
  Include {
    Options = { Compression = LZO }
    File = /home
  }
}
By default, Snapshots are deleted from the Client at the end of the backup. To keep Snapshots on the Client and record them in the Catalog for a determined period, it is possible to use the Snapshot Retention directive in the Client or in the Job resource. The default value is 0 seconds. If, for a given Job, both Client and Job Snapshot Retention directives are set, the Job directive will be used.
Client {
   Name = linux1
   ...
   Snapshot Retention = 5 days
}
To automatically prune Snapshots, it is possible to use the following RunScript command:
Job {
   ...
   Client = linux1
   ...
   RunScript {
      RunsOnClient = no
      Console = "prune snapshot client=%c yes"
      RunsAfter = yes
   }
}
In RunScripts, the AfterSnapshot keyword for the RunsWhen directive will allow a command to be run just after the Snapshot creation.
AfterSnapshot is a synonym for the AfterVSS keyword.
Job { ... RunScript { Command = "/etc/init.d/mysql start" RunsWhen = AfterSnapshot RunsOnClient = yes } RunScript { Command = "/etc/init.d/mysql stop" RunsWhen = Before RunsOnClient = yes } }
Information about Snapshots is displayed in the Job output. The list of all devices used by the Snapshot Engine is displayed, and the Job summary indicates whether Snapshots were available.
JobId 3: Create Snapshot of /home/build JobId 3: Create Snapshot of /home/build/subvol JobId 3: Delete snapshot of /home/build JobId 3: Delete snapshot of /home/build/subvol ... JobId 3: Bacula 127.0.0.1-dir 8.2.0 (23Feb15): Build OS: x86_64-unknown-linux-gnu archlinux JobId: 3 Job: Incremental.2015-02-24_11.20.27_08 Backup Level: Full ... Snapshot/VSS: yes ... Termination: Backup OK
The new snapshot command will display by default the following menu:
*snapshot
Snapshot choice:
1: List snapshots in Catalog
2: List snapshots on Client
3: Prune snapshots
4: Delete snapshot
5: Update snapshot parameters
6: Update catalog with Client snapshots
7: Done
Select action to perform on Snapshot Engine (1-7):
The snapshot command can also have the following parameters:
[client=<client-name> | job=<job-name> | jobid=<jobid>] [delete | list | listclient | prune | sync | update]
It is also possible to use traditional list, llist, update, prune or delete commands on Snapshots.
*llist snapshot jobid=5
 snapshotid: 1
       name: NightlySave.2015-02-24_12.01.00_04
 createdate: 2015-02-24 12:01:03
     client: 127.0.0.1-fd
    fileset: Full Set
      jobid: 5
     volume: /home/.snapshots/NightlySave.2015-02-24_12.01.00_04
     device: /home/btrfs
       type: btrfs
  retention: 30
    comment:
* snapshot listclient
Automatically selected Client: 127.0.0.1-fd
Connecting to Client 127.0.0.1-fd at 127.0.0.1:9102
Snapshot      NightlySave.2015-02-24_12.01.00_04:
  Volume:     /home/.snapshots/NightlySave.2015-02-24_12.01.00_04
  Device:     /home
  CreateDate: 2015-02-24 12:01:03
  Type:       btrfs
  Status:     OK
  Error:
With the Update catalog with Client snapshots option (or snapshot sync), the Director contacts the FileDaemon, lists snapshots of the system and creates catalog records of the Snapshots.
*snapshot sync Automatically selected Client: 127.0.0.1-fd Connecting to Client 127.0.0.1-fd at 127.0.0.1:9102 Snapshot NightlySave.2015-02-24_12.35.47_06: Volume: /home/.snapshots/NightlySave.2015-02-24_12.35.47_06 Device: /home CreateDate: 2015-02-24 12:35:47 Type: btrfs Status: OK Error: Snapshot added in Catalog *llist snapshot snapshotid: 13 name: NightlySave.2015-02-24_12.35.47_06 createdate: 2015-02-24 12:35:47 client: 127.0.0.1-fd fileset: jobid: 0 volume: /home/.snapshots/NightlySave.2015-02-24_12.35.47_06 device: /home type: btrfs retention: 0 comment:
LVM Snapshots are quite primitive compared to ZFS, BTRFS, NetApp and other systems. For example, it is not possible to use Snapshots if the Volume Group (VG) is full. The administrator must keep some free space in the VG to create Snapshots. The amount of free space required depends on the activity of the Logical Volume (LV). bsnapshot uses 10% of the LV by default. This number can be configured per LV in the bsnapshot.conf file (See (here)).
[root@system1]# vgdisplay
  --- Volume group ---
  VG Name               vg_ssd
  ...
  VG Size               29,81 GiB
  ...
  Alloc PE / Size       125 / 500,00 MiB
  Free  PE / Size       7507 / 29,32 GiB   <---- Free Space
  ...
It is also not advisable to leave snapshots on the LVM backend. Having multiple snapshots of the same LV on LVM will slow down the system.
Only Ext4, XFS and EXT3 filesystems are supported with the Snapshot LVM backend (XFS and EXT3 support is available in 8.2.7 and later).
To get low level information about the Snapshot Engine, the debug tag “snapshot” should be used in the setdebug command.
* setdebug level=10 tags=snapshot client
* setdebug level=10 tags=snapshot dir
Copy and Migration Jobs now use the Global Endpoint Deduplication protocol if the destination Device Type is dedup.
A new automatic Deduplication index optimization has been added to the Vacuum procedure.
Part of the Deduplication index can be locked into memory to improve performance.
Users can now configure parameters related to the size of the Deduplication index and the amount of memory that can be used to cache the index.
Backing up and restoring Hyper-V virtual machines is supported with Full level backups using the VSS API. Use of the Global Endpoint Deduplication plugin and the bothsides FileSet option minimizes the amount of data transferred and the amount of storage used.
The KVM plugin provides the following main features:
The KVM plugin is designed to be used when the hypervisor uses local storage for virtual machine disks and libvirtd for virtual machine management.
The Bacula Enterprise Windows File Daemon now automatically supports files and directories that are encrypted on the Windows filesystem.
The Copy, Migration and VirtualFull performance on large jobs with millions of files has been greatly enhanced.
The status storage command now reports the space available on disk devices:
...
Device status:
Device file: "FileStorage" (/bacula/arch1) is not open.
   Available Space=5.762 GB
==
Device file: "FileStorage1" (/bacula/arch2) is not open.
   Available Space=5.862 GB
The Global Endpoint Deduplication solution minimizes network transfers and Bacula Volume size using deduplication technology.
The new Global Endpoint Deduplication Storage daemon directives are:
See below for a FileSet example using the new dedup directive.
# from bacula-sd.conf
Storage {
   Name = my-sd
   Working Directory = /opt/bacula/working
   Pid Directory = /opt/bacula/working
   Plugin Directory = /opt/bacula/plugins
   Dedup Directory = /opt/bacula/dedup
   Dedup Index Directory = /opt/bacula/ssd   # default for Dedup Directory
}

Device {
   Name = DedupDisk
   Archive Device = /opt/bacula/storage
   Media Type = DedupVolume
   Label Media = yes
   Random Access = yes
   Automatic Mount = yes
   Removable Media = no
   Always Open = no
   Device Type = Dedup    # Required
}
The Global Endpoint Deduplication Client cache system can speed up restore jobs by getting blocks from the local client disk instead of requesting them over the network. Note that if blocks are not available locally, the FileDaemon will get blocks from the Storage Daemon. This feature can be enabled with the Dedup Index Directory directive in the FileDaemon resource. When using this option, the File Daemon will have to maintain the cache during Backup jobs.
# from bacula-fd.conf
FileDaemon {
   Name = my-fd
   Working Directory = /opt/bacula/working
   Pid Directory = /opt/bacula/working

   # Optional, Keep indexes on the client for faster restores
   Dedup Index Directory = /opt/bacula/dedupindex
}
It is possible to configure the Global Endpoint Deduplication system in the Director with a FileSet directive called Dedup. Each FileSet Include section can specify a different deduplication behavior depending on your needs.
FileSet {
  Name = FS_BASE

  # Send everything to the Storage Daemon as usual
  # and let the Storage Daemon do the deduplication
  Include {
    Options {
      Dedup = storage
    }
    File = /opt/bacula/etc
  }

  # Send only references and new blocks to the Storage Daemon
  Include {
    Options {
      Dedup = bothsides
    }
    File = /VirtualBox
  }

  # Do not try to dedup my encrypted directory
  Include {
    Options {
      Dedup = none
    }
    File = /encrypted
  }
}
The FileSet Dedup directive accepts the following values:
FileSet { Name = "All Drives" Include { Options { Signature = MD5 } File = / } }
If you have mountpoints, the onefs=no option should be used as it is with Unix systems.
FileSet { Name = "All Drives with mountpoints" Include { Options { Signature = MD5 OneFS = no } File = C:/ # will include mountpoint C:/mounted/... } }
To exclude a mountpoint from a backup when OneFS = no, use the Exclude block as usual:
FileSet { Name = "All Drives with mountpoints" Include { Options { Signature = MD5 OneFS = no } File = C:/ # will include all mounted mountpoints under C:/ # including C:/mounted (see Exclude below) } Exclude { File = C:/mounted # will not include C:/mounted } }
The digest algorithm was set to SHA1 or SHA256 depending on the local OpenSSL options. We advise you not to modify the PkiDigest default setting. Please refer to the OpenSSL documentation to understand the pros and cons regarding these options.
FileDaemon { ... PkiCipher = AES256 }
Version 8.0.5 added the new “M” option letter for the Accurate directive in the FileSet Options block, which allows comparing the modification time and/or creation time against the last backup timestamp. This is in contrast to the existing option letters “m” and/or “c”, which check mtime and ctime against the values stored in the catalog; those values can vary across different machines when using the BaseJob feature.
The advantage of the new “M” option letter for Jobs that refer to BaseJobs is that it will instruct Bacula to backup files based on the last backup time, which is more useful because the mtime/ctime timestamps may differ on various Clients, causing files to be needlessly backed up.
Job {
  Name = USR
  Level = Base
  FileSet = BaseFS
  ...
}

Job {
  Name = Full
  FileSet = FullFS
  Base = USR
  ...
}

FileSet {
  Name = BaseFS
  Include {
    Options {
      Signature = MD5
    }
    File = /usr
  }
}

FileSet {
  Name = FullFS
  Include {
    Options {
      Accurate = Ms    # check for mtime/ctime of last backup timestamp and Size
      Signature = MD5
    }
    File = /home
    File = /usr
  }
}
In Bacula Enterprise version 8.0 and later, we introduced a new .api version to help external tools to parse various Bacula bconsole output.
The api_opts option can use the following arguments:
.api 2 api_opts=t1s43S35
.status dir running
==================================
jobid=10 job=AJob
...
In Bacula Enterprise version 8.0 and later, we introduced a new options parameter for the setdebug bconsole command.
The following arguments to the new option parameter are available to control debug functions.
The following command will enable debugging for the File Daemon, truncate an existing trace file, and turn on timestamps when writing to the trace file.
* setdebug level=10 trace=1 options=ct fd
It is now possible to use a class of debug messages called tags to control the debug output of Bacula daemons.
* setdebug level=10 tags=bvfs,sql,memory
* setdebug level=10 tags=!bvfs

# bacula-dir -t -d 200,bvfs,sql
The tags option is composed of a list of tags. Tags are separated by “,” or “+” or “-” or “!”. To disable a specific tag, use “-” or “!” in front of the tag. Note that more tags are planned for future versions.
Component   Tag         Debug Level   Comment
director    scheduler   100           information about job queue management
director    scheduler   20            information about resources in job queue
director    bvfs        10            information about bvfs
director    sql         15            information about bvfs queries
all         memory      40-60         information about smartalloc
Comm Compression = no
This directive can appear in the following resources:
In many cases, the volume of data transmitted across the communications line can be reduced by a factor of three when this directive is enabled. In the case that the compression is not effective, Bacula turns it off on a record by record basis.
If you are backing up data that is already compressed the comm line compression will not be effective, and you are likely to end up with an average compression ratio that is very small. In this case, Bacula reports None in the Job report.
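For example, to disable communications compression for all Jobs run by a particular client, the directive could be placed in its File Daemon resource (a sketch; the resource name is illustrative):

FileDaemon {
  Name = media-server-fd
  ...
  Comm Compression = no
}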
Read Only = yes
The update_xxx_catalog script will automatically update the Bacula database format, but you should realize that for very large databases (greater than 1GB), it may take some time, and there are several different options for doing the update:
This database format change can provide very significant improvements in the speed of metadata insertion into the database, and in some cases (backup of large email servers) can significantly reduce the size of the database.
The restore options, if implemented in a plugin, will be presented to you during initiation of a restore either by command line or if available by a GUI such as BWeb. For examples of the command line interface and the GUI interface, please see below:
*run restore jobid=11766 Run Restore job JobName: RestoreFiles Bootstrap: /tmp/regress/working/my-dir.restore.1.bsr Where: /tmp/regress/tmp/bacula-restores ... Plugin Options: *None* OK to run? (yes/mod/no): mod Parameters to modify: 1: Level ... 13: Plugin Options Select parameter to modify (1-13): 13 Automatically selected : vsphere: host=squeeze2 Plugin Restore Options datastore: *None* restore_host: *None* new_hostname: *None* Use above plugin configuration? (yes/mod/no): mod You have the following choices: 1: datastore (Datastore to use for restore) 2: restore_host (ESX host to use for restore) 3: new_hostname (Restore host to specified name) Select parameter to modify (1-3): 3 Please enter a value for new_hostname: test Plugin Restore Options datastore: *None* restore_host: *None* new_hostname: test Use above plugin configuration? (yes/mod/no): yes
Or via the BWeb restore interface (see Fig (here))
The alldrives plugin now accepts the snapshot option that generates snapshots for all local Windows drives, but without explicitly adding them to the FileSet. It may be combined with the VSS plugin. For example:
FileSet { ... Include { Plugin = "vss:/@MSSQL/" Plugin = "alldrives: snapshot" # should be placed after vss plugin } }
purge volume action=truncate storage=File pool=Default
The above command is now simplified to be:
truncate storage=File pool=Default
The following features were added during the 6.4.x life cycle.
The Bacula Enterprise SAP Plugin is designed to implement the official SAP Backint interface to simplify the backup and restore procedure through your traditional SAP database tools. See SAP-Backint whitepaper for more information.
By default, the Oracle backup Manager, RMAN, sends all backups to an operating system specific directory on disk. You can also configure RMAN to make backups to media such as tape using the SBT module. Bacula will act as Media Manager, and the data will be transferred directly from RMAN to Bacula. See the Oracle Plugin whitepaper for more information.
The MySQL plugin is designed to simplify the backup and restore of your MySQL database; the backup administrator doesn't need to know about the internals of MySQL backup techniques or how to write complex scripts. This plugin will automatically back up essential information such as configurations and user definitions. The MySQL plugin supports both dump (with support for Incremental backup) and binary backup techniques. See the MySQL Plugin whitepaper for more information.
For example, if you have the following backup Jobs in your catalog:
+-------+---------+-------+----------+----------+-----------+
| JobId | Name    | Level | JobFiles | JobBytes | JobStatus |
+-------+---------+-------+----------+----------+-----------+
|     1 | Vbackup | F     |     1754 | 50118554 | T         |
|     2 | Vbackup | I     |        1 |        4 | T         |
|     3 | Vbackup | I     |        1 |        4 | T         |
|     4 | Vbackup | D     |        2 |        8 | T         |
|     5 | Vbackup | I     |        1 |        6 | T         |
|     6 | Vbackup | I     |       10 |       60 | T         |
|     7 | Vbackup | I     |       11 |       65 | T         |
|     8 | Save    | F     |     1758 | 50118564 | T         |
+-------+---------+-------+----------+----------+-----------+
and you want to consolidate only the first 3 jobs and create a virtual backup equivalent to Job 1 + Job 2 + Job 3, you will use jobid=3 in the run command, then Bacula will select the previous Full backup, the previous Differential (if any) and all subsequent Incremental jobs.
run job=Vbackup jobid=3 level=VirtualFull
If you want to consolidate a specific job list, you must specify the exact list of jobs to merge in the run command line. For example, to consolidate the last Differential and all subsequent Incrementals, you will use jobid=4,5,6,7 or jobid=4-7 on the run command line. Because one of the Jobs in the list is a Differential backup, Bacula will set the new job level to Differential. If the list is composed of only Incremental jobs, the new job will have its level set to Incremental.
run job=Vbackup jobid=4-7 level=VirtualFull
When using this feature, Bacula will automatically discard jobs that are not related to the current Job. For example, specifying jobid=7,8, Bacula will discard JobId 8 because it is not part of the same backup Job.
We do not recommend it, but if you really want to consolidate jobs that have different names (so probably different clients, filesets, etc...), you must use alljobid= keyword instead of jobid=.
run job=Vbackup alljobid=1-3,6-8 level=VirtualFull
* prune expired volume
* prune expired volume pool=FullPool
To schedule this option automatically, it can be added to the Catalog backup job definition.
Job {
  Name = CatalogBackup
  ...
  RunScript {
    Console = "prune expired volume yes"
    RunsWhen = Before
  }
}
The BWeb Management Suite offers a number of Wizards which support the Administrator in his daily work. The wizards provide a step by step set of required actions that graphically guide the Administrator to perform quick and easy creation and modification of configuration files.
BWeb also provides diagnostic tools that enable the Administrator to check that the Catalog Database is well configured, and that BWeb is installed properly.
The new Online help mode displays automatic help text suggestions when the user searches data types.
This project was funded by Bacula Systems and is available with the .
A number of error messages have been enhanced to have more specific data on what went wrong.
If a file changes size while being backed up the old and new size are reported.
The WinBMR 3 version is a major rewrite of the product that supports all x86 Windows versions and technologies, especially UEFI and Secure Boot systems. The WinBMR 3 File Daemon plugin is now part of the plugins included with the Bacula File Daemon package. The rescue CD or USB key is available separately.
The Incremental Accelerator for NetApp Plugin is designed to simplify the backup and restore procedure of your NetApp NAS hosting a huge number of files.
When using the NetApp HFC Plugin, Bacula Enterprise will query the NetApp device to get the list of all files modified since the last backup instead of having to walk through the entire filesystem. Once Bacula has the list of all files to back up, it will use a standard network share (such as NFS or CIFS) to access the files.
This project was funded by Bacula Systems and is available with the .
The PostgreSQL plugin is designed to simplify the backup and restore procedure of your PostgreSQL cluster; the backup administrator doesn't need to learn about the internals of PostgreSQL backup techniques or write complex scripts. The plugin will automatically back up essential information such as the configuration, user definitions and tablespaces. The PostgreSQL plugin supports both dump and PITR backup techniques.
This project was funded by Bacula Systems and is available with the .
The new Director directive Maximum Reload Requests permits configuring the number of reload requests that can be done while jobs are running.
Director {
   Name = localhost-dir
   Maximum Reload Requests = 64
   ...
}
When the Director is behind a NAT in a WAN area, it uses an “external” IP address to connect to the Storage Daemon, while the File Daemon should use an “internal” IP address to contact the Storage Daemon.
The normal way to handle this situation is to use a canonical name such as “storage-server” that will be resolved on the Director side as the WAN address and on the Client side as the LAN address. It is now possible to configure this parameter using the new FDStorageAddress directive in the Storage or Client resource.
Storage {
   Name = storage1
   Address = 65.1.1.1
   FD Storage Address = 10.0.0.1
   SD Port = 9103
   ...
}
Client {
   Name = client1
   Address = 65.1.1.2
   FD Storage Address = 10.0.0.1
   FD Port = 9102
   ...
}
Note that using the Client FDStorageAddress directive will not allow the use of multiple Storage Daemons; all Backup or Restore requests will be sent to the specified FDStorageAddress.
The default value is set to 0 (zero), which means there is no limit on the number of read jobs. Note, limiting the read jobs does not apply to Restore jobs, which are normally started by hand. A reasonable value for this directive is one half the number of drives that the Storage resource has, rounded down. Doing so will leave the same number of drives for writing and will generally avoid overcommitting drives and a deadlock.
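Assuming this refers to the Maximum Concurrent Read Jobs directive of the Director's Storage resource (the directive name is an assumption here; check it against your Bacula version), a Storage resource with four drives might be configured as follows:

Storage {
  Name = Autochanger-1
  ...
  Maximum Concurrent Read Jobs = 2   # half of the four drives, leaving two for writing
}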
The Bacula Enterprise vSphere plugin provides virtual machine bare metal recovery, while backup at the guest level simplifies data protection of critical applications.
The plugin integrates VMware's CBT technology to ensure that only blocks that have changed since the initial Full and/or the last Incremental or Differential backup are sent to the current Incremental or Differential backup stream, giving you more efficient backups and reduced network load.
The Bacula Enterprise Oracle Plugin is designed to simplify the backup and restore procedure of your Oracle Database instance; the backup administrator doesn't need to learn about the internals of Oracle backup techniques or write complex scripts. The Bacula Enterprise Oracle plugin supports both dump and PITR with RMAN backup techniques.
To make Bacula function properly with multiple Autochanger definitions, in the Director's configuration, you must adapt your bacula-dir.conf Storage directives.
Each autochanger that you have defined in an Autochanger resource in the Storage daemon's bacula-sd.conf file must have a corresponding Autochanger resource defined in the Director's bacula-dir.conf file. Normally you will already have a Storage resource that points to the Storage daemon's Autochanger resource. Thus you need only change the name of the Storage resource to Autochanger. In addition, the Autochanger = yes directive is not needed in the Director's Autochanger resource; since the resource name is Autochanger, the Director already knows that it represents an autochanger.
In addition to the above change (Storage to Autochanger), you must modify any additional Storage resources that correspond to devices that are part of the Autochanger device. Instead of the previous Autochanger = yes directive they should be modified to be Autochanger = xxx where you replace the xxx with the name of the Autochanger.
For example, in the bacula-dir.conf file:
Autochanger {                   # New resource
  Name = Changer-1
  Address = cibou.company.com
  SDPort = 9103
  Password = "xxxxxxxxxx"
  Device = LTO-Changer-1
  Media Type = LTO-4
  Maximum Concurrent Jobs = 50
}
Storage {
  Name = Changer-1-Drive0
  Address = cibou.company.com
  SDPort = 9103
  Password = "xxxxxxxxxx"
  Device = LTO4_1_Drive0
  Media Type = LTO-4
  Maximum Concurrent Jobs = 5
  Autochanger = Changer-1       # New directive
}
Storage {
  Name = Changer-1-Drive1
  Address = cibou.company.com
  SDPort = 9103
  Password = "xxxxxxxxxx"
  Device = LTO4_1_Drive1
  Media Type = LTO-4
  Maximum Concurrent Jobs = 5
  Autochanger = Changer-1       # New directive
}
...
Note that Storage resources Changer-1-Drive0 and Changer-1-Drive1 are not required since they make up part of an autochanger, and normally, Jobs refer only to the Autochanger resource. However, by referring to those Storage definitions in a Job, you will use only the indicated drive. This is not normally what you want to do, but it is very useful and often used for reserving a drive for restores. See the Storage daemon example .conf below and the use of AutoSelect = no.
So, in summary, the changes are:
*stop
Select Job:
   1: JobId=3 Job=Incremental.2012-03-26_12.04.26_07
   2: JobId=4 Job=Incremental.2012-03-26_12.04.30_08
   3: JobId=5 Job=Incremental.2012-03-26_12.04.36_09
Choose Job to stop (1-3): 2
2001 Job "Incremental.2012-03-26_12.04.30_08" marked to be stopped.
3000 JobId=4 Job="Incremental.2012-03-26_12.04.30_08" marked to be stopped.
If you enter the restart command in bconsole, you will get the following prompts:
*restart
You have the following choices:
   1: Incomplete
   2: Canceled
   3: Failed
   4: All
Select termination code: (1-4):
If you select the All option, you may see something like:
Select termination code: (1-4): 4 +-------+-------------+---------------------+------+-------+----------+-----------+-----------+ | jobid | name | starttime | type | level | jobfiles | jobbytes | jobstatus | +-------+-------------+---------------------+------+-------+----------+-----------+-----------+ | 1 | Incremental | 2012-03-26 12:15:21 | B | F | 0 | 0 | A | | 2 | Incremental | 2012-03-26 12:18:14 | B | F | 350 | 4,013,397 | I | | 3 | Incremental | 2012-03-26 12:18:30 | B | F | 0 | 0 | A | | 4 | Incremental | 2012-03-26 12:18:38 | B | F | 331 | 3,548,058 | I | +-------+-------------+---------------------+------+-------+----------+-----------+-----------+ Enter the JobId list to select:
Then you may enter one or more JobIds to be restarted, which may take the form of a list of JobIds separated by commas, and/or JobId ranges such as 1-4, which indicates you want to restart JobIds 1 through 4, inclusive.
Restores can be done while Exchange is running, but you must first unmount (dismount in Microsoft terms) any database you wish to restore and explicitly mark them to permit a restore operation (see the white paper for details).
This project was funded by Bacula Systems and is available with the .
Incremental backups for MSSQL are not supported by Microsoft. We strongly recommend that you do not perform Incremental backups with MSSQL, as they will probably produce a situation where restore will no longer work correctly.
We are currently working on producing a white paper that will give more details of backup and restore with MSSQL. One point to note is that during a restore, you will normally not want to restore the master database. You must exclude it from the backup selections that you have made or the restore will fail.
It is possible to restore the master database, but you must first shutdown the MSSQL server, then you must perform special recovery commands. Please see Microsoft documentation on how to restore the master database.
This project was funded by Bacula Systems and is available with the .
The new Job Bandwidth Limitation directive may be added to the File daemon's and/or Director's configuration to limit the bandwidth used by a Job on a Client. It can be set in the File daemon's conf file for all Jobs run in that File daemon, or it can be set for each Job in the Director's conf file. The speed is always specified in bytes per second.
For example:
FileDaemon {
  Name = localhost-fd
  Working Directory = /some/path
  Pid Directory = /some/path
  ...
  Maximum Bandwidth Per Job = 5Mb/s
}
The above example would cause any jobs running with the FileDaemon to not exceed 5 megabytes per second of throughput when sending data to the Storage Daemon. Note, the speed is always specified in bytes per second (not in bits per second), and the case (upper/lower) of the specification characters is ignored (i.e. 1MB/s = 1Mb/s).
You may specify the following speed parameter modifiers: k/s (1,024 bytes per second), kb/s (1,000 bytes per second), m/s (1,048,576 bytes per second), or mb/s (1,000,000 bytes per second).
For example:
Job {
  Name = localhost-data
  FileSet = FS_localhost
  Accurate = yes
  ...
  Maximum Bandwidth = 5Mb/s
  ...
}
The above example would cause Job localhost-data to not exceed 5MB/s of throughput when sending data from the File daemon to the Storage daemon.
A new console command, setbandwidth, permits you to dynamically set the maximum throughput of a running Job, or of future Jobs for a given Client.
* setbandwidth limit=1000 jobid=10
The number of bytes can be expressed using the modifiers mentioned above (k/s, kb/s, m/s or mb/s).
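As a sketch, the limit may also be given with a modifier. The client= form below assumes the command accepts a Client name for limiting future Jobs, as suggested above; the client name is a placeholder:

* setbandwidth limit=2mb/s jobid=10
* setbandwidth limit=500kb/s client=localhost-fd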
This project was funded by Bacula Systems and is available in .
The new delta Plugin is able to compute and apply signature-based file differences. It can be used to back up only the changes in large binary files such as Outlook PST files, VirtualBox/VMware images or database files.
It supports both Incremental and Differential backups and stores its signature database in the File Daemon's working directory. This plugin is available on all platforms, including Windows 32 and 64 bit.
The Accurate option should be turned on in the Job resource:
Job {
  Accurate = yes
  FileSet = DeltaFS
  ...
}

FileSet {
  Name = DeltaFS
  ...
  Include {
    # Specify one file
    Plugin = "delta:/home/eric/.VirtualBox/HardDisks/lenny-i386.vdi"
  }
}

FileSet {
  Name = DeltaFS-Include
  ...
  Include {
    Options {
      Compression = GZIP1
      Signature = MD5
      Plugin = delta
    }
    # Use the Options{} filtering and options
    File = /home/user/.VirtualBox
  }
}
Please contact Bacula Systems support to get Delta Plugin specific documentation.
This project was funded by Bacula Systems and is available with the .
The problem with backing up multiple servers at the same time to the same tape library (or autoloader) is that if both servers access the same tape drive at the same time, you will very likely get data corruption. This is where the Bacula Systems shared tape storage plugin comes into play. The plugin ensures that only one server at a time can connect to each device (tape drive) by using the SPC-3 SCSI reservation protocol. Please contact Bacula Systems support to get SAN Shared Storage Plugin specific documentation.
This project was funded by Bacula Systems and is available with .
The new Shared Storage Director's directive is a Bacula Enterprise feature that allows you to share volumes between different Storage resources. This directive should be used only if all Media Types are correctly set across all Devices.
The Shared Storage directive should be used when using the SAN Shared Storage plugin, or when the Director's Storage resources point directly to the Devices of an Autochanger.
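As a minimal sketch only, assuming the directive is placed in the Director's Storage resource (all names, addresses and device details below are placeholders):

Storage {
  Name = SharedTapeLibrary
  Address = san-sd.example.com
  Password = "xxx"
  Device = Drive-1
  Media Type = LTO-5
  Autochanger = yes
  Shared Storage = yes
}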
When sharing volumes between different Storage resources, you will also need to run the reset-storageid script before using the update slots command. This script can be scheduled once a day in an Admin Job (a sketch of such a Job follows the commands below).
$ /opt/bacula/scripts/reset-storageid MediaType StorageName
$ bconsole
* update slots storage=StorageName drive=0
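A minimal sketch of such an Admin Job; all resource names are placeholders that would need to match your configuration, and MediaType/StorageName are the same placeholders as in the commands above:

Job {
  Name = "reset-storageid"
  Type = Admin
  Client = localhost-fd
  FileSet = "Full Set"
  Storage = SharedTapeLibrary
  Pool = Default
  Messages = Standard
  Schedule = "DailyCycle"
  RunScript {
    RunsWhen = Before
    RunsOnClient = No
    Command = "/opt/bacula/scripts/reset-storageid MediaType StorageName"
  }
}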
Please contact Bacula Systems support to get help on this advanced configuration.
This project was funded by Bacula Systems and is available with .
The reset-storageid procedure is no longer required when using the appropriate Autochanger configuration in the Director configuration side.
The previous NDMP Plugin 4.0 fully supported only NetApp hardware; the new NDMP Plugin should now be able to support all NAS vendors with the volume_format plugin command option.
On some NDMP devices such as Celerra or Blueray, the administrator can use arbitrary volume structure names, for example:
/dev/volume_home
/rootvolume/volume_tmp
/VG/volume_var
The NDMP plugin needs to be aware of this structure in order to detect whether the administrator wants to restore into a new volume (where=/dev/vol_tmp) or into a subdirectory of the targeted volume (where=/tmp).
FileSet {
  Name = NDMPFS
  ...
  Include {
    Plugin = "ndmp:host=nasbox user=root pass=root file=/dev/vol1 volume_format=/dev/"
  }
}
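As a purely hypothetical illustration of the two restore targets described above (the rest of the restore dialogue is omitted), the choice is driven by the where argument of the restore command. To restore into a new volume:

* restore where=/dev/vol_tmp

To restore into a subdirectory of the targeted volume:

* restore where=/tmp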
Please contact Bacula Systems support to get NDMP Plugin specific documentation.
This project was funded by Bacula Systems and is available with the
When the Accurate mode is turned on, you can decide to always back up a file by using the new A Accurate option in your FileSet. For example:
Job {
  Name = ...
  FileSet = FS_Example
  Accurate = yes
  ...
}

FileSet {
  Name = FS_Example
  Include {
    Options {
      Accurate = A
    }
    File = /file
    File = /file2
  }
  ...
}
This project was funded by Bacula Systems based on an idea of James Harper and is available with the .
You are now able to specify the Accurate mode on the run command and in the Schedule resource.
* run accurate=yes job=Test
Schedule {
  Name = WeeklyCycle
  Run = Full 1st sun at 23:05
  Run = Differential accurate=yes 2nd-5th sun at 23:05
  Run = Incremental accurate=no mon-sat at 23:05
}
It can allow you to save memory and CPU resources on the catalog server in some cases.
These advanced tuning options are available with the .
RunScript commands can use edit codes such as %b (Job Bytes), %F (Job Files), %h (Client address) and %D (Director's name), for example:

RunAfterJob = "/bin/echo Job=%j JobBytes=%b JobFiles=%F ClientAddress=%h Dir=%D"
LZO compression was added in the Unix File Daemon. From the user's point of view, it works like the GZIP compression (just replace compression=GZIP with compression=LZO).
For example:
Include {
  Options {
    compression=LZO
  }
  File = /home
  File = /data
}
LZO provides much faster compression and decompression speed but lower compression ratio than GZIP. It is a good option when you backup to disk. For tape, the built-in compression may be a better option.
LZO is a good alternative for GZIP1 when you don't want to slow down your backup. On a modern CPU it should be able to run almost as fast as:
Note that Bacula uses only one compression level, LZO1X-1.
The code for this feature was contributed by Laurent Papier.
Since the old integrated Windows tray monitor doesn't work with recent Windows versions, we have written a new Qt Tray Monitor that is available for both Linux and Windows. In addition to all the previous features, this new version allows you to run Backups from the tray monitor menu.
To be able to run a job from the tray monitor, you need to allow specific commands in the Director monitor console:
Console {
  Name = win2003-mon
  Password = "xxx"
  CommandACL = status, .clients, .jobs, .pools, .storage, .filesets, .messages, run
  ClientACL = *all*    # you can restrict to a specific host
  CatalogACL = *all*
  JobACL = *all*
  StorageACL = *all*
  ScheduleACL = *all*
  PoolACL = *all*
  FileSetACL = *all*
  WhereACL = *all*
}
This project was funded by Bacula Systems and is available with and .
The new Purge Migration Job directive may be added to the Migration Job definition in the Director's configuration file. When it is enabled, the Job that was migrated during a migration will be purged at the end of the migration job.
For example:
Job { Name = "migrate-job" Type = Migrate Level = Full Client = localhost-fd FileSet = "Full Set" Messages = Standard Storage = DiskChanger Pool = Default Selection Type = Job Selection Pattern = ".*Save" ... Purge Migration Job = yes }
This project was submitted by Dunlap Blake; testing and documentation was funded by Bacula Systems.
We rewrote the job pruning algorithm in this version. Previously, some users reported that the pruning process at the end of jobs was very long. That should no longer be the case. Now, Bacula will not automatically prune a Job if that particular Job is needed to restore data. Example:
JobId: 1  Level: Full
JobId: 2  Level: Incremental
JobId: 3  Level: Incremental
JobId: 4  Level: Differential
.. Other incrementals up to now
In this example, if the Job Retention defined in the Pool or in the Client resource means that JobIds 1, 2, 3 and 4 can be pruned, Bacula will detect that JobIds 1 and 4 are essential to restore data to the current state and will prune only JobIds 2 and 3.
Important: this change affects only the automatic pruning step after a Job and the prune jobs Bacula Console command. If a Volume expires after the Volume Retention period, important Jobs can still be pruned.
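For reference, the manual pruning command mentioned above can be run from bconsole; the client name below is a placeholder:

* prune jobs client=localhost-fd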
It is now possible to run a Verify Job against a specific Job by giving its JobId. This feature can be used with the VolumeToCatalog, DiskToCatalog and Catalog levels.
To verify a given Job, just specify its JobId as an argument when starting the Verify job.
*run job=VerifyVolume jobid=1 level=VolumeToCatalog
Run Verify job
JobName:     VerifyVolume
Level:       VolumeToCatalog
Client:      127.0.0.1-fd
FileSet:     Full Set
Pool:        Default (From Job resource)
Storage:     File (From Job resource)
Verify Job:  VerifyVol.2010-09-08_14.17.17_03
Verify List: /tmp/regress/working/VerifyVol.bsr
When:        2010-09-08 14:17:31
Priority:    10
OK to run? (yes/mod/no):
This project was funded by Bacula Systems and is available with Bacula Enterprise Edition and Community Edition.