RunAfterJob = "/bin/echo Director=%D
There are additional features (plugins) available in the Enterprise version that are described in another chapter. A subscription to Bacula Systems is required for the Enterprise version.
LZO compression has been added to the File daemon. From the user's point of view, it works like GZIP compression (just replace compression=GZIP with compression=LZO).
For example:
Include {
  Options { compression=LZO }
  File = /home
  File = /data
}
LZO provides much faster compression and decompression but a lower compression ratio than GZIP. It is a good option when you back up to disk. For tape, hardware compression is almost always the better option.
LZO is a good alternative to GZIP1 when you don't want to slow down your backup: on a modern CPU it should be able to compress almost as fast as your client can read data from disk.
Note, Bacula uses compression level LZO1X-1.
The code for this feature was contributed by Laurent Papier.
Since the old integrated Windows tray monitor doesn't work with recent Windows versions, we have written a new Qt Tray Monitor that is available for both Linux and Windows. In addition to all the previous features, this new version allows you to run Backups from the tray monitor menu.
To be able to run a job from the tray monitor, you need to allow specific commands in the Director monitor console:
Console {
    Name = win2003-mon
    Password = "xxx"
    CommandACL = status, .clients, .jobs, .pools, .storage, .filesets, .messages, run
    ClientACL = *all*    # you can restrict to a specific host
    CatalogACL = *all*
    JobACL = *all*
    StorageACL = *all*
    ScheduleACL = *all*
    PoolACL = *all*
    FileSetACL = *all*
    WhereACL = *all*
}
This project was funded by Bacula Systems and is available with both the Bacula Enterprise Edition and the Community Edition.
The new Purge Migration Job directive may be added to the Migration Job definition in the Director's configuration file. When it is enabled, the Job that was migrated will be purged at the end of the Migration Job.
For example:
Job { Name = "migrate-job" Type = Migrate Level = Full Client = localhost-fd FileSet = "Full Set" Messages = Standard Storage = DiskChanger Pool = Default Selection Type = Job Selection Pattern = ".*Save" ... Purge Migration Job = yes }
This project was submitted by Dunlap Blake; testing and documentation was funded by Bacula Systems.
Bat now has a bRestore panel that uses Bvfs to display files and directories.
The Bvfs module works correctly with BaseJobs, Copy and Migration jobs.
This project was funded by Bacula Systems.
Bvfs allows you to query the catalog against any combination of jobs. You can combine all Jobs and all FileSets for a Client in a single session.
To get all JobIds needed to restore a particular job, you can use the .bvfs_get_jobids command.
.bvfs_get_jobids jobid=num [all]
.bvfs_get_jobids jobid=10
1,2,5,10
.bvfs_get_jobids jobid=10 all
1,2,3,5,10
In this example, a normal restore will need to use JobIds 1,2,5,10 to compute a complete restore of the system.
With the all option, the Director will use all defined FileSets for this client.
The .bvfs_update command computes the directory cache for the jobs specified as arguments, or for all jobs if unspecified.
.bvfs_update [jobid=numlist]
Example:
.bvfs_update jobid=1,2,3
You can run the cache update process in a RunScript after the catalog backup.
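For example, a RunScript like the following could refresh the Bvfs cache right after the catalog backup. This is a sketch modeled on the Console RunScript option shown later in this chapter; the Job name is illustrative:

Job {
  Name = CatalogBackup
  ...
  RunScript {
    RunsWhen = After
    RunsOnClient = No
    # run the dot command in the Director console once the backup finishes
    Console = ".bvfs_update"
  }
}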
Bvfs allows you to find all versions of a specific file for a given Client with the .bvfs_versions command. To avoid problems with encoding, this function uses only PathId and FilenameId. The jobid argument is mandatory but unused.
.bvfs_versions client=filedaemon pathid=num filenameid=num jobid=1
PathId FilenameId FileId JobId LStat Md5 VolName Inchanger
PathId FilenameId FileId JobId LStat Md5 VolName Inchanger
...
Example:
.bvfs_versions client=localhost-fd pathid=1 fnid=47 jobid=1
1  47  52  12  gD HRid IGk D Po Po A P BAA I A /uPgWaxMgKZlnMti7LChyA  Vol1  1
Bvfs allows you to list directories in a specific path.
.bvfs_lsdirs pathid=num path=/apath jobid=numlist limit=num offset=num
PathId FilenameId FileId JobId LStat Path
PathId FilenameId FileId JobId LStat Path
PathId FilenameId FileId JobId LStat Path
...
You need to specify either pathid or path. Using path="" will list “/” on Unix and all drives on Windows. If FilenameId is 0, the record listed is a directory.
.bvfs_lsdirs pathid=4 jobid=1,11,12
4  0  0  0  A A A A A A A A A A A A A A  .
5  0  0  0  A A A A A A A A A A A A A A  ..
3  0  0  0  A A A A A A A A A A A A A A  regress/
In this example, to list directories present in regress/, you can use
.bvfs_lsdirs pathid=3 jobid=1,11,12
3  0  0  0  A A A A A A A A A A A A A A  .
4  0  0  0  A A A A A A A A A A A A A A  ..
2  0  0  0  A A A A A A A A A A A A A A  tmp/
Bvfs allows you to list files in a specific path.
.bvfs_lsfiles pathid=num path=/apath jobid=numlist limit=num offset=num
PathId FilenameId FileId JobId LStat Path
PathId FilenameId FileId JobId LStat Path
PathId FilenameId FileId JobId LStat Path
...
You need to specify either pathid or path. Using path="" will list “/” on Unix and all drives on Windows. If FilenameId is 0, the record listed is a directory.
.bvfs_lsfiles pathid=4 jobid=1,11,12
4  0  0  0  A A A A A A A A A A A A A A  .
5  0  0  0  A A A A A A A A A A A A A A  ..
1  0  0  0  A A A A A A A A A A A A A A  regress/
In this example, to list files present in regress/, you can use
.bvfs_lsfiles pathid=1 jobid=1,11,12
1  47  52  12  gD HRid IGk BAA I BMqcPH BMqcPE BMqe+t A  titi
1  49  53  12  gD HRid IGk BAA I BMqe/K BMqcPE BMqe+t B  toto
1  48  54  12  gD HRie IGk BAA I BMqcPH BMqcPE BMqe+3 A  tutu
1  45  55  12  gD HRid IGk BAA I BMqe/K BMqcPE BMqe+t B  ficheriro1.txt
1  46  56  12  gD HRie IGk BAA I BMqe/K BMqcPE BMqe+3 D  ficheriro2.txt
Bvfs allows you to create a SQL table that contains files that you want to restore. This table can be provided to a restore command with the file option.
.bvfs_restore fileid=numlist dirid=numlist hardlink=numlist path=b2num
OK
restore file=?b2num ...
To include a directory (with dirid), Bvfs needs to run a query to select all files, which can be time-consuming.
The hardlink list is always composed of pairs of numbers (jobid, fileindex). This information can be found in the LinkFI field of the LStat packet.
The path argument is the name of the table in which Bvfs will store its results. The name must match the format b2[0-9]+ (it must start with b2 and be followed by digits).
Example:
.bvfs_restore fileid=1,2,3,4 hardlink=10,15,10,20 jobid=10 path=b20001
OK
To drop the table used by the restore command, you can use the .bvfs_cleanup command.
.bvfs_cleanup path=b20001
To clear the BVFS cache, you can use the .bvfs_clear_cache command.
.bvfs_clear_cache yes
OK
We rewrote the job pruning algorithm in this version. Previously, some users reported that the pruning process at the end of jobs was very long. That should no longer be the case. Now, Bacula will not automatically prune a Job if that particular Job is needed to restore data. Example:
JobId: 1  Level: Full
JobId: 2  Level: Incremental
JobId: 3  Level: Incremental
JobId: 4  Level: Differential
..  Other incrementals up to now
In this example, if the Job Retention defined in the Pool or in the Client resource would allow Jobs with JobIds 1, 2, 3 and 4 to be pruned, Bacula will detect that JobIds 1 and 4 are essential to restore data to the current state, and will prune only JobIds 2 and 3.
Important: this change affects only the automatic pruning step after a Job and the prune jobs bconsole command. If a Volume expires after the VolumeRetention period, important Jobs can still be pruned.
This feature can be used with the VolumeToCatalog, DiskToCatalog and Catalog levels.
To verify a given job, just specify its jobid as an argument when starting the verify job.
*run job=VerifyVolume jobid=1 level=VolumeToCatalog
Run Verify job
JobName:     VerifyVolume
Level:       VolumeToCatalog
Client:      127.0.0.1-fd
FileSet:     Full Set
Pool:        Default (From Job resource)
Storage:     File (From Job resource)
Verify Job:  VerifyVol.2010-09-08_14.17.17_03
Verify List: /tmp/regress/working/VerifyVol.bsr
When:        2010-09-08 14:17:31
Priority:    10
OK to run? (yes/mod/no):
This project was funded by Bacula Systems and is available with Bacula Enterprise Edition and Community Edition.
RunAfterJob = "/bin/echo Job=%j JobBytes=%b JobFiles=%F ClientAddress=%h"
The exact definition as of this writing is:
typedef struct s_baculaFuncs {
   uint32_t size;
   uint32_t version;
   bRC (*registerBaculaEvents)(bpContext *ctx, ...);
   bRC (*getBaculaValue)(bpContext *ctx, bVariable var, void *value);
   bRC (*setBaculaValue)(bpContext *ctx, bVariable var, void *value);
   bRC (*JobMessage)(bpContext *ctx, const char *file, int line,
                     int type, utime_t mtime, const char *fmt, ...);
   bRC (*DebugMessage)(bpContext *ctx, const char *file, int line,
                       int level, const char *fmt, ...);
   void *(*baculaMalloc)(bpContext *ctx, const char *file, int line,
                         size_t size);
   void (*baculaFree)(bpContext *ctx, const char *file, int line, void *mem);

   /* New functions follow */
   bRC (*AddExclude)(bpContext *ctx, const char *file);
   bRC (*AddInclude)(bpContext *ctx, const char *file);
   bRC (*AddIncludeOptions)(bpContext *ctx, const char *opts);
   bRC (*AddRegex)(bpContext *ctx, const char *item, int type);
   bRC (*AddWild)(bpContext *ctx, const char *item, int type);
   bRC (*checkChanges)(bpContext *ctx, struct save_pkt *sp);
} bFuncs;
typedef enum {
  bEventJobStart                        = 1,
  bEventJobEnd                          = 2,
  bEventStartBackupJob                  = 3,
  bEventEndBackupJob                    = 4,
  bEventStartRestoreJob                 = 5,
  bEventEndRestoreJob                   = 6,
  bEventStartVerifyJob                  = 7,
  bEventEndVerifyJob                    = 8,
  bEventBackupCommand                   = 9,
  bEventRestoreCommand                  = 10,
  bEventLevel                           = 11,
  bEventSince                           = 12,
  /* New events */
  bEventCancelCommand                   = 13,
  bEventVssBackupAddComponents          = 14,
  bEventVssRestoreLoadComponentMetadata = 15,
  bEventVssRestoreSetComponentsSelected = 16,
  bEventRestoreObject                   = 17,
  bEventEndFileSet                      = 18,
  bEventPluginCommand                   = 19,
  bEventVssBeforeCloseRestore           = 20,
  bEventVssPrepareSnapshot              = 21
} bEventType;
The following enhancements have been made to the Bacula File daemon with regard to Access Control Lists (ACLs).
This project was funded by Planets Communications B.V. and ELM Consultancy B.V. and is available with Bacula Enterprise Edition and Community Edition.
The following enhancements have been made to the Bacula File daemon with regard to Extended Attributes (XATTRs).
This project was funded by Planets Communications B.V. and ELM Consultancy B.V. and is available with Bacula Enterprise Edition and Community Edition.
The main Bacula Director code is independent of the SQL backend in version 5.2.0 and greater. This means that the Bacula Director can be packaged by itself, and each of the supported SQL backends can be packaged separately. It is also possible to build all the DB backends at once by including multiple database options.
./configure can be run with multiple database configure options.
--with-mysql --with-postgresql
The order of testing for databases is: postgresql, mysql, sqlite3.
Each configured backend generates a file named: libbaccats-<sql_backend_name>-<version>.so
A dummy catalog library is created, named libbaccats-<version>.so.
At configure time, the first detected backend is used as the so-called default backend, and at install time the dummy libbaccats-<version>.so is replaced by the default backend type.
If you configure all three backends, you get three backend libraries, and PostgreSQL gets installed as the default.
When you want to switch to another database, first save any old catalog you may have; then copy one of the three backend libraries over libbaccats-<version>.so.
An actual command, depending on your Bacula version, might be:

cp libbaccats-postgresql-5.2.2.so libbaccats-5.2.2.so

where 5.2.2 must be replaced by the Bacula release version number.
Then you must update the default backend in the following files:
create_bacula_database
drop_bacula_database
drop_bacula_tables
grant_bacula_privileges
make_bacula_tables
make_catalog_backup
update_bacula_tables
Then re-run all the above scripts. Please note: this means you will have a new empty database, and if you had a previous one, it will be lost.
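For a fresh catalog on the new backend, the sequence looks roughly like this (a sketch; these scripts live in the Bacula scripts directory, and paths vary by installation):

# create the empty database, build the tables, then grant access
./create_bacula_database
./make_bacula_tables
./grant_bacula_privileges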
All current database backend drivers for catalog information have been rewritten to use a set of C++ classes (using multiple inheritance) which abstract the database-specific internals and provide a more stable, generic interface to the rest of the SQL code. From now on there is a strict boundary between the SQL code and the low-level database functions. This new interface should also make it easier to add a backend for a currently unsupported database. An extra bonus of the new code is that you can enable multiple backends at configure time, build them all in one compile session, and select the correct database backend at install time. This should make life a lot easier for package maintainers.
We also added cursor support for the PostgreSQL backend; this improves memory usage for large installations.
This project was implemented by Planets Communications B.V. and ELM Consultancy B.V. and Bacula Systems and is available with both the Bacula Enterprise Edition and the Community Edition.
The htable hash table class has been extended with extra hash functions: in addition to char pointer hashes, it now also handles 32-bit and 64-bit hash keys. The hash table initialization routines have also been enhanced to accept a hint for the number of initial pages to use for the size of the hash table; until now the hash table always used a fixed value of 10 MB. The private hash functions of the mountpoint entry cache have been rewritten to use the new htable class with a small memory footprint.
This project was funded by Planets Communications B.V. and ELM Consultancy B.V. and Bacula Systems and is available with Bacula Enterprise Edition and Community Edition.
There are no new features in version 5.0.2. This version simply fixes a number of bugs found in version 5.0.1 during the ongoing development process.
This chapter presents the new features that are in the released Bacula version 5.0.1. This version mainly fixes a number of bugs found in version 5.0.0 during the ongoing development process.
The Pool directive ActionOnPurge=Truncate instructs Bacula to truncate the volume when it is purged with the new purge volume action command. It is useful to prevent disk-based volumes from consuming too much space.
Pool {
  Name = Default
  Action On Purge = Truncate
  ...
}
As usual, you can also set this property with the update volume command:
*update volume=xxx ActionOnPurge=Truncate
*update volume=xxx actiononpurge=None
To ask Bacula to truncate your purged volumes, you need to use the following command in interactive mode, or in a RunScript as shown below:
*purge volume action=truncate storage=File allpools
# or by default, action=all
*purge volume action storage=File pool=Default
It is possible to specify the volume name, the media type, the pool, the storage, etc. (see help purge for details). Be sure that your storage device is idle when you run this command.
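For instance, a hypothetical invocation that truncates a single named volume might look like this (the volume name is illustrative):

*purge volume=Vol0001 action=truncate storage=File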
For example:

Job {
  Name = CatalogBackup
  ...
  RunScript {
    RunsWhen = After
    RunsOnClient = No
    Console = "purge volume action=all allpools storage=File"
  }
}
Important note: This feature doesn't work as expected in version 5.0.0. Please do not use it before version 5.0.1.
Maximum Concurrent Jobs is a new Device directive in the Storage Daemon configuration that permits setting the maximum number of Jobs that can run concurrently on a specified Device. Using this directive, it is possible to have different Jobs using multiple drives, because once the Maximum Concurrent Jobs limit is reached, the Storage Daemon will start new Jobs on any other available compatible drive. This facilitates writing to multiple drives with multiple Jobs that all use the same Pool.
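A minimal sketch of such a Device resource (the device name, path and media type are illustrative, not from the original text):

Device {
  Name = Drive-1
  Archive Device = /dev/nst0
  Media Type = LTO-3
  Maximum Concurrent Jobs = 5   # a sixth Job would go to another compatible drive
}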
This project was funded by Bacula Systems.
Previously, you were able to restore from multiple devices in a single Storage Daemon. Now, Bacula is able to restore from multiple Storage Daemons. For example, if your full backup runs on a Storage Daemon with an autochanger, and your incremental jobs use another Storage Daemon with lots of disks, Bacula will switch automatically from one Storage Daemon to another within the same Restore job.
You must upgrade your File Daemon to version 3.1.3 or greater to use this feature.
This project was funded by Bacula Systems with the help of Equiinet.
This is something none of the competition does, as far as we know (except perhaps BackupPC, which is a Perl program that saves to disk only). It is a big win for the user; it makes Bacula stand out as offering a unique optimization that immediately saves time and money. Basically, imagine that you have 100 nearly identical Windows or Linux machines containing the OS and user files. For the OS part, a Base job will be backed up once, and rather than making 100 copies of the OS, there will be only one. If one or more of the systems have some files updated, no problem: they will be automatically restored.
See the Base Jobs chapter for more information.
This project was funded by Bacula Systems.
This new directive may be added to the Storage resource within the Director's configuration to allow users to selectively disable client compression for any job which writes to this storage resource.
For example:
Storage {
  Name = UltriumTape
  Address = ultrium-tape
  Password = storage_password   # Password for Storage Daemon
  Device = Ultrium
  Media Type = LTO 3
  AllowCompression = No         # Tape drive has hardware compression
}

The above example would cause any jobs running with the UltriumTape storage resource to run without compression from the client file daemons. This effectively overrides any compression settings defined at the FileSet level.
This feature is probably most useful if you have a tape drive which supports hardware compression. By setting the AllowCompression = No directive for your tape drive storage resource, you can avoid additional load on the file daemon and possibly speed up tape backups.
This project was funded by Collaborative Fusion, Inc.
In previous versions, the accurate code used the file creation and modification times to determine whether a file had been modified. Now you can specify which attributes to use (time, size, checksum, permission, owner, group, ...), similar to the Verify options.
FileSet {
  Name = Full
  Include {
    Options {
      Accurate = mcs
      Verify   = pin5
    }
    File = /
  }
}
Important note: If you decide to use checksum in Accurate jobs, the File Daemon will have to read all files, even those that would normally not be saved. This increases the I/O load, but also the accuracy of the deduplication. By default, Bacula checks modification/creation time and size.
This project was funded by Bacula Systems.
If you build bconsole with readline support, you will be able to use the new auto-completion mode. This mode supports all commands, provides help inside commands, and lists resources when required. It also works in restore mode.
To use this feature, you need the readline development package installed on your system, and you must use the following options with configure:
./configure --with-readline=/usr/include/readline --disable-conio ...
The new bconsole won't be able to tab-complete with older directors.
This project was funded by Bacula Systems.
We added two new Pool directives, FileRetention and JobRetention, that take precedence over the Client directives of the same name. They allow you to control the Catalog pruning algorithm Pool by Pool. For example, you can decide to increase Retention times for an Archive or OffSite Pool.
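A sketch of what this looks like in a Pool resource (the pool name and the retention periods are illustrative; note the implementation caveat below):

Pool {
  Name = OffSite
  Pool Type = Backup
  File Retention = 1 year    # overrides the Client FileRetention for this Pool
  Job Retention = 2 years    # overrides the Client JobRetention for this Pool
  ...
}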
It may seem obvious, but given the definition above, the Pool File and Job Retention periods are a global override for the normal Client-based pruning, which means that when a Job is pruned, the pruning applies globally to that particular Job.
Currently, there is a bug in the implementation that causes any Pool retention periods specified to apply to all Pools for that particular Client. Thus we suggest that you avoid using these two directives until this implementation problem is corrected.
This feature introduces a new bacula-fd option (-k) specifying that ReadAll capabilities should be kept after the UID/GID switch.
root@localhost:~# bacula-fd -k -u nobody -g nobody
The code for this feature was contributed by our friends at AltLinux.
To help developers of restore GUI interfaces, we have added new dot commands that permit browsing the catalog in a very simple way.
You can use limit=xxx and offset=yyy to limit the amount of data that will be displayed.
* .bvfs_update jobid=1,2
* .bvfs_update
* .bvfs_lsdir path=/ jobid=1,2
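For instance, to page through a large directory 100 entries at a time, an illustrative invocation using the limit and offset arguments described above would be:

* .bvfs_lsdir path=/ jobid=1,2 limit=100 offset=100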
This project was funded by Bacula Systems.
To determine the best configuration of your tape drive, you can run the new speed command available in the btape program.
This command can have the following arguments:
file_size=n    Specify the Maximum File Size for this test, in GB (between 1 and 5).
nb_file=n      Specify the number of files to be written. The amount of data should
               be greater than your memory (file_size x nb_file).
skip_zero      Skip the tests with constant data.
skip_random    Skip the tests with random data.
skip_raw       Skip the tests with raw access.
skip_block     Skip the tests with Bacula block access.
*speed file_size=3 skip_raw
btape.c:1078 Test with zero data and bacula block structure.
btape.c:956 Begin writing 3 files of 3.221 GB with blocks of 129024 bytes.
++++++++++++++++++++++++++++++++++++++++++
btape.c:604 Wrote 1 EOF to "Drive-0" (/dev/nst0)
btape.c:406 Volume bytes=3.221 GB. Write rate = 44.128 MB/s
...
btape.c:383 Total Volume bytes=9.664 GB. Total Write rate = 43.531 MB/s

btape.c:1090 Test with random data, should give the minimum throughput.
btape.c:956 Begin writing 3 files of 3.221 GB with blocks of 129024 bytes.
+++++++++++++++++++++++++++++++++++++++++++
btape.c:604 Wrote 1 EOF to "Drive-0" (/dev/nst0)
btape.c:406 Volume bytes=3.221 GB. Write rate = 7.271 MB/s
+++++++++++++++++++++++++++++++++++++++++++
...
btape.c:383 Total Volume bytes=9.664 GB. Total Write rate = 7.365 MB/s
When using compression, the random test will give you the minimum throughput of your drive. The test using a constant string will give you the maximum speed of your hardware chain (CPU, memory, SCSI card, cable, drive, tape).
You may now turn off the block checksum (CRC32) code that Bacula uses when writing blocks to a Volume. This is done by adding the following directive to the Storage Daemon configuration file:
Block Checksum = no
Doing so can reduce the Storage daemon CPU usage slightly. It will also permit Bacula to read a Volume that has corrupted data.
The default is yes - i.e. the checksum is computed on write and checked on read.
We do not recommend turning this off, particularly on older tape drives or for disk Volumes, where doing so may allow corrupted data to go undetected.
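The directive belongs in a Device resource of the Storage Daemon configuration; a minimal sketch (the device name and path are illustrative):

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /backup
  Block Checksum = no   # skip the CRC32 check on write and read for this device
}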
Those new features were funded by Bacula Systems.
By clicking on “Media”, you can see the list of all your volumes. You will be able to filter by Pool, Media Type, Location, etc., and sort the results directly in the table. The old “Media” view is now known as “Pool”.
By double-clicking on a volume (in the Media list, in the Autochanger content or in the Job information panel), you can access a detailed overview of your Volume.
By double-clicking on a Job record (in the Job run list or in the Media information panel), you can access a detailed overview of your Job.
By double-clicking on a Storage record (in the Storage list panel), you can access a detailed overview of your Autochanger.
To use this feature, you need to use the latest version of the mtx-changer script (with the new listall and transfer commands).
cacls "C:\Program Files\Bacula" /T /G SYSTEM:F Administrators:F
We are working on an equivalent USB key for Windows bare metal recovery, but it will take some time to develop (best estimate: 3Q2010 or 4Q2010).
Note that the Truncate Volume after purge feature doesn't work as expected in version 5.0.0. Please don't use it before version 5.0.1.
If you wish to add specialized commands that list the contents of the catalog, you can do so by adding them to the query.sql file. This query.sql file is now empty by default. The file examples/sample-query.sql has a number of sample commands you might find useful.
The following items have been deprecated for a long time, and are now removed from the code.