.TH ZEBU 1
.\" NAME should be all caps, SECTION should be 1-8, maybe w/ subsection
.\" other parms are allowed; see man(7), man(1)
.SH NAME
zebu \- ZFS-based network filesystem backup tool
.SH SYNOPSIS
.B zebu
.RI [ options ]
.SH "DESCRIPTION"
Create backups, including backups of remote machines, using ZFS snapshots.
.PP
.BR zebu
takes a configured list of ZFS file systems and performs regular backup
tasks. This can include copying data from a remote server to the local
file system using
.BR rsync ,
and generally includes making snapshots, deleting old snapshots, and
transmitting incremental snapshots to a remote machine for off-site
archival.
.PP
Backups take place in a series of phases, described below.
.BR zebu
iterates over each ZFS file system, and runs through all applicable phases
in turn (rather than running one phase at a time across each file system).
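.PP
As a purely illustrative example (the installation path and configuration
file location shown here are assumptions, not defaults documented by
.BR zebu ),
a nightly run from
.BR cron
might look like:
.PP
.nf
    # illustrative cron entry; adjust paths and schedule as needed
    30 2 * * *  root  /usr/local/sbin/zebu -f /etc/zebu.conf
.fi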
.PP
More documentation about
.BR zebu
will be forthcoming. Someday.
.SH PHASES
.SS Snapshotting
ZFS snapshots are created, using the configured prefix with the current time
stamp appended as a snapshot name. Time stamps are of the form
.I <year><month><day><hour><minute><second>
so that snapshots sort chronologically when listed (e.g., snap-20080607204837).
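.PP
Conceptually, each snapshot corresponds to a command of the following form
(the pool and file system name are examples only):
.PP
.nf
    # equivalent manual operation, with an example dataset name
    zfs snapshot tank/backup/www@snap-20080607204837
.fi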
.PP
A file system with a defined
.I rsync_server
(see \fBzebu.conf(5)\fP for details) will be assumed to contain a copy of
files on a remote system.
.BR zebu
will use
.BR rsync
to copy files into the ZFS file system. Files will be copied into the
.I tree
subdirectory. Additionally, three log files will be created:
\fIexcludes\fP, which contains the list of exclude patterns used during the
\fBrsync\fP; \fIrsync_log\fP, which contains the log of the \fBrsync\fP
process itself; and \fIzebu_log\fP, which lists applicable messages from
.BR zebu
during the transfer. Once the rsync process has completed successfully
and all log files have been created, the ZFS file system receives a snapshot. If the
.BR rsync
did not complete successfully, no snapshot is created.
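.PP
The exact
.BR rsync
invocation is built from the configuration; the following sketch only
illustrates the kind of transfer performed (the paths, host name, and
flags shown are assumptions, not the literal command
.BR zebu
constructs):
.PP
.nf
    # illustrative only; not the literal command zebu builds
    rsync -a --delete \e
        --exclude-from=/pool/vault/excludes \e
        --log-file=/pool/vault/rsync_log \e
        server.example.com:/ /pool/vault/tree/
.fi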
.PP
Following any
.BR rsync
transfers, all ZFS file systems are treated identically through the remaining
.BR zebu
phases.
.SS Cleanup
Any snapshot older than indicated in the
\fItime_to_live\fP configuration file parameter is deleted. Age is determined
by comparing the current time with the time portion of a snapshot name.
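.PP
For example, if the current time is 2008-06-21 20:48:37, a snapshot named
snap-20080607204837 is fourteen days old; if that age exceeds
\fItime_to_live\fP, the snapshot is removed, roughly equivalent to the
following manual operation (the dataset name is an example only):
.PP
.nf
    # equivalent manual operation, with an example dataset name
    zfs destroy tank/backup/www@snap-20080607204837
.fi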
.PP
.BR Zebu
only concerns itself with snapshots whose names begin with the
.I snap_prefix
configuration parameter. All other snapshots are ignored.
.PP
.BR Zebu
will never attempt to delete snapshots if only one snapshot remains.
This is intended to preserve at least one good snapshot, to avoid potentially
removing the last remaining viable backup (particularly important for
\fBrsync\fR-based backup file systems).
.SS Transmit
As a final step,
.BR Zebu
will attempt to transfer snapshots using
.BR "zfs send"
and the \fItransmit_cmd\fP defined in the configuration file. No vaults will be
transmitted until all vaults have been through a snapshot and cleanup phase (if
applicable) - vault transmission must always be the final step.
.PP
.BR Zebu
relies on
.BR "zfs send"
to create a stream to send to the configured \fItransmit_cmd\fP. The first time a
vault is sent,
.BR zebu
will send a full copy of the latest snapshot. All subsequent transmissions will be done
using incremental sends;
.BR zebu
will find the last snapshot that was successfully transmitted, and send a series of
incremental snapshots until all available
.BR zebu
snapshots have been transmitted.
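.PP
In terms of the underlying ZFS commands, the transfers resemble the
following sketch (the dataset and snapshot names are examples, and
\fITRANSMIT_CMD\fP stands for whatever \fItransmit_cmd\fP is configured):
.PP
.nf
    # first transmission: a full copy of the latest snapshot
    zfs send tank/backup/www@snap-20080607204837 | TRANSMIT_CMD

    # later transmissions: incremental from the last transmitted snapshot
    zfs send -i @snap-20080607204837 \e
        tank/backup/www@snap-20080614204837 | TRANSMIT_CMD
.fi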
.PP
The list of successfully transmitted snapshots for a
vault is kept in a status file, named \fI.zebu_status\fP, in the root of each
transmitted vault. If this status file is deleted,
.BR zebu
will assume no valid snapshots have been transmitted and will transmit the most
recent snapshot at the next opportunity.
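.PP
Deleting the status file is therefore a simple way to force
.BR zebu
to start over with a full copy of the most recent snapshot on the next
run (the mount point shown is an example only):
.PP
.nf
    # force a fresh full transmission on the next run
    rm /pool/vault/.zebu_status
.fi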
.PP
Recursive vaults (ZFS filesystems with children, configured to be recursively
snapshotted) will be recursively transmitted. Status information is only kept for
the root of the recursive vault, so transfers of child filesystems must all be
successful for
.BR zebu
to record a successful transfer and move on to the next incremental. While failed
transfers will be attempted again on the next run (and may eventually work themselves
out, if
.BR zebu
is able to move enough snapshots from the failed run for later incremental transfers
to succeed), it is possible for the server and the recipient to get out of sync and
require manual intervention. This is particularly likely if transfers take an
exceptionally long time (either due to a slow transmission medium or a large amount of
data to send) and the cleanup phase removes the necessary source snapshots before they
can be sent as an incremental transmission. Take care to configure the
\fItime_to_live\fP expiration times accordingly.
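.PP
When a recursive vault appears to be stuck, comparing the
.BR zebu
snapshots present on both sides can help diagnose the problem (the
dataset names, remote host, and receiving pool layout below are examples,
modeled on the \fItransmit_cmd\fP shown later in this page):
.PP
.nf
    # snapshots still available on the backup source
    zfs list -r -t snapshot -o name tank/backup/www
    # snapshots already received on the remote side
    ssh backup.example.com zfs list -r -t snapshot -o name pool/backup/www
.fi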
.PP
A new filesystem created in a recursive vault will not have the same snapshot
history as its peers. On each run across a recursive vault,
.BR zebu
will save the list of child filesystems in a file named \fI.zebu_children\fP.
If new children have appeared since the last run, only the latest
snapshot will be transferred for each new child. As long as the current
.BR zebu
run completes, both the source and destination systems will remain in sync
(and further runs will include the new filesystem, as normal). However, if
.BR zebu
fails during this run, additional work may be required to maintain
synchronization.
If the \fI.zebu_children\fP file does not exist, a new one will silently be
created (and new children cannot be detected on the run that creates it,
since there is no earlier list to compare against).
.PP
The use of the \fB-F\fR option with
.BR "zfs recv"
is recommended for recursive vaults being sent to another live ZFS filesystem. To pipe
snapshots over
.BR ssh
to another backup server, the author uses a \fItransmit_cmd\fP like this:
.BR "/usr/bin/ssh -x -qT -l root backup.example.com /sbin/zfs recv -F -d pool"
.SH OPTIONS
.TP
\fB\-c\fR, \fB\-\-no\-cleanup\fR
Skip the cleanup phase, and continue to the next phase. All cleanup phases will be
skipped (for all vaults), regardless of the setting of the \fBdoCleanup\fR key in the
config file.
.TP
\fB\-f\fR, \fB\-\-file\fR \fIconfig-file\fR
Use
.I config-file
as a configuration file.
.TP
\fB\-F\fR, \fB\-\-filesystem\fR \fIfilesystem\fR
Only operate on the named ZFS file system.
.TP
\fB\-h\fR, \fB\-\-help\fR
Show a summary of options and commands, then exit.
.TP
\fB\-s\fR, \fB\-\-no\-snapshot\fR
Skip the snapshot phase, and continue to the next phase. All snapshot phases will be
skipped (for all vaults), regardless of the setting of the \fBdoSnapshot\fR key in the
config file.
.TP
\fB\-t\fR, \fB\-\-no\-transmit\fR
Skip the transmit phase. All transmit phases will be
skipped (for all vaults), regardless of the setting of the \fBdoTransmit\fR key in the
config file.
.TP
\fB\-v\fR, \fB\-\-verbose\fR
Be a bit more verbose, and provide status messages during a run.
.TP
\fB\-V\fR, \fB\-\-version\fR
Print version information and exit.
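.PP
For example, to run only the snapshot and cleanup phases for a single
file system, using an alternate configuration file and verbose output
(the file system and configuration path are examples only):
.PP
.nf
    zebu -v -t -f /etc/zebu.conf.test -F tank/backup/www
.fi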
.SH SEE ALSO
.nf
zebu.conf(5)
.fi
.SH AUTHOR
\fBzebu\fR was written by Mike Shuey <[email protected]> and is licensed under
the terms of the GNU General Public License, version 2 or higher.
.SH "KNOWN ISSUES"
Zebu will detect a new filesystem created in a recursively copied vault, and
copy it appropriately. However, deleted filesystems will not be similarly
detected; the replicated data must be manually deleted on the recipient.