Docker image that creates a tar backup of a host volume and streams it to Amazon S3.
- Lightweight: Built on the Alpine Linux base image
- Fast: Backups are streamed directly to S3 with awscli
- Versatile: Can also be used with self-hosted S3-compatible services such as MinIO
Run the automated build image, specifying your AWS credentials, bucket name, and backup path.
docker run -it \
-e AWS_ACCESS_KEY_ID=ID \
-e AWS_SECRET_ACCESS_KEY=KEY \
-e BUCKET_NAME=backups \
-e BACKUP_NAME=backup \
-v /path/to/backup:/backup dokku/s3backup
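Once the container finishes, you can check that the archive reached the bucket with the AWS CLI, assuming the same credentials are available in your local shell. The listing below is only a sketch; the exact object key depends on BACKUP_NAME and any timestamp the image appends.
# assumes BUCKET_NAME=backups as in the command above
aws s3 ls s3://backups/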
Example with a different region, S3 storage class, and signature version, plus a call to an S3-compatible service (custom endpoint URL):
docker run -it \
-e AWS_ACCESS_KEY_ID=ID \
-e AWS_SECRET_ACCESS_KEY=KEY \
-e AWS_DEFAULT_REGION=us-east-1 \
-e AWS_SIGNATURE_VERSION=s3v4 \
-e S3_STORAGE_CLASS=STANDARD_IA \
-e ENDPOINT_URL=https://YOURAPIURL \
-e BUCKET_NAME=backups \
-e BACKUP_NAME=backup \
-v /path/to/backup:/backup dokku/s3backup
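As a concrete self-hosted sketch, the same call could target a local MinIO server; the minio.local endpoint, port 9000, and minioadmin credentials below are placeholders for your own deployment, not defaults of this image.
docker run -it \
-e AWS_ACCESS_KEY_ID=minioadmin \
-e AWS_SECRET_ACCESS_KEY=minioadmin \
-e AWS_SIGNATURE_VERSION=s3v4 \
-e ENDPOINT_URL=http://minio.local:9000 \
-e BUCKET_NAME=backups \
-e BACKUP_NAME=backup \
-v /path/to/backup:/backup dokku/s3backup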
You can optionally encrypt your backup using GnuPG. To do so, set ENCRYPTION_KEY.
docker run -it \
-e AWS_ACCESS_KEY_ID=ID \
-e AWS_SECRET_ACCESS_KEY=KEY \
-e BUCKET_NAME=backups \
-e BACKUP_NAME=backup \
-e ENCRYPTION_KEY=your_secret_passphrase \
-v /path/to/backup:/backup dokku/s3backup
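To restore an encrypted backup, download the object and decrypt it with the same passphrase. This is only a sketch: it assumes the image uses GnuPG symmetric (passphrase) encryption, and the object key backup.tar.gz.gpg is a placeholder, so list the bucket to find the real file name and archive format.
# object key and .tar.gz format are assumptions; adjust to what you find in the bucket
aws s3 cp s3://backups/backup.tar.gz.gpg .
gpg --batch --pinentry-mode loopback --passphrase your_secret_passphrase --decrypt backup.tar.gz.gpg > backup.tar.gz
tar -xzf backup.tar.gz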
First, build the image.
docker build -t s3backup .
Then run the image, specifying your AWS credentials, bucket name, and backup path.
docker run -it \
-e AWS_ACCESS_KEY_ID=ID \
-e AWS_SECRET_ACCESS_KEY=KEY \
-e BUCKET_NAME=backups \
-e BACKUP_NAME=backup \
-v /path/to/backup:/backup s3backup
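To take backups on a schedule, one option is a host crontab entry that runs the container non-interactively; the 03:00 schedule and the /etc/s3backup.env file holding the AWS_*, BUCKET_NAME, and BACKUP_NAME variables are assumptions for this sketch, not part of the image.
# nightly backup at 03:00; credentials and bucket settings live in the env file
0 3 * * * docker run --rm --env-file /etc/s3backup.env -v /path/to/backup:/backup s3backup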