Bazel Container Image Rules


Basic Rules

These rules used to be docker_build, docker_push, etc., and aliases for these (mostly) legacy names still exist, largely for backwards compatibility. We also have early-stage oci_image, oci_push, etc. aliases for folks who prefer a consistent rule prefix. The only place the format-specific names currently do more than alias things is in foo_push, where they also specify the format in which to publish the image.

Overview

This repository contains a set of rules for pulling down base images, augmenting them with build artifacts and assets, and publishing those images. These rules do not require / use Docker for pulling, building, or pushing images. This means:

  • They can be used to develop Docker containers on OSX without boot2docker or docker-machine installed. Note use of these rules on Windows is currently not supported.
  • They do not require root access on your workstation.

Also, unlike traditional container builds (e.g. Dockerfile), the Docker images produced by container_image are deterministic / reproducible.

To get started with building Docker images, check out the examples that build the same images using both rules_docker and a Dockerfile.

NOTE: container_push and container_pull make use of google/go-containerregistry for registry interactions.

Language Rules

Note that cc_image, go_image, rust_image, and d_image also allow you to specify an external binary target.

Docker Rules

This repo now includes rules that provide additional functionality to install packages and run commands inside docker containers. These rules, however, require that a docker binary be present and properly configured.

Overview

In addition to low-level rules for building containers, this repository provides a set of higher-level rules for containerizing applications. The idea behind these rules is to make containerizing an application built via a lang_binary rule as simple as changing it to lang_image.

By default these higher level rules make use of the distroless language runtimes, but these can be overridden via the base="..." attribute (e.g. with a container_pull or container_image target).

Note also that these rules do not expose any docker-related attributes. If you need to add a custom env or symlink to a lang_image, you must use a container_image target for this purpose: specifically, use as the base of your lang_image target a container_image target that adds, e.g., the custom env or symlink. Please see go_image (custom base) for an example.

Setup

Add the following to your WORKSPACE file to add the external repositories:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
  # Get copy paste instructions for the http_archive attributes from the
  # release notes at https://github.com/bazelbuild/rules_docker/releases
)

# OPTIONAL: Call this to override the default docker toolchain configuration.
# This call should be placed BEFORE the call to "container_repositories" below
# to actually override the default toolchain configuration.
# Note this is only required if you actually want to call
# docker_toolchain_configure with a custom attr; please read the toolchains
# docs in /toolchains/docker/ before blindly adding this to your WORKSPACE.
# BEGIN OPTIONAL segment:
load("@io_bazel_rules_docker//toolchains/docker:toolchain.bzl",
    docker_toolchain_configure="toolchain_configure"
)
docker_toolchain_configure(
  name = "docker_config",
  # OPTIONAL: Path to a directory which has a custom docker client config.json.
  # See https://docs.docker.com/engine/reference/commandline/cli/#configuration-files
  # for more details.
  client_config="<enter absolute path to your docker config directory here>",
  # OPTIONAL: Path to the docker binary.
  # Should be set explicitly for remote execution.
  docker_path="<enter absolute path to the docker binary (in the remote exec env) here>",
  # OPTIONAL: Path to the gzip binary.
  gzip_path="<enter absolute path to the gzip binary (in the remote exec env) here>",
  # OPTIONAL: Bazel target for the gzip tool.
  gzip_target="<enter absolute path (i.e., must start with repo name @...//:...) to an executable gzip target>",
  # OPTIONAL: Path to the xz binary.
  # Should be set explicitly for remote execution.
  xz_path="<enter absolute path to the xz binary (in the remote exec env) here>",
  # OPTIONAL: List of additional flags to pass to the docker command.
  docker_flags = [
    "--tls",
    "--log-level=info",
  ],

)
# End of OPTIONAL segment.

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)
container_repositories()

load("@io_bazel_rules_docker//repositories:deps.bzl", container_deps = "deps")

container_deps()

load(
    "@io_bazel_rules_docker//container:container.bzl",
    "container_pull",
)

container_pull(
  name = "java_base",
  registry = "gcr.io",
  repository = "distroless/java",
  # 'tag' is also supported, but digest is encouraged for reproducibility.
  digest = "sha256:deadbeef",
)

Known Issues

  • Bazel does not deal well with diamond dependencies.

If the repositories imported by container_repositories() have already been imported (at a different version) by other rules called earlier in your WORKSPACE (i.e., placed above the call to container_repositories()), arbitrary errors might occur. If you get errors related to external repositories, you will likely not be able to use container_repositories(); instead, import all the required dependencies directly in your WORKSPACE (see the most up-to-date implementation of container_repositories() for details).

  • ImportError: No module named moves.urllib.parse

This is an example of an error due to a diamond dependency. If you get this error, make sure to import rules_docker before other libraries, so that six can be patched properly.

See bazelbuild#1022 for more details.

  • Ensure your project has a BUILD or BUILD.bazel file at the top level. This can be a blank file if necessary. Otherwise you might see an error that looks like:
Unable to load package for //:WORKSPACE: BUILD file not found in any of the following directories.

Using with Docker locally

Suppose you have a container_image target //my/image:helloworld:

container_image(
    name = "helloworld",
    ...
)

You can load this into your local Docker client by running: bazel run my/image:helloworld.

For the lang_image targets, this will also run the container to maximize compatibility with lang_binary rules. You can suppress this behavior by passing the --norun flag: bazel run :foo -- --norun

Alternatively, you can build a docker load compatible bundle with: bazel build my/image:helloworld.tar. This will produce the file: bazel-bin/my/image/helloworld.tar, which you can load into your local Docker client by running: docker load -i bazel-bin/my/image/helloworld.tar. Building this target can be expensive for large images.
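For example (a sketch, assuming the //my/image:helloworld target above and a local Docker daemon):

$ bazel run //my/image:helloworld                 # build and load into the local Docker daemon
$ bazel run //my/image:helloworld -- --norun      # for lang_image targets, load without running
$ bazel build //my/image:helloworld.tar           # build a `docker load` compatible tarball
$ docker load -i bazel-bin/my/image/helloworld.tar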

These work with container_image, container_bundle, and the lang_image rules. For everything except container_bundle, the image name will be bazel/my/image:helloworld. For container_bundle, it will apply the tags you have specified.

Authentication

You can use these rules to access private images using standard Docker authentication methods, e.g. to access the Google Container Registry. See here for authentication methods.

See also:

Once you've set up your docker client configuration, see here for an example of how to use container_pull with custom docker authentication credentials, and here for an example of how to use container_push with custom docker authentication credentials.

Varying image names

A common request from folks using container_push or container_bundle is to be able to vary the tag that is pushed or embedded. There are two options at present for doing this.

Stamping

The first option is to use stamping. Stamping is enabled when a supported attribute contains a python format placeholder (e.g. {BUILD_USER}).

# A common pattern when users want to avoid trampling
# on each other's images during development.
container_push(
  name = "publish",
  format = "Docker",

  # Any of these components may have variables.
  registry = "gcr.io",
  repository = "my-project/my-image",
  tag = "{BUILD_USER}",
)

The next natural question is: "Well what variables can I use?" This option consumes the workspace-status variables Bazel defines in stable-status.txt and volatile-status.txt. These files will appear in the target's runfiles:

$ bazel build //docker/testdata:push_stamp
...

$ cat bazel-bin/docker/testdata/push_stamp.runfiles/io_bazel_rules_docker/stable-status.txt
BUILD_EMBED_LABEL
BUILD_HOST bazel
BUILD_USER mattmoor

$ cat bazel-bin/docker/testdata/push_stamp.runfiles/io_bazel_rules_docker/volatile-status.txt
BUILD_TIMESTAMP 1498740967769

You can augment these variables via --workspace_status_command, including through the use of .bazelrc.
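For example (a minimal sketch; the script path and STABLE_GIT_SHA variable are illustrative, not part of this repo), a status command can emit extra "KEY value" pairs, and keys prefixed with STABLE_ land in stable-status.txt:

#!/bin/bash
# tools/status.sh (illustrative): print extra "KEY value" pairs for stamping.
echo "STABLE_GIT_SHA $(git rev-parse HEAD)"

# In .bazelrc (illustrative):
build --workspace_status_command=tools/status.sh

The {STABLE_GIT_SHA} placeholder could then be used in a container_push tag, just like {BUILD_USER} above.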

Make variables

The second option is to employ Makefile-style variables:

container_bundle(
  name = "bundle",

  images = {
    "gcr.io/$(project)/frontend:latest": "//frontend:image",
    "gcr.io/$(project)/backend:latest": "//backend:image",
  }
)

These variables are specified on the CLI using:

   bazel build --define project=blah //path/to:bundle

Debugging lang_image rules

By default the lang_image rules use the distroless base runtime images, which are optimized to be the minimal set of things your application needs at runtime. That can make debugging these containers difficult because they lack even a basic shell for exploring the filesystem.

To address this, we publish variants of the distroless runtime images tagged :debug, which are exactly the same images but with additions such as busybox to make debugging easier.

For example (in this repo):

$ bazel run -c dbg testdata:go_image
...
INFO: Build completed successfully, 5 total actions

INFO: Running command line: bazel-bin/testdata/go_image
Loaded image ID: sha256:9c5c2167a1db080a64b5b401b43b3c5cdabb265b26cf7a60aabe04a20da79e24
Tagging 9c5c2167a1db080a64b5b401b43b3c5cdabb265b26cf7a60aabe04a20da79e24 as bazel/testdata:go_image
Hello, world!

$ docker run -ti --rm --entrypoint=sh bazel/testdata:go_image -c "echo Hello, busybox."
Hello, busybox.
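To use a :debug variant as the base of a lang_image target, you can pull it explicitly and override base. A minimal sketch (the distroless repository shown is illustrative; pick the one matching your language runtime):

# In WORKSPACE:
container_pull(
    name = "go_debug_base",
    registry = "gcr.io",
    repository = "distroless/base",
    tag = "debug",
)

# In BUILD:
go_image(
    name = "go_image_debug",
    srcs = ["main.go"],
    importpath = "github.com/your/path/here",
    base = "@go_debug_base//image",
)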

Examples

container_image

container_image(
    name = "app",
    # References container_pull from WORKSPACE (above)
    base = "@java_base//image",
    files = ["//java/com/example/app:Hello_deploy.jar"],
    cmd = ["Hello_deploy.jar"]
)

Hint: if you want to put files in specific directories inside the image, use the pkg_tar rule to create the desired directory structure and pass that to container_image via the tars attribute. Note you might need to set strip_prefix = "." or strip_prefix = "{some directory}" in your rule so that the files are not flattened. See Bazel upstream issue 2176 and rules_docker issue 317 for more details.
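For example (a minimal sketch, reusing the @java_base base and Hello_deploy.jar target from the container_image example above; the /opt/app directory is illustrative):

load("@rules_pkg//:pkg.bzl", "pkg_tar")

# Stage the jar under /opt/app inside the tarball.
pkg_tar(
    name = "app_layout",
    srcs = ["//java/com/example/app:Hello_deploy.jar"],
    package_dir = "/opt/app",
    strip_prefix = ".",
)

# Layer the tarball into the image instead of passing `files` directly.
container_image(
    name = "app_with_layout",
    base = "@java_base//image",
    tars = [":app_layout"],
    cmd = ["/opt/app/Hello_deploy.jar"],
)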

cc_image

To use cc_image, add the following to WORKSPACE:

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load(
    "@io_bazel_rules_docker//cc:image.bzl",
    _cc_image_repos = "repositories",
)

_cc_image_repos()

Then in your BUILD file, simply rewrite cc_binary to cc_image with the following import:

load("@io_bazel_rules_docker//cc:image.bzl", "cc_image")

cc_image(
    name = "cc_image",
    srcs = ["cc_image.cc"],
    deps = [":cc_image_library"],
)

cc_image (external binary)

To use cc_image (or go_image, d_image, rust_image) with an external cc_binary (or the like) target, your BUILD file should instead look like:

load("@io_bazel_rules_docker//cc:image.bzl", "cc_image")

cc_binary(
    name = "cc_binary",
    srcs = ["cc_binary.cc"],
    deps = [":cc_library"],
)

cc_image(
    name = "cc_image",
    binary = ":cc_binary",
)

If you need to modify the container produced by cc_image in some way (e.g., env, symlink), see the note above in the Language Rules Overview about how to do this and the go_image (custom base) example below.

py_image

To use py_image, add the following to WORKSPACE:

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load(
    "@io_bazel_rules_docker//python:image.bzl",
    _py_image_repos = "repositories",
)

_py_image_repos()

Then in your BUILD file, simply rewrite py_binary to py_image with the following import:

load("@io_bazel_rules_docker//python:image.bzl", "py_image")

py_image(
    name = "py_image",
    srcs = ["py_image.py"],
    deps = [":py_image_library"],
    main = "py_image.py",
)

If you need to modify the container produced by py_image in some way (e.g., env, symlink), see the note above in the Language Rules Overview about how to do this and the go_image (custom base) example below.

If you are using py_image with a custom base that has python tools installed in a location different from the default base, please see Python tools.

py_image (fine layering)

For Python and Java's lang_image rules, you can factor dependencies that don't change into their own layers by overriding the layers=[] attribute. Consider this sample from the rules_k8s repository:

py_image(
    name = "server",
    srcs = ["server.py"],
    # "layers" is just like "deps", but it also moves the dependencies each into
    # their own layer, which can dramatically improve developer cycle time. For
    # example here, the grpcio layer is ~40MB, but the rest of the app is only
    # ~400KB.  By partitioning things this way, the large grpcio layer remains
    # unchanging and we can reduce the amount of image data we repush by ~99%!
    layers = [
        requirement("grpcio"),
        "//examples/hellogrpc/proto:py",
    ],
    main = "server.py",
)

You can also implement more complex fine layering strategies by using the py_layer rule and its filter attribute. For example:

# Suppose that we are synthesizing an image that depends on a complex set
# of libraries that we want to break into layers.
LIBS = [
    "//pkg/complex_library",
    # ...
]
# First, we extract all transitive dependencies of LIBS that are under //pkg/common.
py_layer(
    name = "common_deps",
    deps = LIBS,
    filter = "//pkg/common",
)
# Then, we further extract all external dependencies of the deps under //pkg/common.
py_layer(
    name = "common_external_deps",
    deps = [":common_deps"],
    filter = "@",
)
# We also extract all external dependencies of LIBS, which is a superset of
# ":common_external_deps".
py_layer(
    name = "external_deps",
    deps = LIBS,
    filter = "@",
)
# Finally, we create the image, stacking the above filtered layers on top of one
# another in the "layers" attribute.  The layers are applied in order, and any
# dependencies already added to the image will not be added again.  Therefore,
# ":external_deps" will only add the external dependencies not present in
# ":common_external_deps".
py_image(
    name = "image",
    deps = LIBS,
    layers = [
        ":common_external_deps",
        ":common_deps",
        ":external_deps",
    ],
    # ...
)

py3_image

To use a Python 3 runtime instead of the default of Python 2, use py3_image instead of py_image. The other semantics are identical.
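For example (a sketch mirroring the py_image example above; the //python3:image.bzl load path is an assumption, so check the version of rules_docker you use):

load("@io_bazel_rules_docker//python3:image.bzl", "py3_image")

py3_image(
    name = "py3_image",
    srcs = ["py_image.py"],
    deps = [":py_image_library"],
    main = "py_image.py",
)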

If you need to modify the container produced by py3_image in some way (e.g., env, symlink), see the note above in the Language Rules Overview about how to do this and the go_image (custom base) example below.

If you are using py3_image with a custom base that has python tools installed in a location different from the default base, please see Python tools.

nodejs_image

Note that, unlike the other image rules, nodejs_image does not currently use the gcr.io/distroless/nodejs image, for a handful of reasons. This is a switch we plan to make when we can manage it; we are currently using the gcr.io/google-appengine/debian9 image as our base.

To use nodejs_image, add the following to WORKSPACE:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "build_bazel_rules_nodejs",
    # Replace with a real SHA256 checksum
    sha256 = "{SHA256}"
    # Replace with a real release version
    urls = ["https://github.com/bazelbuild/rules_nodejs/releases/download/{VERSION}/rules_nodejs-{VERSION}.tar.gz"],
)


load("@build_bazel_rules_nodejs//:index.bzl", "npm_install")

# Install your declared Node.js dependencies
npm_install(
    name = "npm",
    package_json = "//:package.json",
    yarn_lock = "//:yarn.lock",
)

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load(
    "@io_bazel_rules_docker//nodejs:image.bzl",
    _nodejs_image_repos = "repositories",
)

_nodejs_image_repos()

Note: See note about diamond dependencies in setup if you run into issues related to external repos after adding these lines to your WORKSPACE.

Then in your BUILD file, simply rewrite nodejs_binary to nodejs_image with the following import:

load("@io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")

nodejs_image(
    name = "nodejs_image",
    entry_point = "@your_workspace//path/to:file.js",
    # npm deps will be put into their own layer
    data = [":file.js", "@npm//some-npm-dep"],
    ...
)

If you need to modify the container produced by nodejs_image in some way (e.g., env, symlink), see the note above in the Language Rules Overview about how to do this and the go_image (custom base) example below.

go_image

To use go_image, add the following to WORKSPACE:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load(
    "@io_bazel_rules_docker//go:image.bzl",
    _go_image_repos = "repositories",
)

_go_image_repos()

Note: See note about diamond dependencies in setup if you run into issues related to external repos after adding these lines to your WORKSPACE.

Then in your BUILD file, simply rewrite go_binary to go_image with the following import:

load("@io_bazel_rules_docker//go:image.bzl", "go_image")

go_image(
    name = "go_image",
    srcs = ["main.go"],
    importpath = "github.com/your/path/here",
)

Note that it is important to explicitly build this target with the --platforms=@io_bazel_rules_go//go/toolchain:linux_amd64 flag: the binary must be built for Linux, since it will run in a Linux container.
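For example:

$ bazel build --platforms=@io_bazel_rules_go//go/toolchain:linux_amd64 //path/to:go_image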

If you need to modify the container produced by go_image in some way (e.g., env, symlink), see the note above in the Language Rules Overview about how to do this and the example below.

go_image (custom base)

To use a custom base image with any of the lang_image rules, you can override the default base="..." attribute. Consider this modified sample from the distroless repository:

load("@rules_pkg//:pkg.bzl", "pkg_tar")

# Create a passwd file with a root and nonroot user and uid.
passwd_entry(
    username = "root",
    uid = 0,
    gid = 0,
    name = "root_user",
)

passwd_entry(
    username = "nonroot",
    info = "nonroot",
    uid = 1002,
    name = "nonroot_user",
)

passwd_file(
    name = "passwd",
    entries = [
        ":root_user",
        ":nonroot_user",
    ],
)

# Create a tar file containing the created passwd file
pkg_tar(
    name = "passwd_tar",
    srcs = [":passwd"],
    mode = "0o644",
    package_dir = "etc",
)

# Include it in our base image as a tar.
container_image(
    name = "passwd_image",
    base = "@go_image_base//image",
    tars = [":passwd_tar"],
    user = "nonroot",
)

# Simple go program to print out the username and uid.
go_image(
    name = "user",
    srcs = ["user.go"],
    # Override the base image.
    base = ":passwd_image",
)

java_image

To use java_image, add the following to WORKSPACE:

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load(
    "@io_bazel_rules_docker//java:image.bzl",
    _java_image_repos = "repositories",
)

_java_image_repos()

Then in your BUILD file, simply rewrite java_binary to java_image with the following import:

load("@io_bazel_rules_docker//java:image.bzl", "java_image")

java_image(
    name = "java_image",
    srcs = ["Binary.java"],
    # Put these runfiles into their own layer.
    layers = [":java_image_library"],
    main_class = "examples.images.Binary",
)

If you need to modify the container produced by java_image in some way (e.g., env, symlink), see the note above in the Language Rules Overview about how to do this and the go_image (custom base) example.

war_image

To use war_image, add the following to WORKSPACE:

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load(
    "@io_bazel_rules_docker//java:image.bzl",
    _java_image_repos = "repositories",
)

_java_image_repos()

Note: See note about diamond dependencies in setup if you run into issues related to external repos after adding these lines to your WORKSPACE.

Then in your BUILD file, simply rewrite java_war to war_image with the following import:

load("@io_bazel_rules_docker//java:image.bzl", "war_image")

war_image(
    name = "war_image",
    srcs = ["Servlet.java"],
    # Put these JARs into their own layers.
    layers = [
        ":java_image_library",
        "@javax_servlet_api//jar:jar",
    ],
)

The produced image uses Jetty 9.x to serve the web application. Servlets included in the web application need to follow the API specification 3.0. For best compatibility, use a Servlet dependency provided by the Jetty project.

A Servlet implementation needs to declare the @WebServlet annotation to be auto-discovered. The use of a web.xml to declare the Servlet URL mapping is not supported.

If you need to modify the container produced by war_image in some way (e.g., env, symlink), see the note above in the Language Rules Overview about how to do this and the go_image (custom base) example.

scala_image

To use scala_image, add the following to WORKSPACE:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# You *must* import the Scala rules before setting up the scala_image rules.
http_archive(
    name = "io_bazel_rules_scala",
    # Replace with a real SHA256 checksum
    sha256 = "{SHA256}"
    # Replace with a real commit SHA
    strip_prefix = "rules_scala-{HEAD}",
    urls = ["https://github.com/bazelbuild/rules_scala/archive/{HEAD}.tar.gz"],
)

load("@io_bazel_rules_scala//scala:scala.bzl", "scala_repositories")

scala_repositories()

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load(
    "@io_bazel_rules_docker//scala:image.bzl",
    _scala_image_repos = "repositories",
)

_scala_image_repos()

Note: See note about diamond dependencies in setup if you run into issues related to external repos after adding these lines to your WORKSPACE.

Then in your BUILD file, simply rewrite scala_binary to scala_image with the following import:

load("@io_bazel_rules_docker//scala:image.bzl", "scala_image")

scala_image(
    name = "scala_image",
    srcs = ["Binary.scala"],
    main_class = "examples.images.Binary",
)

If you need to modify the container produced by scala_image in some way (e.g., env, symlink), see the note above in the Language Rules Overview about how to do this and the go_image (custom base) example.

groovy_image

To use groovy_image, add the following to WORKSPACE:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# You *must* import the Groovy rules before setting up the groovy_image rules.
http_archive(
    name = "io_bazel_rules_groovy",
    # Replace with a real SHA256 checksum
    sha256 = "{SHA256}"
    # Replace with a real commit SHA
    strip_prefix = "rules_groovy-{HEAD}",
    urls = ["https://github.com/bazelbuild/rules_groovy/archive/{HEAD}.tar.gz"],
)

load("@io_bazel_rules_groovy//groovy:groovy.bzl", "groovy_repositories")

groovy_repositories()

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load(
    "@io_bazel_rules_docker//groovy:image.bzl",
    _groovy_image_repos = "repositories",
)

_groovy_image_repos()

Note: See note about diamond dependencies in setup if you run into issues related to external repos after adding these lines to your WORKSPACE.

Then in your BUILD file, simply rewrite groovy_binary to groovy_image with the following import:

load("@io_bazel_rules_docker//groovy:image.bzl", "groovy_image")

groovy_image(
    name = "groovy_image",
    srcs = ["Binary.groovy"],
    main_class = "examples.images.Binary",
)

If you need to modify the container produced by groovy_image in some way (e.g., env, symlink), see the note above in the Language Rules Overview about how to do this and the go_image (custom base) example.

rust_image

To use rust_image, add the following to WORKSPACE:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# You *must* import the Rust rules before setting up the rust_image rules.
http_archive(
    name = "rules_rust",
    # Replace with a real SHA256 checksum
    sha256 = "{SHA256}"
    # Replace with a real commit SHA
    strip_prefix = "rules_rust-{HEAD}",
    urls = ["https://github.com/bazelbuild/rules_rust/archive/{HEAD}.tar.gz"],
)

load("@rules_rust//rust:repositories.bzl", "rust_repositories")

rust_repositories()

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load(
    "@io_bazel_rules_docker//rust:image.bzl",
    _rust_image_repos = "repositories",
)

_rust_image_repos()

Note: See note about diamond dependencies in setup if you run into issues related to external repos after adding these lines to your WORKSPACE.

Then in your BUILD file, simply rewrite rust_binary to rust_image with the following import:

load("@io_bazel_rules_docker//rust:image.bzl", "rust_image")

rust_image(
    name = "rust_image",
    srcs = ["main.rs"],
)

If you need to modify the container produced by rust_image in some way (e.g., env, symlink), see the note above in the Language Rules Overview about how to do this and the go_image (custom base) example.

d_image

To use d_image, add the following to WORKSPACE:

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# You *must* import the D rules before setting up the d_image rules.
http_archive(
    name = "io_bazel_rules_d",
    # Replace with a real SHA256 checksum
    sha256 = "{SHA256}"
    # Replace with a real commit SHA
    strip_prefix = "rules_d-{HEAD}",
    urls = ["https://github.com/bazelbuild/rules_d/archive/{HEAD}.tar.gz"],
)

load("@io_bazel_rules_d//d:d.bzl", "d_repositories")

d_repositories()

load(
    "@io_bazel_rules_docker//repositories:repositories.bzl",
    container_repositories = "repositories",
)

container_repositories()

load(
    "@io_bazel_rules_docker//d:image.bzl",
    _d_image_repos = "repositories",
)

_d_image_repos()

Note: See note about diamond dependencies in setup if you run into issues related to external repos after adding these lines to your WORKSPACE.

Then in your BUILD file, simply rewrite d_binary to d_image with the following import:

load("@io_bazel_rules_docker//d:image.bzl", "d_image")

d_image(
    name = "d_image",
    srcs = ["main.d"],
)

If you need to modify the container produced by d_image in some way (e.g., env, symlink), see the note above in the Language Rules Overview about how to do this and the go_image (custom base) example.

NOTE: all application image rules support the args string_list attribute. If specified, they will be appended directly after the container ENTRYPOINT binary name.
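For example (a sketch; the flags are hypothetical and specific to your application):

go_image(
    name = "go_image_with_args",
    srcs = ["main.go"],
    importpath = "github.com/your/path/here",
    # Appended directly after the ENTRYPOINT binary name at container start.
    args = [
        "--port=8080",
        "--verbose",
    ],
)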

container_bundle

container_bundle(
    name = "bundle",
    images = {
        # A set of images to bundle up into a single tarball.
        "gcr.io/foo/bar:bazz": ":app",
        "gcr.io/foo/bar:blah": "//my:sidecar",
        "gcr.io/foo/bar:booo": "@your//random:image",
    }
)

container_pull

In WORKSPACE:

container_pull(
    name = "base",
    registry = "gcr.io",
    repository = "my-project/my-base",
    # 'tag' is also supported, but digest is encouraged for reproducibility.
    digest = "sha256:deadbeef",
)

This can then be referenced in BUILD files as @base//image.

To get the correct digest one can run docker manifest inspect gcr.io/my-project/my-base:tag once experimental docker cli features are enabled.

See here for an example of how to use container_pull with custom docker authentication credentials.

container_push

This target pushes on bazel run :push_foo:

container_push(
   name = "push_foo",
   image = ":foo",
   format = "Docker",
   registry = "gcr.io",
   repository = "my-project/my-image",
   tag = "dev",
)

We also support the docker_push (from docker/docker.bzl) and oci_push (from oci/oci.bzl) aliases, which bake in the format = "..." attribute.

See here for an example of how to use container_push with custom docker authentication credentials.

container_push (Custom client configuration)

If you wish to use container_push with custom docker authentication credentials, add the following to WORKSPACE:

# Download the rules_docker repository
http_archive(
    name = "io_bazel_rules_docker",
    ...
)

# Load the macro that allows you to customize the docker toolchain configuration.
load("@io_bazel_rules_docker//toolchains/docker:toolchain.bzl",
    docker_toolchain_configure="toolchain_configure"
)

docker_toolchain_configure(
  name = "docker_config",
  # Replace this with an absolute path to a directory which has a custom docker
  # client config.json. Note relative paths are not supported.
  # Docker allows you to specify custom authentication credentials
  # in the client configuration JSON file.
  # See https://docs.docker.com/engine/reference/commandline/cli/#configuration-files
  # for more details.
  client_config="/path/to/docker/client/config-dir",
)

In BUILD file:

load("@io_bazel_rules_docker//container:container.bzl", "container_push")

container_push(
   name = "push_foo",
   image = ":foo",
   format = "Docker",
   registry = "gcr.io",
   repository = "my-project/my-image",
   tag = "dev",
)

container_pull (DockerHub)

In WORKSPACE:

container_pull(
    name = "official_ubuntu",
    registry = "index.docker.io",
    repository = "library/ubuntu",
    tag = "14.04",
)

This can then be referenced in BUILD files as @official_ubuntu//image.

container_pull (Quay.io)

In WORKSPACE:

container_pull(
    name = "etcd",
    registry = "quay.io",
    repository = "coreos/etcd",
    tag = "latest",
)

This can then be referenced in BUILD files as @etcd//image.

container_pull (Bintray.io)

In WORKSPACE:

container_pull(
    name = "artifactory",
    registry = "docker.bintray.io",
    repository = "jfrog/artifactory-pro",
)

This can then be referenced in BUILD files as @artifactory//image.

container_pull (Gitlab)

In WORKSPACE:

container_pull(
    name = "gitlab",
    registry = "registry.gitlab.com",
    repository = "username/project/image",
    tag = "tag",
)

This can then be referenced in BUILD files as @gitlab//image.

container_pull (Custom client configuration)

If you specified a docker client directory using the client_config attribute to the docker toolchain configuration described here, you can use a container_pull that uses the authentication credentials from the specified docker client directory as follows:

In WORKSPACE:

load("@io_bazel_rules_docker//toolchains/docker:toolchain.bzl",
    docker_toolchain_configure="toolchain_configure"
)

# Configure the docker toolchain.
docker_toolchain_configure(
  name = "docker_config",
  # Path to the directory which has a custom docker client config.json with
  # authentication credentials for registry.gitlab.com (used in this example).
  client_config="/path/to/docker/client/config",
)

# Load the custom version of container_pull created by the docker toolchain
# configuration.
load("@docker_config//:pull.bzl", authenticated_container_pull="container_pull")

authenticated_container_pull(
    name = "gitlab",
    registry = "registry.gitlab.com",
    repository = "username/project/image",
    tag = "tag",
)

This can then be referenced in BUILD files as @gitlab//image.

NOTE: This should only be used if a custom client_config was set. If you want to use the DOCKER_CONFIG env variable or the default home directory use the standard container_pull rule.

NOTE: This will only work on systems with Python >2.7.6

Python tools

Starting with Bazel 0.25.0 it's possible to configure python toolchains for rules_docker.

To use these features you need to enable the flags in the .bazelrc file at the root of this project.

Use of these features requires a python toolchain to be registered. //py_images/image.bzl:deps and //py3_images/image.bzl:deps register a default python toolchain (//toolchains/python:container_py_toolchain) that defines the path to python tools inside the default container used for these rules.

Known issues

If you are using a custom base for py_image or py3_image builds that has python tools installed in a different location from those defined in //toolchains/python:container_py_toolchain, you will need to create a toolchain that points to these paths and register it before the call to py*_images/image.bzl:deps in your WORKSPACE.

Use of python toolchain features currently supports picking only one version of python for the execution of host tools. rules_docker heavily depends on the execution of python host tools that are only compatible with python 2. Flags in the recommended .bazelrc file force all host tools to use python 2. If your project requires host tools that are only compatible with python 3, you will not be able to use these features at the moment. We expect this issue to be resolved before use of python toolchain features becomes the default.

Updating the distroless base images

The digest references to the distroless base images must be updated over time to pick up bug fixes and security patches. To facilitate this, the files containing the digest references are generated by tools/update_deps.py. To update all of the dependencies, please run (from the root of the repository):

./update_deps.sh

Image references should not be updated individually because these images have shared layers and letting them diverge could result in sub-optimal push and pull performance.

container_pull

container_pull(name, registry, repository, digest, tag)

A repository rule that pulls down a Docker base image in a manner suitable for use with container_image's base attribute.

NOTE: container_pull now supports authentication using custom docker client configuration. See here for details.

NOTE: Set PULLER_TIMEOUT env variable to change the default 600s timeout.

NOTE: Set DOCKER_REPO_CACHE env variable to make the container puller cache downloaded layers at the directory specified as a value to this env variable. The caching feature hasn't been thoroughly tested and may be thread unsafe. If you notice flakiness after enabling it, see the warning below on how to workaround it.

NOTE: container_pull is suspected to have thread safety issues. To ensure multiple container_pull(s) don't execute concurrently, please use the bazel startup flag --loading_phase_threads=1 in your bazel invocation.
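For example (a sketch; the timeout and cache directory values are illustrative):

# Export the variables in the environment of the bazel invocation.
$ export PULLER_TIMEOUT=1200
$ export DOCKER_REPO_CACHE=/path/to/layer/cache
$ bazel build --loading_phase_threads=1 //my/image:helloworld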

Attributes
name

Name, required

Unique name for this repository rule.

registry

Registry Domain; required

The registry from which to pull the base image.

repository

Repository; required

The `repository` of images to pull from.

digest

string; optional

The `digest` of the Docker image to pull from the specified `repository`.

Note: For reproducible builds, use of `digest` is recommended.

tag

string; optional

The `tag` of the Docker image to pull from the specified `repository`. If neither this nor `digest` is specified, this attribute defaults to `latest`. If both are specified, then `tag` is ignored.

Note: For reproducible builds, use of `digest` is recommended.

os

string; optional

When the specified image refers to a multi-platform manifest list, the desired operating system. For example, linux or windows.

os_version

string; optional

When the specified image refers to a multi-platform manifest list, the desired operating system version. For example, 10.0.10586.

os_features

string list; optional

When the specified image refers to a multi-platform manifest list, the desired operating system features. For example, on Windows this might be ["win32k"].

architecture

string; optional

When the specified image refers to a multi-platform manifest list, the desired CPU architecture. For example, amd64 or arm.

cpu_variant

string; optional

When the specified image refers to a multi-platform manifest list, the desired CPU variant. For example, for ARM you may need to use v6 or v7.

platform_features

string list; optional

When the specified image refers to a multi-platform manifest list, the desired features. For example, this may include CPU features such as ["sse4", "aes"].

puller_darwin

label; optional

A Mac 64-bit binary that implements the functionality provided by //container/go/cmd/puller. Visible for testing purposes only.

puller_linux

label; optional

A Linux 64-bit binary that implements the functionality provided by //container/go/cmd/puller. Visible for testing purposes only.

docker_client_config

string; optional

Specifies the directory to look for the docker client configuration. Don't use this directly; instead, specify the docker configuration directory using a custom docker toolchain configuration. Look for the client_config attribute in docker_toolchain_configure here for details. See here for an example of how to use container_pull after configuring the docker toolchain.

When left unspecified (ie not set explicitly or set by the docker toolchain), docker will use the directory specified via the DOCKER_CONFIG environment variable. If DOCKER_CONFIG isn't set, docker falls back to $HOME/.docker.

container_push

container_push(name, image, registry, repository, tag)

An executable rule that pushes a Docker image to a Docker registry on bazel run.

NOTE: container_push now supports authentication using custom docker client configuration. See here for details.

Attributes
name

Name, required

Unique name for this rule.

format

Kind, required

The desired format of the published image. Currently, this supports Docker and OCI.

image

Label; required

The label containing a Docker image to publish.

registry

Registry Domain; required

The registry to which to publish the image.

This field supports stamp variables.

repository

Repository; required

The `repository` of images to which to push.

This field supports stamp variables.

tag

string; optional

The `tag` of the Docker image to push to the specified `repository`. This attribute defaults to `latest`.

This field supports stamp variables.

stamp

Bool; optional

Deprecated: it is now automatically inferred.

If true, enable use of workspace status variables (e.g. BUILD_USER, BUILD_EMBED_LABEL, and custom values set using --workspace_status_command) in tags.

These fields are specified in the tag using Python format syntax, e.g. example.org/{BUILD_USER}/image:{BUILD_EMBED_LABEL}.

container_layer

container_layer(data_path, directory, empty_dirs, files, mode, tars, debs, symlinks, env)

A rule that assembles data into a tarball which can be used in the layers attribute of the container_image rule.

Implicit output targets
name-layer.tar A tarball of the current layer

A data tarball corresponding to the layer.

Attributes
name Name, required

A unique name for this rule.

data_path String, optional

Root path of the files.

The directory structure from the files is preserved inside the Docker image, but a prefix path determined by data_path is removed from the directory structure. This path can be absolute from the workspace root if it starts with a `/`, or relative to the rule's directory. A relative path may start with "./" (or be ".") but cannot go up with "..". By default, the data_path attribute is unused, and all files should have no prefix.

directory String, optional

Target directory.

The directory in which to expand the specified files, defaulting to '/'. Only makes sense accompanying one of files/tars/debs.

empty_dirs List of directories, optional

Directories to add to the layer.

A list of empty directories that should be created in the Docker image.

files List of files, optional

Files to add to the layer.

A list of files that should be included in the Docker image.

mode String, defaults to 0o555

Set the mode of files added by the files attribute.

tars List of files, optional

Tar file to extract in the layer.

A list of tar files whose content should be in the Docker image.

debs List of files, optional

Debian packages to extract.

Deprecated: A list of debian packages that will be extracted in the Docker image. Note that this doesn't actually install the packages. Installation needs apt or apt-get which need to be executed within a running container which container_layer can't do.

symlinks Dictionary, optional

Symlinks to create in the Docker image.

symlinks = { "/path/to/link": "/path/to/target", ... },

env Dictionary from strings to strings, optional

Dictionary from environment variable names to their values when running the Docker image.

env = { "FOO": "bar", ... },

The values of this field support make variables (e.g., $(FOO)) and stamp variables; keys support make variables as well.

compression String, optional

Compression method for image layers. Currently only gzip is supported.

This affects the compressed layer, which is used by the `container_push` rule.

compression = "gzip",

compression_options List of strings, optional

Command-line options for the compression tool. Possible values depend on `compression` method.

This affects the compressed layer, which is used by the `container_push` rule.

compression_options = ["--fast"],
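A minimal sketch of a container_layer consumed by container_image (target names and paths are illustrative; the rules are assumed to be loaded from container.bzl as elsewhere in this document):

load(
    "@io_bazel_rules_docker//container:container.bzl",
    "container_image",
    "container_layer",
)

# A layer holding static assets, expanded under /var/www in the image.
container_layer(
    name = "static_assets",
    directory = "/var/www",
    files = glob(["assets/**"]),
    mode = "0o644",
)

container_image(
    name = "web_image",
    base = "@java_base//image",
    layers = [":static_assets"],
)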

container_image

MOVED: See generated API documentation in docs/image.md

container_bundle

container_bundle(name, images)

A rule that aliases and saves N images into a single docker save tarball.

Toolchains
Attributes
name

Name, required

Unique name for this rule.

images

Map of Tag to image Label; required

A collection of the images to save into the tarball.

The keys are the tags with which to alias the image specified by the value. These tags may contain make variables ($(FOO)), and if stamp is set to true, may also contain workspace status variables ({BAR}).

The values may be the output of container_pull, container_image, or a docker save tarball.

stamp

Bool; optional

Deprecated: it is now automatically inferred.

If true, enable use of workspace status variables (e.g. BUILD_USER, BUILD_EMBED_LABEL, and custom values set using --workspace_status_command) in tags.

These fields are specified in the tag using Python format syntax, e.g. example.org/{BUILD_USER}/image:{BUILD_EMBED_LABEL}.

container_import

container_import(name, config, layers)

A rule that imports a docker image into our intermediate form.

Attributes
name

Name, required

Unique name for this rule.

config

The v2.2 image's json configuration; required

A json configuration file containing the image's metadata.

This appears in `docker save` tarballs as `.json` and is referenced by `manifest.json` in the config field.

layers

The list of layer `.tar`s or `.tar.gz`s; required

The list of layer .tar.gz files in the order they appear in the config.json's layer section, or in the order that they appear in docker save tarballs' manifest.json Layers field (these may or may not be gzipped). Note that the layers should each have a different basename.
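A minimal sketch (file names are illustrative; container_import is assumed to be loaded from container.bzl as elsewhere in this document):

load(
    "@io_bazel_rules_docker//container:container.bzl",
    "container_import",
)

container_import(
    name = "imported_base",
    config = "image_config.json",
    layers = [
        "layer_0.tar.gz",
        "layer_1.tar.gz",
    ],
)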

container_load

container_load(name, file)

A repository rule that examines the contents of a docker save tarball and creates a container_import target. The created target can be referenced as @label_name//image.

Attributes
name

Name, required

Unique name for this rule.

file

The `docker save` tarball file; required

A label targeting a single file which is a compressed or uncompressed tar, as obtained through `docker save IMAGE`.
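A minimal sketch in WORKSPACE (the tarball label is illustrative; container_load is assumed to be loaded from container.bzl as elsewhere in this document):

load(
    "@io_bazel_rules_docker//container:container.bzl",
    "container_load",
)

container_load(
    name = "local_base",
    file = "//images:saved_image.tar",
)

The imported image can then be referenced in BUILD files as @local_base//image.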

Adopters

Here's a (non-exhaustive) list of companies that use rules_docker in production. Don't see yours? You can add it in a PR!
