Google Cloud Dataflow provides a simple, powerful programming model for building both batch and streaming parallel data processing pipelines. This repository hosts the open-sourced Cloud Dataflow SDK for Java, which can be used to run pipelines against the Google Cloud Dataflow Service.
The contents of this repository are also available as released artifacts in the Maven Central Repository. You can bypass this GitHub repository and depend directly on the released artifacts from Maven Central by adding the following dependency to your project's `pom.xml`, whether you build with Apache Maven directly or through an IDE like Eclipse:
```xml
<dependency>
  <groupId>com.google.cloud.dataflow</groupId>
  <artifactId>google-cloud-dataflow-java-sdk-all</artifactId>
  <version>version_number</version>
</dependency>
```
Please replace `version_number` with one of the supported versions from our Release Notes.
The SDK is publicly available as a Beta release, and might be changed in backward-incompatible ways.
The Google Cloud Dataflow Service is also publicly available in Beta under the following conditions:
- Your use of Google Cloud Dataflow is governed by the Google Cloud Platform Terms of Service. The foregoing notwithstanding, Google Cloud Dataflow is currently in Beta release and might be changed in backward-incompatible ways. It is not subject to any SLA or deprecation policy and is not recommended for production use.
The key concepts in this programming model are:
- `PCollection`: represents a collection of data, which could be bounded or unbounded in size.
- `PTransform`: represents a computation that transforms input `PCollection`s into output `PCollection`s.
- `Pipeline`: manages a directed acyclic graph of `PTransform`s and `PCollection`s that is ready for execution.
- `PipelineRunner`: specifies where and how the pipeline should execute.
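To make these concrete, here is a minimal sketch of a pipeline that exercises all four concepts. The class name and sample data are our own invention for illustration; the imports are the SDK's core classes, and with no runner specified the SDK defaults to local execution:

```java
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.Count;
import com.google.cloud.dataflow.sdk.transforms.Create;
import com.google.cloud.dataflow.sdk.values.KV;
import com.google.cloud.dataflow.sdk.values.PCollection;

public class MinimalExample {
  public static void main(String[] args) {
    // PipelineOptions select the PipelineRunner, i.e. where and how to execute.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();

    // A Pipeline manages the directed acyclic graph of PTransforms and PCollections.
    Pipeline p = Pipeline.create(options);

    // Create.of yields a bounded PCollection from in-memory data.
    PCollection<String> words = p.apply(Create.of("hello", "world", "hello"));

    // Count.perElement is a PTransform from PCollection<String>
    // to PCollection<KV<String, Long>>.
    PCollection<KV<String, Long>> counts = words.apply(Count.<String>perElement());

    p.run();
  }
}
```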
We provide three `PipelineRunner`s:

- The `DirectPipelineRunner` runs the pipeline on your local machine.
- The `DataflowPipelineRunner` submits the pipeline to the Dataflow Service, where it runs using managed resources in the Google Cloud Platform (GCP).
- The `BlockingDataflowPipelineRunner` submits the pipeline to the Dataflow Service via the `DataflowPipelineRunner` and then prints messages about the job status until execution is complete.
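The runner can be chosen either with the `--runner` command-line flag or programmatically through the options. The following sketch assumes the latter; the project ID and staging bucket shown are placeholders you would replace with your own:

```java
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner;

public class RunnerSelectionExample {
  public static void main(String[] args) {
    DataflowPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
    options.setRunner(BlockingDataflowPipelineRunner.class);
    // Placeholder values; supply your own GCP project and Cloud Storage bucket.
    options.setProject("my-project-id");
    options.setStagingLocation("gs://my-bucket/staging");

    Pipeline p = Pipeline.create(options);
    // ... apply transforms here ...
    p.run();  // prints job status and returns once execution is complete
  }
}
```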
The SDK is built to be extensible and support additional execution environments beyond local execution and the Google Cloud Dataflow Service. In partnership with Cloudera, you can run Dataflow pipelines on an Apache Spark backend using the `SparkPipelineRunner`. Additionally, you can run Dataflow pipelines on an Apache Flink backend using the `FlinkPipelineRunner`.
This repository consists of three parts:
- The `SDK` module provides a set of basic Java APIs to program against.
- The `Examples` module provides a few samples to get started. We recommend starting with the `WordCount` example.
- The `Contrib` directory hosts community-contributed Dataflow modules.
The following command will build both modules and install them in your local Maven repository:
```
mvn clean install
```
You can speed up the build and install process by using the following options:
- To skip execution of the unit tests, run:

  ```
  mvn install -DskipTests
  ```

- While iterating on a specific module, use the following command to compile and reinstall it. For example, to reinstall the `examples` module, run:

  ```
  mvn install -pl examples
  ```

  Be careful, however, as this command will use the most recently installed SDK from the local repository (or Maven Central) even if you have changed it locally.
If you are using the Eclipse integrated development environment (IDE), please additionally review our Eclipse integration instructions.
After building and installing, you can execute the `WordCount` and other example pipelines by following the instructions in this README.
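As a quick illustration, a locally built `WordCount` can typically be launched from the `examples` module with the Maven exec plugin. The flags shown here are assumptions on our part (that README and the example's source define the authoritative names), and the output path is a placeholder:

```
mvn compile exec:java -pl examples \
    -Dexec.mainClass=com.google.cloud.dataflow.examples.WordCount \
    -Dexec.args="--runner=DirectPipelineRunner --output=/tmp/wordcount"
```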
We welcome all usage-related questions on Stack Overflow tagged with `google-cloud-dataflow`.
Please use the issue tracker on GitHub to report any bugs, comments, or questions regarding SDK development.