Google Cloud Dataflow provides a simple, powerful programming model for building both batch and streaming parallel data processing pipelines.
The Cloud Dataflow SDK is used to access the Google Cloud Dataflow service, which is currently in Alpha and restricted to whitelisted users.
The SDK is publicly available and can be used for local execution by anyone. Note, however, that the SDK is also an Alpha release and may change significantly over time. The SDK is built to be extensible and support additional execution environments ("runners") beyond local execution and the Google Cloud Dataflow service. As the product matures, we look forward to working with you to improve Cloud Dataflow.
The key concepts in this programming model are:

- `PCollection`: represents a collection of data, which could be bounded or unbounded in size.
- `PTransform`: represents a computation that transforms input `PCollection`s into output `PCollection`s.
- `Pipeline`: manages a directed acyclic graph of `PTransform`s and `PCollection`s, which is ready for execution.
- `PipelineRunner`: specifies where and how the pipeline should execute.
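To make these concepts concrete, here is a minimal, illustrative sketch (not taken from this repository's examples) of a small pipeline that counts occurrences of each distinct line in a text file; the input and output paths are hypothetical:

```java
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.Count;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import com.google.cloud.dataflow.sdk.values.KV;
import com.google.cloud.dataflow.sdk.values.PCollection;

public class MinimalPipeline {
  public static void main(String[] args) {
    // The Pipeline manages the graph of PTransforms and PCollections; the PipelineRunner
    // is taken from the options (DirectPipelineRunner by default, i.e. local execution).
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Each apply() consumes a PCollection and produces a new PCollection.
    PCollection<String> lines = p.apply(TextIO.Read.from("/tmp/input.txt"));  // hypothetical path
    PCollection<KV<String, Long>> counts = lines.apply(Count.<String>perElement());
    PCollection<String> formatted =
        counts.apply(ParDo.of(new DoFn<KV<String, Long>, String>() {
          @Override
          public void processElement(ProcessContext c) {
            c.output(c.element().getKey() + ": " + c.element().getValue());
          }
        }));
    formatted.apply(TextIO.Write.to("/tmp/output"));                          // hypothetical path

    p.run();  // hands the graph to the configured PipelineRunner for execution
  }
}
```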
Currently there are three `PipelineRunner`s:

- The `DirectPipelineRunner` runs the pipeline on your local machine.
- The `DataflowPipelineRunner` submits the pipeline to the Dataflow Service, where it runs using managed resources in the Google Cloud Platform.
- The `BlockingDataflowPipelineRunner` submits the pipeline to the Dataflow Service via the `DataflowPipelineRunner` and then prints messages about the job status until execution is complete.
The Dataflow Service is currently in the Alpha phase of development and access is limited to whitelisted users.
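As a rough sketch (assuming the SDK's standard options classes; the project ID and bucket below are hypothetical placeholders), a runner can also be selected programmatically rather than via command-line flags:

```java
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner;

public class RunnerSelection {
  public static void main(String[] args) {
    // Target the Dataflow Service instead of local execution.
    DataflowPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
    options.setRunner(BlockingDataflowPipelineRunner.class);  // waits and prints job status
    options.setProject("my-gcp-project");                     // hypothetical project ID
    options.setStagingLocation("gs://my-bucket/staging");     // hypothetical GCS bucket

    Pipeline p = Pipeline.create(options);
    // ... apply transforms here, then call p.run() to submit the job ...
  }
}
```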
This repository consists of two modules:

- The `SDK` module provides a set of basic Java APIs to program against.
- The `Examples` module provides a few samples to get started. We recommend starting with the `WordCount` example.
The following command will build both modules and install them in your local Maven repository:

    mvn clean install
You can speed up the build and install process by using the following options:

- To skip execution of the unit tests, run:

      mvn install -DskipTests

- While iterating on a specific module, use the following command to compile and reinstall it. For example, to reinstall the `examples` module, run:

      mvn install -pl examples

  Be careful, however, as this command will use the most recently installed SDK from the local repository (or Maven Central) even if you have changed it locally.

- To run Maven using multiple threads, run:

      mvn -T 4 install
After building and installing, you can execute the `WordCount` and other example pipelines locally or in the cloud using Maven with command-line options.
To execute the WordCount pipeline locally (using the default `DirectPipelineRunner`) and write output to a local or Google Cloud Storage (GCS) location, use the following command-line syntax:

    mvn exec:java -pl examples \
    -Dexec.mainClass=com.google.cloud.dataflow.examples.WordCount \
    -Dexec.args="--output=[<LOCAL FILE> | gs://<OUTPUT PREFIX>]"
If you have been whitelisted for Alpha access to the Dataflow Service and followed the developer setup steps, you can use the `BlockingDataflowPipelineRunner` to execute the `WordCount` example in the Google Cloud Platform (GCP). In this case, you specify your project name, the pipeline runner, the GCS staging location (in the form `gs://bucket/staging-directory`), and the GCS output location (in the form `gs://bucket/filename_prefix`).
    mvn exec:java -pl examples \
    -Dexec.mainClass=com.google.cloud.dataflow.examples.WordCount \
    -Dexec.args="--project=<YOUR GCP PROJECT NAME> --runner=BlockingDataflowPipelineRunner \
    --stagingLocation=<GCS STAGING LOCATION> --output=<GCS OUTPUT PREFIX>"
Refer to Google Cloud Platform for general instructions on getting started with GCP.
Other examples can be run similarly by replacing the `WordCount` class name with `BigQueryTornadoes`, `DatastoreWordCount`, `TfIdf`, `TopWikipediaSessions`, etc., and adjusting the runtime options under the `-Dexec.args` parameter, as specified in the example itself.