Using the Google Cloud Dataflow Runner
The Google Cloud Dataflow Runner uses the Cloud Dataflow managed service. When you run your pipeline with the Cloud Dataflow service, the runner uploads your executable code and dependencies to a Google Cloud Storage bucket and creates a Cloud Dataflow job, which executes your pipeline on managed resources in Google Cloud Platform.
The Cloud Dataflow Runner and service are suitable for large-scale, continuous jobs, and provide:
- a fully managed service
- autoscaling of the number of workers throughout the lifetime of the job
- dynamic work rebalancing
The Beam Capability Matrix documents the supported capabilities of the Cloud Dataflow Runner.
Cloud Dataflow Runner prerequisites and setup
To use the Cloud Dataflow Runner, you must complete the following setup:
1. Select or create a Google Cloud Platform Console project.
2. Enable billing for your project.
3. Enable the required Google Cloud APIs: Cloud Dataflow, Compute Engine, Stackdriver Logging, Cloud Storage, Cloud Storage JSON, and Cloud Resource Manager. You may need to enable additional APIs (such as BigQuery, Cloud Pub/Sub, or Cloud Datastore) if you use them in your pipeline code.
4. Install the Google Cloud SDK.
5. Create a Cloud Storage bucket:
   - In the Google Cloud Platform Console, go to the Cloud Storage browser.
   - Click Create bucket.
   - In the Create bucket dialog, specify the following attributes:
     - Name: A unique bucket name. Do not include sensitive information in the bucket name, as the bucket namespace is global and publicly visible.
     - Storage class: Multi-Regional
     - Location: Choose your desired location
   - Click Create.
For more information, see the Before you begin section of the Cloud Dataflow quickstarts.
Specify your dependency
When using Java, you must specify your dependency on the Cloud Dataflow Runner in your pom.xml:
```xml
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-google-cloud-dataflow-java</artifactId>
  <version>2.1.0</version>
  <scope>runtime</scope>
</dependency>
```
This section is not applicable to the Beam SDK for Python.
Before running your pipeline, you must authenticate with Google Cloud Platform. Run the following command to get Application Default Credentials:
```sh
gcloud auth application-default login
```
Pipeline options for the Cloud Dataflow Runner
When executing your pipeline with the Cloud Dataflow Runner, whether from Java or Python, consider these common pipeline options. Option names are shown in the Java (camelCase) spelling; the Beam SDK for Python uses snake_case equivalents such as `temp_location`, and options that apply to the Python SDK only are noted in the table.
| Field | Description | Default Value |
|-------|-------------|---------------|
| `runner` | The pipeline runner to use. This option allows you to determine the pipeline runner at runtime. | Set to `dataflow` or `DataflowRunner` to run on the Cloud Dataflow service. |
| `project` | The project ID for your Google Cloud Project. | If not set, defaults to the default project in the current environment. The default project is set via `gcloud`. |
| `streaming` | Whether streaming mode is enabled or disabled; set to `true` if running pipelines with unbounded sources. | `false` |
| `tempLocation` | Path for temporary files. Must be a valid Google Cloud Storage URL that begins with `gs://`. | No default value. |
| `gcpTempLocation` | Cloud Storage bucket path for temporary files. Must be a valid Cloud Storage URL that begins with `gs://`. | If not set, defaults to the value of `tempLocation`. |
| `stagingLocation` | Optional. Cloud Storage bucket path for staging your binary and any temporary files. Must be a valid Cloud Storage URL that begins with `gs://`. | If not set, defaults to a staging directory within `gcpTempLocation`. |
| `save_main_session` (Python only) | Save the main session state so that pickled functions and classes defined in `__main__` (for example, in an interactive session) can be unpickled on the workers. | `false` |
| `sdk_location` (Python only) | Override the default location from where the Beam SDK is downloaded. This value can be a URL, a Cloud Storage path, or a local path to an SDK tarball. Workflow submissions will download or copy the SDK tarball from this location. If set to the string `default`, a standard SDK location is used; if empty, no SDK is copied. | `default` |
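As a minimal sketch in the Java SDK, these options can be set programmatically on a `DataflowPipelineOptions` object or passed as command-line flags; the project ID and bucket paths below are placeholders you would replace with your own:

```java
import org.apache.beam.runners.dataflow.DataflowRunner;
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class DataflowOptionsExample {
  public static void main(String[] args) {
    // Parse any --key=value flags from the command line, then view them as
    // DataflowPipelineOptions so the Dataflow-specific setters are available.
    DataflowPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(DataflowPipelineOptions.class);

    // The same values can instead be supplied as flags, for example:
    // --runner=DataflowRunner --project=my-project-id --tempLocation=gs://my-bucket/temp
    options.setRunner(DataflowRunner.class);
    options.setProject("my-project-id");                   // placeholder project ID
    options.setTempLocation("gs://my-bucket/temp");        // placeholder bucket path
    options.setStagingLocation("gs://my-bucket/staging");  // placeholder bucket path

    Pipeline pipeline = Pipeline.create(options);
    // ... apply your transforms here ...
  }
}
```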
Additional information and caveats
Monitoring your job
While your pipeline executes, you can monitor the job’s progress, view details on execution, and receive updates on the pipeline’s results by using the Dataflow Monitoring Interface or the Dataflow Command-line Interface.
To block until your job completes, call `waitUntilFinish` (Java) or `wait_until_finish` (Python) on the `PipelineResult` returned from `pipeline.run()`. The Cloud Dataflow Runner prints job status updates and console messages while it waits. While the result is connected to the active job, note that pressing Ctrl+C from the command line does not cancel your job. To cancel the job, you can use the Dataflow Monitoring Interface or the Dataflow Command-line Interface.
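For example, a minimal Java sketch of this pattern, continuing from the options example above (these lines would go inside the same `main` method, with `org.apache.beam.sdk.PipelineResult` imported):

```java
// Run the pipeline and keep a handle to the submitted Dataflow job.
PipelineResult result = pipeline.run();

// Block until the job reaches a terminal state; the runner prints job
// status updates and console messages while it waits.
result.waitUntilFinish();

System.out.println("Final job state: " + result.getState());
```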
If your pipeline uses an unbounded data source or sink, you must set the `streaming` option to `true`.
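In the Java SDK, for instance, this can be set on the same options object used in the sketch above, or passed as the `--streaming=true` flag:

```java
// Enable streaming execution for pipelines that read from unbounded
// sources such as Cloud Pub/Sub.
options.setStreaming(true);
```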
The Beam SDK for Python does not currently support streaming pipelines.