Overview

The Apache Flink Runner can be used to execute Beam pipelines using Apache Flink. For execution you can choose between a cluster execution mode (e.g. Yarn/Kubernetes/Mesos) or a local embedded execution mode which is useful for testing pipelines.

The Flink Runner and Flink are suitable for large scale, continuous jobs, and provide a streaming-first runtime that supports both batch and stream processing, high throughput with low event latency, and fault tolerance with exactly-once processing guarantees.

Using the Apache Flink Runner

It is important to understand that the Flink Runner comes in two flavors:

  1. The original classic Runner which supports only Java (and other JVM-based languages)
  2. The newer portable Runner which supports Java/Python/Go

You may ask why there are two Runners.

Beam and its Runners originally only supported JVM-based languages (e.g. Java/Scala/Kotlin). Python and Go SDKs were added later on. The architecture of the Runners had to be changed significantly to support executing pipelines written in other languages.

If your applications only use Java, then you should currently go with the classic Runner. Eventually, the portable Runner will replace the classic Runner because it contains the generalized framework for executing Java, Python, Go, and more languages in the future.

If you want to run Python pipelines with Beam on Flink, you should use the portable Runner. For more information on portability, please visit the Portability page.

Consequently, this guide is split into two parts to document the classic and the portable functionality of the Flink Runner. Please use the switcher below to select the appropriate Runner:

Prerequisites and Setup

If you want to use the local execution mode with the Flink Runner you don’t have to complete any cluster setup. You can simply run your Beam pipeline. Be sure to set the Runner to FlinkRunner (for Java) or PortableRunner (for Python/Go).

To use the Flink Runner for executing on a cluster, you have to set up a Flink cluster by following the Flink Setup Quickstart.

Version Compatibility

The Flink cluster version has to match the minor version used by the FlinkRunner. The minor version is the first two numbers in the version string, e.g. in 1.7.0 the minor version is 1.7.

We try to track the latest version of Apache Flink at the time of the Beam release. A Flink version is supported by Beam for as long as it is supported by the Flink community. The Flink community typically supports the last two minor versions. When support for a Flink version is dropped, it may be deprecated and removed from Beam as well, with the exception of Beam LTS releases. LTS releases continue to receive bug fixes for as long as the LTS support period lasts.

To find out which version of Flink is compatible with Beam please see the table below:

Beam Version      Flink Version           Artifact Id
2.17.0 - 2.18.0   1.9.x                   beam-runners-flink-1.9
                  1.8.x                   beam-runners-flink-1.8
                  1.7.x                   beam-runners-flink-1.7
2.13.0 - 2.16.0   1.8.x                   beam-runners-flink-1.8
                  1.7.x                   beam-runners-flink-1.7
                  1.6.x                   beam-runners-flink-1.6
                  1.5.x                   beam-runners-flink_2.11
2.10.0 - 2.16.0   1.7.x                   beam-runners-flink-1.7
                  1.6.x                   beam-runners-flink-1.6
                  1.5.x                   beam-runners-flink_2.11
2.6.0 - 2.9.0     1.5.x                   beam-runners-flink_2.11
2.3.0 - 2.5.0     1.4.x with Scala 2.11   beam-runners-flink_2.11
2.1.x - 2.2.0     1.3.x with Scala 2.10   beam-runners-flink_2.10
2.0.0             1.2.x with Scala 2.10   beam-runners-flink_2.10

For retrieving the right Flink version, see the Flink downloads page.

For more information, the Flink Documentation can be helpful.

Dependencies

You must specify your dependency on the Flink Runner in your pom.xml or build.gradle. Use the Beam version and the artifact id from the above table. For example:

<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-flink-1.9</artifactId>
  <version>2.18.0</version>
</dependency>

You will need to have Docker installed in your execution environment. To develop Apache Beam pipelines in Python, install the Apache Beam Python SDK:

pip install apache_beam

Please refer to the Python documentation on how to create a Python pipeline.

To execute a pipeline on a Flink cluster you need to package your program along with all dependencies in a so-called fat jar. How you do this depends on your build system, but if you follow along with the Beam Quickstart, this is the command that you have to run:

$ mvn package -Pflink-runner

Look for the output JAR of this command in the target folder.

The Beam Quickstart Maven project is set up to use the Maven Shade plugin to create a fat jar, and the -Pflink-runner argument makes sure to include the dependency on the Flink Runner.

To run the pipeline, the easiest option is to use the flink command, which is part of Flink:

$ bin/flink run -c org.apache.beam.examples.WordCount /path/to/your.jar \
    --runner=FlinkRunner --other-parameters

Alternatively, you can use Maven’s exec command. For example, to execute the WordCount example:

mvn exec:java -Dexec.mainClass=org.apache.beam.examples.WordCount \
    -Pflink-runner \
    -Dexec.args="--runner=FlinkRunner \
      --inputFile=/path/to/pom.xml \
      --output=/path/to/counts \
      --flinkMaster=<flink master url> \
      --filesToStage=target/word-count-beam-bundled-0.1.jar"

If you have a Flink JobManager running on your local machine you can provide localhost:8081 for flinkMaster. Otherwise an embedded Flink cluster will be started for the job.

Starting with Beam 2.18.0, pre-built Docker images are available at Docker Hub.

JobService: Flink 1.7, Flink 1.8, Flink 1.9.

Beam SDK: Python 2.7, Python 3.5, Python 3.6, Python 3.7.

To run a pipeline on an embedded Flink cluster:

1. Start the JobService endpoint: docker run --net=host apachebeam/flink1.9_job_server:latest

The JobService is the central instance to which you submit your Beam pipeline. It creates a Flink job from the pipeline and executes it.

2. Submit the Python pipeline to the above endpoint by using the PortableRunner, with job_endpoint set to localhost:8099 (the default address of the JobService) and environment_type set to LOOPBACK. For example:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",
    "--environment_type=LOOPBACK"
])
with beam.Pipeline(options=options) as p:
    ...

To run on a separate Flink cluster:

1. Start a Flink cluster, which exposes the REST interface on localhost:8081 by default.

2. Start the JobService with the Flink REST endpoint: docker run --net=host apachebeam/flink1.9_job_server:latest --flink-master=localhost:8081.

3. Submit the pipeline as above. Note however that environment_type=LOOPBACK is only intended for local testing. See here for details.
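When submitting to a remote cluster you will typically use a containerized SDK environment instead of LOOPBACK. The following is only a minimal sketch, assuming the JobService from step 2 is reachable at localhost:8099 and Docker is available on the Flink workers (DOCKER is the default environment type for the portable Runner):

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Assumes the JobService started in step 2 and Docker on the task managers.
options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",
    "--environment_type=DOCKER"
])
with beam.Pipeline(options=options) as p:
    ...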

Steps 2 and 3 can be automated in Python by using the FlinkRunner, plus the optional flink_version and flink_master options, e.g.:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=FlinkRunner",
    "--flink_version=1.9",
    "--flink_master=localhost:8081",
    "--environment_type=LOOPBACK"
])
with beam.Pipeline(options=options) as p:
    ...

Additional information and caveats

Monitoring your job

You can monitor a running Flink job using the Flink JobManager Dashboard or its REST interface. By default, this is available at port 8081 of the JobManager node. If you have a Flink installation on your local machine, that would be http://localhost:8081. Note: When you use the [local] mode, an embedded Flink cluster is started, which does not make a dashboard available.
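The dashboard’s data can also be read programmatically from Flink’s monitoring REST API. The following is only an illustrative sketch, assuming a JobManager at localhost:8081 and Flink’s /jobs/overview endpoint:

import json
from urllib.request import urlopen

# Fetch the list of jobs known to the JobManager and print their status.
with urlopen("http://localhost:8081/jobs/overview") as response:
    overview = json.load(response)

for job in overview.get("jobs", []):
    print(job["jid"], job["name"], job["state"])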

Streaming Execution

If your pipeline uses an unbounded data source or sink, the Flink Runner will automatically switch to streaming mode. You can enforce streaming mode by using the --streaming flag.

Note: The Runner will print a warning message when unbounded sources are used and checkpointing is not enabled. Many sources like PubSubIO rely on their checkpoints to be acknowledged, which can only be done when checkpointing is enabled for the FlinkRunner. To enable checkpointing, please set checkpointingInterval (checkpointing_interval for Python) to the desired checkpointing interval in milliseconds.
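For example, a Python pipeline could enforce streaming mode and enable checkpointing through pipeline options. This is only a minimal sketch; the 60000 ms interval is an illustration, not a recommendation:

from apache_beam.options.pipeline_options import PipelineOptions

# Enforce streaming mode and checkpoint every 60 seconds so that sources
# such as PubSubIO can get their checkpoints acknowledged.
options = PipelineOptions([
    "--runner=FlinkRunner",
    "--flink_master=localhost:8081",
    "--streaming",
    "--checkpointing_interval=60000"
])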

When executing your pipeline with the Flink Runner, you can set the pipeline options listed below.

The following list of Flink-specific pipeline options is generated automatically from the FlinkPipelineOptions reference class. Each option is shown with its Java name (camelCase) followed by its Python name (snake_case):

allowNonRestoredState / allow_non_restored_state: Flag indicating whether non-restored state is allowed if the savepoint contains state for an operator that is no longer part of the pipeline. Default: false
autoBalanceWriteFilesShardingEnabled / auto_balance_write_files_sharding_enabled: Flag indicating whether auto-balanced sharding for the WriteFiles transform should be enabled. This can be useful in streaming use cases where a pipeline needs to write many events into files, typically divided into N shards. By default, Flink may assign some workers more shards than others, which leads to an imbalance in processing backlog and memory usage. Enabling this feature spreads shards evenly among the available workers, improving throughput and memory usage stability. Default: false
autoWatermarkInterval / auto_watermark_interval: The interval in milliseconds for automatic watermark emission.
checkpointTimeoutMillis / checkpoint_timeout_millis: The maximum time in milliseconds that a checkpoint may take before being discarded. Default: -1
checkpointingInterval / checkpointing_interval: The interval in milliseconds at which to trigger checkpoints of the running pipeline. Default: -1 (no checkpointing)
checkpointingMode / checkpointing_mode: The checkpointing mode that defines the consistency guarantee. Default: EXACTLY_ONCE
disableMetrics / disable_metrics: Disable Beam metrics in the Flink Runner. Default: false
executionModeForBatch / execution_mode_for_batch: Flink mode for data exchange of batch pipelines. See org.apache.flink.api.common.ExecutionMode. Set this to BATCH_FORCED if pipelines get blocked, see https://issues.apache.org/jira/browse/FLINK-10672. Default: PIPELINED
executionRetryDelay / execution_retry_delay: Sets the delay in milliseconds between executions. A value of -1 indicates that the default value should be used. Default: -1
externalizedCheckpointsEnabled / externalized_checkpoints_enabled: Enables or disables externalized checkpoints. Works in conjunction with checkpointingInterval. Default: false
failOnCheckpointingErrors / fail_on_checkpointing_errors: Sets the expected behaviour for tasks that encounter an error in their checkpointing procedure. If set to true, the task fails on a checkpointing error. If set to false, the task only declines the checkpoint and continues running. Default: true
filesToStage / files_to_stage: Jar files to send to all workers and put on the classpath. The default value is all files from the classpath.
flinkMaster / flink_master: Address of the Flink Master where the pipeline should be executed. Can either be of the form "host:port" or one of the special values [local], [collection] or [auto]. Default: [auto]
latencyTrackingInterval / latency_tracking_interval: Interval in milliseconds for sending latency tracking marks from the sources to the sinks. An interval value <= 0 disables the feature. Default: 0
maxBundleSize / max_bundle_size: The maximum number of elements in a bundle. Default: 1000
maxBundleTimeMills / max_bundle_time_mills: The maximum time to wait before finalising a bundle (in milliseconds). Default: 1000
maxParallelism / max_parallelism: The pipeline-wide maximum degree of parallelism to be used. The maximum parallelism specifies the upper limit for dynamic scaling and the number of key groups used for partitioned state. Default: -1
minPauseBetweenCheckpoints / min_pause_between_checkpoints: The minimal pause in milliseconds before the next checkpoint is triggered. Default: -1
numberOfExecutionRetries / number_of_execution_retries: Sets the number of times that failed tasks are re-executed. A value of zero effectively disables fault tolerance. A value of -1 indicates that the system default value (as defined in the configuration) should be used. Default: -1
objectReuse / object_reuse: Sets the behavior of reusing objects. Default: false
parallelism: The degree of parallelism to be used when distributing operations onto workers. If the parallelism is not set, the configured Flink default is used, or 1 if none can be found. Default: -1
retainExternalizedCheckpointsOnCancellation / retain_externalized_checkpoints_on_cancellation: Sets the behavior of externalized checkpoints on cancellation. Default: false
savepointPath / savepoint_path: Savepoint restore path. If specified, restores the streaming pipeline from the provided path.
shutdownSourcesOnFinalWatermark / shutdown_sources_on_final_watermark: If set, shut down sources when their watermark reaches +Inf. Default: false
stateBackendFactory / state_backend_factory: Sets the state backend factory to use in streaming mode. Defaults to the Flink cluster's state.backend configuration.
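As an illustration, a few of these options could be set on a Python pipeline as follows (the values are examples only, not recommendations):

from apache_beam.options.pipeline_options import PipelineOptions

# Example values only; tune parallelism and bundle sizing for your workload.
options = PipelineOptions([
    "--runner=FlinkRunner",
    "--flink_master=localhost:8081",
    "--parallelism=4",
    "--max_bundle_size=10000"
])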

For general Beam pipeline options see the PipelineOptions reference.

Capability

The Beam Capability Matrix documents the capabilities of the classic Flink Runner.

The Portable Capability Matrix documents the capabilities of the portable Flink Runner.