Using the Apache Flink Runner

The old Flink Runner will eventually be replaced by the Portable Runner, which enables running pipelines in languages other than Java. Please see the Portability page for the latest state.

The Apache Flink Runner can be used to execute Beam pipelines using Apache Flink. When using the Flink Runner you will create a jar file containing your job that can be executed on a regular Flink cluster. It’s also possible to execute a Beam pipeline using Flink’s local execution mode without setting up a cluster. This is helpful for development and debugging of your pipeline.

The Flink Runner and Flink are suitable for large scale, continuous jobs, and provide:

* A streaming-first runtime that supports both batch processing and data streaming programs
* A runtime that supports very high throughput and low event latency at the same time
* Fault-tolerance with exactly-once processing guarantees
* Natural back-pressure in streaming programs
* Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms
* Integration with YARN and other components of the Apache Hadoop ecosystem

The Beam Capability Matrix documents the supported capabilities of the Flink Runner.

If you want to use the local execution mode with the Flink Runner, you don't have to complete any setup.

To use the Flink Runner for executing on a cluster, you have to set up a Flink cluster by following the Flink setup quickstart.

Version Compatibility

The Flink cluster version has to match the version used by the FlinkRunner. To find out which Flink version is compatible with your Beam version, see the table below:

| Beam version | Flink version |
| ------------ | ------------- |
| 2.8.0, 2.7.0, 2.6.0 | 1.5.x |
| 2.5.0, 2.4.0, 2.3.0 | 1.4.x |
| 2.2.0, 2.1.x | 1.3.x with Scala 2.10 |
| 2.0.0 | 1.2.x with Scala 2.10 |

To download the right Flink version, see the Flink downloads page.

For more information, the Flink Documentation can be helpful.

Specify your dependency

When using Java, you must specify your dependency on the Flink Runner in your pom.xml.
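For example, the runner dependency might look like the following sketch (the artifact id and Scala version suffix vary between Beam releases, so check the artifact listing for your Beam version before copying this):

```xml
<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-flink_2.11</artifactId>
  <version>2.8.0</version>
  <scope>runtime</scope>
</dependency>
```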


This section is not applicable to the Beam SDK for Python.

For executing a pipeline on a Flink cluster you need to package your program along with all dependencies in a so-called fat jar. How you do this depends on your build system, but if you follow along with the Beam Quickstart this is the command that you have to run:

$ mvn package -Pflink-runner

The Beam Quickstart Maven project is set up to use the Maven Shade plugin to create a fat jar, and the -Pflink-runner argument makes sure to include the dependency on the Flink Runner.
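A minimal Shade plugin configuration might look like the following sketch (the Quickstart pom contains a more complete version; the ServicesResourceTransformer is the important part, since it merges the META-INF/services files that Beam uses for runner registration):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- Merge META-INF/services files from all dependencies into the fat jar -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```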

To actually run the pipeline, you would use this command:

$ mvn exec:java -Dexec.mainClass=org.apache.beam.examples.WordCount \
    -Pflink-runner \
    -Dexec.args="--runner=FlinkRunner \
      --inputFile=/path/to/pom.xml \
      --output=/path/to/counts \
      --flinkMaster=<flink master url>"

If you have a Flink JobManager running on your local machine, you can set flinkMaster to localhost:8081.

When executing your pipeline with the Flink Runner, you can set these pipeline options.

| Field | Description | Default Value |
| ----- | ----------- | ------------- |
| runner | The pipeline runner to use. This option allows you to determine the pipeline runner at runtime. | Set to FlinkRunner to run using Flink. |
| streaming | Whether streaming mode is enabled or disabled; true if enabled. Set to true if running pipelines with unbounded PCollections. | false |
| flinkMaster | The url of the Flink JobManager on which to execute pipelines. This can either be the address of a cluster JobManager, in the form "host:port", or one of the special strings "[local]" or "[auto]". "[local]" will start a local Flink cluster in the JVM, while "[auto]" will let the system decide where to execute the pipeline based on the environment. | [auto] |
| filesToStage | Jar files to send to all workers and put on the classpath. Here you have to put the fat jar that contains your program along with all dependencies. | empty |
| parallelism | The degree of parallelism to be used when distributing operations onto workers. | 1 |
| checkpointingInterval | The interval between consecutive checkpoints (i.e. snapshots of the current pipeline state used for fault tolerance). | -1L, i.e. disabled |
| numberOfExecutionRetries | Sets the number of times that failed tasks are re-executed. A value of 0 effectively disables fault tolerance. A value of -1 indicates that the system default value (as defined in the configuration) should be used. | -1 |
| executionRetryDelay | Sets the delay between executions. A value of -1 indicates that the default value should be used. | -1 |
| stateBackend | Sets the state backend to use in streaming mode. | empty, i.e. read from Flink config |
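These options can also be set programmatically through the FlinkPipelineOptions interface instead of command-line flags. The following is a sketch (the class name FlinkOptionsExample and the specific option values are illustrative; the pipeline's transforms are elided):

```java
import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class FlinkOptionsExample {
  public static void main(String[] args) {
    // Parse any command-line flags, then view the options through the
    // Flink-specific interface to access the fields from the table above.
    FlinkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(FlinkPipelineOptions.class);

    options.setRunner(FlinkRunner.class);
    options.setFlinkMaster("[local]");          // start a local Flink cluster in this JVM
    options.setParallelism(2);                  // distribute operations across 2 parallel workers
    options.setCheckpointingInterval(60_000L);  // checkpoint every 60 seconds for fault tolerance

    Pipeline p = Pipeline.create(options);
    // ... add transforms here ...
    p.run().waitUntilFinish();
  }
}
```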

See the reference documentation for the FlinkPipelineOptions interface (and its subinterfaces) for the complete list of pipeline configuration options.

Additional information and caveats

Monitoring your job

You can monitor a running Flink job using the Flink JobManager Dashboard. By default, this is available at port 8081 of the JobManager node. If you have a Flink installation on your local machine that would be http://localhost:8081.

Streaming Execution

If your pipeline uses an unbounded data source or sink, the Flink Runner will automatically switch to streaming mode. You can enforce streaming mode by using the streaming setting mentioned above.
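As a sketch, enforcing streaming mode programmatically might look like this (setStreaming comes from the StreamingOptions interface that FlinkPipelineOptions extends; the class name is illustrative):

```java
import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class StreamingModeExample {
  public static void main(String[] args) {
    FlinkPipelineOptions options = PipelineOptionsFactory.as(FlinkPipelineOptions.class);
    // Force streaming execution even if all sources in the pipeline are bounded.
    options.setStreaming(true);
  }
}
```

Equivalently, pass --streaming=true among the pipeline arguments on the command line.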