Using the Apache Flink Runner

The Apache Flink Runner can be used to execute Beam pipelines using Apache Flink. When using the Flink Runner, you will create a jar file containing your job that can be executed on a regular Flink cluster. It's also possible to execute a Beam pipeline using Flink's local execution mode without setting up a cluster. This is helpful for developing and debugging your pipeline.

The Flink Runner and Flink are suitable for large scale, continuous jobs, and provide:

* A streaming-first runtime that supports both batch processing and data streaming programs
* A runtime that supports very high throughput and low event latency at the same time
* Fault-tolerance with exactly-once processing guarantees
* Natural back-pressure in streaming programs
* Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms
* Integration with YARN and other components of the Apache Hadoop ecosystem

The Beam Capability Matrix documents the supported capabilities of the Flink Runner.

If you want to use the local execution mode with the Flink Runner, you don't have to complete any setup.
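As a concrete illustration, here is a minimal Java sketch of a pipeline run in local execution mode. It assumes the Beam Java SDK and the Flink Runner dependency (shown below) are on the classpath; the class name and the example elements are placeholders.

import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;

public class LocalFlinkExample {
  public static void main(String[] args) {
    FlinkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(FlinkPipelineOptions.class);
    options.setRunner(FlinkRunner.class);
    // "[local]" starts an embedded Flink cluster inside this JVM,
    // so no cluster setup is required.
    options.setFlinkMaster("[local]");

    Pipeline p = Pipeline.create(options);
    // A trivial source, just to have something to execute.
    p.apply(Create.of("hello", "flink", "runner"));
    p.run().waitUntilFinish();
  }
}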

To use the Flink Runner for executing on a cluster, you have to set up a Flink cluster by following the Flink setup quickstart.

To find out which version of Flink you need, run this command to check the version of the Flink dependency that your project is using:

$ mvn dependency:tree -Pflink-runner |grep flink
...
[INFO] |  +- org.apache.flink:flink-streaming-java_2.10:jar:1.2.1:runtime
...

Here, we would need Flink 1.2.1. Please also note the Scala version in the dependency name. In this case we need to make sure to use a Flink cluster with Scala version 2.10.

For more information, the Flink Documentation can be helpful.

Specify your dependency

When using Java, you must specify your dependency on the Flink Runner in your pom.xml.

<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-flink_2.10</artifactId>
  <version>2.1.0</version>
  <scope>runtime</scope>
</dependency>

This section is not applicable to the Beam SDK for Python.

To execute a pipeline on a Flink cluster, you need to package your program along with all dependencies in a so-called fat jar. How you do this depends on your build system, but if you followed the Beam Quickstart, this is the command that you have to run:

$ mvn package -Pflink-runner

The Beam Quickstart Maven project is set up to use the Maven Shade plugin to create a fat jar, and the -Pflink-runner argument makes sure that the dependency on the Flink Runner is included.

To actually run the pipeline, use this command:

$ mvn exec:java -Dexec.mainClass=org.apache.beam.examples.WordCount \
    -Pflink-runner \
    -Dexec.args="--runner=FlinkRunner \
      --inputFile=/path/to/pom.xml \
      --output=/path/to/counts \
      --flinkMaster=<flink master url> \
      --filesToStage=target/word-count-beam--bundled-0.1.jar"

If you have a Flink JobManager running on your local machine, you can supply localhost:6123 for flinkMaster.

When executing your pipeline with the Flink Runner, you can set these pipeline options.

runner
  The pipeline runner to use. This option allows you to determine the pipeline runner at runtime. Set to FlinkRunner to run using Flink.
  Default: none

streaming
  Whether streaming mode is enabled or disabled; true if enabled. Set to true if running pipelines with unbounded PCollections.
  Default: false

flinkMaster
  The URL of the Flink JobManager on which to execute pipelines. This can either be the address of a cluster JobManager, in the form "host:port", or one of the special strings "[local]" or "[auto]". "[local]" will start a local Flink cluster in the JVM, while "[auto]" will let the system decide where to execute the pipeline based on the environment.
  Default: [auto]

filesToStage
  Jar files to send to all workers and put on the classpath. Here you have to put the fat jar that contains your program along with all dependencies.
  Default: empty

parallelism
  The degree of parallelism to be used when distributing operations onto workers.
  Default: 1

checkpointingInterval
  The interval between consecutive checkpoints, i.e. snapshots of the current pipeline state used for fault tolerance.
  Default: -1 (checkpointing disabled)

numberOfExecutionRetries
  Sets the number of times that failed tasks are re-executed. A value of 0 effectively disables fault tolerance. A value of -1 indicates that the system default value (as defined in the configuration) should be used.
  Default: -1

executionRetryDelay
  Sets the delay between executions. A value of -1 indicates that the default value should be used.
  Default: -1

stateBackend
  Sets the state backend to use in streaming mode. The default is to read this setting from the Flink config.
  Default: empty (read from Flink config)

See the reference documentation for the FlinkPipelineOptions interface (and its subinterfaces) for the complete list of pipeline configuration options.
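These options can also be set programmatically instead of on the command line. The following is a hedged sketch using the Beam Java SDK; the setter names follow the option names above, and the master address, jar path, parallelism, and checkpoint interval are placeholder values.

import java.util.Collections;
import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class FlinkOptionsExample {
  static FlinkPipelineOptions buildOptions() {
    FlinkPipelineOptions options = PipelineOptionsFactory.as(FlinkPipelineOptions.class);
    options.setRunner(FlinkRunner.class);
    // Address of the cluster JobManager, or "[local]" / "[auto]".
    options.setFlinkMaster("localhost:6123");
    // The fat jar containing your program and all dependencies (placeholder path).
    options.setFilesToStage(Collections.singletonList("target/your-fat-jar.jar"));
    options.setParallelism(4);
    // Checkpoint every 10 seconds; -1 leaves checkpointing disabled.
    options.setCheckpointingInterval(10_000L);
    return options;
  }
}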

Additional information and caveats

Monitoring your job

You can monitor a running Flink job using the Flink JobManager Dashboard. By default, this is available at port 8081 of the JobManager node. If you have a Flink installation on your local machine, that would be http://localhost:8081.

Streaming Execution

If your pipeline uses an unbounded data source or sink, the Flink Runner will automatically switch to streaming mode. You can enforce streaming mode by using the streaming setting mentioned above.
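To show what enforcing streaming mode can look like in Java, here is a small sketch; setting the flag programmatically is equivalent to passing --streaming=true on the command line, and the checkpoint interval is only an illustrative value.

import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class StreamingFlinkExample {
  public static void main(String[] args) {
    FlinkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(FlinkPipelineOptions.class);
    options.setRunner(FlinkRunner.class);
    // Force streaming execution even for bounded sources.
    options.setStreaming(true);
    // Enable periodic checkpoints so the pipeline state is snapshotted
    // for fault tolerance (disabled by default).
    options.setCheckpointingInterval(10_000L);
    // ... apply your transforms and run the Pipeline with these options ...
  }
}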