Using the Apache Spark Runner

The Apache Spark Runner can be used to execute Beam pipelines using Apache Spark. The Spark Runner runs Beam pipelines just like a native Spark application: you can deploy a self-contained application in local mode, run on Spark's Standalone RM, or run on YARN or Mesos.

The Spark Runner executes Beam pipelines on top of Apache Spark, providing:

  * Batch and streaming (and combined) pipelines.
  * The same fault-tolerance guarantees as provided by RDDs and DStreams.
  * The same security features Spark provides.
  * Built-in metrics reporting using Spark's metrics system, which also reports user-defined Beam metrics.
  * Native support for Beam side inputs via Spark's Broadcast variables.

The Beam Capability Matrix documents the currently supported capabilities of the Spark Runner.

Note: support for the Beam Model in streaming is currently experimental; follow the mailing list for status updates.

Portability

The Spark runner comes in two flavors:

  1. A legacy Runner which supports only Java (and other JVM-based languages)
  2. A portable Runner which supports Java, Python, and Go

Beam and its Runners originally only supported JVM-based languages (e.g. Java/Scala/Kotlin). Python and Go SDKs were added later on. The architecture of the Runners had to be changed significantly to support executing pipelines written in other languages.

If your applications only use Java, then you should currently go with the legacy Runner. If you want to run Python or Go pipelines with Beam on Spark, you need to use the portable Runner. For more information on portability, please visit the Portability page.

This guide is split into two parts to document the legacy and the portable functionality of the Spark Runner. Please use the switcher below to select the appropriate Runner:

Spark Runner prerequisites and setup

The Spark runner currently supports Spark’s 2.x branch, and more specifically any version greater than 2.2.0.

You can add a dependency on the latest version of the Spark runner by adding the following to your pom.xml:

<dependency>
  <groupId>org.apache.beam</groupId>
  <artifactId>beam-runners-spark</artifactId>
  <version>2.15.0</version>
</dependency>

Deploying Spark with your application

In some cases, such as running in local mode or against a Standalone cluster, your (self-contained) application is required to package Spark by explicitly adding the following dependencies to your pom.xml:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>${spark.version}</version>
</dependency>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.11</artifactId>
  <version>${spark.version}</version>
</dependency>

And shade the application jar using the Maven Shade Plugin:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <createDependencyReducedPom>false</createDependencyReducedPom>
    <filters>
      <filter>
        <artifact>*:*</artifact>
        <excludes>
          <exclude>META-INF/*.SF</exclude>
          <exclude>META-INF/*.DSA</exclude>
          <exclude>META-INF/*.RSA</exclude>
        </excludes>
      </filter>
    </filters>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <shadedArtifactAttached>true</shadedArtifactAttached>
        <shadedClassifierName>shaded</shadedClassifierName>
        <transformers>
          <transformer
            implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>

After running mvn package, run ls target and you should see (assuming your artifactId is beam-examples and the version is 1.0.0):

beam-examples-1.0.0-shaded.jar

To run against a Standalone cluster simply run:

spark-submit --class com.beam.examples.BeamPipeline --master spark://HOST:PORT target/beam-examples-1.0.0-shaded.jar --runner=SparkRunner

You will need to have Docker installed in your execution environment. To develop Apache Beam pipelines with Python, you have to install the Apache Beam Python SDK: pip install apache_beam. Please refer to the Python documentation on how to create a Python pipeline.

pip install apache_beam

As of now you will need a copy of Apache Beam’s source code. You can download it on the Downloads page. In the future there will be pre-built Docker images available.

1. Start the JobService endpoint: ./gradlew :runners:spark:job-server:runShadow

The JobService is the central instance where you submit your Beam pipeline. The JobService will create a Spark job for the pipeline and execute the job. To execute the job on a Spark cluster, the Beam JobService needs to be provided with the Spark master address.

2. Submit the Python pipeline to the above endpoint by using the PortableRunner, job_endpoint set to localhost:8099 (this is the default address of the JobService), and environment_type set to LOOPBACK. For example:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",
    "--environment_type=LOOPBACK"
])
with beam.Pipeline(options=options) as p:
    ...

Running on a pre-deployed Spark cluster

Deploying your Beam pipeline on a cluster that already has a Spark deployment (Spark classes are available on the container classpath) does not require any additional dependencies. For more details on the different deployment modes see: Standalone, YARN, or Mesos.

1. Start a Spark cluster which exposes the master on port 7077 by default.

2. Start JobService that will connect with the Spark master: ./gradlew :runners:spark:job-server:runShadow -PsparkMasterUrl=spark://localhost:7077.

3. Submit the pipeline as above. Note however that environment_type=LOOPBACK is only intended for local testing; depending on your cluster setup, you may need to change the environment_type option. See here for details.

Pipeline options for the Spark Runner

When executing your pipeline with the Spark Runner, you should consider the following pipeline options.

For the legacy (Java) Spark Runner:

Field | Description | Default Value
runner | The pipeline runner to use. This option allows you to determine the pipeline runner at runtime. | Set to SparkRunner to run using Spark.
sparkMaster | The url of the Spark Master. This is the equivalent of setting SparkConf#setMaster(String) and can either be local[x] to run locally with x cores, spark://host:port to connect to a Spark Standalone cluster, mesos://host:port to connect to a Mesos cluster, or yarn to connect to a YARN cluster. | local[4]
storageLevel | The StorageLevel to use when caching RDDs in batch pipelines. The Spark Runner automatically caches RDDs that are evaluated repeatedly. This is a batch-only property, as streaming pipelines in Beam are stateful, which requires Spark DStream's StorageLevel to be MEMORY_ONLY. | MEMORY_ONLY
batchIntervalMillis | The StreamingContext's batchDuration, i.e. Spark's batch interval. | 1000
enableSparkMetricSinks | Enable reporting metrics to Spark's metrics Sinks. | true
cacheDisabled | Disable caching of reused PCollections for the whole pipeline. Useful when it is faster to recompute an RDD than to save it. | false

For the portable Spark Runner:

Field | Description | Value
--runner | The pipeline runner to use. This option allows you to determine the pipeline runner at runtime. | Set to PortableRunner to run using Spark.
--job_endpoint | Job service endpoint to use. Should be in the form hostname:port, e.g. localhost:3000. | Set to match your job service endpoint (localhost:8099 by default)
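
For the legacy (Java) runner, these options can also be set programmatically through the SparkPipelineOptions interface. The following is a minimal sketch; the class name and the chosen values are illustrative, and it assumes the SparkPipelineOptions setters (setSparkMaster, setStorageLevel, setEnableSparkMetricSinks) exposed by the Spark runner:

import org.apache.beam.runners.spark.SparkPipelineOptions;
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class LegacyRunnerOptionsExample {
  public static void main(String[] args) {
    // Configure the legacy Spark runner in code instead of via command-line flags.
    SparkPipelineOptions options = PipelineOptionsFactory.as(SparkPipelineOptions.class);
    options.setRunner(SparkRunner.class);
    options.setSparkMaster("local[4]");       // same effect as SparkConf#setMaster(String)
    options.setStorageLevel("MEMORY_ONLY");   // batch-only caching level, see the table above
    options.setEnableSparkMetricSinks(true);  // report metrics to Spark's metric sinks

    Pipeline pipeline = Pipeline.create(options);
    // ... build the pipeline here ...
    pipeline.run().waitUntilFinish();
  }
}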

Additional notes

Using spark-submit

When submitting a Spark application to a cluster, it is common (and recommended) to use the spark-submit script that is provided with the Spark installation. The PipelineOptions described above are not meant to replace spark-submit, but to complement it. Any of the above-mentioned options can be passed as one of the application arguments, and setting --master takes precedence. For more on how to use spark-submit in general, check out the Spark documentation.
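
In practice this means the main class passed to spark-submit (com.beam.examples.BeamPipeline in the earlier example) parses the Beam options from the application arguments. A hedged sketch of what such an entry point might look like:

package com.beam.examples;

import org.apache.beam.runners.spark.SparkPipelineOptions;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class BeamPipeline {
  public static void main(String[] args) {
    // spark-submit forwards everything after the application jar as application arguments,
    // e.g. --runner=SparkRunner, and Beam parses them here. The --master flag is handled
    // by spark-submit itself.
    SparkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(SparkPipelineOptions.class);

    Pipeline pipeline = Pipeline.create(options);
    // ... build the pipeline here ...
    pipeline.run().waitUntilFinish();
  }
}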

Monitoring your job

You can monitor a running Spark job using the Spark Web Interfaces. By default, this is available at port 4040 on the driver node. If you run Spark on your local machine that would be http://localhost:4040. Spark also has a history server to view job history after the fact. Metrics are also available via the REST API. Spark provides a metrics system that allows reporting Spark metrics to a variety of Sinks. The Spark runner reports user-defined Beam metrics using this same metrics system and currently supports GraphiteSink and CSVSink; adding support for additional Sinks supported by Spark is straightforward. Spark metrics are not yet supported on the portable runner.
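
To illustrate what flows through that metrics system, here is a hedged sketch of a DoFn that increments a user-defined counter using Beam's Metrics API (the class and metric names are made up for the example); with enableSparkMetricSinks enabled, such metrics are forwarded to the configured GraphiteSink or CSVSink:

import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
import org.apache.beam.sdk.transforms.DoFn;

public class CountEmptyLines extends DoFn<String, String> {
  // A user-defined Beam metric; the Spark runner reports it via Spark's metrics system.
  private final Counter emptyLines = Metrics.counter(CountEmptyLines.class, "empty-lines");

  @ProcessElement
  public void processElement(@Element String line, OutputReceiver<String> out) {
    if (line.trim().isEmpty()) {
      emptyLines.inc();
      return;
    }
    out.output(line);
  }
}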

Streaming Execution

If your pipeline uses an UnboundedSource, the Spark Runner automatically sets streaming mode. Forcing streaming mode is mostly used for testing and is not recommended. Streaming is not yet supported on the Spark portable runner.
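
For completeness, here is a hedged sketch of forcing streaming mode via the streaming flag that SparkPipelineOptions inherits from Beam's StreamingOptions; the values shown are illustrative and, as noted above, this is mainly useful for tests:

import org.apache.beam.runners.spark.SparkPipelineOptions;
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class ForceStreamingExample {
  public static void main(String[] args) {
    SparkPipelineOptions options = PipelineOptionsFactory.as(SparkPipelineOptions.class);
    options.setRunner(SparkRunner.class);
    // Reading from an UnboundedSource flips this automatically; setting it by hand
    // forces streaming execution, which is mostly useful in tests.
    options.setStreaming(true);
    options.setBatchIntervalMillis(1000L);  // Spark micro-batch interval (see the options table)

    Pipeline pipeline = Pipeline.create(options);
    // ... attach an unbounded source and downstream transforms here ...
    pipeline.run().waitUntilFinish();
  }
}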

Using a provided SparkContext and StreamingListeners

If you would like to execute your Spark job with a provided SparkContext, such as when using the spark-jobserver, or if you would like to use StreamingListeners, you can't use SparkPipelineOptions (the context or a listener cannot be passed as a command-line argument anyway). Instead, you should use SparkContextOptions, which can only be used programmatically and is not a common PipelineOptions implementation. Provided SparkContext and StreamingListeners are not supported on the Spark portable runner.
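
Below is a hedged sketch of that programmatic setup, assuming you already hold a JavaSparkContext (for example one managed by spark-jobserver) and, optionally, a streaming listener; the helper method and class names are illustrative, and the setters follow the Spark runner's SparkContextOptions interface:

import java.util.Collections;

import org.apache.beam.runners.spark.SparkContextOptions;
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.streaming.api.java.JavaStreamingListener;

public class ProvidedContextExample {
  public static Pipeline createPipeline(JavaSparkContext providedContext,
                                        JavaStreamingListener listener) {
    // SparkContextOptions can only be configured in code; a provided context or a
    // listener cannot be passed as a command-line argument.
    SparkContextOptions options = PipelineOptionsFactory.as(SparkContextOptions.class);
    options.setRunner(SparkRunner.class);
    options.setUsesProvidedSparkContext(true);
    options.setProvidedSparkContext(providedContext);
    options.setListeners(Collections.singletonList(listener));
    return Pipeline.create(options);
  }
}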