Container environments
The Beam SDK runtime environment is isolated from other runtime systems because it is containerized with Docker. This means that any execution engine can run the Beam SDK.
This page describes how to customize, build, and push Beam SDK container images.
Before you begin, install Docker on your workstation.
Customizing container images
You can add extra dependencies to container images so that you don’t have to supply the dependencies to execution engines.
To customize a container image, either:
- Write a new Dockerfile on top of the original.
- Modify the original Dockerfile and reimage the container.
It’s often easier to write a new Dockerfile. However, by modifying the original Dockerfile, you can customize anything (including the base OS).
Writing new Dockerfiles on top of the original
- Pull a prebuilt SDK container image for your target language and version. The following example pulls the latest Python SDK:
docker pull apachebeam/python3.7_sdk
- Write a new Dockerfile that designates the original as its parent (see the sketch after this list).
- Build a child image.
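For example, a minimal child Dockerfile might look like the following sketch; the pandas dependency is illustrative only, not something the Beam SDK requires:
# Child Dockerfile: use the prebuilt Python 3.7 SDK image as the parent and add a PyPI dependency
FROM apachebeam/python3.7_sdk
RUN pip install pandas
You can build the child image with the Gradle docker target described under Building container images, or directly with docker build; the my_beam_sdk name below is only an example:
docker build -t my_beam_sdk -f path/to/new/Dockerfile .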
Modifying the original Dockerfile
- Clone the beam repository:
git clone https://github.com/apache/beam.git
- Customize the Dockerfile. If you’re adding dependencies from PyPI, use base_image_requirements.txt instead (see the sketch after this list).
- Reimage the container.
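For example, one way to add a PyPI dependency before reimaging is to append a pinned requirement to base_image_requirements.txt. The scipy pin below is illustrative only, and the path assumes the layout of the cloned beam repository:
# Illustrative: pin an extra PyPI dependency for the Python SDK container image
echo "scipy==1.4.1" >> sdks/python/container/base_image_requirements.txt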
Testing customized images
To test a customized image locally, run a pipeline with PortableRunner and set the --environment_config flag to the image path:
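# Run a pipeline against the embedded job endpoint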
python -m apache_beam.examples.wordcount \
--input=/path/to/inputfile \
--output=/path/to/write/counts \
--runner=PortableRunner \
--job_endpoint=embed \
--environment_config=path/to/container/image
# Start a Flink job server on localhost:8099
./gradlew :runners:flink:1.7:job-server:runShadow
# Run a pipeline on the Flink job server
python -m apache_beam.examples.wordcount \
--input=/path/to/inputfile \
--output=/path/to/write/counts \
--runner=PortableRunner \
--job_endpoint=localhost:8099 \
--environment_config=path/to/container/image
# Start a Spark job server on localhost:8099
./gradlew :runners:spark:job-server:runShadow
# Run a pipeline on the Spark job server
python -m apache_beam.examples.wordcount \
--input=/path/to/inputfile \
--output=/path/to/write/counts \
--runner=PortableRunner \
--job_endpoint=localhost:8099 \
--environment_config=path/to/container/image
To test a customized image on the Google Cloud Dataflow runner, use DataflowRunner with the beam_fn_api experiment and set worker_harness_container_image to the custom container:
python -m apache_beam.examples.wordcount \
--input=/path/to/inputfile \
--output=/path/to/write/counts \
--runner=DataflowRunner \
--project={gcp_project_id} \
--temp_location={gcs_location} \
--experiment=beam_fn_api \
--sdk_location=[…]/beam/sdks/python/container/py{version}/build/target/apache-beam.tar.gz \
--worker_harness_container_image=path/to/container/image
# The sdk_location option accepts four Python version variables: 2, 35, 36, and 37
Building container images
To build Beam SDK container images:
- Navigate to your local copy of the beam repository that contains your customized container image.
- Run Gradle with the docker target. If you’re building a child image, set the optional --file flag to the new Dockerfile. If you’re building an image from an original Dockerfile, omit the --file flag and use a default repository:
# The default repository of each SDK
./gradlew [--file=path/to/new/Dockerfile] :sdks:java:container:docker
./gradlew [--file=path/to/new/Dockerfile] :sdks:go:container:docker
./gradlew [--file=path/to/new/Dockerfile] :sdks:python:container:py2:docker
./gradlew [--file=path/to/new/Dockerfile] :sdks:python:container:py35:docker
./gradlew [--file=path/to/new/Dockerfile] :sdks:python:container:py36:docker
./gradlew [--file=path/to/new/Dockerfile] :sdks:python:container:py37:docker
# Shortcut for building all four Python SDKs
./gradlew [--file=path/to/new/Dockerfile] :sdks:python:container:buildAll
To examine the containers that you built, run docker images from any directory. If you successfully built all of the container images, the command prints a table like the following:
REPOSITORY TAG IMAGE ID CREATED SIZE
apachebeam/java_sdk latest 16ca619d489e 2 weeks ago 550MB
apachebeam/python2.7_sdk latest b6fb40539c29 2 weeks ago 1.78GB
apachebeam/python3.5_sdk latest bae309000d09 2 weeks ago 1.85GB
apachebeam/python3.6_sdk latest 42faad307d1a 2 weeks ago 1.86GB
apachebeam/python3.7_sdk latest 18267df54139 2 weeks ago 1.86GB
apachebeam/go_sdk latest 30cf602e9763 2 weeks ago 124MB
Overriding default Docker targets
The default tag is latest and the default repositories are in the Docker Hub apachebeam namespace. The docker command-line tool implicitly pushes container images to this location.
To tag a local image, set the docker-tag option when building the container. The following command tags a Python SDK image with a date:
./gradlew :sdks:python:container:py2:docker -Pdocker-tag=2019-10-04
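Assuming the default repository, a local test run (as described in Testing customized images) could then reference the tagged image explicitly:
--environment_config=apachebeam/python2.7_sdk:2019-10-04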
To change the repository, set the docker-repository-root option to a new location. The following command sets docker-repository-root to a Bintray repository named apache:
./gradlew :sdks:python:container:py2:docker -Pdocker-repository-root=$USER-docker-apache.bintray.io/beam/python
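If the build succeeds, the image is created under the new repository root rather than apachebeam. The exact image name depends on the Gradle configuration, so the filter below is only a rough way to check:
# Look for images built under the custom Bintray repository root
docker images | grep bintray.io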
Pushing container images
After building a container image, you can store it in a remote Docker repository. The following steps push a Python SDK image to the docker-repository-root value.
- Sign in to your Docker registry:
docker login
- Upload the container image to the remote repository:
docker push apachebeam/python2.7_sdk
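If you built the image with a custom tag, such as the date tag from Overriding default Docker targets, push that tag explicitly; the tag below is only an example:
docker push apachebeam/python2.7_sdk:2019-10-04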
To download the image again, run docker pull:
docker pull apachebeam/python2.7_sdk
Note: After pushing a container image, the remote image ID and digest match the local image ID and digest.
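One way to compare the local and remote digests is to run docker images with the --digests flag; the image name below assumes the default repository:
docker images --digests apachebeam/python2.7_sdk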