Beam YAML Transform Index


AssertEqual

Asserts that the input contains exactly the elements provided.

This is primarily used for testing; it will cause the entire pipeline to fail if the input to this transform is not exactly the set of elements given in the config parameter.

As with Create, YAML/JSON-style mappings are interpreted as Beam rows, e.g.

type: AssertEqual
input: SomeTransform
config:
  elements:
     - {a: 0, b: "foo"}
     - {a: 1, b: "bar"}

would ensure that SomeTransform produced exactly two elements with values (a=0, b="foo") and (a=1, b="bar") respectively.

Configuration

  • elements Array[?] : The set of elements that should belong to the PCollection. YAML/JSON-style mappings will be interpreted as Beam rows.

Usage

type: AssertEqual
input: ...
config:
  elements:
  - element
  - element
  - ...

AssignTimestamps

Assigns a new timestamp to each element of its input.

This can be useful when reading records that have the timestamp embedded in them, for example with various file types or other sources that by default set all timestamps to the infinite past.
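
For instance, if each record carries an event-time field (the field and transform names below are illustrative), that field can be promoted to the element timestamp:

type: AssignTimestamps
input: ReadRecords
config:
  timestamp: event_time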

Note that the timestamp should only be set forward, as setting it backwards may not cause it to hold back an already advanced watermark, and the data could become droppably late.

Supported languages: generic, javascript, python.

Configuration

  • timestamp ? (Optional) : A field, callable, or expression giving the new timestamp.

  • language string (Optional) : The language of the timestamp expression.

  • error_handling Row (Optional) : Whether and how to handle errors during timestamp evaluation.

    Row fields:

    • output string : Name to use for the output error collection

Usage

type: AssignTimestamps
input: ...
config:
  timestamp: timestamp
  language: "language"
  error_handling:
    output: "output"

Combine

Groups and combines records sharing common fields.

Built-in combine functions are sum, max, min, all, any, mean, count, group, and concat, but custom aggregation functions can be used as well.

See also the documentation on YAML Aggregation.
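
For example, the following sketch (field and transform names are illustrative) sums an amount field per state, using the value/fn mapping form described in the YAML Aggregation documentation:

type: Combine
input: SomeTransform
config:
  group_by: state
  combine:
    total:
      value: amount
      fn: sum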

Supported languages: calcite, generic, javascript, python, sql.

Configuration

  • group_by Array[string] : The field(s) to aggregate on.

  • combine Map[string, Map[string, ?]] : The aggregation function to use.

  • language string (Optional) : The language used to define (and execute) the custom callables in combine. Defaults to generic.

Usage

type: Combine
input: ...
config:
  group_by:
  - "group_by"
  - "group_by"
  - ...
  combine:
    a:
      a: combine_value_a_value_a
      b: combine_value_a_value_b
      c: ...
    b:
      a: combine_value_b_value_a
      b: combine_value_b_value_b
      c: ...
    c: ...
  language: "language"

Create

Creates a collection containing a specified set of elements.

This transform always produces schema'd data. For example

type: Create
config:
  elements: [1, 2, 3]

will result in an output with three elements with a schema of Row(element=int) whereas YAML/JSON-style mappings will be interpreted directly as Beam rows, e.g.

type: Create
config:
  elements:
     - {first: 0, second: {str: "foo", values: [1, 2, 3]}}
     - {first: 1, second: {str: "bar", values: [4, 5, 6]}}

will result in a schema of the form (int, Row(string, list[int])).

This can also be expressed as YAML

type: Create
config:
  elements:
    - first: 0
      second:
        str: "foo"
        values: [1, 2, 3]
    - first: 1
      second:
        str: "bar"
        values: [4, 5, 6]

Configuration

  • elements Array[?] : The set of elements that should belong to the PCollection. YAML/JSON-style mappings will be interpreted as Beam rows. Primitives will be mapped to rows with a single "element" field.

  • reshuffle boolean (Optional) : Whether to introduce a reshuffle (to possibly redistribute the work) if there is more than one element in the collection. Defaults to True.

Usage

type: Create
config:
  elements:
  - element
  - element
  - ...
  reshuffle: true|false

Enrichment

The Enrichment transform allows one to dynamically enhance elements in a pipeline by performing key-value lookups against external services like APIs or databases.

Example using BigTable:

- type: Enrichment
  config:
    enrichment_handler: 'BigTable'
    handler_config:
      project_id: 'apache-beam-testing'
      instance_id: 'beam-test'
      table_id: 'bigtable-enrichment-test'
      row_key: 'product_id'
    timeout: 30

For more information on Enrichment, see the Beam docs.

Configuration

  • enrichment_handler string : Specifies the source from where data needs to be extracted into the pipeline for enriching data. One of "BigQuery", "BigTable", "FeastFeatureStore" or "VertexAIFeatureStore".

  • handler_config Map[string, ?] : Specifies the parameters for the respective enrichment_handler in a YAML/JSON format. To see the full set of handler_config parameters, see their corresponding doc pages.

  • timeout double (Optional) : Timeout for source requests in seconds. Defaults to 30 seconds.

Usage

type: Enrichment
input: ...
config:
  enrichment_handler: "enrichment_handler"
  handler_config:
    a: handler_config_value_a
    b: handler_config_value_b
    c: ...
  timeout: timeout

Explode

Explodes (aka unnest/flatten) one or more fields, producing multiple rows.

Given one or more fields of iterable type, produces multiple rows, one for each value of that field. For example, a row of the form ('a', [1, 2, 3]) would expand to ('a', 1), ('a', 2), and ('a', 3) when exploded on the second field.

This is akin to a FlatMap when paired with the MapToFields transform.

See more complete documentation on YAML Mapping Functions.
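
For example, the following sketch (field and transform names are illustrative) produces one row per entry of a repeated values field:

type: Explode
input: SomeTransform
config:
  fields: [values]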

Configuration

  • fields ? (Optional) : The list of fields to expand.

  • cross_product boolean (Optional) : If multiple fields are specified, indicates whether the full cross-product of combinations should be produced, or if the first element of the first field corresponds to the first element of the second field, etc. For example, the row (['a', 'b'], [1, 2]) would expand to the four rows ('a', 1), ('a', 2), ('b', 1), and ('b', 2) when cross_product is set to true but only the two rows ('a', 1) and ('b', 2) when it is set to false. Only meaningful (and required) if multiple fields are specified.

  • error_handling Row (Optional) : Whether and how to handle errors during iteration.

    Row fields:

    • output string : Name to use for the output error collection

Usage

type: Explode
input: ...
config:
  fields: fields
  cross_product: true|false
  error_handling:
    output: "output"

ExtractWindowingInfo

Extracts the implicit windowing information from an element and makes it explicit as field(s) in the element itself.

The following windowing parameter values are supported:

  • timestamp: The event timestamp of the current element.
  • window_start: The start of the window iff it is an interval window.
  • window_end: The (exclusive) end of the window.
  • window_string: The string representation of the window.
  • window_type: The type of the window as a string.
  • window_object: The actual window object itself, as a Java or Python object.
  • pane_info: A schema'd representation of the current pane info, including its index, whether it was the last firing, etc.

As a convenience, a list rather than a mapping of fields may be provided, in which case the fields will be named according to the requested values.
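
For example, the following sketch (new field name and transform name are illustrative) makes the element timestamp and window bounds explicit:

type: ExtractWindowingInfo
input: SomeTransform
config:
  fields:
    event_time: timestamp
    window_start: window_start
    window_end: window_end

The list shorthand fields: [timestamp, window_start, window_end] would produce the same values under their default names.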

Configuration

  • fields Map[string, string] (Optional) : A mapping of new field names to various windowing parameters, as documented above. If omitted, defaults to [timestamp, window_start, window_end].

Usage

type: ExtractWindowingInfo
input: ...
config:
  fields:
    a: "fields_value_a"
    b: "fields_value_b"
    c: ...

Filter

Keeps only records that satisfy the given criteria.

See more complete documentation on YAML Filtering.

Supported languages: calcite, generic, java, javascript, python, sql.
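
For example, the following sketch (field and transform names are illustrative) keeps only records whose quantity is positive:

type: Filter
input: SomeTransform
config:
  language: python
  keep: "quantity > 0"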

Configuration

  • keep ? (Optional) : An expression evaluating to true for those records that should be kept.

  • language string (Optional) : The language of the above expression. Defaults to generic.

  • error_handling Row (Optional) : Whether and where to output records that throw errors when the above expressions are evaluated.

    Row fields:

    • output string : Name to use for the output error collection

Usage

type: Filter
input: ...
config:
  keep: keep
  language: "language"
  error_handling:
    output: "output"

Flatten

Flattens multiple PCollections into a single PCollection.

The elements of the resulting PCollection will be the (disjoint) union of all the elements of all the inputs.

Note that in YAML, transforms can always take a list of inputs, which will be implicitly flattened.
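
For example, the following sketch (input names are illustrative) merges two collections with compatible schemas into one:

type: Flatten
input:
  - SomeTransform
  - AnotherTransform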

Configuration

No configuration parameters.

Usage

type: Flatten
input: ...
config: ...

Join

Joins two or more inputs using a specified condition.

For example

type: Join
input:
  input1: SomeTransform
  input2: AnotherTransform
  input3: YetAnotherTransform
config:
  type: inner
  equalities:
    - input1: colA
      input2: colB
    - input2: colX
      input3: colY
  fields:
    input1: [colA, colB, colC]
    input2: {new_name: colB}

would perform an inner join on the three inputs satisfying the constraints that input1.colA = input2.colB and input2.colX = input3.colY emitting rows with colA, colB and colC from input1, the values of input2.colB as a field called new_name, and all the fields from input3.

Configuration

  • equalities ? (Optional) : The condition to join on. A list of sets of columns that should be equal to fulfill the join condition. For the simple scenario of joining on the same column across all inputs where the column name is the same, one can specify the column name as the equality rather than having to list it for every input.

  • type ? (Optional) : The type of join. Could be a string value in ["inner", "left", "right", "outer"] that specifies the type of join to be performed. For scenarios with multiple inputs to join where different join types are desired, specify the inputs to be outer joined. For example, {outer: [input1, input2]} means that input1 and input2 will be outer joined using the conditions specified, while other inputs will be inner joined.

  • fields Map[string, ?] (Optional) : The fields to be outputted. A mapping with the input alias as the key and the list of fields in the input to be outputted. The value in the map can either be a dictionary with the new field name as the key and the original field name as the value (e.g. new_field_name: field_name), or a list of the fields to be outputted with their original names (e.g. [col1, col2, col3]), or an '*' indicating that all fields in the input will be outputted. If not specified, all fields from all inputs will be outputted.

Usage

type: Join
input: ...
config:
  equalities: equalities
  type: type
  fields:
    a: fields_value_a
    b: fields_value_b
    c: ...

LogForTesting

Logs each element of its input PCollection.

The output of this transform is a copy of its input for ease of use in chain-style pipelines.

Configuration

  • level string (Optional) : One of ERROR, INFO, or DEBUG, mapped to a corresponding language-specific logging level.

  • prefix string (Optional) : An optional identifier that will get prepended to the element being logged.

  • error_handling Row (Optional) : This option specifies whether and where to output error rows.

    Row fields:

    • output string : Name to use for the output error collection

Usage

type: LogForTesting
input: ...
config:
  level: "level"
  prefix: "prefix"
  error_handling:
    output: "output"

MLTransform

Configuration

  • write_artifact_location string (Optional)

  • read_artifact_location string (Optional)

  • transforms Array[?] (Optional)

Usage

type: MLTransform
input: ...
config:
  write_artifact_location: "write_artifact_location"
  read_artifact_location: "read_artifact_location"
  transforms:
  - transforms
  - transforms
  - ...

MapToFields

Creates records with new fields defined in terms of the input fields.

See more complete documentation on YAML Mapping Functions.

Supported languages: calcite, generic, java, javascript, python, sql.
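
For example, the following sketch (field and transform names are illustrative) appends a computed total field to each record:

type: MapToFields
input: SomeTransform
config:
  language: python
  append: true
  fields:
    total: "price * quantity"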

Configuration

  • fields Map[string, ?] : The output fields to compute, each mapping to the expression or callable that creates them.

  • append boolean (Optional) : Whether to append the created fields to the set of fields already present, outputting a union of both the new fields and the original fields for each record. Defaults to False.

  • drop Array[string] (Optional) : If append is true, enumerates a subset of fields from the original record that should not be kept

  • language string (Optional) : The language used to define (and execute) the expressions and/or callables in fields. Defaults to generic.

  • dependencies Array[string] (Optional) : An optional list of extra dependencies that are needed for these UDFs. The interpretation of these strings is language-dependent.

  • error_handling Row (Optional) : Whether and where to output records that throw errors when the above expressions are evaluated.

    Row fields:

    • output string : Name to use for the output error collection

Usage

type: MapToFields
input: ...
config:
  fields:
    a: fields_value_a
    b: fields_value_b
    c: ...
  append: true|false
  drop:
  - "drop"
  - "drop"
  - ...
  language: "language"
  dependencies:
  - "dependencies"
  - "dependencies"
  - ...
  error_handling:
    output: "output"

Partition

Splits an input into several distinct outputs.

Each input element will go to a distinct output based on the field or function given in the by configuration parameter.

Supported languages: generic, javascript, python.
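
For example, the following sketch (field, output, and transform names are illustrative) routes each element based on its category field, sending unrecognized categories to a catch-all output:

type: Partition
input: SomeTransform
config:
  by: category
  outputs: [electronics, clothing]
  unknown_output: other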

Configuration

  • by ? (Optional) : A field, callable, or expression giving the destination output for this element. Should return a string that is a member of the outputs parameter. If unknown_output is also set, other return values are accepted as well, otherwise an error will be raised.

  • outputs Array[string] : The set of outputs into which this input is being partitioned.

  • unknown_output string (Optional) : If set, indicates a destination output for any elements that are not assigned an output listed in the outputs parameter.

  • error_handling Row (Optional) : Whether and how to handle errors during partitioning.

    Row fields:

    • output string : Name to use for the output error collection
  • language string (Optional) : The language of the by expression.

Usage

type: Partition
input: ...
config:
  by: by
  outputs:
  - "outputs"
  - "outputs"
  - ...
  unknown_output: "unknown_output"
  error_handling:
    output: "output"
  language: "language"

PyTransform

A Python PTransform identified by fully qualified name.

This allows one to import, construct, and apply any Beam Python transform. This can be useful for using transforms that have not yet been exposed via a YAML interface. Note, however, that conversion may be required if this transform does not accept or produce Beam Rows.

For example

type: PyTransform
config:
   constructor: apache_beam.pkg.mod.SomeClass
   args: [1, 'foo']
   kwargs:
     baz: 3

can be used to access the transform apache_beam.pkg.mod.SomeClass(1, 'foo', baz=3).

See also the documentation on Inlining Python.

Configuration

  • constructor string : Fully qualified name of a callable used to construct the transform. Often this is a class such as apache_beam.pkg.mod.SomeClass but it can also be a function or any other callable that returns a PTransform.

  • args Array[?] (Optional) : A list of parameters to pass to the callable as positional arguments.

  • kwargs Map[string, ?] (Optional) : A mapping of parameters to pass to the callable as keyword arguments.

Usage

type: PyTransform
input: ...
config:
  constructor: "constructor"
  args:
  - arg
  - arg
  - ...
  kwargs:
    a: kwargs_value_a
    b: kwargs_value_b
    c: ...

RunInference

A transform that takes the input rows, containing examples (or features), for use on an ML model. The transform then appends the inferences (or predictions) for those examples to the input row.

A ModelHandler must be passed to the model_handler parameter. The ModelHandler is responsible for configuring how the ML model will be loaded and how input data will be passed to it. Every ModelHandler has a config tag, similar to how a transform is defined, where the parameters are defined.

For example:

- type: RunInference
  config:
    model_handler:
      type: ModelHandler
      config:
        param_1: arg1
        param_2: arg2
        ...

By default, the RunInference transform will return the input row with a single field appended named by the inference_tag parameter ("inference" by default) that contains the inference directly returned by the underlying ModelHandler, after any optional postprocessing.

For example, if the input had the following:

Row(question="What is a car?")

The output row would look like:

Row(question="What is a car?", inference=...)

where the inference tag can be overridden with the inference_tag parameter.

However, if one specified the following transform config:

- type: RunInference
  config:
    inference_tag: my_inference
    model_handler: ...

The output row would look like:

Row(question="What is a car?", my_inference=...)

See more complete documentation on the underlying RunInference transform.

Preprocessing input data

In most cases, the model will be expecting data in a particular data format, whether it be a Python Dict, PyTorch tensor, etc. However, the outputs of all built-in Beam YAML transforms are Beam Rows. To allow for transforming the Beam Row into a data format the model recognizes, each ModelHandler is equipped with a preprocessing parameter for performing necessary data preprocessing. It is possible for a ModelHandler to define a default preprocessing function, but in most cases, one will need to be specified by the caller.

For example, using callable:

pipeline:
  type: chain

  transforms:
    - type: Create
      config:
        elements:
          - question: "What is a car?"
          - question: "Where is the Eiffel Tower located?"

    - type: RunInference
      config:
        model_handler:
          type: ModelHandler
          config:
            param_1: arg1
            param_2: arg2
            preprocess:
              callable: 'lambda row: {"prompt": row.question}'
            ...

In the above example, the Create transform generates a collection of two Beam Row elements, each with a single field, "question". The model, however, expects a Python dict with a single key, "prompt". In this case, we can specify a simple lambda function (or, alternatively, define a full function) to map the data.

Postprocessing predictions

It is also possible to define a postprocessing function to postprocess the data output by the ModelHandler. See the documentation for the ModelHandler you intend to use (list defined below under model_handler parameter doc).

In many cases, before postprocessing, the object will be a PredictionResult. This type behaves very similarly to a Beam Row and its fields can be accessed using dot notation. However, make sure to check the docs for your ModelHandler to see which fields its PredictionResult contains or if it returns a different object altogether.

For example:

- type: RunInference
  config:
    model_handler:
      type: ModelHandler
      config:
        param_1: arg1
        param_2: arg2
        postprocess:
          callable: |
            def fn(x: PredictionResult):
              return beam.Row(x.example, x.inference, x.model_id)
        ...

The above example demonstrates converting the original output data type (in this case, PredictionResult) to a Beam Row, which allows for easier mapping in a later transform.

File-based pre/postprocessing functions

For both preprocessing and postprocessing, it is also possible to specify a Python UDF (User-defined function) file that contains the function. This is possible by specifying the path to the file (local file or GCS path) and the name of the function in the file.

For example:

- type: RunInference
  config:
    model_handler:
      type: ModelHandler
      config:
        param_1: arg1
        param_2: arg2
        preprocess:
          path: gs://my-bucket/path/to/preprocess.py
          name: my_preprocess_fn
        postprocess:
          path: gs://my-bucket/path/to/postprocess.py
          name: my_postprocess_fn
        ...

Configuration

  • model_handler Map[string, ?] : Specifies the parameters for the respective model_handler in a YAML/JSON format. To see the full set of model_handler parameters, see their corresponding doc pages.

  • inference_tag string (Optional) : The tag to use for the returned inference. Default is 'inference'.

  • inference_args Map[string, ?] (Optional) : Extra arguments for models whose inference call requires extra parameters. Make sure to check the underlying ModelHandler docs to see which args are allowed.

Usage

type: RunInference
input: ...
config:
  model_handler:
    a: model_handler_value_a
    b: model_handler_value_b
    c: ...
  inference_tag: "inference_tag"
  inference_args:
    a: inference_args_value_a
    b: inference_args_value_b
    c: ...

Sql

Configuration

Usage

type: Sql
input: ...
config: ...

StripErrorMetadata

Strips error metadata from outputs returned via error handling.

Generally the error outputs for transformations return information about the error encountered (e.g. error messages and tracebacks) in addition to the failing element itself. This transformation attempts to remove that metadata, returning the bad element alone, which can be useful for re-processing.

For example, in the following pipeline snippet

- name: MyMappingTransform
  type: MapToFields
  input: SomeInput
  config:
    language: python
    fields:
      ...
    error_handling:
      output: errors

- name: RecoverOriginalElements
  type: StripErrorMetadata
  input: MyMappingTransform.errors

the output of RecoverOriginalElements will contain exactly those elements from SomeInput that failed to process (whereas MyMappingTransform.errors would contain those elements paired with error information).

Note that this relies on the preceding transform actually returning the failing input in a schema'd way. Most built-in transformations follow the correct conventions.

Configuration

No configuration parameters.

Usage

type: StripErrorMetadata
input: ...

ValidateWithSchema

Validates each element of a PCollection against a JSON schema.
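
For example, the following sketch (field and transform names are illustrative) requires each element to have an integer id and a string name, sending invalid elements to an error output:

type: ValidateWithSchema
input: SomeTransform
config:
  schema:
    type: object
    properties:
      id: {type: integer}
      name: {type: string}
    required: [id]
  error_handling:
    output: invalid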

Configuration

  • schema Map[string, ?] : A JSON schema against which to validate each element.

  • error_handling Row (Optional) : Whether and how to handle errors during iteration. If this is not set, invalid elements will fail the pipeline, otherwise invalid elements will be passed to the specified error output along with information about how the schema was invalidated.

    Row fields:

    • output string : Name to use for the output error collection

Usage

type: ValidateWithSchema
input: ...
config:
  schema:
    a: schema_value_a
    b: schema_value_b
    c: ...
  error_handling:
    output: "output"

WindowInto

A window transform assigning windows to each element of a PCollection.

The assigned windows will affect all downstream aggregating operations, which will aggregate by window as well as by key.

See the Beam documentation on windowing for more details.

Sizes, offsets, periods and gaps (where applicable) must be defined using a time unit suffix 'ms', 's', 'm', 'h' or 'd' for milliseconds, seconds, minutes, hours or days, respectively. If a time unit is not specified, it will default to 's'.

For example

windowing:
   type: fixed
   size: 30s

Note that any YAML transform can have a windowing parameter, which is applied to its inputs (if any) or outputs (if there are no inputs), meaning that explicit WindowInto operations are not typically needed.
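
As an illustration, an explicit WindowInto applying five-minute sliding windows every 30 seconds might look like the following sketch (assuming sliding windows accept size and period parameters):

type: WindowInto
input: SomeTransform
config:
  windowing:
    type: sliding
    size: 5m
    period: 30s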

Configuration

  • windowing ? (Optional) : the type and parameters of the windowing to perform

Usage

type: WindowInto
input: ...
config:
  windowing: windowing

ReadFromAvro

A PTransform for reading records from avro files.

Each record of the resulting PCollection will contain a single record read from a source. Records that are of simple types will be mapped to Beam rows with a single record field containing the record's value. Records that are of Avro type RECORD will be mapped to Beam rows that comply with the schema contained in the Avro file that contains those records.

Configuration

  • path ? (Optional)

Usage

type: ReadFromAvro
config:
  path: path

WriteToAvro

A PTransform for writing avro files.

If the input has a schema, a corresponding avro schema will be automatically generated and used to write the output records.

Configuration

  • path ? (Optional)

Usage

type: WriteToAvro
input: ...
config:
  path: path

ReadFromBigQuery

Reads data from BigQuery.

Exactly one of table or query must be set. If query is set, neither row_restriction nor fields should be set.
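
For example, the following sketch (project, dataset, table, and field names are illustrative) reads a subset of columns with a row restriction:

type: ReadFromBigQuery
config:
  table: my_project:my_dataset.my_table
  row_restriction: "state = 'CA'"
  fields: [name, amount]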

Configuration

  • table string (Optional) : The table to read from, specified as DATASET.TABLE or PROJECT:DATASET.TABLE.

  • query string (Optional) : A query to be used instead of the table argument.

  • row_restriction string (Optional) : Optional SQL text filtering statement, similar to a WHERE clause in a query. Aggregates are not supported. Restricted to a maximum length of 1 MB.

  • fields Array[string] (Optional)

Usage

type: ReadFromBigQuery
config:
  table: "table"
  query: "query"
  row_restriction: "row_restriction"
  fields:
  - "field"
  - "field"
  - ...

WriteToBigQuery

Configuration

Usage

type: WriteToBigQuery
input: ...
config: ...

ReadFromCsv

A PTransform for reading comma-separated values (csv) files into a PCollection.

Configuration

  • path string : The file path to read from. The path can contain glob characters such as * and ?.

  • delimiter ? (Optional)

  • comment ? (Optional)

Usage

type: ReadFromCsv
config:
  path: "path"
  delimiter: delimiter
  comment: comment

WriteToCsv

A PTransform for writing a schema'd PCollection as a (set of) comma-separated values (csv) files.

Configuration

  • path string : The file path to write to. The files written will begin with this prefix, followed by a shard identifier (see num_shards) according to the file_naming parameter.

  • delimiter ? (Optional)

Usage

type: WriteToCsv
input: ...
config:
  path: "path"
  delimiter: delimiter

ReadFromIceberg

Reads an Apache Iceberg table.

See also the Apache Iceberg Beam documentation.
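
For example, the following sketch (table, bucket, and property values are illustrative, assuming a Hadoop-type catalog whose properties follow the Iceberg CatalogUtil conventions) reads a table from a GCS warehouse:

type: ReadFromIceberg
config:
  table: db.table1
  catalog_name: local
  catalog_properties:
    type: hadoop
    warehouse: gs://my-bucket/warehouse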

Configuration

  • table string : The identifier of the Apache Iceberg table. Example: "db.table1".

  • catalog_name string (Optional) : The name of the catalog. Example: "local".

  • catalog_properties Map[string, string] (Optional) : A map of configuration properties for the Apache Iceberg catalog. The required properties depend on the catalog. For more information, see CatalogUtil in the Apache Iceberg documentation.

  • config_properties Map[string, string] (Optional) : An optional set of Hadoop configuration properties. For more information, see CatalogUtil in the Apache Iceberg documentation.

Usage

type: ReadFromIceberg
config:
  table: "table"
  catalog_name: "catalog_name"
  catalog_properties:
    a: "catalog_properties_value_a"
    b: "catalog_properties_value_b"
    c: ...
  config_properties:
    a: "config_properties_value_a"
    b: "config_properties_value_b"
    c: ...

WriteToIceberg

Writes to an Apache Iceberg table.

See also the Apache Iceberg Beam documentation including the dynamic destinations section for use of the keep, drop, and only parameters.

Configuration

  • table string : The identifier of the Apache Iceberg table. Example: "db.table1".

  • catalog_name string (Optional) : The name of the catalog. Example: "local".

  • catalog_properties Map[string, string] (Optional) : A map of configuration properties for the Apache Iceberg catalog. The required properties depend on the catalog. For more information, see CatalogUtil in the Apache Iceberg documentation.

  • config_properties Map[string, string] (Optional) : An optional set of Hadoop configuration properties. For more information, see CatalogUtil in the Apache Iceberg documentation.

  • triggering_frequency_seconds int64 (Optional) : For streaming write pipelines, the frequency at which the sink attempts to produce snapshots, in seconds.

  • keep Array[string] (Optional) : An optional list of field names to keep when writing to the destination. Other fields are dropped. Mutually exclusive with drop and only.

  • drop Array[string] (Optional) : An optional list of field names to drop before writing to the destination. Mutually exclusive with keep and only.

  • only string (Optional) : The name of exactly one field to keep as the top level record when writing to the destination. All other fields are dropped. This field must be of row type. Mutually exclusive with drop and keep.

Usage

type: WriteToIceberg
input: ...
config:
  table: "table"
  catalog_name: "catalog_name"
  catalog_properties:
    a: "catalog_properties_value_a"
    b: "catalog_properties_value_b"
    c: ...
  config_properties:
    a: "config_properties_value_a"
    b: "config_properties_value_b"
    c: ...
  triggering_frequency_seconds: triggering_frequency_seconds
  keep:
  - "keep"
  - "keep"
  - ...
  drop:
  - "drop"
  - "drop"
  - ...
  only: "only"

ReadFromJdbc

Configuration

Usage

type: ReadFromJdbc
config: ...

WriteToJdbc

Configuration

Usage

type: WriteToJdbc
input: ...
config: ...

ReadFromJson

A PTransform for reading json values from files into a PCollection.

Configuration

  • path string : The file path to read from. The path can contain glob characters such as * and ?.

Usage

type: ReadFromJson
config:
  path: "path"

WriteToJson

A PTransform for writing a PCollection as json values to files.

Configuration

  • path string : The file path to write to. The files written will begin with this prefix, followed by a shard identifier (see num_shards) according to the file_naming parameter.

Usage

type: WriteToJson
input: ...
config:
  path: "path"

ReadFromKafka

Configuration

Usage

type: ReadFromKafka
config: ...

WriteToKafka

Configuration

Usage

type: WriteToKafka
input: ...
config: ...

ReadFromMySql

Configuration

Usage

type: ReadFromMySql
config: ...

WriteToMySql

Configuration

Usage

type: WriteToMySql
input: ...
config: ...

ReadFromOracle

Configuration

Usage

type: ReadFromOracle
config: ...

WriteToOracle

Configuration

Usage

type: WriteToOracle
input: ...
config: ...

ReadFromParquet

A PTransform for reading Parquet files.

Configuration

  • path ? (Optional)

Usage

type: ReadFromParquet
config:
  path: path

WriteToParquet

A PTransform for writing parquet files.

Configuration

  • path ? (Optional)

Usage

type: WriteToParquet
input: ...
config:
  path: path

ReadFromPostgres

Configuration

Usage

type: ReadFromPostgres
config: ...

WriteToPostgres

Configuration

Usage

type: WriteToPostgres
input: ...
config: ...

ReadFromPubSub

Reads messages from Cloud Pub/Sub.
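
For example, the following sketch (topic and attribute names are illustrative) reads UTF-8 string payloads and flattens a message attribute into the output rows:

type: ReadFromPubSub
config:
  topic: projects/my-project/topics/my-topic
  format: STRING
  attributes: [source]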

Configuration

  • topic string (Optional) : Cloud Pub/Sub topic in the form "projects/<project>/topics/<topic>". If provided, subscription must be None.

  • subscription string (Optional) : Existing Cloud Pub/Sub subscription to use in the form "projects/<project>/subscriptions/<subscription>". If not specified, a temporary subscription will be created from the specified topic. If provided, topic must be None.

  • format string : The expected format of the message payload. Currently supported formats are

    • RAW: Produces records with a single payload field whose contents are the raw bytes of the pubsub message.
    • STRING: Like RAW, but the bytes are decoded as a UTF-8 string.
    • AVRO: Parses records with a given Avro schema.
    • JSON: Parses records with a given JSON schema.
  • schema ? (Optional) : Schema specification for the given format.

  • attributes Array[string] (Optional) : List of attribute keys whose values will be flattened into the output message as additional fields. For example, if the format is raw and attributes is ["a", "b"] then this read will produce elements of the form Row(payload=..., a=..., b=...).

  • attributes_map string (Optional)

  • id_attribute string (Optional) : The attribute on incoming Pub/Sub messages to use as a unique record identifier. When specified, the value of this attribute (which can be any string that uniquely identifies the record) will be used for deduplication of messages. If not provided, we cannot guarantee that no duplicate data will be delivered on the Pub/Sub stream. In this case, deduplication of the stream will be strictly best effort.

  • timestamp_attribute string (Optional) : Message value to use as element timestamp. If None, uses message publishing time as the timestamp.

    Timestamp values should be in one of two formats:

    • A numerical value representing the number of milliseconds since the Unix epoch.
    • A string in RFC 3339 format, UTC timezone. Example: 2015-10-29T23:41:41.123Z. The sub-second component of the timestamp is optional, and digits beyond the first three (i.e., time units smaller than milliseconds) may be ignored.
  • error_handling Row (Optional) : This option specifies whether and where to output error rows.

    Row fields:

    • output string : Name to use for the output error collection

Usage

type: ReadFromPubSub
config:
  topic: "topic"
  subscription: "subscription"
  format: "format"
  schema: schema
  attributes:
  - "attribute"
  - "attribute"
  - ...
  attributes_map: "attributes_map"
  id_attribute: "id_attribute"
  timestamp_attribute: "timestamp_attribute"
  error_handling:
    output: "output"

WriteToPubSub

Writes messages to Cloud Pub/Sub.

Configuration

  • topic string : Cloud Pub/Sub topic in the form "/topics/<project>/<topic>".

  • format string : How to format the message payload. Currently supported formats are

    • RAW: Expects a message with a single field (excluding attribute-related fields) whose contents are used as the raw bytes of the pubsub message.
    • AVRO: Encodes records with a given Avro schema, which may be inferred from the input PCollection schema.
    • JSON: Formats records with a given JSON schema, which may be inferred from the input PCollection schema.
  • schema ? (Optional) : Schema specification for the given format.

  • attributes Array[string] (Optional) : List of attribute keys whose values will be pulled out as PubSub message attributes. For example, if the format is raw and attributes is ["a", "b"] then elements of the form Row(any_field=..., a=..., b=...) will result in PubSub messages whose payload has the contents of any_field and whose attributes will be populated with the values of a and b.

  • attributes_map string (Optional)

  • id_attribute string (Optional) : If set, will set an attribute for each Cloud Pub/Sub message with the given name and a unique value. This attribute can then be used in a ReadFromPubSub PTransform to deduplicate messages.

  • timestamp_attribute string (Optional) : If set, will set an attribute for each Cloud Pub/Sub message with the given name and the message's publish time as the value.

  • error_handling Row (Optional) : This option specifies whether and where to output error rows.

    Row fields:

    • output string : Name to use for the output error collection

Usage

type: WriteToPubSub
input: ...
config:
  topic: "topic"
  format: "format"
  schema: schema
  attributes:
  - "attribute"
  - "attribute"
  - ...
  attributes_map: "attributes_map"
  id_attribute: "id_attribute"
  timestamp_attribute: "timestamp_attribute"
  error_handling:
    output: "output"

ReadFromPubSubLite

Configuration

Usage

type: ReadFromPubSubLite
config: ...

WriteToPubSubLite

Configuration

Usage

type: WriteToPubSubLite
input: ...
config: ...

ReadFromSpanner

Configuration

Usage

type: ReadFromSpanner
config: ...

WriteToSpanner

Configuration

Usage

type: WriteToSpanner
input: ...
config: ...

ReadFromSqlServer

Configuration

Usage

type: ReadFromSqlServer
config: ...

WriteToSqlServer

Configuration

Usage

type: WriteToSqlServer
input: ...
config: ...

ReadFromText

Reads lines from text files.

The resulting PCollection consists of rows with a single string field named "line".

Configuration

  • path string : The file path to read from. The path can contain glob characters such as * and ?.

Usage

type: ReadFromText
config:
  path: "path"

WriteToText

Writes a PCollection to a (set of) text file(s).

The input must be a PCollection whose schema has exactly one field.

Configuration

  • path string : The file path to write to. The files written will begin with this prefix, followed by a shard identifier.

Usage

type: WriteToText
input: ...
config:
  path: "path"