apache_beam.ml.inference.base module

An extensible run inference transform.

Users of this module can extend the ModelLoader class for any ML framework, then pass their extended ModelLoader object into RunInference to create a RunInference Beam transform for that framework.

The transform handles standard inference functionality such as metric collection, sharing the model between threads, and batching elements.

Note: This module is still under active development, and these interfaces are subject to change.

class apache_beam.ml.inference.base.InferenceRunner[source]

Bases: object

Implements running inferences for a framework.

run_inference(batch: List[Any], model: Any) → Iterable[Any][source]

Runs inference on a batch of examples and returns an Iterable of predictions.

get_num_bytes(batch: Any) → int[source]

Returns the number of bytes of data for a batch.

get_metrics_namespace() → str[source]

Returns a namespace for metrics collected by the RunInference transform.

class apache_beam.ml.inference.base.ModelLoader[source]

Bases: typing.Generic

Has the ability to load an ML model.

load_model() → T[source]

Loads and initializes a model for processing.

get_inference_runner() → apache_beam.ml.inference.base.InferenceRunner[source]

Returns an implementation of InferenceRunner for this model.

class apache_beam.ml.inference.base.RunInference(model_loader: apache_beam.ml.inference.base.ModelLoader, clock=None)[source]

Bases: apache_beam.transforms.ptransform.PTransform

An extensible transform for running inferences.

expand(pcoll: apache_beam.pvalue.PCollection) → apache_beam.pvalue.PCollection[source]

Applies the transform to the input PCollection and returns a PCollection of predictions.