Use models to perform local and remote inference. A RunInference transform performs inference on a PCollection of examples using a machine learning (ML) model. The transform outputs a PCollection that contains both the input examples and the output predictions.
You must have Apache Beam 2.40.0 or later installed to run these pipelines.
For additional examples, see the RunInference API pipeline examples.
The following examples show how to create pipelines that use the Beam RunInference API to make predictions based on models.
| Framework | Example |
|---|---|
| PyTorch | PyTorch unkeyed model |
| PyTorch | PyTorch keyed model |
| Sklearn | Sklearn unkeyed model |
| Sklearn | Sklearn keyed model |
Last updated on 2023/03/20