Use models to do local and remote inference. A
RunInference transform performs inference on a
PCollection of examples using a machine learning (ML) model. The transform outputs a
PCollection that contains the input examples and the output predictions. Available in Apache Beam 2.40.0 and later versions.
The following examples show how to create pipelines that use the Beam RunInference API to make predictions with ML models.
- PyTorch unkeyed model
- PyTorch keyed model
- Sklearn unkeyed model
- Sklearn keyed model
Last updated on 2024/02/23