RunInference
Uses models to do local and remote inference. A RunInference transform performs inference on a PCollection of examples using a machine learning (ML) model. The transform outputs a PCollection that contains the input examples and the corresponding output predictions. Available in Apache Beam 2.40.0 and later versions.
For more information about Beam RunInference APIs, see the About Beam ML page and the RunInference API pipeline examples.
Examples
The following examples show how to create pipelines that use the Beam RunInference API to make predictions based on models.
| Framework | Example |
|---|---|
| PyTorch | PyTorch unkeyed model |
| PyTorch | PyTorch keyed model |
| Sklearn | Sklearn unkeyed model |
| Sklearn | Sklearn keyed model |
Related transforms
Not applicable.
Last updated on 2024/11/20