apache_beam.ml.inference package
A package with modules for running inference and prediction on machine learning models. It includes built-in support for popular frameworks as well as an interface for adding frameworks that are not yet supported.
Note: in addition to the frameworks covered by the submodules below, Beam also supports a TensorFlow model handler via the tfx-bsl library. See https://beam.apache.org/documentation/ml/about-ml/#tensorflow for more information on using TensorFlow in Beam.
Submodules
- apache_beam.ml.inference.base module
  - PredictionResult
  - RateLimitExceeded
  - ModelMetadata
  - RunInferenceDLQ
  - KeyModelPathMapping
  - ModelHandler
    - ModelHandler.load_model()
    - ModelHandler.run_inference()
    - ModelHandler.get_num_bytes()
    - ModelHandler.get_metrics_namespace()
    - ModelHandler.get_resource_hints()
    - ModelHandler.batch_elements_kwargs()
    - ModelHandler.validate_inference_args()
    - ModelHandler.update_model_path()
    - ModelHandler.update_model_paths()
    - ModelHandler.get_preprocess_fns()
    - ModelHandler.get_postprocess_fns()
    - ModelHandler.should_skip_batching()
    - ModelHandler.set_environment_vars()
    - ModelHandler.with_preprocess_fn()
    - ModelHandler.with_postprocess_fn()
    - ModelHandler.with_no_batching()
    - ModelHandler.share_model_across_processes()
    - ModelHandler.model_copies()
    - ModelHandler.override_metrics()
    - ModelHandler.should_garbage_collect_on_timeout()
  - RemoteModelHandler
  - KeyModelMapping
  - KeyedModelHandler
    - KeyedModelHandler.load_model()
    - KeyedModelHandler.run_inference()
    - KeyedModelHandler.get_num_bytes()
    - KeyedModelHandler.get_metrics_namespace()
    - KeyedModelHandler.get_resource_hints()
    - KeyedModelHandler.batch_elements_kwargs()
    - KeyedModelHandler.validate_inference_args()
    - KeyedModelHandler.update_model_paths()
    - KeyedModelHandler.update_model_path()
    - KeyedModelHandler.share_model_across_processes()
    - KeyedModelHandler.model_copies()
    - KeyedModelHandler.override_metrics()
    - KeyedModelHandler.should_garbage_collect_on_timeout()
  - MaybeKeyedModelHandler
    - MaybeKeyedModelHandler.load_model()
    - MaybeKeyedModelHandler.run_inference()
    - MaybeKeyedModelHandler.get_num_bytes()
    - MaybeKeyedModelHandler.get_metrics_namespace()
    - MaybeKeyedModelHandler.get_resource_hints()
    - MaybeKeyedModelHandler.batch_elements_kwargs()
    - MaybeKeyedModelHandler.validate_inference_args()
    - MaybeKeyedModelHandler.update_model_path()
    - MaybeKeyedModelHandler.get_preprocess_fns()
    - MaybeKeyedModelHandler.get_postprocess_fns()
    - MaybeKeyedModelHandler.should_skip_batching()
    - MaybeKeyedModelHandler.share_model_across_processes()
    - MaybeKeyedModelHandler.model_copies()
  - OOMProtectedFn
  - RunInference
  - load_model_status()
- apache_beam.ml.inference.gemini_inference module
- apache_beam.ml.inference.huggingface_inference module
- apache_beam.ml.inference.model_manager module
- apache_beam.ml.inference.onnx_inference module
- apache_beam.ml.inference.pytorch_inference module
- apache_beam.ml.inference.sklearn_inference module
- apache_beam.ml.inference.tensorflow_inference module
- apache_beam.ml.inference.tensorrt_inference module
  - TensorRTEngine
  - TensorRTEngineHandlerNumPy
    - TensorRTEngineHandlerNumPy.load_model()
    - TensorRTEngineHandlerNumPy.load_onnx()
    - TensorRTEngineHandlerNumPy.build_engine()
    - TensorRTEngineHandlerNumPy.run_inference()
    - TensorRTEngineHandlerNumPy.get_num_bytes()
    - TensorRTEngineHandlerNumPy.get_metrics_namespace()
    - TensorRTEngineHandlerNumPy.validate_inference_args()
- apache_beam.ml.inference.utils module
- apache_beam.ml.inference.vertex_ai_inference module
- apache_beam.ml.inference.vllm_inference module
- apache_beam.ml.inference.xgboost_inference module