Class RemoteInference
java.lang.Object
org.apache.beam.sdk.ml.inference.remote.RemoteInference
A PTransform for making remote inference calls to external machine learning services. RemoteInference provides a framework for integrating remote ML model inference into Apache Beam pipelines and handles the communication between pipelines and external inference APIs.
Example: OpenAI Model Inference
// Create model parameters
OpenAIModelParameters params = OpenAIModelParameters.builder()
    .apiKey("your-api-key")
    .modelName("gpt-4")
    .instructionPrompt("Analyse sentiment as positive or negative")
    .build();

// Apply the remote inference transform
PCollection<OpenAIModelInput> inputs = pipeline.apply(Create.of(
    OpenAIModelInput.create("An excellent B2B SaaS solution that streamlines business processes efficiently."),
    OpenAIModelInput.create("Really impressed with the innovative features!")));

PCollection<Iterable<PredictionResult<OpenAIModelInput, OpenAIModelResponse>>> results =
    inputs.apply(
        RemoteInference.<OpenAIModelInput, OpenAIModelResponse>invoke()
            .handler(OpenAIModelHandler.class)
            .withParameters(params));
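Each element of the resulting PCollection is an Iterable of PredictionResult pairs. A minimal sketch of unpacking those batches into individual responses, using FlatMapElements from the Beam Java SDK and assuming PredictionResult exposes its output through an accessor such as getOutput() (an illustrative name, not confirmed by this page):

```java
// Sketch only: flatten the batched results into individual responses.
// The getOutput() accessor on PredictionResult is an assumption; check
// the PredictionResult API for the actual method name.
PCollection<OpenAIModelResponse> responses =
    results.apply(
        FlatMapElements
            .into(TypeDescriptor.of(OpenAIModelResponse.class))
            .via(batch -> {
              List<OpenAIModelResponse> outputs = new ArrayList<>();
              for (PredictionResult<OpenAIModelInput, OpenAIModelResponse> r : batch) {
                outputs.add(r.getOutput()); // illustrative accessor
              }
              return outputs;
            }));
```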
Nested Class Summary

Nested Classes
    Modifier and Type    Class
    static class         RemoteInference.Invoke<InputT extends BaseInput, OutputT extends BaseResponse>
Method Summary

    static <InputT extends BaseInput, OutputT extends BaseResponse>
    RemoteInference.Invoke<InputT, OutputT>    invoke()
        Invoke the model handler with model parameters.
Method Details

invoke

public static <InputT extends BaseInput, OutputT extends BaseResponse> RemoteInference.Invoke<InputT, OutputT> invoke()

    Invoke the model handler with model parameters.