Package org.apache.beam.sdk.io.kafka
Class KafkaMetrics.KafkaMetricsImpl
java.lang.Object
org.apache.beam.sdk.io.kafka.KafkaMetrics.KafkaMetricsImpl
- All Implemented Interfaces:
KafkaMetrics
- Enclosing interface:
KafkaMetrics
Metrics of a batch of RPCs. Member variables are thread safe; however, this class does not provide atomicity across member variables.
Expected usage: a number of threads record metrics in an instance of this class with the member methods. Afterwards, a single thread should call flushBufferedMetrics, which will export all counter metrics and RPC latency distribution metrics to the underlying perWorkerMetrics container. After that, metrics should not be written to or read from this object.
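The following is a minimal usage sketch, not taken from the Beam sources. It assumes that KafkaMetricsImpl.create() returns a fresh instance and that the Duration parameter is java.time.Duration; the topic name, partition, and backlog values are made up for illustration.

import java.time.Duration;
import org.apache.beam.sdk.io.kafka.KafkaMetrics;

public class KafkaMetricsUsageSketch {
  public static void main(String[] args) {
    // One instance shared by the threads that record metrics.
    KafkaMetrics metrics = KafkaMetrics.KafkaMetricsImpl.create();

    // Any number of threads may record metrics concurrently:
    long startMs = System.currentTimeMillis();
    // ... perform a Kafka poll here ...
    long elapsedMs = System.currentTimeMillis() - startMs;
    metrics.updateSuccessfulRpcMetrics("example-topic", Duration.ofMillis(elapsedMs));
    metrics.updateBacklogBytes("example-topic", /* partitionId= */ 0, /* backlog= */ 4096L);

    // Once recording is finished, a single thread exports everything to the
    // perWorkerMetrics container; later calls to flushBufferedMetrics() no-op.
    metrics.flushBufferedMetrics();
  }
}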
-
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.beam.sdk.io.kafka.KafkaMetrics
KafkaMetrics.KafkaMetricsImpl, KafkaMetrics.NoOpKafkaMetrics
-
Constructor Summary
Constructors
Constructor | Description
KafkaMetricsImpl() |
-
Method Summary
Modifier and Type | Method | Description
static KafkaMetrics.KafkaMetricsImpl | create() |
void | flushBufferedMetrics() | Export all metrics recorded in this instance to the underlying perWorkerMetrics containers.
void | updateBacklogBytes(String topicName, int partitionId, long backlog) | This is for tracking backlog bytes to be added to the Metric Container at a later time.
void | updateSuccessfulRpcMetrics(String topic, Duration elapsedTime) | Record the RPC status and latency of a successful Kafka poll RPC call.
-
Constructor Details
-
KafkaMetricsImpl
public KafkaMetricsImpl()
-
-
Method Details
-
create
public static KafkaMetrics.KafkaMetricsImpl create()
-
updateSuccessfulRpcMetrics
public void updateSuccessfulRpcMetrics(String topic, Duration elapsedTime)
Record the RPC status and latency of a successful Kafka poll RPC call.
TODO(naireenhussain): It's possible that `isWritable().get()` is called before it's set to false in another thread, allowing an extraneous measurement to slip in, so perTopicRpcLatencies() isn't necessarily thread safe. One way to address this would be to add synchronized blocks to ensure that only one thread ever reads or modifies the perTopicRpcLatencies() map.
- Specified by:
updateSuccessfulRpcMetrics
in interface KafkaMetrics
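As a hedged illustration (not from the Beam sources), a caller might time a consumer poll and record the elapsed duration against the topic. The consumer, topic, and poll timeout below are hypothetical, and Duration is assumed to be java.time.Duration.

import java.time.Duration;
import java.time.Instant;
import org.apache.beam.sdk.io.kafka.KafkaMetrics;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;

class PollTimingSketch {
  // Times a single poll and records its latency for the given topic.
  static ConsumerRecords<byte[], byte[]> timedPoll(
      Consumer<byte[], byte[]> consumer, String topic, KafkaMetrics metrics) {
    Instant start = Instant.now();
    ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(100));
    metrics.updateSuccessfulRpcMetrics(topic, Duration.between(start, Instant.now()));
    return records;
  }
}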
-
updateBacklogBytes
public void updateBacklogBytes(String topicName, int partitionId, long backlog)
This is for tracking backlog bytes to be added to the Metric Container at a later time.
- Specified by:
updateBacklogBytes
in interface KafkaMetrics
- Parameters:
topicName - topicName
partitionId - partitionId
backlog - backlog for the specific partitionId of topicName
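A hedged sketch of how backlog bytes might be recorded follows. The byte estimate (remaining record count times an average record size) and the helper name are illustrative assumptions, not the actual Beam logic.

import org.apache.beam.sdk.io.kafka.KafkaMetrics;

class BacklogSketch {
  // Records an estimated backlog in bytes for one partition of a topic; the value is
  // buffered and only exported later by flushBufferedMetrics().
  static void recordBacklog(
      KafkaMetrics metrics, String topicName, int partitionId,
      long remainingRecords, long avgRecordSizeBytes) {
    metrics.updateBacklogBytes(topicName, partitionId, remainingRecords * avgRecordSizeBytes);
  }
}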
-
flushBufferedMetrics
public void flushBufferedMetrics()
Export all metrics recorded in this instance to the underlying perWorkerMetrics containers. This function will only report metrics once per instance; subsequent calls to this function will no-op.
- Specified by:
flushBufferedMetrics
in interface KafkaMetrics