Class SourceRDD.Bounded<T>

java.lang.Object
org.apache.spark.rdd.RDD<WindowedValue<T>>
org.apache.beam.runners.spark.io.SourceRDD.Bounded<T>
All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, scala.Serializable
Enclosing class:
SourceRDD

public static class SourceRDD.Bounded<T> extends org.apache.spark.rdd.RDD<WindowedValue<T>>
A SourceRDD.Bounded reads input from a BoundedSource and creates a Spark RDD. This is the default way for the SparkRunner to read data from Beam's BoundedSources.
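Outside of the runner's own pipeline translation, such an RDD can also be constructed directly, for example in a test. The following is a minimal sketch, not the runner's translation code: it assumes a local Spark master and uses Beam's CountingSource as the BoundedSource; the class name BoundedSourceRddExample and the step name "Read(CountingSource)" are illustrative only.

  // Illustrative sketch only: the SparkRunner normally creates SourceRDD.Bounded
  // internally during pipeline translation. CountingSource, the local master and
  // the step name below are assumptions made for this example.
  import org.apache.beam.runners.core.construction.SerializablePipelineOptions;
  import org.apache.beam.runners.spark.io.SourceRDD;
  import org.apache.beam.sdk.io.BoundedSource;
  import org.apache.beam.sdk.io.CountingSource;
  import org.apache.beam.sdk.options.PipelineOptions;
  import org.apache.beam.sdk.options.PipelineOptionsFactory;
  import org.apache.spark.SparkConf;
  import org.apache.spark.api.java.JavaSparkContext;

  public class BoundedSourceRddExample {
    public static void main(String[] args) {
      JavaSparkContext jsc =
          new JavaSparkContext(
              new SparkConf().setMaster("local[2]").setAppName("bounded-source-rdd"));
      PipelineOptions options = PipelineOptionsFactory.create();

      // A bounded source producing the numbers [0, 100).
      BoundedSource<Long> source = CountingSource.upTo(100);

      // Wrap the source in a SourceRDD.Bounded; the step name labels this read step.
      SourceRDD.Bounded<Long> rdd =
          new SourceRDD.Bounded<>(
              jsc.sc(), source, new SerializablePipelineOptions(options), "Read(CountingSource)");

      // Each element of the RDD is a WindowedValue<Long>; count() forces evaluation.
      System.out.println("elements read: " + rdd.toJavaRDD().count());

      jsc.stop();
    }
  }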
  • Nested Class Summary

    Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging

    org.apache.spark.internal.Logging.SparkShellLoggingFilter
  • Constructor Summary

    Constructors
    Constructor
    Description
    Bounded(org.apache.spark.SparkContext sc, BoundedSource<T> source, org.apache.beam.runners.core.construction.SerializablePipelineOptions options, String stepName)
     
  • Method Summary

    Modifier and Type
    Method
    Description
    scala.collection.Iterator<WindowedValue<T>>
    compute(org.apache.spark.Partition split, org.apache.spark.TaskContext context)
     
org.apache.spark.Partition[]
getPartitions()
 

    Methods inherited from class org.apache.spark.rdd.RDD

    $plus$plus, aggregate, barrier, cache, cartesian, checkpoint, checkpointData, checkpointData_$eq, cleanShuffleDependencies, cleanShuffleDependencies$default$1, clearDependencies, coalesce, coalesce$default$2, coalesce$default$3, coalesce$default$4, collect, collect, collectPartitions, computeOrReadCheckpoint, conf, context, count, countApprox, countApprox$default$2, countApproxDistinct, countApproxDistinct, countApproxDistinct$default$1, countByValue, countByValue$default$1, countByValueApprox, countByValueApprox$default$2, countByValueApprox$default$3, creationSite, dependencies, distinct, distinct, distinct$default$2, doCheckpoint, doubleRDDToDoubleRDDFunctions, elementClassTag, filter, first, firstParent, flatMap, fold, foreach, foreachPartition, getCheckpointFile, getCreationSite, getDependencies, getNarrowAncestors, getNumPartitions, getOrCompute, getOutputDeterministicLevel, getPreferredLocations, getResourceProfile, getStorageLevel, glom, groupBy, groupBy, groupBy, groupBy$default$4, id, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, intersection, intersection, intersection, intersection$default$3, isBarrier, isBarrier_, isCheckpointed, isCheckpointedAndMaterialized, isEmpty, isLocallyCheckpointed, isReliablyCheckpointed, isTraceEnabled, iterator, keyBy, localCheckpoint, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, map, mapPartitions, mapPartitions$default$2, mapPartitionsInternal, mapPartitionsInternal$default$2, mapPartitionsWithEvaluator, mapPartitionsWithIndex, mapPartitionsWithIndex, mapPartitionsWithIndex$default$2, mapPartitionsWithIndexInternal, mapPartitionsWithIndexInternal$default$2, mapPartitionsWithIndexInternal$default$3, markCheckpointed, max, min, name, name_$eq, numericRDDToDoubleRDDFunctions, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, outputDeterministicLevel, parent, partitioner, partitions, persist, persist, pipe, pipe, pipe, pipe$default$2, pipe$default$3, pipe$default$4, pipe$default$5, pipe$default$6, pipe$default$7, preferredLocations, randomSampleWithRange, randomSplit, randomSplit$default$2, rddToAsyncRDDActions, rddToOrderedRDDFunctions, rddToPairRDDFunctions, rddToPairRDDFunctions$default$4, rddToSequenceFileRDDFunctions, reduce, repartition, repartition$default$2, retag, retag, sample, sample$default$3, saveAsObjectFile, saveAsTextFile, saveAsTextFile, scope, setName, sortBy, sortBy$default$2, sortBy$default$3, sparkContext, subtract, subtract, subtract, subtract$default$3, take, takeOrdered, takeSample, takeSample$default$3, toDebugString, toJavaRDD, toLocalIterator, top, toString, treeAggregate, treeAggregate, treeAggregate$default$4, treeReduce, treeReduce$default$2, union, unpersist, unpersist$default$1, withResources, withScope, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitionsWithEvaluator, zipWithIndex, zipWithUniqueId

    Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
  • Constructor Details

    • Bounded

      public Bounded(org.apache.spark.SparkContext sc, BoundedSource<T> source, org.apache.beam.runners.core.construction.SerializablePipelineOptions options, String stepName)
  • Method Details

    • getPartitions

      public org.apache.spark.Partition[] getPartitions()
      Specified by:
      getPartitions in class org.apache.spark.rdd.RDD<WindowedValue<T>>
    • compute

      public scala.collection.Iterator<WindowedValue<T>> compute(org.apache.spark.Partition split, org.apache.spark.TaskContext context)
      Specified by:
      compute in class org.apache.spark.rdd.RDD<WindowedValue<T>>
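getPartitions() and compute(Partition, TaskContext) are invoked by Spark's scheduler rather than by user code: broadly, getPartitions() derives the RDD's partitions from the splits of the underlying BoundedSource, and compute(...) reads one such split, yielding WindowedValues. A minimal sketch of observing this indirectly through ordinary RDD actions, continuing the hypothetical example above (the helper class and method names are illustrative, and WindowedValue is assumed to live in org.apache.beam.sdk.util as in the Beam version this page documents):

  // Illustrative helper continuing the sketch above; the class and method names
  // are assumptions for this example.
  import java.util.List;
  import org.apache.beam.runners.spark.io.SourceRDD;
  import org.apache.beam.sdk.util.WindowedValue;
  import org.apache.spark.api.java.JavaRDD;

  public class BoundedSourceRddInspection {
    static void inspect(SourceRDD.Bounded<Long> rdd) {
      JavaRDD<WindowedValue<Long>> javaRdd = rdd.toJavaRDD();

      // The partition count reflects how getPartitions() split the BoundedSource.
      int parallelism = javaRdd.getNumPartitions();

      // collect() triggers compute() for every partition; each element carries
      // window/timestamp metadata, so unwrap it with WindowedValue.getValue().
      List<Long> values = javaRdd.map(WindowedValue::getValue).collect();
      System.out.println(parallelism + " partitions, " + values.size() + " elements");
    }
  }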