public interface SparkPipelineOptions extends SparkCommonPipelineOptions

SparkPipelineOptions handles Spark execution-related configurations, such as the master address, batch interval, and other user-related knobs (see the usage sketch after the method summary below).

Nested classes/interfaces inherited from interface SparkCommonPipelineOptions: SparkCommonPipelineOptions.TmpCheckpointDirFactory

Nested classes/interfaces inherited from interface PipelineOptions: PipelineOptions.AtomicLongFactory, PipelineOptions.CheckEnabled, PipelineOptions.DirectRunner, PipelineOptions.JobNameFactory, PipelineOptions.UserAgentFactory

Fields inherited from interface SparkCommonPipelineOptions: DEFAULT_MASTER_URL

Method Summary

| Modifier and Type | Method and Description | 
|---|---|
| java.lang.Long | getBatchIntervalMillis() | 
| java.lang.Long | getBundleSize() | 
| java.lang.Long | getCheckpointDurationMillis() | 
| java.lang.Long | getMaxRecordsPerBatch() | 
| java.lang.Long | getMinReadTimeMillis() | 
| java.lang.Double | getReadTimePercentage() | 
| java.lang.String | getStorageLevel() | 
| boolean | getUsesProvidedSparkContext() | 
| boolean | isCacheDisabled() | 
| static boolean | isLocalSparkMaster(SparkPipelineOptions options) Detects if the pipeline is run in Spark local mode. | 
| static void | prepareFilesToStage(SparkPipelineOptions options) Non-jar files on the classpath (e.g. directories with .class files or empty directories) will cause an exception in the running log. | 
| void | setBatchIntervalMillis(java.lang.Long batchInterval) | 
| void | setBundleSize(java.lang.Long value) | 
| void | setCacheDisabled(boolean value) | 
| void | setCheckpointDurationMillis(java.lang.Long durationMillis) | 
| void | setMaxRecordsPerBatch(java.lang.Long maxRecordsPerBatch) | 
| void | setMinReadTimeMillis(java.lang.Long minReadTimeMillis) | 
| void | setReadTimePercentage(java.lang.Double readTimePercentage) | 
| void | setStorageLevel(java.lang.String storageLevel) | 
| void | setUsesProvidedSparkContext(boolean value) | 
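In practice these options are built by parsing command-line flags with PipelineOptionsFactory and viewing them as this interface. The sketch below assumes a Beam pipeline run on the SparkRunner; the flag names map directly to the getters and setters above, and the chosen values are illustrative only, not recommendations.

```java
import org.apache.beam.runners.spark.SparkPipelineOptions;
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class SparkOptionsExample {
  public static void main(String[] args) {
    // Flags such as --sparkMaster=local[4] or --batchIntervalMillis=1000 are
    // parsed and exposed through the SparkPipelineOptions view.
    SparkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(SparkPipelineOptions.class);

    options.setRunner(SparkRunner.class);
    options.setBatchIntervalMillis(1000L);        // default 500L
    options.setStorageLevel("MEMORY_AND_DISK");   // default "MEMORY_ONLY"
    options.setMaxRecordsPerBatch(100_000L);      // default -1L
    options.setCheckpointDurationMillis(10_000L); // default -1L

    Pipeline pipeline = Pipeline.create(options);
    // ... apply the pipeline's transforms here ...
    pipeline.run().waitUntilFinish();
  }
}
```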
Methods inherited from interface SparkCommonPipelineOptions: getCheckpointDir, getEnableSparkMetricSinks, getFilesToStage, getSparkMaster, setCheckpointDir, setEnableSparkMetricSinks, setFilesToStage, setSparkMaster

Methods inherited from interface StreamingOptions: isStreaming, setStreaming

Methods inherited from interface ApplicationNameOptions: getAppName, setAppName

Methods inherited from interface PipelineOptions: as, getJobName, getOptionsId, getRunner, getStableUniqueNames, getTempLocation, getUserAgent, outputRuntimeOptions, setJobName, setOptionsId, setRunner, setStableUniqueNames, setTempLocation, setUserAgent

Methods inherited from interface HasDisplayData: populateDisplayData

Method Detail

@Default.Long(value=500L) java.lang.Long getBatchIntervalMillis()
void setBatchIntervalMillis(java.lang.Long batchInterval)
@Default.String(value="MEMORY_ONLY") java.lang.String getStorageLevel()
void setStorageLevel(java.lang.String storageLevel)
@Default.Long(value=200L) java.lang.Long getMinReadTimeMillis()
void setMinReadTimeMillis(java.lang.Long minReadTimeMillis)
@Default.Long(value=-1L) java.lang.Long getMaxRecordsPerBatch()
void setMaxRecordsPerBatch(java.lang.Long maxRecordsPerBatch)
@Default.Double(value=0.1) java.lang.Double getReadTimePercentage()
void setReadTimePercentage(java.lang.Double readTimePercentage)
@Default.Long(value=-1L) java.lang.Long getCheckpointDurationMillis()
void setCheckpointDurationMillis(java.lang.Long durationMillis)
@Default.Long(value=0L) java.lang.Long getBundleSize()
@Experimental void setBundleSize(java.lang.Long value)
@Default.Boolean(value=false) boolean getUsesProvidedSparkContext()
void setUsesProvidedSparkContext(boolean value)
@Default.Boolean(value=false) boolean isCacheDisabled()
void setCacheDisabled(boolean value)
@Internal static boolean isLocalSparkMaster(SparkPipelineOptions options)
@Internal static void prepareFilesToStage(SparkPipelineOptions options)
Non-jar files on the classpath (e.g. directories with .class files or empty directories) will cause an exception in the running log. Although the SparkContext can handle this when running against a local master, it is better not to include non-jar files on the classpath.
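getUsesProvidedSparkContext and setUsesProvidedSparkContext are intended to let the runner reuse a JavaSparkContext owned by the embedding application instead of creating its own. A minimal sketch follows; it assumes the companion SparkContextOptions interface from the same package (not documented on this page) is the options view that actually carries the provided context, so treat that part as an assumption rather than an API guarantee.

```java
import org.apache.beam.runners.spark.SparkContextOptions; // assumed companion options interface
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.spark.api.java.JavaSparkContext;

public class ProvidedContextExample {
  public static void main(String[] args) {
    // A Spark context already created and owned by the embedding application.
    JavaSparkContext jsc = new JavaSparkContext("local[2]", "beam-on-spark");

    SparkContextOptions options =
        PipelineOptionsFactory.fromArgs(args).as(SparkContextOptions.class);
    options.setRunner(SparkRunner.class);
    options.setUsesProvidedSparkContext(true); // defined on SparkPipelineOptions (default false)
    options.setProvidedSparkContext(jsc);      // assumed setter on SparkContextOptions

    Pipeline pipeline = Pipeline.create(options);
    // ... apply the pipeline's transforms here ...
    pipeline.run().waitUntilFinish();

    // The caller stays responsible for the lifecycle of the provided context.
    jsc.stop();
  }
}
```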