Deprecated API
Contents
- Deprecated Packages
- Deprecated Interfaces
- Deprecated Classes
- Deprecated Enum Classes
- Deprecated Fields
- Deprecated Methods
- Deprecated Constructors
- Deprecated Enum Constants

Deprecated Packages
- Use Calcite SQL dialect. Beam ZetaSQL has been deprecated.
Deprecated Interfaces
- This interface will no longer be the source of truth for worker logging configuration once jobs are executed using a dedicated SDK harness instead of user code being co-located alongside Dataflow worker code. Consider setting the corresponding options within SdkHarnessOptions to ensure forward compatibility.
- Users can achieve the same by providing this transform in a ParDo before using write in AvroIO (AvroIO.write(Class)).
- Tests which use unbounded PCollections should be in the category UsesUnboundedPCollections. Beyond that, it is up to the runner and test configuration to decide whether to run in streaming mode.
Deprecated Classes
- Legacy non-portable source which can be replaced by a DoFn with timers. https://jira.apache.org/jira/browse/BEAM-8353
- Replace with a DefaultJobBundleFactory when appropriate if the EnvironmentFactory is a DockerEnvironmentFactory, or create an InProcessJobBundleFactory and inline the creation of the environment if appropriate.
- To implement a coder, do not use any Coder.Context. Implement only those abstract methods which do not accept a Coder.Context and leave the default implementations for methods accepting a Coder.Context.
- See AvroIO.parseAllGenericRecords(SerializableFunction) for details.
- See AvroIO.readAll(Class) for details.
- Superseded by SqsIO.WriteBatches.
- See TextIO.readAll() for details.
- New implementations should extend the GetterBasedSchemaProviderV2 class' methods, which receive TypeDescriptors instead of ordinary Classes as arguments; this permits supporting generic type signatures during schema inference.
- Consider using ApproximateCountDistinct in the zetasketch extension module, which makes use of the HllCount implementation. If ApproximateCountDistinct does not meet your needs, you can use HllCount directly. Direct usage will also let you save the intermediate aggregation result into a sketch for later processing. For example, to estimate the number of distinct elements in a PCollection<String>:

      PCollection<String> input = ...;
      PCollection<Long> countDistinct =
          input.apply(HllCount.Init.forStrings().globally())
               .apply(HllCount.Extract.globally());

  For more details about using HllCount and the zetasketch extension module, see https://s.apache.org/hll-in-beam#bookmark=id.v6chsij1ixo7. (A fuller sketch follows this list.)
- Use TestPipeline with the DirectRunner.
- Use Top.Natural instead.
- Use Top.Reversed instead.
- Use ParamWindowedValueCoder instead; it is a general-purpose implementation of the same concept, but it makes the timestamp, windows, and pane info configurable.
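The following is a minimal sketch of the HllCount usage referenced above, in full pipeline context. It assumes the zetasketch extension module is on the classpath; the pipeline setup and the input data are illustrative only.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.zetasketch.HllCount;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;

    public class DistinctCountExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Illustrative input; in practice this would come from a real source.
        PCollection<String> input = p.apply(Create.of("a", "b", "a", "c"));

        // Init builds HLL++ sketches; Extract turns the merged sketch into a count.
        PCollection<Long> countDistinct =
            input
                .apply(HllCount.Init.forStrings().globally())
                .apply(HllCount.Extract.globally());

        p.run().waitUntilFinish();
      }
    }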
Deprecated Enum Classes
- Use Compression instead. (repeated for several deprecated enum classes)
- Use TestPipeline with the DirectRunner.
Deprecated Fields
- Uses the incorrect terminology; see PropertyNames.RESTRICTION_ENCODING. Should be removed once non-FnAPI SplittableDoFn expansion for Dataflow is removed.
- This will leak your SparkContext; any attempt to create a new SparkContext later will fail. Please use SparkContextFactory.setProvidedSparkContext(JavaSparkContext) / SparkContextFactory.clearProvidedSparkContext() instead to properly control the lifecycle of your context. Alternatively, you may also provide a SparkContext using SparkPipelineOptions.setUsesProvidedSparkContext(boolean) together with SparkContextOptions.setProvidedSparkContext(JavaSparkContext) and close that one appropriately. Tests of this module should use SparkContextRule. (A sketch of the provided-context approach follows this list.)
- Use STREAMING_ENGINE_EXPERIMENT instead.
- Use FieldSpecifierNotationLexer.VOCABULARY instead.
- Use FieldSpecifierNotationParser.VOCABULARY instead.
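A minimal, hypothetical sketch of the provided-SparkContext approach described above, using the options-based wiring named in the notice; the SparkConf settings and application name are assumptions for illustration.

    import org.apache.beam.runners.spark.SparkContextOptions;
    import org.apache.beam.runners.spark.SparkRunner;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class ProvidedSparkContextExample {
      public static void main(String[] args) {
        // Create and own the SparkContext outside of Beam.
        JavaSparkContext jsc =
            new JavaSparkContext(new SparkConf().setAppName("beam-app").setMaster("local[2]"));

        // Hand the existing context to the Spark runner, as the notice describes.
        SparkContextOptions options = PipelineOptionsFactory.as(SparkContextOptions.class);
        options.setRunner(SparkRunner.class);
        options.setUsesProvidedSparkContext(true);
        options.setProvidedSparkContext(jsc);

        Pipeline p = Pipeline.create(options);
        // ... build the pipeline ...
        p.run().waitUntilFinish();

        // Close the context yourself when done.
        jsc.stop();
      }
    }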
Deprecated Methods
- This method defaults the region to "us-central1". Prefer using the overload with an explicit regionId parameter.
- Please use setStateBackend below. (two methods)
- Only implement and call Coder.decode(InputStream).
- Only implement and call Coder.encode(Object value, OutputStream). (A custom coder sketch follows this list.)
- This method is to change in an unknown backwards-incompatible way once support for this functionality is refined. (three methods)
- Prefer implementing 'knownBuilderInstances'. This method will be removed in a future version of Beam.
- Kept for backward API compatibility only.
- You can achieve the functionality of AvroIO.parseAllGenericRecords(SerializableFunction) using FileIO matching plus AvroIO.parseFilesGenericRecords(SerializableFunction). This is the preferred method to make composition explicit. AvroIO.ParseAll will not receive upgrades and will be removed in a future version of Beam.
- You can achieve the functionality of AvroIO.readAll(java.lang.Class<T>) using FileIO matching plus AvroIO.readFiles(Class). This is the preferred method to make composition explicit. AvroIO.ReadAll will not receive upgrades and will be removed in a future version of Beam. (A sketch of the FileIO-based replacement follows this list.)
- You can achieve the functionality of AvroIO.readAllGenericRecords(String) using FileIO matching plus AvroIO.readFilesGenericRecords(String). This is the preferred method to make composition explicit. AvroIO.ReadAll will not receive upgrades and will be removed in a future version of Beam.
- You can achieve the functionality of AvroIO.readAllGenericRecords(Schema) using FileIO matching plus AvroIO.readFilesGenericRecords(Schema). This is the preferred method to make composition explicit. AvroIO.ReadAll will not receive upgrades and will be removed in a future version of Beam.
- RecordFormatter will be removed in future versions. Use FileIO.write() or FileIO.writeDynamic() instead.
- Use FileIO.write() or FileIO.writeDynamic() instead.
- Use AvroIO.writeCustomType(Class) instead and provide the custom record class.
- Used by Dataflow worker.
- Use GcpOptions.getWorkerZone() instead.
- Use GcpOptions.setWorkerZone(java.lang.String) instead.
- Use ExecutorOptions.getScheduledExecutorService() instead.
- Use ExecutorOptions.setScheduledExecutorService(java.util.concurrent.ScheduledExecutorService) instead. If set, it may result in multiple ExecutorServices, and therefore thread pools, in the runtime.
- Use GcsUtil.create(GcsPath, CreateOptions) instead. (two methods)
- To be removed once splitting/checkpointing are available in SDKs and rewinding in readers.
- This create function is used for Dataflow migration purposes only.
- Use SqsIO.writeBatches() for more configuration options.
- Use GenerateSequence instead.
- Use GenerateSequence and call GenerateSequence.withTimestampFn(SerializableFunction) instead.
- Use GenerateSequence instead.
- Use ElasticsearchIO.BulkIO.withMaxParallelRequests(int) instead.
- Avoid usage of this method: its effects are complex and it will be removed in future versions of Beam. Right now it exists for compatibility with WriteFiles.
- Use BigQueryIO.read(SerializableFunction) or BigQueryIO.readTableRows() instead. BigQueryIO.readTableRows() does exactly the same as BigQueryIO.read(); however, BigQueryIO.read(SerializableFunction) performs better.
- Please set the options directly in BigtableIO. (two methods)
- Read options are configured directly on BigtableIO.read(). Use BigtableIO.Read.populateDisplayData(DisplayData.Builder) to view the current configurations.
- Please set the configurations directly: BigtableIO.read().withProjectId(projectId).withInstanceId(instanceId).withTableId(tableId) and set credentials in PipelineOptions. (three methods; a sketch follows this list)
- Write options are configured directly on BigtableIO.write(). Use BigtableIO.Write.populateDisplayData(DisplayData.Builder) to view the current configurations.
- Please configure the write options directly: BigtableIO.write().withProjectId(projectId).withInstanceId(instanceId).withTableId(tableId) and set credentials in PipelineOptions. (three methods)
- This method has been deprecated in Beam 2.60.0. It does not have an effect. (two methods)
- The v1beta1 API for Cloud Pub/Sub is deprecated. (two methods)
- The v1beta2 API for Cloud Pub/Sub is deprecated. (two methods)
- This configuration has no effect, as tracing is not available.
- JdbcIO is able to infer appropriate coders from other parameters. (three methods)
- As of version 2.13, use KafkaIO.Read.withConsumerConfigUpdates(Map) instead. (A sketch follows this list.)
- As of version 2.4, use KafkaIO.Read.withTimestampPolicyFactory(TimestampPolicyFactory) instead. (four methods)
- As of version 2.13, use KafkaIO.Write.withProducerConfigUpdates(Map) instead.
- Use KafkaIO.WriteRecords and ProducerRecords to set the publish timestamp.
- As of version 2.13, use KafkaIO.WriteRecords.withProducerConfigUpdates(Map) instead.
- Use ProducerRecords to set the publish timestamp.
- Override Source.getOutputCoder() instead.
- You can achieve the functionality of TextIO.readAll() using FileIO matching plus TextIO.readFiles(). This is the preferred method to make composition explicit. TextIO.ReadAll will not receive upgrades and will be removed in a future version of Beam.
- Use TestPipeline.newProvider(T) for testing ValueProvider code.
- This should never be used; every Pipeline has a registry throughout its lifetime.
- New implementations should override GetterBasedSchemaProvider.fieldValueGetters(TypeDescriptor, Schema) and make this method throw an UnsupportedOperationException.
- New implementations should override GetterBasedSchemaProvider.fieldValueTypeInformations(TypeDescriptor, Schema) and make this method throw an UnsupportedOperationException.
- New implementations should override GetterBasedSchemaProvider.schemaTypeCreator(TypeDescriptor, Schema) and make this method throw an UnsupportedOperationException.
- Set the nullability on the elementType instead.
- Use schema options instead. (three methods)
- Set the nullability on the valueType instead.
- Use schema options instead. (three methods)
- Prefer PAssert.IterableAssert.empty() to this method.
- Object.equals(Object) is not supported on PAssert objects. If you meant to test object equality, use a variant of PAssert.PCollectionContentsAssert.containsInAnyOrder(T...) instead.
- Object.hashCode() is not supported on PAssert objects.
- This method permits a DoFn to emit elements behind the watermark. These elements are considered late, and if behind the allowed lateness of a downstream PCollection may be silently dropped. See https://github.com/apache/beam/issues/18065 for details on a replacement.
- Use DoFn.Setup or DoFn.StartBundle instead. This method will be removed in a future release.
- Use TestPipeline with the DirectRunner. (repeated for many deprecated methods; a sketch follows this list)
- Instead, the PTransform should explicitly call PCollection.setCoder(org.apache.beam.sdk.coders.Coder<T>) on the returned PCollection. (three methods)
- This method simply returns this AsMap unmodified.
- This should not be used to obtain the output of any given application of this PTransform. That should be obtained by inspecting the TransformHierarchy.Node that contains this View.CreatePCollectionView, as this view may have been replaced within pipeline surgery.
- Please override verifyCompatibility to throw a useful error message; we will remove isCompatible at version 3.0.0.
- This method permits elements to be emitted behind the watermark. These elements are considered late, and if behind the allowed lateness of a downstream PCollection may be silently dropped. See https://github.com/apache/beam/issues/18065 for details on a replacement. (two methods)
- This method will be removed entirely. The PCollection underlying a side input, including its Coder, is part of the side input's specification with a ParDo transform, which will obtain that information via a package-private channel.
- This method will be removed entirely. The PCollection underlying a side input is part of the side input's specification with a ParDo transform, which will obtain that information via a package-private channel.
- This method will be removed entirely. The PCollection underlying a side input, including its WindowingStrategy, is part of the side input's specification with a ParDo transform, which will obtain that information via a package-private channel.
- A PValue always expands into itself. Calling PValue.expand() on a PValue is almost never appropriate.
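As referenced after the Coder.decode/Coder.encode items above, here is a minimal sketch of a coder that implements only the context-free encode and decode methods. The Point element type is a hypothetical example used purely for illustration.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.beam.sdk.coders.CustomCoder;

    // Hypothetical element type used only for illustration.
    class Point {
      final int x;
      final int y;
      Point(int x, int y) { this.x = x; this.y = y; }
    }

    public class PointCoder extends CustomCoder<Point> {
      // Implement only the context-free overloads; the Coder.Context variants
      // keep their default implementations, as the deprecation notice advises.
      @Override
      public void encode(Point value, OutputStream outStream) throws IOException {
        DataOutputStream out = new DataOutputStream(outStream);
        out.writeInt(value.x);
        out.writeInt(value.y);
        out.flush();
      }

      @Override
      public Point decode(InputStream inStream) throws IOException {
        DataInputStream in = new DataInputStream(inStream);
        return new Point(in.readInt(), in.readInt());
      }
    }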
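A minimal sketch of the FileIO-based replacement for AvroIO.readAll described above. The file pattern and the record schema are illustrative assumptions, and the AvroIO import path may differ depending on the Beam version (AvroIO later moved to the Avro extension module).

    import org.apache.avro.generic.GenericRecord;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.AvroIO;
    import org.apache.beam.sdk.io.FileIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class AvroReadFilesExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Illustrative Avro schema and file pattern.
        String schemaJson =
            "{\"type\":\"record\",\"name\":\"Event\",\"fields\":"
                + "[{\"name\":\"id\",\"type\":\"long\"}]}";

        // FileIO matching plus AvroIO.readFilesGenericRecords, the composition
        // recommended in place of the deprecated readAll-style methods.
        PCollection<GenericRecord> records =
            p.apply(FileIO.match().filepattern("/tmp/events/*.avro"))
                .apply(FileIO.readMatches())
                .apply(AvroIO.readFilesGenericRecords(schemaJson));

        p.run().waitUntilFinish();
      }
    }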
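A minimal sketch of configuring BigtableIO.read() directly, as the Bigtable deprecation notices above recommend. The project, instance, and table identifiers are placeholders; credentials come from PipelineOptions, as the notices describe.

    import com.google.bigtable.v2.Row;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class BigtableReadExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<Row> rows =
            p.apply(
                BigtableIO.read()
                    .withProjectId("my-project")    // placeholder
                    .withInstanceId("my-instance")  // placeholder
                    .withTableId("my-table"));      // placeholder

        p.run().waitUntilFinish();
      }
    }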
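A minimal sketch of passing consumer properties through KafkaIO.Read.withConsumerConfigUpdates, mentioned above as the replacement introduced in version 2.13. The broker address, topic, deserializers, and the specific consumer property are illustrative assumptions.

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.kafka.KafkaIO;
    import org.apache.beam.sdk.io.kafka.KafkaRecord;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.kafka.common.serialization.LongDeserializer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class KafkaConsumerConfigExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Extra consumer properties go into a map rather than the deprecated setter.
        Map<String, Object> consumerUpdates = new HashMap<>();
        consumerUpdates.put("auto.offset.reset", "earliest");

        PCollection<KafkaRecord<Long, String>> records =
            p.apply(
                KafkaIO.<Long, String>read()
                    .withBootstrapServers("broker:9092")  // placeholder
                    .withTopic("events")                  // placeholder
                    .withKeyDeserializer(LongDeserializer.class)
                    .withValueDeserializer(StringDeserializer.class)
                    .withConsumerConfigUpdates(consumerUpdates));

        p.run().waitUntilFinish();
      }
    }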
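Many of the entries above simply say to use TestPipeline with the DirectRunner. The following is a minimal JUnit-style sketch of that pattern; the transform under test and its expected output are placeholders.

    import org.apache.beam.sdk.testing.PAssert;
    import org.apache.beam.sdk.testing.TestPipeline;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.TypeDescriptors;
    import org.junit.Rule;
    import org.junit.Test;

    public class MyTransformTest {
      // Runs on the DirectRunner when no other runner is configured
      // (and the direct runner is on the test classpath).
      @Rule public final transient TestPipeline p = TestPipeline.create();

      @Test
      public void testUppercase() {
        PCollection<String> output =
            p.apply(Create.of("a", "b"))
                // Placeholder transform under test.
                .apply(MapElements.into(TypeDescriptors.strings()).via(s -> s.toUpperCase()));

        PAssert.that(output).containsInAnyOrder("A", "B");
        p.run().waitUntilFinish();
      }
    }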
Deprecated Constructors
- This constructor is for Dataflow migration purposes only.
Deprecated Enum Constants