public abstract static class BigtableIO.Write extends PTransform<PCollection<KV<com.google.protobuf.ByteString,java.lang.Iterable<com.google.bigtable.v2.Mutation>>>,PDone>
A PTransform that writes to Google Cloud Bigtable. See the class-level Javadoc on BigtableIO for more information.
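For orientation, here is a minimal usage sketch (not copied from the official Javadoc); the project, instance, and table names and the upstream `mutations` collection are placeholders:

```java
import com.google.bigtable.v2.Mutation;
import com.google.protobuf.ByteString;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

Pipeline p = Pipeline.create();

// Each element pairs a row key with the mutations to apply to that row.
PCollection<KV<ByteString, Iterable<Mutation>>> mutations = ...; // built upstream

mutations.apply("WriteToBigtable",
    BigtableIO.write()
        .withProjectId("my-project")    // placeholder
        .withInstanceId("my-instance")  // placeholder
        .withTableId("my-table"));      // placeholder

p.run().waitUntilFinish();
```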
See Also: BigtableIO, Serialized Form
Fields inherited from class org.apache.beam.sdk.transforms.PTransform: annotations, name, resourceHints

| Constructor and Description |
|---|
| Write() | 

| Modifier and Type | Method and Description |
|---|---|
| PDone | expand(PCollection<KV<com.google.protobuf.ByteString,java.lang.Iterable<com.google.bigtable.v2.Mutation>>> input) Override this method to specify how this PTransform should be expanded on the given InputT. |
| @Nullable com.google.cloud.bigtable.config.BigtableOptions | getBigtableOptions() Deprecated. Write options are configured directly on BigtableIO.write(). Use populateDisplayData(DisplayData.Builder) to view the current configurations. |
| void | populateDisplayData(DisplayData.Builder builder) Register display data for the given transform or component. |
| java.lang.String | toString() |
| void | validate(PipelineOptions options) Called before running the Pipeline to verify this transform is fully and correctly specified. |
| BigtableIO.Write | withAppProfileId(java.lang.String appProfileId) Returns a new BigtableIO.Write that will write using the specified app profile id. |
| BigtableIO.Write | withAppProfileId(ValueProvider<java.lang.String> appProfileId) Returns a new BigtableIO.Write that will write using the specified app profile id. |
| BigtableIO.Write | withAttemptTimeout(Duration timeout) Returns a new BigtableIO.Write with the attempt timeout. |
| BigtableIO.Write | withBigtableOptions(com.google.cloud.bigtable.config.BigtableOptions.Builder optionsBuilder) Deprecated. Please configure the write options directly: BigtableIO.write().withProjectId(projectId).withInstanceId(instanceId).withTableId(tableId) and set credentials in PipelineOptions. |
| BigtableIO.Write | withBigtableOptions(com.google.cloud.bigtable.config.BigtableOptions options) Deprecated. Please configure the write options directly: BigtableIO.write().withProjectId(projectId).withInstanceId(instanceId).withTableId(tableId) and set credentials in PipelineOptions. |
| BigtableIO.Write | withBigtableOptionsConfigurator(SerializableFunction<com.google.cloud.bigtable.config.BigtableOptions.Builder,com.google.cloud.bigtable.config.BigtableOptions.Builder> configurator) Deprecated. Please configure the write options directly: BigtableIO.write().withProjectId(projectId).withInstanceId(instanceId).withTableId(tableId) and set credentials in PipelineOptions. |
| BigtableIO.Write | withEmulator(java.lang.String emulatorHost) Returns a new BigtableIO.Write that will use an official Bigtable emulator. |
| BigtableIO.Write | withFlowControl(boolean enableFlowControl) Returns a new BigtableIO.Write with flow control enabled if enableFlowControl is true. |
| BigtableIO.Write | withInstanceId(java.lang.String instanceId) Returns a new BigtableIO.Write that will write into the Cloud Bigtable instance indicated by the given parameter; requires withProjectId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) to be called to determine the project. |
| BigtableIO.Write | withInstanceId(ValueProvider<java.lang.String> instanceId) Returns a new BigtableIO.Write that will write into the Cloud Bigtable instance indicated by the given parameter; requires withProjectId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) to be called to determine the project. |
| BigtableIO.Write | withMaxBytesPerBatch(long size) Returns a new BigtableIO.Write with the max bytes a batch can have. |
| BigtableIO.Write | withMaxElementsPerBatch(long size) Returns a new BigtableIO.Write with the max elements a batch can have. |
| BigtableIO.Write | withMaxOutstandingBytes(long bytes) Returns a new BigtableIO.Write with the max number of outstanding bytes allowed before enforcing flow control. |
| BigtableIO.Write | withMaxOutstandingElements(long count) Returns a new BigtableIO.Write with the max number of outstanding elements allowed before enforcing flow control. |
| BigtableIO.Write | withOperationTimeout(Duration timeout) Returns a new BigtableIO.Write with the operation timeout. |
| BigtableIO.Write | withoutValidation() Disables validation that the table being written to exists. |
| BigtableIO.Write | withProjectId(java.lang.String projectId) Returns a new BigtableIO.Write that will write into the Cloud Bigtable project indicated by the given parameter; requires withInstanceId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) to be called to determine the instance. |
| BigtableIO.Write | withProjectId(ValueProvider<java.lang.String> projectId) Returns a new BigtableIO.Write that will write into the Cloud Bigtable project indicated by the given parameter; requires withInstanceId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) to be called to determine the instance. |
| BigtableIO.Write | withTableId(java.lang.String tableId) Returns a new BigtableIO.Write that will write to the specified table. |
| BigtableIO.Write | withTableId(ValueProvider<java.lang.String> tableId) Returns a new BigtableIO.Write that will write to the specified table. |
| BigtableIO.WriteWithResults | withWriteResults() Returns a BigtableIO.WriteWithResults that will emit a BigtableWriteResult for each batch of rows written. |
Methods inherited from class org.apache.beam.sdk.transforms.PTransform: addAnnotation, compose, compose, getAdditionalInputs, getAnnotations, getDefaultOutputCoder, getDefaultOutputCoder, getDefaultOutputCoder, getKindString, getName, getResourceHints, setResourceHints, validate

@Deprecated public @Nullable com.google.cloud.bigtable.config.BigtableOptions getBigtableOptions()
Deprecated. Write options are configured directly on BigtableIO.write(). Use populateDisplayData(DisplayData.Builder) to view the current configurations.

public BigtableIO.Write withProjectId(ValueProvider<java.lang.String> projectId)
Returns a new BigtableIO.Write that will write into the Cloud Bigtable project indicated by the given parameter. Requires withInstanceId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) to be called to determine the instance.
 Does not modify this object.
public BigtableIO.Write withProjectId(java.lang.String projectId)
Returns a new BigtableIO.Write that will write into the Cloud Bigtable project indicated by the given parameter. Requires withInstanceId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) to be called to determine the instance.
 Does not modify this object.
public BigtableIO.Write withInstanceId(ValueProvider<java.lang.String> instanceId)
Returns a new BigtableIO.Write that will write into the Cloud Bigtable instance indicated by the given parameter. Requires withProjectId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) to be called to determine the project.
 Does not modify this object.
public BigtableIO.Write withInstanceId(java.lang.String instanceId)
Returns a new BigtableIO.Write that will write into the Cloud Bigtable instance indicated by the given parameter. Requires withProjectId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) to be called to determine the project.
 Does not modify this object.
public BigtableIO.Write withTableId(ValueProvider<java.lang.String> tableId)
Returns a new BigtableIO.Write that will write to the specified table.
 Does not modify this object.
public BigtableIO.Write withTableId(java.lang.String tableId)
Returns a new BigtableIO.Write that will write to the specified table.
 Does not modify this object.
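As a sketch of the ValueProvider overloads above (the ids are placeholders, not values from this Javadoc), StaticValueProvider wraps values known at construction time; a runtime ValueProvider supplied through custom pipeline options works the same way:

```java
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import org.apache.beam.sdk.options.ValueProvider.StaticValueProvider;

BigtableIO.Write write =
    BigtableIO.write()
        .withProjectId(StaticValueProvider.of("my-project"))
        .withInstanceId(StaticValueProvider.of("my-instance"))
        .withTableId(StaticValueProvider.of("my-table"));
```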
public BigtableIO.Write withAppProfileId(ValueProvider<java.lang.String> appProfileId)
Returns a new BigtableIO.Write that will write using the specified app profile id.
 Remember that in order to use single-row transactions, this must use a single-cluster routing policy.
Does not modify this object.
public BigtableIO.Write withAppProfileId(java.lang.String appProfileId)
Returns a new BigtableIO.Write that will write using the specified app profile id.
 Remember that in order to use single-row transactions, this must use a single-cluster routing policy.
Does not modify this object.
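A brief sketch of the app profile setter; "my-app-profile" is a placeholder, and the profile itself is configured in Bigtable:

```java
BigtableIO.Write write =
    BigtableIO.write()
        .withProjectId("my-project")
        .withInstanceId("my-instance")
        .withTableId("my-table")
        // Placeholder id; the profile must use single-cluster routing
        // if single-row transactions are required.
        .withAppProfileId("my-app-profile");
```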
@Deprecated public BigtableIO.Write withBigtableOptions(com.google.cloud.bigtable.config.BigtableOptions options)
Deprecated. Please configure the write options directly: BigtableIO.write().withProjectId(projectId).withInstanceId(instanceId).withTableId(tableId) and set credentials in PipelineOptions.
The instance ID and project ID should be provided via withInstanceId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) and withProjectId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) respectively.
 Returns a new BigtableIO.Write that will write to the Cloud Bigtable instance
 indicated by the given options, and using any other specified customizations.
 
Does not modify this object.
@Deprecated public BigtableIO.Write withBigtableOptions(com.google.cloud.bigtable.config.BigtableOptions.Builder optionsBuilder)
Deprecated. Please configure the write options directly: BigtableIO.write().withProjectId(projectId).withInstanceId(instanceId).withTableId(tableId) and set credentials in PipelineOptions.
The instance ID and project ID should be provided via withInstanceId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) and withProjectId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) respectively.
 Returns a new BigtableIO.Write that will write to the Cloud Bigtable instance
 indicated by the given options, and using any other specified customizations.
 
Clones the given BigtableOptions builder so that any further changes will have no
 effect on the returned BigtableIO.Write.
 
Does not modify this object.
@Deprecated public BigtableIO.Write withBigtableOptionsConfigurator(SerializableFunction<com.google.cloud.bigtable.config.BigtableOptions.Builder,com.google.cloud.bigtable.config.BigtableOptions.Builder> configurator)
Deprecated. Please configure the write options directly: BigtableIO.write().withProjectId(projectId).withInstanceId(instanceId).withTableId(tableId) and set credentials in PipelineOptions.
Returns a new BigtableIO.Write that will write to the Cloud Bigtable instance with customized options provided by the given configurator.
WARNING: instanceId and projectId should not be provided here; they should be provided via withProjectId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>) and withInstanceId(org.apache.beam.sdk.options.ValueProvider<java.lang.String>).
 
Does not modify this object.
public BigtableIO.Write withoutValidation()
Disables validation that the table being written to exists.
public BigtableIO.Write withEmulator(java.lang.String emulatorHost)
Returns a new BigtableIO.Write that will use an official Bigtable emulator.
 This is used for testing.
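A test-oriented sketch, assuming a local emulator listening on localhost:8086 (the port is an assumption based on the emulator's usual default, not something this Javadoc specifies):

```java
BigtableIO.Write write =
    BigtableIO.write()
        .withProjectId("test-project")    // any id works against the emulator
        .withInstanceId("test-instance")
        .withTableId("test-table")
        .withEmulator("localhost:8086")   // assumed emulator host:port
        .withoutValidation();             // skip table-existence checks in tests
```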
public BigtableIO.Write withAttemptTimeout(Duration timeout)
Returns a new BigtableIO.Write with the attempt timeout. The attempt timeout controls the
 timeout for each remote call.
 Does not modify this object.
public BigtableIO.Write withOperationTimeout(Duration timeout)
Returns a new BigtableIO.Write with the operation timeout. The operation timeout has
 ultimate control over how long the logic should keep trying the remote call until it gives up
 completely.
 Does not modify this object.
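A sketch combining the two timeouts; the values are arbitrary, and Duration here is org.joda.time.Duration, which the Beam SDK uses for time spans:

```java
import org.joda.time.Duration;

BigtableIO.Write write =
    BigtableIO.write()
        .withProjectId("my-project")
        .withInstanceId("my-instance")
        .withTableId("my-table")
        // Each individual RPC attempt may take up to 30 seconds...
        .withAttemptTimeout(Duration.standardSeconds(30))
        // ...while the whole retried operation gives up after 5 minutes.
        .withOperationTimeout(Duration.standardMinutes(5));
```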
public BigtableIO.Write withMaxElementsPerBatch(long size)
Returns a new BigtableIO.Write with the max elements a batch can have. After this
 many elements are accumulated, they will be wrapped up in a batch and sent to Bigtable.
 Does not modify this object.
public BigtableIO.Write withMaxBytesPerBatch(long size)
Returns a new BigtableIO.Write with the max bytes a batch can have. After this many
 bytes are accumulated, the elements will be wrapped up in a batch and sent to Bigtable.
 Does not modify this object.
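A sketch of the two batching knobs with arbitrary illustrative values; per the descriptions above, accumulated elements are flushed as a batch once a limit is reached:

```java
BigtableIO.Write write =
    BigtableIO.write()
        .withProjectId("my-project")
        .withInstanceId("my-instance")
        .withTableId("my-table")
        .withMaxElementsPerBatch(100)             // flush after 100 elements
        .withMaxBytesPerBatch(5L * 1024 * 1024);  // or after roughly 5 MB
```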
public BigtableIO.Write withMaxOutstandingElements(long count)
Returns a new BigtableIO.Write with the max number of outstanding elements allowed
 before enforcing flow control.
 Does not modify this object.
public BigtableIO.Write withMaxOutstandingBytes(long bytes)
Returns a new BigtableIO.Write with the max number of outstanding bytes allowed
 before enforcing flow control.
 Does not modify this object.
public BigtableIO.Write withFlowControl(boolean enableFlowControl)
Returns a new BigtableIO.Write with flow control enabled if enableFlowControl is
 true.
 When enabled, traffic to Bigtable is automatically rate-limited to prevent overloading
 Bigtable clusters while keeping enough load to trigger Bigtable Autoscaling (if enabled) to
 provision more nodes as needed. It is different from the flow control set by withMaxOutstandingElements(long) and withMaxOutstandingBytes(long), which is
 always enabled on batch writes and limits the number of outstanding requests to the Bigtable
 server.
 
Does not modify this object.
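A sketch that enables the adaptive flow control and also sets the always-on outstanding-work caps described above; the numeric limits are hypothetical:

```java
BigtableIO.Write write =
    BigtableIO.write()
        .withProjectId("my-project")
        .withInstanceId("my-instance")
        .withTableId("my-table")
        .withFlowControl(true)                         // automatic rate limiting
        .withMaxOutstandingElements(10_000L)           // hypothetical cap
        .withMaxOutstandingBytes(100L * 1024 * 1024);  // ~100 MB in flight
```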
public BigtableIO.WriteWithResults withWriteResults()
Returns a BigtableIO.WriteWithResults that will emit a BigtableWriteResult for each batch of rows written.
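One common way to use the returned BigtableIO.WriteWithResults (a pattern, not a requirement of this API) is to gate a downstream step on the write results with Wait.on; mutations, otherInput, and nextTransform below are placeholders:

```java
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import org.apache.beam.sdk.io.gcp.bigtable.BigtableWriteResult;
import org.apache.beam.sdk.transforms.Wait;
import org.apache.beam.sdk.values.PCollection;

PCollection<BigtableWriteResult> writeResults =
    mutations.apply("WriteToBigtable",
        BigtableIO.write()
            .withProjectId("my-project")
            .withInstanceId("my-instance")
            .withTableId("my-table")
            .withWriteResults());

// Only process otherInput after the Bigtable writes above have completed.
otherInput
    .apply(Wait.on(writeResults))
    .apply("AfterBigtableWrite", nextTransform);
```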
public PDone expand(PCollection<KV<com.google.protobuf.ByteString,java.lang.Iterable<com.google.bigtable.v2.Mutation>>> input)
Description copied from class: PTransform
Override this method to specify how this PTransform should be expanded on the given InputT.
NOTE: This method should not be called directly. Instead, the PTransform should be applied to the InputT using the apply method.
 
Composite transforms, which are defined in terms of other transforms, should return the output of one of the composed transforms. Non-composite transforms, which do not apply any transforms internally, should return a new unbound output and register evaluators (via backend-specific registration methods).
Overrides: expand in class PTransform<PCollection<KV<com.google.protobuf.ByteString,java.lang.Iterable<com.google.bigtable.v2.Mutation>>>,PDone>

public void validate(PipelineOptions options)
Description copied from class: PTransform
Called before running the Pipeline to verify this transform is fully and correctly specified. By default, does nothing.
Overrides: validate in class PTransform<PCollection<KV<com.google.protobuf.ByteString,java.lang.Iterable<com.google.bigtable.v2.Mutation>>>,PDone>

public void populateDisplayData(DisplayData.Builder builder)
Description copied from class: PTransform
Register display data for the given transform or component.
populateDisplayData(DisplayData.Builder) is invoked by Pipeline runners to collect
 display data via DisplayData.from(HasDisplayData). Implementations may call super.populateDisplayData(builder) in order to register display data in the current namespace,
 but should otherwise use subcomponent.populateDisplayData(builder) to use the namespace
 of the subcomponent.
 
By default, does not register any display data. Implementors may override this method to provide their own display data.
Specified by: populateDisplayData in interface HasDisplayData
Overrides: populateDisplayData in class PTransform<PCollection<KV<com.google.protobuf.ByteString,java.lang.Iterable<com.google.bigtable.v2.Mutation>>>,PDone>
Parameters: builder - The builder to populate with display data.
See Also: HasDisplayData

public final java.lang.String toString()
Overrides: toString in class PTransform<PCollection<KV<com.google.protobuf.ByteString,java.lang.Iterable<com.google.bigtable.v2.Mutation>>>,PDone>