public abstract static class BigQueryIO.Write<T> extends PTransform<PCollection<T>,WriteResult>
A PTransform that writes a PCollection to a BigQuery table. Instances are created via BigQueryIO.write().
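The writer is configured by chaining the methods summarized below onto BigQueryIO.write(). As a minimal, illustrative sketch (the project, dataset, and table names are placeholders and the input data is invented):

```java
import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import java.util.Arrays;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;

public class BigQueryWriteSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Invented in-process input; a real pipeline would read from a source.
    PCollection<TableRow> quotes = p.apply(
        Create.of(
                new TableRow().set("source", "mechanical").set("quote", "SIX"),
                new TableRow().set("source", "electronic").set("quote", "SEVEN"))
            .withCoder(TableRowJsonCoder.of()));

    // Schema used only if the destination table has to be created.
    TableSchema schema = new TableSchema().setFields(Arrays.asList(
        new TableFieldSchema().setName("source").setType("STRING"),
        new TableFieldSchema().setName("quote").setType("STRING")));

    quotes.apply(BigQueryIO.<TableRow>write()
        .to("my-project:my_dataset.quotes")   // placeholder table spec
        .withFormatFunction(r -> r)           // elements are already TableRows
        .withSchema(schema)
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

    p.run().waitUntilFinish();
  }
}
```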
Modifier and Type | Class and Description
---|---
static class | BigQueryIO.Write.CreateDisposition: An enumeration type for the BigQuery create disposition strings.
static class | BigQueryIO.Write.Method: Determines the method used to insert data in BigQuery.
static class | BigQueryIO.Write.WriteDisposition: An enumeration type for the BigQuery write disposition strings.
Fields inherited from class org.apache.beam.sdk.transforms.PTransform: name
Constructor and Description
---
Write()
Modifier and Type | Method and Description
---|---
WriteResult | expand(PCollection<T> input): Override this method to specify how this PTransform should be expanded on the given InputT.
ValueProvider<TableReference> | getTable(): Returns the table reference, or null.
void | populateDisplayData(DisplayData.Builder builder): Register display data for the given transform or component.
BigQueryIO.Write<T> | to(DynamicDestinations<T,?> dynamicDestinations): Writes to the table and schema specified by the DynamicDestinations object.
BigQueryIO.Write<T> | to(SerializableFunction<ValueInSingleWindow<T>,TableDestination> tableFunction): Writes to the table specified by the given table function.
BigQueryIO.Write<T> | to(java.lang.String tableSpec): Writes to the given table, specified in the format described in BigQueryHelpers.parseTableSpec(java.lang.String).
BigQueryIO.Write<T> | to(TableReference table): Writes to the given table, specified as a TableReference.
BigQueryIO.Write<T> | to(ValueProvider<java.lang.String> tableSpec): Same as to(String), but with a ValueProvider.
void | validate(PipelineOptions pipelineOptions): Called before running the Pipeline to verify this transform is fully and correctly specified.
BigQueryIO.Write<T> | withCreateDisposition(BigQueryIO.Write.CreateDisposition createDisposition): Specifies whether the table should be created if it does not exist.
BigQueryIO.Write<T> | withCustomGcsTempLocation(ValueProvider<java.lang.String> customGcsTempLocation): Provides a custom location on GCS for storing temporary files to be loaded via BigQuery batch load jobs.
BigQueryIO.Write<T> | withFailedInsertRetryPolicy(InsertRetryPolicy retryPolicy): Specifies a policy for handling failed inserts.
BigQueryIO.Write<T> | withFormatFunction(SerializableFunction<T,TableRow> formatFunction): Formats the user's type into a TableRow to be written to BigQuery.
BigQueryIO.Write<T> | withJsonSchema(java.lang.String jsonSchema): Similar to withSchema(TableSchema) but takes in a JSON-serialized TableSchema.
BigQueryIO.Write<T> | withJsonSchema(ValueProvider<java.lang.String> jsonSchema): Same as withJsonSchema(String) but using a deferred ValueProvider.
BigQueryIO.Write<T> | withJsonTimePartitioning(ValueProvider<java.lang.String> partitioning): The same as withTimePartitioning(com.google.api.services.bigquery.model.TimePartitioning), but takes a JSON-serialized object.
BigQueryIO.Write<T> | withMethod(BigQueryIO.Write.Method method): Chooses the method used to write data to BigQuery.
BigQueryIO.Write<T> | withNumFileShards(int numFileShards): Controls how many file shards are written when using BigQuery load jobs.
BigQueryIO.Write<T> | withoutValidation(): Disables BigQuery table validation.
BigQueryIO.Write<T> | withSchema(TableSchema schema): Uses the specified schema for rows to be written.
BigQueryIO.Write<T> | withSchema(ValueProvider<TableSchema> schema): Same as withSchema(TableSchema) but using a deferred ValueProvider.
BigQueryIO.Write<T> | withSchemaFromView(PCollectionView<java.util.Map<java.lang.String,java.lang.String>> view): Allows the schemas for each table to be computed within the pipeline itself.
BigQueryIO.Write<T> | withTableDescription(java.lang.String tableDescription): Specifies the table description.
BigQueryIO.Write<T> | withTimePartitioning(TimePartitioning partitioning): Allows newly created tables to include a TimePartitioning class.
BigQueryIO.Write<T> | withTimePartitioning(ValueProvider<TimePartitioning> partitioning): Like withTimePartitioning(TimePartitioning) but using a deferred ValueProvider.
BigQueryIO.Write<T> | withTriggeringFrequency(Duration triggeringFrequency): Chooses the frequency at which file writes are triggered.
BigQueryIO.Write<T> | withWriteDisposition(BigQueryIO.Write.WriteDisposition writeDisposition): Specifies what to do with existing data in the table, in case the table already exists.
Methods inherited from class org.apache.beam.sdk.transforms.PTransform: getAdditionalInputs, getDefaultOutputCoder, getDefaultOutputCoder, getDefaultOutputCoder, getKindString, getName, toString
public BigQueryIO.Write<T> to(java.lang.String tableSpec)
Writes to the given table, specified in the format described in BigQueryHelpers.parseTableSpec(java.lang.String).

public BigQueryIO.Write<T> to(TableReference table)
Writes to the given table, specified as a TableReference.

public BigQueryIO.Write<T> to(ValueProvider<java.lang.String> tableSpec)
Same as to(String), but with a ValueProvider.

public BigQueryIO.Write<T> to(SerializableFunction<ValueInSingleWindow<T>,TableDestination> tableFunction)
Writes to the table specified by the given table function. The table is a function of ValueInSingleWindow, so it can be determined by the value or by the window.

public BigQueryIO.Write<T> to(DynamicDestinations<T,?> dynamicDestinations)
Writes to the table and schema specified by the DynamicDestinations object.
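As an illustrative sketch of per-element routing with the table-function overload: the destination is derived from a hypothetical "eventType" field of each element, events stands in for a PCollection<TableRow>, and eventSchema for a TableSchema shared by all destinations.

```java
events.apply(BigQueryIO.<TableRow>write()
    .withFormatFunction(r -> r)
    .to((ValueInSingleWindow<TableRow> v) ->
        new TableDestination(
            "my-project:my_dataset.events_" + v.getValue().get("eventType"),  // placeholder naming scheme
            "Events routed by type"))
    .withSchema(eventSchema));  // assumed schema shared by the routed tables
```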
public BigQueryIO.Write<T> withFormatFunction(SerializableFunction<T,TableRow> formatFunction)
Formats the user's type into a TableRow to be written to BigQuery.
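A sketch with a hypothetical element type Quote (fields source and text); quotes is assumed to be a PCollection<Quote>, and schema is assumed to be defined as in the sketch near the top of this page.

```java
quotes.apply(BigQueryIO.<Quote>write()
    .to("my-project:my_dataset.quotes")  // placeholder table spec
    .withFormatFunction(q ->
        new TableRow().set("source", q.source).set("quote", q.text))  // map the user type to a TableRow
    .withSchema(schema));
```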
public BigQueryIO.Write<T> withSchema(TableSchema schema)
Uses the specified schema for rows to be written. The schema is required only if writing to a table that does not already exist, and BigQueryIO.Write.CreateDisposition is set to BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED.
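For illustration, a schema might be built like this (field names and types are placeholders, and rows stands in for a PCollection<TableRow>):

```java
// Schema describing the destination table, used only if the table has to be created.
TableSchema schema = new TableSchema().setFields(Arrays.asList(
    new TableFieldSchema().setName("user_id").setType("STRING").setMode("REQUIRED"),
    new TableFieldSchema().setName("score").setType("INTEGER"),
    new TableFieldSchema().setName("event_time").setType("TIMESTAMP")));

rows.apply(BigQueryIO.<TableRow>write()
    .to("my-project:my_dataset.scores")  // placeholder table spec
    .withFormatFunction(r -> r)
    .withSchema(schema)
    .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
```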
public BigQueryIO.Write<T> withSchema(ValueProvider<TableSchema> schema)
Same as withSchema(TableSchema) but using a deferred ValueProvider.

public BigQueryIO.Write<T> withJsonSchema(java.lang.String jsonSchema)
Similar to withSchema(TableSchema) but takes in a JSON-serialized TableSchema.

public BigQueryIO.Write<T> withJsonSchema(ValueProvider<java.lang.String> jsonSchema)
Same as withJsonSchema(String) but using a deferred ValueProvider.
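As a sketch, a schema like the one above can be supplied to withJsonSchema(String) as a JSON string instead (field names remain placeholders):

```java
// JSON-serialized TableSchema equivalent to building a TableSchema object.
String jsonSchema =
    "{\"fields\": ["
        + "{\"name\": \"user_id\", \"type\": \"STRING\", \"mode\": \"REQUIRED\"},"
        + "{\"name\": \"score\", \"type\": \"INTEGER\"}"
        + "]}";

rows.apply(BigQueryIO.<TableRow>write()
    .to("my-project:my_dataset.scores")  // placeholder table spec
    .withFormatFunction(r -> r)
    .withJsonSchema(jsonSchema));
```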
public BigQueryIO.Write<T> withSchemaFromView(PCollectionView<java.util.Map<java.lang.String,java.lang.String>> view)
Allows the schemas for each table to be computed within the pipeline itself.
The input is a map-valued PCollectionView mapping string tablespecs to JSON-formatted TableSchemas. Tablespecs must be in the same format as taken by to(String).
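One possible way to build such a view, sketched with hypothetical tablespecs, JSON schema strings (quotesJsonSchema, eventsJsonSchema), and a table function defined as in the earlier routing sketch:

```java
// Per-table JSON schemas exposed to the writer as a map-valued side input.
PCollectionView<Map<String, String>> schemaView =
    p.apply("SchemaEntries",
            Create.of(
                KV.of("my-project:my_dataset.quotes", quotesJsonSchema),
                KV.of("my-project:my_dataset.events", eventsJsonSchema)))
        .apply(View.asMap());

rows.apply(BigQueryIO.<TableRow>write()
    .to(tableFunction)                 // per-element table function, as sketched above
    .withFormatFunction(r -> r)
    .withSchemaFromView(schemaView));
```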
public BigQueryIO.Write<T> withTimePartitioning(TimePartitioning partitioning)
Allows newly created tables to include a TimePartitioning class. Can only be used when writing to a single table. If to(SerializableFunction) or to(DynamicDestinations) is used to write to dynamic tables, the time partitioning can instead be set directly in the returned TableDestination.
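An illustrative sketch of a single-table write that day-partitions a newly created table; the 90-day expiration and the table name are arbitrary examples, and schema and rows are assumed from the earlier sketches.

```java
// Partitioning applied to the table if this write creates it.
TimePartitioning partitioning =
    new TimePartitioning().setType("DAY").setExpirationMs(90L * 24 * 60 * 60 * 1000);

rows.apply(BigQueryIO.<TableRow>write()
    .to("my-project:my_dataset.daily_events")  // placeholder table spec
    .withFormatFunction(r -> r)
    .withSchema(schema)
    .withTimePartitioning(partitioning)
    .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
```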
public BigQueryIO.Write<T> withTimePartitioning(ValueProvider<TimePartitioning> partitioning)
Like withTimePartitioning(TimePartitioning) but using a deferred ValueProvider.

public BigQueryIO.Write<T> withJsonTimePartitioning(ValueProvider<java.lang.String> partitioning)
The same as withTimePartitioning(com.google.api.services.bigquery.model.TimePartitioning), but takes a JSON-serialized object.

public BigQueryIO.Write<T> withCreateDisposition(BigQueryIO.Write.CreateDisposition createDisposition)
Specifies whether the table should be created if it does not exist.
public BigQueryIO.Write<T> withWriteDisposition(BigQueryIO.Write.WriteDisposition writeDisposition)
Specifies what to do with existing data in the table, in case the table already exists.
public BigQueryIO.Write<T> withTableDescription(java.lang.String tableDescription)
Specifies the table description.
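A sketch combining the two dispositions and a table description; the table name, schema, and rows are assumed from the earlier sketches.

```java
rows.apply(BigQueryIO.<TableRow>write()
    .to("my-project:my_dataset.quotes")  // placeholder table spec
    .withFormatFunction(r -> r)
    .withSchema(schema)
    .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)  // create if missing
    .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)      // replace existing rows
    .withTableDescription("Quotes refreshed on each pipeline run"));
```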
public BigQueryIO.Write<T> withFailedInsertRetryPolicy(InsertRetryPolicy retryPolicy)
Specifies a policy for handling failed inserts.
Currently this is only allowed when writing an unbounded collection to BigQuery. Bounded collections are written using batch load jobs, so we don't get per-element failures. Unbounded collections are written using streaming inserts, so we have access to per-element insert results.
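A sketch of a streaming write with per-element failure handling; unboundedRows stands in for an unbounded PCollection<TableRow>, and schema is assumed to be defined as in the earlier sketches.

```java
unboundedRows.apply(BigQueryIO.<TableRow>write()
    .to("my-project:my_dataset.events")  // placeholder table spec
    .withFormatFunction(r -> r)
    .withSchema(schema)
    .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)
    // Retry only transient insert errors; alwaysRetry() and neverRetry() are alternatives.
    .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors()));
```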
public BigQueryIO.Write<T> withoutValidation()
Disables BigQuery table validation.
public BigQueryIO.Write<T> withMethod(BigQueryIO.Write.Method method)
Chooses the method used to write data to BigQuery. See BigQueryIO.Write.Method for information on and restrictions of the different methods.

public BigQueryIO.Write<T> withTriggeringFrequency(Duration triggeringFrequency)
Chooses the frequency at which file writes are triggered.
This is only applicable when the write method is set to BigQueryIO.Write.Method.FILE_LOADS, and only when writing an unbounded PCollection.
Every triggeringFrequency duration, a BigQuery load job will be generated for all the data written since the last load job. BigQuery has limits on how many load jobs can be triggered per day, so be careful not to set this duration too low, or you may exceed daily quota. Often this is set to 5 or 10 minutes to ensure that the project stays well under the BigQuery quota. See Quota Policy for more information about BigQuery quotas.
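A sketch of load-job-based writes from an unbounded collection; the 10-minute frequency, shard count, and gs://my-bucket/temp location are illustrative placeholders, not recommendations, and schema is assumed from the earlier sketches.

```java
unboundedRows.apply(BigQueryIO.<TableRow>write()
    .to("my-project:my_dataset.events")  // placeholder table spec
    .withFormatFunction(r -> r)
    .withSchema(schema)
    .withMethod(BigQueryIO.Write.Method.FILE_LOADS)
    .withTriggeringFrequency(Duration.standardMinutes(10))  // one load job roughly every 10 minutes
    .withNumFileShards(100)
    .withCustomGcsTempLocation(StaticValueProvider.of("gs://my-bucket/temp")));  // placeholder bucket
```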
@Experimental public BigQueryIO.Write<T> withNumFileShards(int numFileShards)
Controls how many file shards are written when using BigQuery load jobs. Applicable only when also setting withTriggeringFrequency(org.joda.time.Duration). The default value is 1000.

public BigQueryIO.Write<T> withCustomGcsTempLocation(ValueProvider<java.lang.String> customGcsTempLocation)
Provides a custom location on GCS for storing temporary files to be loaded via BigQuery batch load jobs. See the BigQueryIO documentation for discussion.

public void validate(PipelineOptions pipelineOptions)
Description copied from class: PTransform
Called before running the Pipeline to verify this transform is fully and correctly specified. By default, does nothing.
Overrides: validate in class PTransform<PCollection<T>,WriteResult>
public WriteResult expand(PCollection<T> input)
Description copied from class: PTransform
Override this method to specify how this PTransform should be expanded on the given InputT.
NOTE: This method should not be called directly. Instead, the PTransform should be applied to the InputT using the apply method.
Composite transforms, which are defined in terms of other transforms, should return the output of one of the composed transforms. Non-composite transforms, which do not apply any transforms internally, should return a new unbound output and register evaluators (via backend-specific registration methods).
Specified by: expand in class PTransform<PCollection<T>,WriteResult>
public void populateDisplayData(DisplayData.Builder builder)
Description copied from class: PTransform
Register display data for the given transform or component.
populateDisplayData(DisplayData.Builder) is invoked by Pipeline runners to collect display data via DisplayData.from(HasDisplayData). Implementations may call super.populateDisplayData(builder) in order to register display data in the current namespace, but should otherwise use subcomponent.populateDisplayData(builder) to use the namespace of the subcomponent.
By default, does not register any display data. Implementors may override this method to provide their own display data.
Specified by: populateDisplayData in interface HasDisplayData
Overrides: populateDisplayData in class PTransform<PCollection<T>,WriteResult>
Parameters: builder - The builder to populate with display data.
See Also: HasDisplayData
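For a subclass or a composite transform wrapping this one, an override might look roughly like this rough sketch; the "outputTable" key and the tableSpec field are hypothetical.

```java
@Override
public void populateDisplayData(DisplayData.Builder builder) {
  super.populateDisplayData(builder);  // keep the display data registered by the parent class
  builder.add(DisplayData.item("outputTable", tableSpec)
      .withLabel("Destination BigQuery table"));
}
```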
@Nullable public ValueProvider<TableReference> getTable()
Returns the table reference, or null.