@Experimental(value=SOURCE_SINK) public abstract static class FileIO.Write<DestinationT,UserT> extends PTransform<PCollection<UserT>,WriteFilesResult<DestinationT>>
Implementation of FileIO.write() and FileIO.writeDynamic().
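For orientation, here is a minimal runnable sketch of the non-dynamic form; the output directory, prefix, and shard count are placeholder values, and TextIO.sink() serves as the FileIO.Sink.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.transforms.Create;

public class WriteExample {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create();

    // Write each String element as one line of a text file under /tmp/output
    // (placeholder path), using the default file naming with a prefix and suffix.
    pipeline
        .apply(Create.of("alpha", "beta", "gamma"))
        .apply(FileIO.<String>write()
            .via(TextIO.sink())   // one text sink shared by the single destination
            .to("/tmp/output")
            .withPrefix("events")
            .withSuffix(".txt")
            .withNumShards(1));

    pipeline.run().waitUntilFinish();
  }
}
```

FileIO.writeDynamic() follows the same pattern but additionally requires by(...), withDestinationCoder(...), and a destination-aware withNaming(...).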
Nested Class Summary
Modifier and Type | Class and Description
---|---
static interface | FileIO.Write.FileNaming: A policy for generating names for shard files.
Fields inherited from class org.apache.beam.sdk.transforms.PTransform
name
Constructor Summary
Constructor and Description
---
Write()
Method Summary
Modifier and Type | Method and Description
---|---
FileIO.Write<DestinationT,UserT> | by(Contextful<Contextful.Fn<UserT,DestinationT>> destinationFn): Like by(org.apache.beam.sdk.transforms.SerializableFunction<UserT, DestinationT>), but with access to context such as side inputs.
FileIO.Write<DestinationT,UserT> | by(SerializableFunction<UserT,DestinationT> destinationFn): Specifies how to partition elements into groups ("destinations").
static FileIO.Write.FileNaming | defaultNaming(java.lang.String prefix, java.lang.String suffix)
static FileIO.Write.FileNaming | defaultNaming(ValueProvider<java.lang.String> prefix, ValueProvider<java.lang.String> suffix)
WriteFilesResult<DestinationT> | expand(PCollection<UserT> input): Override this method to specify how this PTransform should be expanded on the given InputT.
static FileIO.Write.FileNaming | relativeFileNaming(ValueProvider<java.lang.String> baseDirectory, FileIO.Write.FileNaming innerNaming)
FileIO.Write<DestinationT,UserT> | to(java.lang.String directory): Specifies a common directory for all generated files.
FileIO.Write<DestinationT,UserT> | to(ValueProvider<java.lang.String> directory): Like to(String) but with a ValueProvider.
FileIO.Write<DestinationT,UserT> | via(Contextful<Contextful.Fn<DestinationT,FileIO.Sink<UserT>>> sinkFn): Like via(Contextful, Contextful), but the output type of the sink is the same as the type of the input collection.
<OutputT> FileIO.Write<DestinationT,UserT> | via(Contextful<Contextful.Fn<UserT,OutputT>> outputFn, Contextful<Contextful.Fn<DestinationT,FileIO.Sink<OutputT>>> sinkFn): Specifies how to create a FileIO.Sink for a particular destination and how to map the element type to the sink's output type.
<OutputT> FileIO.Write<DestinationT,UserT> | via(Contextful<Contextful.Fn<UserT,OutputT>> outputFn, FileIO.Sink<OutputT> sink): Like via(Contextful, Contextful), but uses the same sink for all destinations.
FileIO.Write<DestinationT,UserT> | via(FileIO.Sink<UserT> sink): Like via(Contextful), but uses the same FileIO.Sink for all destinations.
FileIO.Write<DestinationT,UserT> | withCompression(Compression compression): Specifies to compress all generated shard files using the given Compression and, by default, append the respective extension to the filename.
FileIO.Write<DestinationT,UserT> | withDestinationCoder(Coder<DestinationT> destinationCoder): Specifies a Coder for the destination type, if it cannot be inferred from by(org.apache.beam.sdk.transforms.SerializableFunction<UserT, DestinationT>).
FileIO.Write<DestinationT,UserT> | withEmptyGlobalWindowDestination(DestinationT emptyWindowDestination): If withIgnoreWindowing() is specified, specifies a destination to be used in case the collection is empty, to generate the (only, empty) output file.
FileIO.Write<DestinationT,UserT> | withIgnoreWindowing(): Deprecated. Avoid usage of this method: its effects are complex and it will be removed in future versions of Beam. Right now it exists for compatibility with WriteFiles.
FileIO.Write<DestinationT,UserT> | withNaming(Contextful<Contextful.Fn<DestinationT,FileIO.Write.FileNaming>> namingFn): Like withNaming(SerializableFunction) but allows accessing context, such as side inputs, from the function.
FileIO.Write<DestinationT,UserT> | withNaming(FileIO.Write.FileNaming naming): Specifies a custom strategy for generating filenames.
FileIO.Write<DestinationT,UserT> | withNaming(SerializableFunction<DestinationT,FileIO.Write.FileNaming> namingFn): Specifies a custom strategy for generating filenames depending on the destination, similar to withNaming(FileNaming).
FileIO.Write<DestinationT,UserT> | withNumShards(int numShards): Specifies to use a given fixed number of shards per window.
FileIO.Write<DestinationT,UserT> | withNumShards(ValueProvider<java.lang.Integer> numShards): Like withNumShards(int).
FileIO.Write<DestinationT,UserT> | withPrefix(java.lang.String prefix): Specifies a common prefix to use for all generated filenames, if using the default file naming.
FileIO.Write<DestinationT,UserT> | withPrefix(ValueProvider<java.lang.String> prefix): Like withPrefix(String) but with a ValueProvider.
FileIO.Write<DestinationT,UserT> | withSharding(PTransform<PCollection<UserT>,PCollectionView<java.lang.Integer>> sharding): Specifies a PTransform to use for computing the desired number of shards in each window.
FileIO.Write<DestinationT,UserT> | withSuffix(java.lang.String suffix): Specifies a common suffix to use for all generated filenames, if using the default file naming.
FileIO.Write<DestinationT,UserT> | withSuffix(ValueProvider<java.lang.String> suffix): Like withSuffix(String) but with a ValueProvider.
FileIO.Write<DestinationT,UserT> | withTempDirectory(java.lang.String tempDirectory): Specifies a directory into which all temporary files will be placed.
FileIO.Write<DestinationT,UserT> | withTempDirectory(ValueProvider<java.lang.String> tempDirectory)
Methods inherited from class org.apache.beam.sdk.transforms.PTransform
compose, compose, getAdditionalInputs, getDefaultOutputCoder, getDefaultOutputCoder, getDefaultOutputCoder, getKindString, getName, populateDisplayData, toString, validate

Method Detail
public static FileIO.Write.FileNaming defaultNaming(java.lang.String prefix, java.lang.String suffix)
public static FileIO.Write.FileNaming defaultNaming(ValueProvider<java.lang.String> prefix, ValueProvider<java.lang.String> suffix)
public static FileIO.Write.FileNaming relativeFileNaming(ValueProvider<java.lang.String> baseDirectory, FileIO.Write.FileNaming innerNaming)
public FileIO.Write<DestinationT,UserT> by(SerializableFunction<UserT,DestinationT> destinationFn)
Specifies how to partition elements into groups ("destinations").
public FileIO.Write<DestinationT,UserT> by(Contextful<Contextful.Fn<UserT,DestinationT>> destinationFn)
Like by(org.apache.beam.sdk.transforms.SerializableFunction<UserT, DestinationT>), but with access to context such as side inputs.
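As a sketch of how by() drives dynamic destinations (the Transaction type, its type() accessor, and the output path are hypothetical):

```java
// Route each record to a destination derived from the record itself, then write
// one set of files per destination. Transaction and type() are hypothetical.
transactions.apply(FileIO.<String, Transaction>writeDynamic()
    .by(txn -> txn.type())                        // destination = the record's type
    .withDestinationCoder(StringUtf8Coder.of())   // coder for the String destination
    .via(Contextful.fn((Transaction txn) -> txn.toString()), TextIO.sink())
    .to("/tmp/output")                            // placeholder directory
    .withNaming(type -> FileIO.Write.defaultNaming("txn-" + type, ".csv")));
```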
public <OutputT> FileIO.Write<DestinationT,UserT> via(Contextful<Contextful.Fn<UserT,OutputT>> outputFn, Contextful<Contextful.Fn<DestinationT,FileIO.Sink<OutputT>>> sinkFn)
Specifies how to create a FileIO.Sink for a particular destination and how to map the element type to the sink's output type. The sink function must create a new FileIO.Sink instance every time it is called.

public <OutputT> FileIO.Write<DestinationT,UserT> via(Contextful<Contextful.Fn<UserT,OutputT>> outputFn, FileIO.Sink<OutputT> sink)
Like via(Contextful, Contextful), but uses the same sink for all destinations.

public FileIO.Write<DestinationT,UserT> via(Contextful<Contextful.Fn<DestinationT,FileIO.Sink<UserT>>> sinkFn)
Like via(Contextful, Contextful), but the output type of the sink is the same as the type of the input collection. The sink function must create a new FileIO.Sink instance every time it is called.

public FileIO.Write<DestinationT,UserT> via(FileIO.Sink<UserT> sink)
Like via(Contextful), but uses the same FileIO.Sink for all destinations.
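To make the FileIO.Sink lifecycle (open, write, flush) concrete, here is a minimal line-oriented sink; it is an illustrative stand-in for what TextIO.sink() already provides, and the LineSink class is not part of the API.

```java
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;
import java.nio.charset.StandardCharsets;
import org.apache.beam.sdk.io.FileIO;

// Illustrative FileIO.Sink that writes each element as one UTF-8 line.
class LineSink implements FileIO.Sink<String> {
  private transient PrintWriter writer;

  @Override
  public void open(WritableByteChannel channel) throws IOException {
    writer = new PrintWriter(
        new OutputStreamWriter(Channels.newOutputStream(channel), StandardCharsets.UTF_8));
  }

  @Override
  public void write(String element) throws IOException {
    writer.println(element);
  }

  @Override
  public void flush() throws IOException {
    writer.flush();
  }
}
```

With the Contextful overloads, the sink function must return a fresh instance on every call, e.g. .via(Contextful.fn(record -> record.toString()), Contextful.fn(dest -> new LineSink())), rather than capturing and reusing a single sink object.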
public FileIO.Write<DestinationT,UserT> to(java.lang.String directory)
Specifies a common directory for all generated files. A temporary generated sub-directory of this directory will be used as the temp directory, unless overridden by withTempDirectory(java.lang.String).
public FileIO.Write<DestinationT,UserT> to(ValueProvider<java.lang.String> directory)
Like to(String) but with a ValueProvider.
public FileIO.Write<DestinationT,UserT> withPrefix(java.lang.String prefix)
Specifies a common prefix to use for all generated filenames, if using the default file naming. Incompatible with withNaming(org.apache.beam.sdk.io.FileIO.Write.FileNaming).
public FileIO.Write<DestinationT,UserT> withPrefix(ValueProvider<java.lang.String> prefix)
Like withPrefix(String) but with a ValueProvider.
public FileIO.Write<DestinationT,UserT> withSuffix(java.lang.String suffix)
Specifies a common suffix to use for all generated filenames, if using the default file naming. Incompatible with withNaming(org.apache.beam.sdk.io.FileIO.Write.FileNaming).
public FileIO.Write<DestinationT,UserT> withSuffix(ValueProvider<java.lang.String> suffix)
Like withSuffix(String) but with a ValueProvider.
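The ValueProvider overloads of to(), withPrefix(), and withSuffix() let these values be supplied at runtime, e.g. from templated pipeline options. A minimal sketch, assuming a hypothetical OutputOptions interface:

```java
import org.apache.beam.sdk.io.FileIO;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.ValueProvider;
import org.apache.beam.sdk.values.PCollection;

public class TemplatedWrite {
  // Hypothetical options interface: the output directory is only known at
  // template execution time, so it is exposed as a ValueProvider.
  public interface OutputOptions extends PipelineOptions {
    ValueProvider<String> getOutputDirectory();
    void setOutputDirectory(ValueProvider<String> value);
  }

  static void writeLines(PCollection<String> lines, OutputOptions options) {
    lines.apply(FileIO.<String>write()
        .via(TextIO.sink())
        .to(options.getOutputDirectory())  // ValueProvider overload, resolved at run time
        .withSuffix(".txt"));
  }
}
```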
public FileIO.Write<DestinationT,UserT> withNaming(FileIO.Write.FileNaming naming)
Specifies a custom strategy for generating filenames. All generated filenames will be resolved relative to the directory specified in to(java.lang.String), if any. Incompatible with withSuffix(java.lang.String). This can only be used in combination with FileIO.write() but not FileIO.writeDynamic().
public FileIO.Write<DestinationT,UserT> withNaming(SerializableFunction<DestinationT,FileIO.Write.FileNaming> namingFn)
Specifies a custom strategy for generating filenames depending on the destination, similar to withNaming(FileNaming). This can only be used in combination with FileIO.writeDynamic() but not FileIO.write().
public FileIO.Write<DestinationT,UserT> withNaming(Contextful<Contextful.Fn<DestinationT,FileIO.Write.FileNaming>> namingFn)
Like withNaming(SerializableFunction) but allows accessing context, such as side inputs, from the function.
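FileNaming is a single-method interface mapping (window, pane, number of shards, shard index, compression) to a filename, so a per-destination naming function can be written inline. A sketch for use with FileIO.writeDynamic(); the "logs-" pattern and String destination type are illustrative:

```java
// One FileNaming per destination: encode the destination and shard index in the name.
SerializableFunction<String, FileIO.Write.FileNaming> perDestinationNaming =
    dest ->
        (window, pane, numShards, shardIndex, compression) ->
            String.format("logs-%s-%05d-of-%05d%s",
                dest, shardIndex, numShards, compression.getSuggestedSuffix());

// Passed to writeDynamic() as .withNaming(perDestinationNaming); for the common
// case, defaultNaming(prefix, suffix) and relativeFileNaming(baseDirectory, naming)
// already provide ready-made strategies.
```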
public FileIO.Write<DestinationT,UserT> withTempDirectory(java.lang.String tempDirectory)
Specifies a directory into which all temporary files will be placed.

public FileIO.Write<DestinationT,UserT> withTempDirectory(ValueProvider<java.lang.String> tempDirectory)
public FileIO.Write<DestinationT,UserT> withCompression(Compression compression)
Specifies to compress all generated shard files using the given Compression and, by default, append the respective extension to the filename.
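For example (a sketch; the directory is a placeholder), gzip-compressing text output:

```java
// With the default naming, the compression extension is appended after the
// configured suffix, so shards end in ".txt.gz".
records.apply(FileIO.<String>write()
    .via(TextIO.sink())
    .to("/tmp/compressed")              // placeholder directory
    .withSuffix(".txt")
    .withCompression(Compression.GZIP));
```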
public FileIO.Write<DestinationT,UserT> withEmptyGlobalWindowDestination(DestinationT emptyWindowDestination)
If withIgnoreWindowing() is specified, specifies a destination to be used in case the collection is empty, to generate the (only, empty) output file.
public FileIO.Write<DestinationT,UserT> withDestinationCoder(Coder<DestinationT> destinationCoder)
Specifies a Coder for the destination type, if it cannot be inferred from by(org.apache.beam.sdk.transforms.SerializableFunction<UserT, DestinationT>).
public FileIO.Write<DestinationT,UserT> withNumShards(int numShards)
Specifies to use a given fixed number of shards per window. 0 means runner-determined sharding. Specifying a non-zero value may hurt performance, because it will limit the parallelism of writing and will introduce an extra GroupByKey operation.
public FileIO.Write<DestinationT,UserT> withNumShards(@Nullable ValueProvider<java.lang.Integer> numShards)
Like withNumShards(int). Specifying null means runner-determined sharding.
public FileIO.Write<DestinationT,UserT> withSharding(PTransform<PCollection<UserT>,PCollectionView<java.lang.Integer>> sharding)
Specifies a PTransform to use for computing the desired number of shards in each window.
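A minimal sketch of such a sharding transform, assuming a policy of roughly one shard per 10,000 elements; the threshold and class name are illustrative, and the sketch assumes every window contains at least one element.

```java
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.View;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionView;
import org.apache.beam.sdk.values.TypeDescriptors;

// Illustrative sharding policy: one shard per ~10,000 elements in each window.
class ShardPerTenThousand extends PTransform<PCollection<String>, PCollectionView<Integer>> {
  @Override
  public PCollectionView<Integer> expand(PCollection<String> input) {
    return input
        .apply(Count.globally().withoutDefaults())
        .apply(MapElements.into(TypeDescriptors.integers())
            .via((Long n) -> Math.max(1, (int) (n / 10_000))))
        .apply(View.asSingleton());
  }
}

// Usage sketch:
// lines.apply(FileIO.<String>write()
//     .via(TextIO.sink())
//     .to("/tmp/output")                       // placeholder directory
//     .withSharding(new ShardPerTenThousand()));
```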
@Deprecated public FileIO.Write<DestinationT,UserT> withIgnoreWindowing()
Deprecated. Avoid usage of this method: its effects are complex and it will be removed in future versions of Beam. Right now it exists for compatibility with WriteFiles.
public WriteFilesResult<DestinationT> expand(PCollection<UserT> input)
Description copied from class: PTransform
Override this method to specify how this PTransform should be expanded on the given InputT.
NOTE: This method should not be called directly. Instead, the PTransform should be applied to the InputT using the apply method.
Composite transforms, which are defined in terms of other transforms, should return the output of one of the composed transforms. Non-composite transforms, which do not apply any transforms internally, should return a new unbound output and register evaluators (via backend-specific registration methods).
Overrides:
expand in class PTransform<PCollection<UserT>,WriteFilesResult<DestinationT>>