public abstract static class HadoopFormatIO.Read<K,V> extends PTransform<PBegin,PCollection<KV<K,V>>>

Type Parameters:
K - Type of keys to be read.
V - Type of values to be read.
A PTransform that reads from any data source which implements Hadoop InputFormat, e.g.
Cassandra, Elasticsearch, HBase, Redis, Postgres, etc. See the class-level Javadoc on
HadoopFormatIO for more information.

See Also: HadoopFormatIO, Serialized Form
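For orientation, a minimal sketch of wiring this Read transform into a pipeline, assuming the configuration keys described in the HadoopFormatIO class-level Javadoc; the TextInputFormat source and the input path below are illustrative assumptions, not part of this class's contract:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.hadoop.format.HadoopFormatIO;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Hadoop configuration naming the InputFormat plus its key and value classes
// (property names as documented on HadoopFormatIO; assumed here).
Configuration hadoopConf = new Configuration(false);
hadoopConf.setClass("mapreduce.job.inputformat.class", TextInputFormat.class, InputFormat.class);
hadoopConf.setClass("key.class", LongWritable.class, Object.class);
hadoopConf.setClass("value.class", Text.class, Object.class);
hadoopConf.set("mapreduce.input.fileinputformat.inputdir", "/path/to/input"); // hypothetical path

Pipeline p = Pipeline.create();
PCollection<KV<LongWritable, Text>> lines =
    p.apply("ReadViaHadoopInputFormat",
        HadoopFormatIO.<LongWritable, Text>read().withConfiguration(hadoopConf));
```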
| Constructor and Description | 
|---|
| Read() | 
| Modifier and Type | Method and Description | 
|---|---|
| PCollection<KV<K,V>> | expand(PBegin input) Override this method to specify how this PTransform should be expanded on the given InputT. | 
| abstract SerializableConfiguration | getConfiguration() | 
| <T> Coder<T> | getDefaultCoder(TypeDescriptor<?> typeDesc, CoderRegistry coderRegistry) Returns the default coder for a given type descriptor. | 
| abstract TypeDescriptor<?> | getinputFormatClass() | 
| abstract TypeDescriptor<?> | getinputFormatKeyClass() | 
| abstract TypeDescriptor<?> | getinputFormatValueClass() | 
| abstract SimpleFunction<?,K> | getKeyTranslationFunction() | 
| abstract TypeDescriptor<K> | getKeyTypeDescriptor() | 
| abstract SimpleFunction<?,V> | getValueTranslationFunction() | 
| abstract TypeDescriptor<V> | getValueTypeDescriptor() | 
| abstract org.apache.beam.sdk.io.hadoop.format.HadoopFormatIO.Read.Builder<K,V> | toBuilder() | 
| void | validateTransform() Validates construction of this transform. | 
| HadoopFormatIO.Read<K,V> | withConfiguration(org.apache.hadoop.conf.Configuration configuration) Reads from the source using the options provided by the given configuration. | 
| HadoopFormatIO.Read<K,V> | withKeyTranslation(SimpleFunction<?,K> function) Transforms the keys read from the source using the given key translation function. | 
| HadoopFormatIO.Read<K,V> | withValueTranslation(SimpleFunction<?,V> function) Transforms the values read from the source using the given value translation function. | 
Methods inherited from class PTransform: compose, compose, getAdditionalInputs, getDefaultOutputCoder, getDefaultOutputCoder, getDefaultOutputCoder, getKindString, getName, populateDisplayData, toString, validate

@Nullable public abstract SerializableConfiguration getConfiguration()
@Nullable public abstract SimpleFunction<?,K> getKeyTranslationFunction()
@Nullable public abstract SimpleFunction<?,V> getValueTranslationFunction()
@Nullable public abstract TypeDescriptor<K> getKeyTypeDescriptor()
@Nullable public abstract TypeDescriptor<V> getValueTypeDescriptor()
@Nullable public abstract TypeDescriptor<?> getinputFormatClass()
@Nullable public abstract TypeDescriptor<?> getinputFormatKeyClass()
@Nullable public abstract TypeDescriptor<?> getinputFormatValueClass()
public abstract org.apache.beam.sdk.io.hadoop.format.HadoopFormatIO.Read.Builder<K,V> toBuilder()
public HadoopFormatIO.Read<K,V> withConfiguration(org.apache.hadoop.conf.Configuration configuration)
Reads from the source using the options provided by the given configuration.

public HadoopFormatIO.Read<K,V> withKeyTranslation(SimpleFunction<?,K> function)
Transforms the keys read from the source using the given key translation function.

public HadoopFormatIO.Read<K,V> withValueTranslation(SimpleFunction<?,V> function)
Transforms the values read from the source using the given value translation function.
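As a sketch of how the translation hooks compose with withConfiguration, the following maps the InputFormat's Text values onto plain Strings; the Text-to-String function and the reuse of hadoopConf from the earlier sketch are assumptions:

```java
import org.apache.beam.sdk.io.hadoop.format.HadoopFormatIO;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

// Value translation: convert each Text value emitted by the InputFormat into a String.
SimpleFunction<Text, String> textToString =
    new SimpleFunction<Text, String>() {
      @Override
      public String apply(Text input) {
        return input.toString();
      }
    };

HadoopFormatIO.Read<LongWritable, String> read =
    HadoopFormatIO.<LongWritable, String>read()
        .withConfiguration(hadoopConf)      // hadoopConf as in the earlier sketch
        .withValueTranslation(textToString);
```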
public PCollection<KV<K,V>> expand(PBegin input)
Override this method to specify how this PTransform should be expanded on the given InputT.

NOTE: This method should not be called directly. Instead, the PTransform should be applied to the InputT using the apply method.

Composite transforms, which are defined in terms of other transforms, should return the output of one of the composed transforms. Non-composite transforms, which do not apply any transforms internally, should return a new unbound output and register evaluators (via backend-specific registration methods).

Overrides: expand in class PTransform<PBegin,PCollection<KV<K,V>>>

public void validateTransform()
Validates construction of this transform.
public <T> Coder<T> getDefaultCoder(TypeDescriptor<?> typeDesc, CoderRegistry coderRegistry)
Returns the default coder for a given type descriptor.
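For illustration, one way this helper might be invoked directly; expand(PBegin) normally resolves coders itself, and the Text type plus the read and p variables carried over from the earlier sketches are assumptions:

```java
import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.hadoop.io.Text;

// Ask the transform for a coder suitable for a given type, consulting the
// pipeline's CoderRegistry.
Coder<Text> valueCoder =
    read.getDefaultCoder(TypeDescriptor.of(Text.class), p.getCoderRegistry());
```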