| Modifier and Type | Method and Description |
Writes to the sink without partitioning the output into a specified number of partitions.
Writes to the sink, partitioning the output by task ID.
The following Hadoop configuration properties are required with this option:
mapreduce.job.reduces: the number of reduce tasks. This value equals the number of write tasks that will be generated.
mapreduce.job.partitioner.class: the Hadoop partitioner class used to distribute records among the partitions.
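As a minimal sketch of the two required properties, the snippet below collects them into a plain `java.util.Properties` object. The values (`8`, `HashPartitioner`) and the class name `WriteConfigSketch` are illustrative assumptions; in a real job these keys would be set on the Hadoop `Configuration` instead.

```java
import java.util.Properties;

public class WriteConfigSketch {
    // Illustrative only: returns the two properties this write option
    // requires, with example values.
    static Properties requiredWriteProps() {
        Properties conf = new Properties();
        // Number of reduce tasks == number of write tasks generated.
        conf.setProperty("mapreduce.job.reduces", "8");
        // Partitioner class that distributes records among partitions.
        conf.setProperty("mapreduce.job.partitioner.class",
            "org.apache.hadoop.mapreduce.lib.partition.HashPartitioner");
        return conf;
    }

    public static void main(String[] args) {
        requiredWriteProps().list(System.out);
    }
}
```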
This write operation does not shuffle records by partition, which saves transfer time before the write itself. As a consequence, it produces an arbitrary number of partitions.
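To illustrate how a configured partitioner distributes records, here is a self-contained sketch of the hash-based assignment that Hadoop's `HashPartitioner` performs. The class and method names are hypothetical, not part of the sink's API; the formula (mask the sign bit, then take the key's hash modulo the partition count) is the standard one.

```java
public class PartitionSketch {
    // Mirrors HashPartitioner's logic: clear the sign bit so the
    // result of the modulo is always a valid partition index.
    static int partitionFor(Object key, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        for (String key : new String[] {"alpha", "beta", "gamma"}) {
            System.out.println(key + " -> partition " + partitionFor(key, 4));
        }
    }
}
```

Because the assignment depends only on the key's hash, the same key always lands in the same partition, which is what makes a shuffle-free write deterministic per key even though the overall partition count is not fixed in advance.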