Friday, January 23, 2015

Composite join with MapReduce

As everyone knows, map-side join is one of the most effective techniques for joining datasets on Hadoop. However, it only lets you join ONE BIG dataset with ONE OR MORE SMALL datasets. This is a limitation, because sometimes you want to join TWO HUGE datasets. Typically this is the use case for a reduce-side join, but that causes a Cartesian product, and obviously we would like to avoid such a heavy operation.

And this is the time for the Composite join: a map-side join on huge datasets. In this case, both datasets must meet several requirements:

  1. Both datasets are sorted by the join key
  2. Each dataset has the same number of files (you can achieve that by setting the number of reducers; a minimal preprocessing sketch follows this list)
  3. File N in each dataset contains the same join keys, so key K always lands in the same file index in both datasets
  4. The files are not splittable
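These properties are usually produced by an ordinary MapReduce job run once per dataset. Below is a minimal sketch under stated assumptions: PrepareJob, the paths raw/d1 and sorted/d1, and the reducer count of 8 are hypothetical. The point is that both datasets go through the same partitioner (the default HashPartitioner) and the same number of reducers, so file N of each output contains the same sorted key range.

Configuration conf = new Configuration();
Job prep = Job.getInstance(conf, "sort-dataset-by-join-key");
prep.setJarByClass(PrepareJob.class);                    // hypothetical driver class
prep.setInputFormatClass(KeyValueTextInputFormat.class); // key = join key, value = rest of the record
prep.setMapperClass(Mapper.class);                       // identity mapper
prep.setReducerClass(Reducer.class);                     // identity reducer
prep.setOutputKeyClass(Text.class);
prep.setOutputValueClass(Text.class);
prep.setNumReduceTasks(8);                               // must be the same for both datasets
FileInputFormat.addInputPath(prep, new Path("raw/d1"));       // hypothetical input path
FileOutputFormat.setOutputPath(prep, new Path("sorted/d1"));  // hypothetical output path
prep.waitForCompletion(true);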
In this case you can perform a map-side join of block N from dataset A against block N from dataset B. The Hadoop API provides CompositeInputFormat to achieve this. Example of usage:


// in the job configuration you have to set
// (CompositeInputFormat and KeyValueTextInputFormat live in
// org.apache.hadoop.mapreduce.lib.join and org.apache.hadoop.mapreduce.lib.input)
job.setInputFormatClass(CompositeInputFormat.class);
// "inner" - inner join (you can specify "outer" as well)
// d1, d2 - Paths to the two datasets
job.getConfiguration().set(CompositeInputFormat.JOIN_EXPR,
    CompositeInputFormat.compose("inner", KeyValueTextInputFormat.class, d1, d2));
// the join happens entirely on the map side, so no reducers are needed
job.setNumReduceTasks(0);
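Note that compose() does nothing more than build the join expression string that CompositeInputFormat parses at runtime, roughly of the form inner(tbl(...,"d1"),tbl(...,"d2")).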



The mapper will receive key-value pairs of type Text and TupleWritable:

@Override
public void map(Text key, TupleWritable value, Context ctx)
    throws IOException, InterruptedException {
  // value.get(i) holds the record from the i-th dataset in compose() order
  Text fromD1 = (Text) value.get(0);
  Text fromD2 = (Text) value.get(1);
  ctx.write(key, new Text(fromD1 + "\t" + fromD2));
}
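With an inner join both tuple positions are populated; with an outer join a position may be empty, so check value.has(i) before reading it.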


Bonus: you can use this powerful feature with Hive! To perform a composite join in Hive, the following Hive properties must be set:
hive.input.format=org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
hive.optimize.bucketmapjoin=true;
hive.optimize.bucketmapjoin.sortedmerge=true;


Of course, this requires the keys in both tables to be sorted, and both tables must be bucketed into the same number of buckets.
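For illustration, here is a minimal HiveQL sketch; the tables d1 and d2, their columns, and the bucket count of 8 are assumptions, not part of the original setup. Both tables are bucketed and sorted by the join key into the same number of buckets, and the join is then hinted as a map join:

CREATE TABLE d1 (k STRING, v STRING)
CLUSTERED BY (k) SORTED BY (k ASC) INTO 8 BUCKETS;
CREATE TABLE d2 (k STRING, v STRING)
CLUSTERED BY (k) SORTED BY (k ASC) INTO 8 BUCKETS;

-- populate the tables with hive.enforce.bucketing=true and hive.enforce.sorting=true,
-- then, with the three properties above set, this runs as a sort-merge bucket map join:
SELECT /*+ MAPJOIN(d2) */ d1.k, d1.v, d2.v
FROM d1 JOIN d2 ON d1.k = d2.k;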
