Tuesday, June 9, 2015

How to present XML in a Hive flat table after an XSLT transformation

Let's start by defining the task. Imagine that the dataset is a set of XML files and the requirement is to present some specific information from these files as a simple flat structure. Let's illustrate:

Definitely, we can use a SerDe for XML, but what if the XML structure is not defined beforehand and we want to give the end user a chance to control the parsing process? One possible solution is to incorporate XSLT to transform the XML into the desired format.
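
As an illustration, here is a minimal sketch of such a transformation step, built on the standard javax.xml.transform API; the class name and the idea of passing the stylesheet as a string are assumptions for the example, not a specific Hive SerDe:

import java.io.StringReader;
import java.io.StringWriter;

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltFlattener {

  // apply a user-provided stylesheet to one XML document and return
  // the flat (e.g. tab-delimited) text it produces for the Hive table
  public static String transform(String xml, String xslt) throws Exception {
    Transformer t = TransformerFactory.newInstance()
        .newTransformer(new StreamSource(new StringReader(xslt)));
    StringWriter out = new StringWriter();
    t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
    return out.toString();
  }
}

Because the stylesheet is just an input string, the end user can change the parsing logic without touching the job code.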

Friday, January 30, 2015

Demystifying BloomFilter on Hadoop

I believe most of you have seen the BloomFilter class. But how do you use it correctly?

According to Wikipedia, "A Bloom filter is a space-efficient probabilistic data structure, conceived by Burton Howard Bloom in 1970, that is used to test whether an element is a member of a set. False positive matches are possible, but false negatives are not, thus a Bloom filter has a 100% recall rate. In other words, a query returns either "possibly in set" or "definitely not in set"."

Also, I found a site which gives a very good description of the Bloom filter with a perfect visualization, please check it out.



As is clear from the Bloom filter definition, this data structure can really help when we need to filter some records, particularly when performing a join: in this case we can transform the small dataset into a filter, and then apply that filter at the map stage of a second MR job, which performs the real join. In other words, we will have 2 MR jobs, where the 1st is used for creating the filter and the 2nd is used to perform filtration on the map side and the join on the reduce side.

OK, the first MapReduce job contains 2 stages, mapper and reducer, because as a result we should get exactly one Bloom filter object:

  1. initialize the BloomFilter object as a Mapper class member: BloomFilter filter = new BloomFilter(10000, 10, Hash.MURMUR_HASH)
  2. on each record, add it to the filter: filter.add( new Key(str.getBytes()) );
  3. emit data only in the cleanup method; for example, you can just write a file without using the context at all (see the sketch after this list)
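
A minimal sketch of that mapper, using Hadoop's built-in org.apache.hadoop.util.bloom classes; the output path is an assumption for the example:

import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.util.bloom.BloomFilter;
import org.apache.hadoop.util.bloom.Key;
import org.apache.hadoop.util.hash.Hash;

public class BloomFilterMapper
    extends Mapper<LongWritable, Text, NullWritable, NullWritable> {

  // vector size, number of hash functions, hash type
  private final BloomFilter filter = new BloomFilter(10000, 10, Hash.MURMUR_HASH);

  @Override
  protected void map(LongWritable key, Text value, Context ctx) {
    // step 2: add every record of the small dataset to the filter
    filter.add(new Key(value.toString().getBytes()));
  }

  @Override
  protected void cleanup(Context ctx) throws IOException {
    // step 3: serialize the filter to HDFS instead of emitting records
    FileSystem fs = FileSystem.get(ctx.getConfiguration());
    try (FSDataOutputStream out = fs.create(new Path("/tmp/bloom.filter"))) {
      filter.write(out);
    }
  }
}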

Your filter is prepared now; it can be deserialized at any place and used for data filtration.
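
A hedged sketch of that second step: the mapper of the join job deserializes the filter in setup() and drops records that are definitely not in the set (the path and the pass-through output are assumptions for the example):

import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.util.bloom.BloomFilter;
import org.apache.hadoop.util.bloom.Key;

public class FilteringMapper
    extends Mapper<LongWritable, Text, Text, NullWritable> {

  private final BloomFilter filter = new BloomFilter();

  @Override
  protected void setup(Context ctx) throws IOException {
    // read the filter written by the first job
    FileSystem fs = FileSystem.get(ctx.getConfiguration());
    try (FSDataInputStream in = fs.open(new Path("/tmp/bloom.filter"))) {
      filter.readFields(in);
    }
  }

  @Override
  protected void map(LongWritable key, Text value, Context ctx)
      throws IOException, InterruptedException {
    // "possibly in set" records go on to the reduce-side join;
    // "definitely not in set" records are dropped here on the map side
    if (filter.membershipTest(new Key(value.toString().getBytes()))) {
      ctx.write(value, NullWritable.get());
    }
  }
}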


Friday, January 23, 2015

Composite join with MapReduce

As everyone knows, the map-side join is the most effective technique to join datasets on Hadoop. However, it only gives the possibility to join ONE BIG dataset with ONE OR MORE SMALL datasets. This is a limitation, because sometimes you wish to join TWO HUGE datasets. Typically, this is the use case for a reduce-side join, but that causes a Cartesian product, and obviously we would like to avoid such a heavy operation.

And this is the time for the composite join: a map-side join on huge datasets. In fact, both datasets must meet several requirements in this case:

  1. The datasets are all sorted by the join key
  2. Each dataset has the same number of files (you can achieve that by setting the number of reducers)
  3. File N in each dataset contains the same join key K
  4. Each file is not splittable
In this case you can perform a map-side join of a block from dataset A against the corresponding block from dataset B. The Hadoop API provides CompositeInputFormat to meet these requirements. Example of usage:


// in the job configuration you have to set
job.setInputFormatClass(CompositeInputFormat.class);
// "inner" - inner join (you can specify "outer" as well)
// d1, d2 - Paths to the two datasets
job.getConfiguration().set(CompositeInputFormat.JOIN_EXPR,
    CompositeInputFormat.compose("inner", KeyValueTextInputFormat.class, d1, d2));
job.setNumReduceTasks(0);



The mapper will receive a key-value pair of types Text and TupleWritable:

@Override
public void map(Text key, TupleWritable value, Context ctx) {
  ...
}
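
For illustration, a hedged sketch of how the joined records might be read inside that mapper, assuming the two datasets were listed as d1 and d2 in the compose() expression above:

@Override
public void map(Text key, TupleWritable value, Context ctx)
    throws IOException, InterruptedException {
  // value holds one record per joined dataset, in compose() order:
  // index 0 -> d1, index 1 -> d2
  Text left = (Text) value.get(0);
  Text right = (Text) value.get(1);
  ctx.write(key, new Text(left + "\t" + right));
}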


Bonus: you can use this powerful feature with Hive! To perform a composite join in Hive, the following Hive properties must be set:
hive.input.format=org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
hive.optimize.bucketmapjoin=true;
hive.optimize.bucketmapjoin.sortedmerge=true;


Of course, it requires the keys in both tables to be sorted, and both tables must be bucketed into the same number of buckets.

Kafka Web Console with Docker

My first Dockerfile aims to run Kafka Web Console (an application for monitoring Apache Kafka):


FROM ubuntu:trusty

RUN apt-get update;  apt-get install -y unzip  openjdk-7-jdk wget git docker.io

RUN wget http://downloads.typesafe.com/play/2.2.6/play-2.2.6.zip
RUN unzip play-2.2.6.zip -d /tmp

RUN wget https://github.com/claudemamo/kafka-web-console/archive/master.zip
RUN unzip master.zip -d /tmp

WORKDIR /tmp/kafka-web-console-master

CMD ../play-2.2.6/play "start -DapplyEvolutions.default=true"


The Dockerfile can be built with the command:
docker build -t kafka/web-console:2.0 .
and run as:
docker run -i -t -p 9000:9000 kafka/web-console:2.0

In the end, Kafka Web Console will be available at host:9000. ZooKeeper hosts must be added, and Kafka brokers will be discovered automatically.

Tuesday, November 11, 2014

Spark and Locality Sensitive Hashing, part 2

This is the second part of the topic about Locality Sensitive Hashing; here is an example of building a working solution using Apache Spark.

Let's start with the definition of the task: there are two datasets, bank accounts and web-site visitors. The only field they have in common is the name, but misspellings are possible. Let's consider the following example:

Bank Accounts

Name          Credit score
Tom Soyer     10
Andy Bin      20
Tom Wiscor    30
Tomas Soyér   40

Web-site Visitors

Name          email
Tom Soyer     1@1
Andrew Bin    2@1
Tom Viscor    3@1
Thomas Soyer  2@2

Friday, November 7, 2014

Spark and Locality Sensitive Hashing, part 1

Locality Sensitive Hashing is the name of a special algorithm designed to address the complexity of BigData processing.



Let's consider the following example: assume we have two independent systems; one is a web application that gets users' profiles from a social network, the second is an online payment system. Our idea is to merge profiles from the social network and the payment system. Of course, a social network user might not be present in the payment system at all, the accounts may be created at different times, and we definitely don't have a foreign key to match them exactly. There are two possible issues:

  • there are two huge datasets that must be merged
  • a user's name might look different in the social network and the payment system
The naive approach is to compare each social network user name with each payment system user name, calculate the Hamming distance between them, and pick the most similar pair as a successful match. The biggest issue here is the O(n²) complexity of this approach; a minimal sketch of that distance follows.
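
For reference, a small sketch of the distance used in that naive comparison (equal-length strings assumed, which is itself a simplification for real names):

// count the positions at which two equal-length strings differ
static int hammingDistance(String a, String b) {
  if (a.length() != b.length()) {
    throw new IllegalArgumentException("Hamming distance needs equal lengths");
  }
  int distance = 0;
  for (int i = 0; i < a.length(); i++) {
    if (a.charAt(i) != b.charAt(i)) {
      distance++;
    }
  }
  return distance;
}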

We want to minimize the number of comparisons between the two datasets. Fortunately, this issue was resolved by the invention of the Locality Sensitive Hashing algorithm. Let's consider simple hashing:
f(str) → x
We can calculate a hashing function f on a string (the user name from the profile) and get an integer x; then we need to compare Hamming distances only for strings which have the same x. The issue here is picking a very good hashing function, which is almost impossible. Fortunately, we are not limited to one function: we can apply several/tens/hundreds of hashing functions. In this case we would have data duplication, because one string would be assigned to several buckets (hash values). That would increase the number of useless comparisons, but at the same time we would have a bigger chance of getting a successful comparison.

However, it wouldn't work well enough, because names might contain misspellings, or special letters might be used in the social profile while only traditional Latin letters appear in the payment system, or vice versa. n-grams and minhashing might come in handy in this situation. The main idea is to get all possible n-grams of a string and apply the minhashing algorithm to them. As a result, we aim to get a set of new hash codes based on the n-grams and compare the strings that were placed into the same buckets based on these hash codes.

The step-by-step algorithm is as follows (see the sketch after the list):

  1. Define a collection of hash functions
  2. Calculate the minhash signature over the n-grams of each profile using the minhash algorithm
  3. Based on equal hash codes, get pairs of similar profiles from the social network and the payment system
  4. Calculate the Hamming distance within each pair to select the most similar match for each case
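
A minimal plain-Java sketch of steps 1 and 2 (the seeded hash family and the parameter values are assumptions for the example; a real implementation would use a stronger hash family and run over the datasets in Spark):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MinHash {

  // step 1: a collection of hash functions, simulated here by seeding
  private static int hash(String s, int seed) {
    int h = seed;
    for (char c : s.toCharArray()) {
      h = h * 31 + c;
    }
    return h;
  }

  // split a name into character n-grams, e.g. "tom" -> ["to", "om"] for n = 2
  static List<String> ngrams(String s, int n) {
    List<String> grams = new ArrayList<>();
    for (int i = 0; i + n <= s.length(); i++) {
      grams.add(s.substring(i, i + n));
    }
    return grams;
  }

  // step 2: for each hash function, the signature keeps the minimum
  // hash value over all n-grams of the profile name
  static int[] signature(String name, int n, int k) {
    int[] sig = new int[k];
    for (int seed = 0; seed < k; seed++) {
      int min = Integer.MAX_VALUE;
      for (String gram : ngrams(name, n)) {
        min = Math.min(min, hash(gram, seed + 1));
      }
      sig[seed] = min;
    }
    return sig;
  }

  public static void main(String[] args) {
    // profiles whose signatures collide in some position (step 3) become
    // candidate pairs for the distance check of step 4
    System.out.println(Arrays.toString(signature("Tom Soyer", 2, 10)));
    System.out.println(Arrays.toString(signature("Tomas Soyer", 2, 10)));
  }
}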

In the next part: a source code example and an implementation on Apache Spark.

Friday, October 10, 2014

Tuning the MapReduce job

java.lang.OutOfMemoryError: GC overhead limit exceeded
That's what I got yesterday while running my shiny new MapReduce job.

An OutOfMemoryError in Java has different causes: no more memory available, GC called too often (my case), no more free PermGen space, etc.

To get more information about JVM internals, we have to tune how the JVM runs. I'm using the Hortonworks distribution, so I went to Ambari, opened the MapReduce configuration tab, and found mapreduce.reduce.java.opts. This property is responsible for the reducer's JVM configuration. Let's add GarbageCollector logging:
-verbose:gc -Xloggc:/tmp/@taskid@.gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
This sets up writing the GC log to the local filesystem in the /tmp folder; the file name is the task ID plus a .gc extension.

In general, the following properties are important for JVM tuning (an example follows the list):

  • mapred.child.java.opts - Provides JVM options to pass to map and reduce tasks. Usually includes the -Xmx option to specify the maximum heap size; may also specify -Xms for the initial heap size.
  • mapreduce.map.java.opts - Overrides mapred.child.java.opts for map tasks.
  • mapreduce.reduce.java.opts - Overrides mapred.child.java.opts for reduce tasks.
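
For illustration, hypothetical values as they might be entered in Ambari; the heap sizes are examples only, not recommendations:

mapred.child.java.opts=-Xmx1024m
mapreduce.reduce.java.opts=-Xmx2048m -verbose:gc -Xloggc:/tmp/@taskid@.gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps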
After entering the new value for the property, the MapReduce service must be restarted (Hortonworks reminds you with a yellow "Restart" button). Only after the restart will the changes be applied. The next step is to run the MapReduce job; as a result, the per-task logs will be placed into the /tmp folder on each node.

It's a bit difficult to read the log, but thankfully several UI tools exist on the market. I prefer the open-sourced GCViewer, which is a Java application and doesn't require installation. It supports a wide range of JVMs; moreover, it has a command-line interface for generating reports, so report generation can be automated.

Opening the GC log gives a detailed overview of the memory state:

Legend:

  • Green line that shows the length of all GCs
  • Magenta area that shows the size of the tenured generation (not available without PrintGCDetails)
  • Orange area that shows the size of the young generation (not available without PrintGCDetails)
  • Blue line that shows used heap size