Friday, July 29, 2016

Protecting Spark UI, part 2: servlet filter

The previous post described how to configure a simple NGINX instance to add basic auth in front of a Spark job. In this part, let's see what Spark itself offers: implementing a servlet filter.

A filter is a special class that participates in the Java servlet lifecycle and is called on each request (and even response). Using a filter, a resource can be protected from unauthorized access with basic authentication. According to the documentation, the filter must be implemented and then passed (by its fully qualified name) as a parameter. Let's pass the valid username and password through environment variables; this should be good enough, as it matches the approach used to pass AWS credentials, for instance. Obviously, these environment variables must be set on the instance where the driver is supposed to run. Another option is to pass them as arguments into the filter using spark.<filter class name>.params param1=value1 param2=value2 ...

Let's imagine our class lives in the package my.company.filters (and uses a couple of helpers from commons-codec and commons-lang):

package my.company.filters;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.codec.binary.Base64;
import org.apache.commons.lang.StringUtils;

public class BasicAuthFilter implements Filter {

  private String login;
  private String pass;

  // this method is called one time, on filter creation
  public void init(FilterConfig config) {
    this.login = System.getenv("SPARK_LOGIN");
    this.pass = System.getenv("SPARK_PASS");
  }

  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {

    HttpServletRequest hreq = (HttpServletRequest) req;
    HttpServletResponse hres = (HttpServletResponse) res;

    String auth = hreq.getHeader("Authorization");
    if (auth != null) {
      int index = auth.indexOf(' ');
      if (index > 0) {
        // decode the "Basic <base64>" header and compare with the expected credentials
        String decoded = new String(Base64.decodeBase64(auth.substring(index + 1)), StandardCharsets.UTF_8);
        String[] creds = StringUtils.split(decoded, ':');
        if (creds.length == 2 && login.equals(creds[0]) && pass.equals(creds[1])) {
          // auth passed successfully, let the request through
          chain.doFilter(req, res);
          return;
        }
      }
    }

    // no or wrong credentials: ask the client to authenticate
    hres.setHeader("WWW-Authenticate", "Basic realm=\"ProtectedSpark\"");
    hres.sendError(HttpServletResponse.SC_UNAUTHORIZED);
  }

  public void destroy() {
  }
}


OK, the next step is to pack this filter into a JAR. After that, we can run our job in a secured manner: execute spark-submit, pass the newly assembled jar with the --jars flag, and through configuration (a *.conf file or the --conf param) pass the full class name: spark.ui.filters=my.company.filters.BasicAuthFilter
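For completeness, the same settings can also be applied programmatically on SparkConf inside the driver. The sketch below is my own illustration (the params entry follows the documented spark.<filter class>.params convention; the key and value names are made up), not part of the original setup:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SecuredJob {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setAppName("secured-job")
        // register the filter for the driver's web UI
        .set("spark.ui.filters", "my.company.filters.BasicAuthFilter")
        // illustrative only: these init parameters reach the filter via FilterConfig
        .set("spark.my.company.filters.BasicAuthFilter.params", "login=spark,password=secret");

    JavaSparkContext sc = new JavaSparkContext(conf);
    // ... the actual job ...
    sc.stop();
  }
}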

Protecting Spark UI, part 1: nginx

Apache Spark Web UI is a decent place to check cluster health and monitor job performance, and the starting point for almost every performance optimization. The guys from Databricks are working hard on improving the UI from version to version.
But it still has one issue which I face on every project and which must be resolved every time: I'm talking about the publicity of this information. Everyone who can reach the port (by default, 8080 or 4040) can access the UI and all the information there (and there is a lot of stuff you want to keep private).

There are several solutions to deal with it:

  1. Close all ports and configure nginx to listen on a specific port and forward requests (of course, with basic authentication)
  2. Protect the UI using Spark's built-in method: implementing your own filter
In this post, let's start with the first one: how to protect the Spark UI with NGINX.

The instruction below is suitable for protecting the standalone Spark Web UI when the job is executed in client mode (so you can predict where the driver will be up and running).

Let's assume that there is a node with both Spark and nginx installed (obviously, they can be on different nodes).

First of all, close all Spark-related ports (and there are a lot of them): they must still be accessible in-network. In Amazon, it is easy to do with security groups: just specify an appropriate CIDR mask for each inbound rule. Next, open two ports not used by Spark which you're going to make accessible to get into the Spark master UI or the Spark driver UI: for example, 2020 for the master UI and another one for the driver UI.

Now only a small part is left: configure nginx to perform basic auth and forward requests to the Spark UI. In this case nginx sits inside the private network, so the request will be handled by Spark and the UI actually presented to the end user.

Before configuring nginx itself, a file with valid credentials must be created.
It's simple to do with the htpasswd tool, which can be installed by running   sudo yum install -y httpd-tools

Then generate a password and store it into a file (the user name will be spark and the password is entered in the CLI):
sudo htpasswd -c /etc/nginx/.htpasswd spark

The last step is to create a proper nginx configuration (the example below only forwards all requests arriving on port 2020 to the Spark Master UI on 8080):
vi /etc/nginx/nginx2001.conf

  events {
     worker_connections 1000;
  }

  http {
    server {
      listen 2020;

      auth_basic "Private Beta";
      auth_basic_user_file /etc/nginx/.htpasswd;

      location / {
        proxy_pass http://localhost:8080;
      }
    }
  }

Actually, that's it. After that, we just need to start nginx:
nginx -c /etc/nginx/nginx2001.conf

And point the browser to HOST:2020: you will be asked to enter credentials, and only after that will you be forwarded to the Spark Master UI.

Tuesday, October 6, 2015

Apache Zeppelin: impressions

Notebooks are getting more and more attention from data analysts, data scientists, and developers. Jupyter is a famous notebook created by Python folks and widely adopted among different users. At the same time, a new notebook provider was recently born: Apache Zeppelin, with its main focus on integration with the Big Data technology stack.

In fact, Apache Zeppelin provides built-in integration with Apache Spark (and Spark SQL), Apache Flink, Hive, Ignite, Tajo (is anyone outside South Korea using that?), naturally Markdown and HTML, and even AngularJS. That's the good part about Zeppelin. Also, Ambari integration gives the possibility to install Zeppelin in "a couple of clicks" and get access through Ambari Views. And in practice it works very well.

And now I'd like to focus on what's wrong with Apache Zeppelin:

1) Security. Zeppelin 0.5 doesn't have security. Anybody can open any notebook, view it, and edit it. That doesn't work for enterprises; moreover, it doesn't work even for R&D. I want to have protected notebooks, I want to have roles and groups, and to give a notebook only to a specific group of people for a specific set of actions.
2) Workspace. A one-level list of notebooks, really? That's awful. Guys, add the possibility to combine them into nested folders and so on; it's really important. Also, the only way to back up notebooks is to back up the underlying folders on the filesystem. Not very good; a UI button is required at the least.
3) Security 2. I've already written about notebook security, but the data in storage must also be protected. Currently Zeppelin runs everything as the ZEPPELIN user, and I have to share data with the ZEPPELIN user, which is not what I want to do. So, it makes sense for each notebook to provide a "run as" setting to specify a particular user for this research. Enterprises really value that.

Personally, I also tried to make it work on Docker (more or less it works) and on EMR (failed, and everybody failed as far as I know).

To sum up: Zeppelin is an interesting and promising product, but it has too many weaknesses to be seriously used and considered for production projects, especially for enterprises. So, in a technology radar I would definitely put Zeppelin into the "Be informed" section.

Monday, July 13, 2015

How to waste the whole day with Spark Streaming and HBase

The "funny" story how to waste the whole day debugging resolving simple case... tips and tricks :)

A Spark Streaming application hangs on an action and nothing changes for hours.
The long story: a custom Receiver accepts events from an external source and stores them into an RDD (actually, a DStream) for future processing. When I ran it, I noticed that the action hung! And what was really scary: the messages were being read from the source. After spending a couple of hours trying to find an issue with the Receiver, I realized it works fine and finally found the issue ... in how I ran the job!

I ran it in a local environment first and then submitted it to YARN with:
--num-executors 2
In fact, it didn't work for me because not a single worker (Spark executor) was left to do the actual processing: the custom receiver occupies a core of its own, so there has to be spare capacity for the work itself. So, just by increasing the number of executors to 3, I was able to make everything work.
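For context, a custom receiver of the kind mentioned above looks roughly like the sketch below. This is my own minimal illustration (the external source is faked with a loop; the class name and the messages are made up), not the code from the original job:

import org.apache.spark.storage.StorageLevel;
import org.apache.spark.streaming.receiver.Receiver;

public class MyEventReceiver extends Receiver<String> {

  public MyEventReceiver() {
    super(StorageLevel.MEMORY_AND_DISK_2());
  }

  @Override
  public void onStart() {
    // receive on a separate thread so onStart() returns immediately
    new Thread(() -> {
      while (!isStopped()) {
        store("event from the external source");  // hands the event over to the DStream
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
      }
    }).start();
  }

  @Override
  public void onStop() {
    // nothing to clean up in this sketch
  }
}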

An HBase-related Spark Streaming application hangs and nothing changes for hours.
Again, the long story is that the Spark Streaming application hangs as soon as it touches HBase. I spent several hours (again) and was really surprised when I found the reason: the HBase connection was broken. OMG! I hadn't seen any errors or warnings related to the HBase connection in the logs... what was the reason? In fact, HBase tried to establish the connection again and again without throwing an error. Consider the following piece of code (the retry and timeout settings at the end are the update that helped me to overcome the issue; the rest is my original part):
  Configuration config = HBaseConfiguration.create();
  config.set(HConstants.ZOOKEEPER_QUORUM, "host:port");
  config.set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/hbase");
  // fail fast instead of retrying silently: the default is 35 client retries
  config.set("hbase.client.retries.number", Integer.toString(3));
  config.set("zookeeper.session.timeout", Integer.toString(60000));
  config.set("zookeeper.recovery.retry", Integer.toString(0));

It really helps because the number of retries gets limited. The default value is 35, which can definitely confuse.
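A hedged usage sketch of how the fixed configuration can be verified (the table name and row key are made up; this is not from the original application): with the limited retries, a broken connection now fails fast with an exception instead of hanging.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseFailFastCheck {
  public static void main(String[] args) throws IOException {
    Configuration config = HBaseConfiguration.create();
    config.set(HConstants.ZOOKEEPER_QUORUM, "host:port");
    config.set("hbase.client.retries.number", "3");

    // with only 3 retries, a broken ZooKeeper/HBase connection surfaces quickly
    try (Connection connection = ConnectionFactory.createConnection(config);
         Table table = connection.getTable(TableName.valueOf("events"))) {
      Result row = table.get(new Get(Bytes.toBytes("some-row-key")));
      System.out.println("connected, row empty: " + row.isEmpty());
    }
  }
}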

Tuesday, June 9, 2015

How to present XML in Hive flat table after XSLT transformation

Let's start by defining the task. Imagine that the dataset is a set of XML files and the requirement is to present some specific information from these files as a simple flat structure.

Definitely, we can use a SerDe for XML, but what if the XML structure is not defined beforehand and we want to give the end user a chance to control the parsing process? One possible solution is to incorporate XSLT to transform the XML into the desired format.
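To make the idea concrete, here is a minimal sketch of what such a transformation step could look like. It is my own illustration (the class and method names are made up, and in practice this would be wrapped into a Hive UDF or a preprocessing job); the end user supplies the XSLT stylesheet, which decides how the XML is flattened:

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltFlattener {

  // applies a user-supplied stylesheet to one XML document and returns the flat result
  public static String flatten(String xml, String xslt) throws Exception {
    Transformer t = TransformerFactory.newInstance()
        .newTransformer(new StreamSource(new StringReader(xslt)));
    StringWriter out = new StringWriter();
    // the stylesheet controls which elements end up as columns of the flat row
    t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
    return out.toString();
  }
}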

Friday, January 30, 2015

Demystify BloomFilter on Hadoop

I believe most of you have seen the BloomFilter class. But how do you use it correctly?

According to Wikipedia, "a Bloom filter is a space-efficient probabilistic data structure, conceived by Burton Howard Bloom in 1970, that is used to test whether an element is a member of a set. False positive matches are possible, but false negatives are not; thus a Bloom filter has a 100% recall rate. In other words, a query returns either "possibly in set" or "definitely not in set"."

Also, I found a site which gives a very good description of the Bloom filter with a perfect visualization, please check it out.

As is clear from the Bloom filter definition, this data structure can really help when we need to filter some records. In particular, when performing a join: in this case we can transform the small dataset into a filter, and then apply the filter at the map stage of the second MR job, which performs the real join. In other words, we will have 2 MR jobs, where the 1st is used for creating the filter and the 2nd performs the filtration on map and the join on reduce.

OK, the first MapReduce job contains 2 stages, mapper and reducer, because as a result we should get exactly one Bloom filter object:

  1. initialize the BloomFilter object as a Mapper class member: BloomFilter filter = new BloomFilter(10000, 10, Hash.MURMUR_HASH)
  2. on each record, add it to the filter: filter.add( new Key(str.getBytes()) );
  3. emit data only in the cleanup method; for example, you can just write the filter to a file without using the context at all (see the sketch right after this list)
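
A minimal sketch of the mapper side, following the steps above. The details are my own assumptions (tab-separated input with the join key in the first column, and the /tmp/bloom.bin output path, are purely illustrative; with many splits you would combine the per-task filters in the single reducer mentioned above):

import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.util.bloom.BloomFilter;
import org.apache.hadoop.util.bloom.Key;
import org.apache.hadoop.util.hash.Hash;

public class BloomBuildMapper extends Mapper<LongWritable, Text, NullWritable, NullWritable> {

  // 10000-bit vector, 10 hash functions, murmur hashing
  private final BloomFilter filter = new BloomFilter(10000, 10, Hash.MURMUR_HASH);

  @Override
  protected void map(LongWritable offset, Text line, Context ctx) {
    String joinKey = line.toString().split("\t")[0];  // assume the join key is the first column
    filter.add(new Key(joinKey.getBytes()));
  }

  @Override
  protected void cleanup(Context ctx) throws IOException {
    // write the filter once, when the mapper is done; BloomFilter is a Writable
    FileSystem fs = FileSystem.get(ctx.getConfiguration());
    try (FSDataOutputStream out = fs.create(new Path("/tmp/bloom.bin"))) {
      filter.write(out);
    }
  }
}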

Your filter is now prepared; it can be deserialized at any place and used for data filtration.
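For example, the map stage of the second MR job could load the filter and drop non-matching records before the reduce-side join. Again, this is a hedged sketch of mine (the filter location and the record layout are assumptions, not from the original post):

import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.util.bloom.BloomFilter;
import org.apache.hadoop.util.bloom.Key;

public class BloomJoinMapper extends Mapper<LongWritable, Text, Text, Text> {

  private final BloomFilter filter = new BloomFilter();

  @Override
  protected void setup(Context ctx) throws IOException {
    // deserialize the filter built by the first job (the path is illustrative)
    FileSystem fs = FileSystem.get(ctx.getConfiguration());
    try (FSDataInputStream in = fs.open(new Path("/tmp/bloom.bin"))) {
      filter.readFields(in);
    }
  }

  @Override
  protected void map(LongWritable offset, Text line, Context ctx)
      throws IOException, InterruptedException {
    String joinKey = line.toString().split("\t")[0];
    // "definitely not in set" records are dropped right here, before the shuffle
    if (filter.membershipTest(new Key(joinKey.getBytes()))) {
      ctx.write(new Text(joinKey), line);
    }
  }
}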

Friday, January 23, 2015

Composite join with MapReduce

As everyone knows, the map-side join is the most effective technique to join datasets on Hadoop. However, it only gives the possibility to join ONE BIG dataset with ONE OR MORE SMALL datasets. This is a limitation, because sometimes you wish to join TWO HUGE datasets. Typically, that is the use case for the reduce-side join, but it causes a Cartesian product, and obviously we would like to avoid such a heavy operation.

And this is the time for the composite join: a map-side join over huge datasets. In fact, both datasets must meet several requirements in this case:

  1. The datasets are all sorted by the join key
  2. Each dataset has the same number of files (you can achieve that by setting the number of reducers)
  3. File N in each dataset contains the same join keys
  4. Each file is not splittable
In this case you can perform a map-side join of a block from dataset A against the corresponding block from dataset B. The Hadoop API provides CompositeInputFormat to achieve this. Example of usage:

// in the job configuration you have to set:
// "inner" - reference to an inner join (you can specify "outer" as well)
// d1, d2 - Paths to both datasets
job.setInputFormatClass(CompositeInputFormat.class);
job.getConfiguration().set(CompositeInputFormat.JOIN_EXPR,
    CompositeInputFormat.compose("inner", KeyValueTextInputFormat.class, d1, d2));

The mapper will receive key-value pairs of type Text and TupleWritable:

public void map(Text key, TupleWritable value, Context ctx) {
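
A hedged sketch of what the map body could look like (my own illustration, assuming the job's input format is CompositeInputFormat as configured above and a simple tab-separated output): TupleWritable holds one value per joined dataset, in the order the paths were composed.

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.join.TupleWritable;

public class CompositeJoinMapper extends Mapper<Text, TupleWritable, Text, Text> {

  @Override
  public void map(Text key, TupleWritable value, Context ctx)
      throws IOException, InterruptedException {
    Text left = (Text) value.get(0);   // record from d1 for this join key
    Text right = (Text) value.get(1);  // record from d2 for this join key
    // emit the joined record; the layout of the output value is illustrative
    ctx.write(key, new Text(left.toString() + "\t" + right.toString()));
  }
}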

Bonus: you can use this powerful feature with Hive! To perform a composite join in Hive, a few Hive properties enabling the sort-merge bucket map join must be set.

Of course, it requires all the keys to be sorted in both tables, and both tables must be bucketed into the same number of buckets.