
Showing posts from May, 2014

(Update) Maven tip for building "fat" jars and making them slim

The other day I was working on MapReduce code over HBase tables and I discovered something really cool. Usually I'd have to package all the HBase, ZooKeeper, etc. libraries, or else I'd get a ClassNotFoundException. I found this tip in the HBase: The Definitive Guide book. Apparently, if you specify the scope "provided" in your Maven pom.xml file, Maven will not package the jars but will expect them to be available on the cluster's classpath. I will spare you my poor interpretation of this feature and point you to the Maven documentation. The feature is called Dependency Scope. This is how I define my dependencies now: So just to give you an idea, my jar size before adding this tag was 44 MB and after, it was 11 KB. That definitely saves time on transmitting the jars back and forth. Granted, this may not be a new tip to most people; I have actually seen this feature used when I was playing with Apache Storm, specifically the storm-starter project, but it never occurred to...
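(The embedded pom.xml snippet isn't preserved in this excerpt; below is a minimal sketch of what a provided-scope dependency looks like. The group/artifact IDs are the standard HBase and ZooKeeper coordinates, and the version numbers are illustrative placeholders, not taken from the post.)

    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase</artifactId>
      <version>0.94.6</version> <!-- illustrative version -->
      <!-- "provided": compile against the jar, but don't bundle it; the cluster classpath supplies it -->
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
      <version>3.4.5</version> <!-- illustrative version -->
      <scope>provided</scope>
    </dependency>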

Book review: Apache Hadoop YARN

I've been looking for a comprehensive book on Apache Hadoop 2 and the YARN architecture; there are a few MEAPs available. This book in particular was finally released a few months back with all chapters complete. Like all Hortonworks documentation, this book is well written and very easy to read. The choice of this book over the others was simple. On top of that, it's written by the Hadoop committers, so it's basically from the "horse's mouth". The current edition of the book has 12 chapters plus additional material. The first chapter goes into the history of how Hadoop came about and the challenges the team at Yahoo faced early in Hadoop's history. This chapter opened my eyes to how grandiose the project architecture was in the past and what it has become. It is very easy to take things for granted, and this chapter does a great job explaining the choices the team made. Chapter 2 gives a quick intro on how to deploy a single-node cluster and start playing with ...

Another paper on InfoSphere Streams vs. Storm

I found this recent paper mentioned on the Storm mailing lists, yet another performance comparison of Streams and Storm. Giving credit to IBM, this time around it seems the paper was written by developers and not salespeople. Here's the direct link to the PDF. Most of the paper was based on v. 0.8 of Storm, but at the end a comparison with Storm 0.9.0.1 was also referenced. The comparison was done using Storm 0.8 with ZeroMQ and Storm 0.9 with Netty as the transport protocol. It is an interesting read for a change. I am also surprised to see Apache Avro used for serialization. I will not cloud your judgement by stating my opinion, but I remain skeptical of these papers. I urge the Storm community to offer findings from its own comparison. One thing I'd like to note is that, again, IBM claims it is much faster to implement a use case using Streams than Storm. From my own experience, I was able to install Storm 0.8, configure my IDE to develop and test topologies, implement my use case...

IBM white paper on InfoSphere Streams vs. Storm

IBM has a white paper from 2013 comparing Streams with Storm. As expected, the paper is full of marketing mumbo jumbo aimed at suits. I usually try to avoid such material, but I couldn't resist. I have to give IBM credit for even having such a paper and acknowledging Storm as a market leader, even if the motive is somewhat shady. The paper is a bit outdated, stating that Storm is GPL-licensed software and has no market-leading companies behind it. If you haven't heard, Hortonworks has picked up Storm and has some committers dedicated to it. It's also part of the Hortonworks Data Platform stack as of v. 2.1. In addition to that, Storm is now a top-level Apache project and no longer GPL. ZeroMQ messaging is now a second-class citizen in favor of Netty, a fully native Java stack. The argument of the paper is that Storm lacks enterprise support; Hortonworks will gladly provide it. Either way, this paper is kind of expected from a large vendor like IBM. I'm in no role suitable to ...