At the end of December AWS announced that HBase on EMR supported S3 as a data store. That’s great news because it means one doesn’t have to keep around an HDFS cluster with 3x replication, which is not only costly but also comes with its own operational burden.
At the same time we had some use cases that could be addressed with a key-value store, so this seemed like a good opportunity to give HBase a try.
What is HBase?
HBase is an open source, non-relational, distributed key-value store which traditionally runs on top of HDFS. It provides a fault-tolerant, efficient way of storing large quantities of sparse data using column-based compression and storage.
In addition, it provides fast lookup of data thanks to indexing and in-memory cache. HBase is optimized for sequential write operations, and is highly efficient for batch inserts, updates, and deletes. HBase also supports cell versioning so one can look up and use several previous versions of a cell or a row.
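As an illustration of versioned reads, here is a minimal sketch using the happybase Python client (happybase is not the API described later in this post; the Thrift endpoint, table name, column family and row key are made-up assumptions, and the column family is assumed to have been created with VERSIONS => 3):

import happybase

# Connect to an HBase Thrift server (assumed to be running on localhost:9090).
connection = happybase.Connection("localhost")
table = connection.table("metrics")

# Writing the same cell twice creates two versions of it.
table.put(b"client-42", {b"d:active_hours": b"1.5"})
table.put(b"client-42", {b"d:active_hours": b"2.0"})

# Fetch up to three previous versions of the cell, newest first.
for value, timestamp in table.cells(
    b"client-42", b"d:active_hours", versions=3, include_timestamp=True
):
    print(timestamp, value)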
The system can be imagined as a distributed log-structured merge tree and is ultimately an open source implementation of Google’s BigTable whitepaper. An HBase table is partitioned horizontally into so-called regions, each of which contains all rows between the region’s start and end key. Region servers are responsible for serving regions, while the HBase master handles region assignments and DDL operations.
A region server has:
- a BlockCache, which serves as an LRU read cache;
- a BucketCache (EMR version only), which caches reads on local disk;
- a WAL, stored on HDFS, which records writes not yet persisted to HDFS/S3;
- a MemStore per column family (a collection of columns); the MemStore is a write cache which, once it has accumulated enough data, is flushed to a store file;
- store files, which store rows as sorted key-values on HDFS/S3.
HBase architecture with HDFS storage
This is just a 10,000-foot overview of the system; there are many articles out there that go into important details, like store file compaction.

EMR’s HBase architecture with S3 storage and BucketCache
One nice property of HBase is that it guarantees linearizable consistency, i.e. if operation B starts after operation A successfully completes, then operation B must see the system in the same state it was in on completion of operation A, or a newer state. That’s easy to guarantee since each row can only be served by one region server.
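Coming back to the S3-backed setup mentioned at the beginning, the sketch below shows how such a cluster can be launched with boto3. The two configuration classifications are the ones documented by AWS for HBase on S3; the bucket, region, release label and instance counts are placeholder assumptions, not our actual configuration:

import boto3

# Sketch: launch an EMR cluster running HBase with S3 (via EMRFS) as its storage layer.
emr = boto3.client("emr", region_name="us-west-2")
emr.run_job_flow(
    Name="hbase-on-s3",
    ReleaseLabel="emr-5.2.0",
    Applications=[{"Name": "HBase"}],
    Configurations=[
        # Switch HBase's storage mode from HDFS to S3...
        {"Classification": "hbase", "Properties": {"hbase.emrfs.storageMode": "s3"}},
        # ...and point the HBase root directory at a bucket (placeholder name).
        {"Classification": "hbase-site", "Properties": {"hbase.rootdir": "s3://my-bucket/hbase"}},
    ],
    Instances={
        "MasterInstanceType": "m4.xlarge",
        "SlaveInstanceType": "m4.xlarge",
        "InstanceCount": 20,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)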
Why isn’t Parquet good enough?
Many of our datasets are stored on S3 in Parquet form. Parquet is a great format for typical analytical workloads where one needs all the data for a particular subset of measurements. On the other hand, it isn’t really optimized for finding needles in haystacks; partitioning and sorting can help alleviate this issue only so much.
As some of our analysts have the need to efficiently access the telemetry history for a very small and well-defined sub-population of our user base (think of test-pilot clients before they enrolled), a key-value store like HBase or DynamoDB fits that requirement splendidly.
HBase stores and compresses the data per column family, unlike Parquet which does the same per column. That means the system will read way more data than is actually needed if only a small subset of columns is accessed during a full scan. And no, you can’t just have a column family for each individual column, as column families are flushed in concert. Furthermore, unlike Parquet, HBase doesn’t have a concept of types: both the key and the value are just bytes and it’s up to the user to interpret them accordingly.
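To make the “just bytes” point concrete, here is a purely illustrative way an application might encode a key and a couple of values before handing them to HBase (the field names and encodings are hypothetical, not the schema we use):

import json
import struct

# The row key is just the UTF-8 bytes of the client id.
client_id = "00000000-0000-0000-0000-000000000000"
row_key = client_id.encode("utf-8")

# A numeric value packed as an 8-byte big-endian double...
subsession_length = struct.pack(">d", 3600.0)
# ...and a structured value serialized as JSON bytes.
ping = json.dumps({"reason": "shutdown", "active_ticks": 42}).encode("utf-8")

# On the way out it is entirely up to the application to reverse the encoding.
assert struct.unpack(">d", subsession_length)[0] == 3600.0
assert json.loads(ping.decode("utf-8"))["active_ticks"] == 42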
It turns out that Mozilla’s telemetry data was once stored in HBase! If you knew that, you have been around Mozilla much longer than I have. That approach was later abandoned as keeping mostly unused data around in HDFS was costly, and typical analytical workloads involving large scans were slow.
Wouldn’t it be nice to have the best of both worlds: efficient scans and fast look-ups? It turns out there is one open source system currently being developed that aims to fill that gap. Apache Kudu provides a combination of fast inserts/updates and efficient columnar scans to enable multiple real-time analytic workloads across a single storage layer, but I feel it’s not ready for prime time just yet.
What about DynamoDB?
DynamoDB is a managed key-value store. Leaving aside operational costs, it’s a fair question to wonder how much it differs in terms of pricing for our example use case.
The data we are planning to store has a compressed size of about 200 GB (~1.2 TB uncompressed) per day, for 400 million key-value pairs of about 3 KB each uncompressed. As we are planning to keep the data around for 90 days, the total size of the table would amount to 18 TB.
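As a quick sanity check on those numbers: 400 million pings × 3 KB ≈ 1.2 TB uncompressed per day, and 200 GB compressed per day × 90 days = 18 TB in total.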
HBase costs
Let’s say the machines we want to use for the HBase cluster are m4.xlarge, which have 16 GB of RAM. As suggested in Hortonworks’ HBase guidelines, each machine could ideally serve about 50 regions. By dividing the table into, say, 1000 regions, each region would have a size of 18 GB, which is still within the recommended maximum region size. Since each machine can serve about 50 regions and we have 1000 regions, our cluster should ideally have a size of 20 machines.
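Spelling out the arithmetic: 18 TB / 1000 regions = 18 GB per region, and 1000 regions / 50 regions per machine = 20 machines.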
Using on-demand EMR prices, the cluster would have a monthly cost of:
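20 instances × 720 hours a month × ~0.30 $ per instance-hour (assuming the on-demand m4.xlarge price plus the EMR fee at the time of writing, so treat the rate as an approximation) ≈ 4300 $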

This is an upper bound as reserved or spot instances cost less.
The daily batch job that pushes data to HBase uses 5 c3.4xlarge machines and takes 5 hours, so it would have a monthly cost of:
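5 machines × 5 hours × 30 days × ~1.05 $ per instance-hour (assuming the on-demand c3.4xlarge price plus the EMR fee, again an approximation) ≈ 788 $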

To keep around about 18 TB of data on S3 we will need 378 $ a month at 0.021 $ per GB. Note that this doesn’t include the price of the requests, which is rather difficult to estimate, albeit low.
In total we have a monthly cost of about 5500 $ for the HBase solution.
DynamoDB costs
DynamoDB’s pricing is based on the desired request throughput the system needs to have. The throughput is measured in capacity units. Let’s assume that one write request per second corresponds to 3 write capacity units, as one unit of write capacity is limited to items of up to 1 KB in size and we are dealing with items of about 3 KB in size. Let’s also assume that we want to use a batch job, equivalent to the one used for HBase, to push the data into the store. This means we need enough write capacity to shovel 400M pings in 5 hours:
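400 million pings / (5 hours × 3600 seconds) ≈ 22,200 writes per second × 3 write capacity units ≈ 66,700 provisioned units, for 5 hours a day × 30 days = 150 hours a month, at ~0.0065 $ per hour per 10 units (the provisioned write-throughput rate at the time of writing, so an approximation)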
This amounts to about 6510 $ a month. Note that this is just the cost to push the data in; it doesn’t account for serving reads all day round.
The cost of the storage, assuming the compression ratio is the same as with HBase, is:
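18 TB ≈ 18,000 GB × ~0.25 $ per GB-month (assuming DynamoDB’s storage rate at the time of writing) ≈ 4500 $ a month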

Finally, if we also consider the cost of the batch job (788 $), we have a total monthly spending of about 11800 $.
In conclusion, the HBase solution is cheaper and more flexible. For example, one could keep historical data around on S3 without an HBase cluster serving it until it’s actually needed. The downside is that HBase isn’t automagically managed, and as such it requires operational effort.
How do I use this stuff?
We have created a mirrored view in HBase of the main summary dataset, which is accessible through a Python API. The API allows one to retrieve the history of main pings for a small subset of client ids, sorted by activity date:

view = HBaseMainSummaryView()
history = view.get(sc, ["00000000-0000-0000-0000-000000000000"])

We haven’t yet decided if this is something we want to support and keep around; we will make that decision once we have an understanding of how useful it is to our analysts.