Ponderings, insights and industry updates

Big Data with a small bill

June 24, 2021

Author: David Sztykman |


A month ago I started indexing Certificate Transparency logs into Hydrolix.
I’ve already written a blog post about that, which you can view here.

This one is more about the performance and TCO of Hydrolix for this dataset.

But first, what does a single certificate entry look like?
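The original screenshot of a raw entry is not reproduced here, so below is a simplified, illustrative sketch of a Certificate Transparency stream entry. Only `data.update_type` and `data.leaf_cert.all_domains` are referenced later in this post; all other field names and every value are examples, not real log data.

```python
# Illustrative sketch of a single CT log entry (values are made up).
# Only data.update_type and data.leaf_cert.all_domains are used in the
# queries discussed below; the remaining fields are examples.
entry = {
    "data": {
        "update_type": "PrecertLogEntry",   # type of CT log entry
        "cert_index": 281045776,            # position in the CT log (example)
        "seen": 1624521600.0,               # ingest timestamp (example)
        "leaf_cert": {
            "subject": {"CN": "example.com"},
            "not_before": 1624492800,
            "not_after": 1632268800,
            "all_domains": ["example.com", "www.example.com"],
        },
    },
}
```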

For the last 30 days we have ingested:

  • 262,026,996 log entries
  • 497.664 GB of raw data sent to Hydrolix

One of the advantages of Hydrolix is its compression mechanism; currently the whole dataset, with every field indexed, is 37.712 GB, which is more than 92% compression!

Hydrolix stores the data in an S3 bucket, so storing this whole dataset costs about $1 per month (we are talking purely about storage cost here).
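A quick back-of-the-envelope check of those two claims, assuming S3 Standard pricing of roughly $0.023/GB/month (the exact rate varies by region):

```python
# Verify the compression ratio and monthly storage cost quoted above.
raw_gb = 497.664        # raw data ingested over 30 days
stored_gb = 37.712      # compressed, fully indexed dataset

compression = 1 - stored_gb / raw_gb
print(f"compression: {compression:.1%}")            # ≈ 92.4%

# Assumed S3 Standard rate; actual pricing depends on region and tier.
s3_price_per_gb = 0.023
monthly_cost = stored_gb * s3_price_per_gb
print(f"storage cost: ${monthly_cost:.2f}/month")   # ≈ $0.87/month
```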

Query Performance

Hydrolix is stateless: you can spin up any number and type of servers to query the data.

For this example I have been using three c5n.9xlarge servers from AWS.

Let’s start with a query looking for *kafka* in my array data.leaf_cert.all_domains.

Here the response is 678,822 rows and it took 1.195 s to send the response.
Hydrolix processed 261.88 million rows, which is 21.03 GB.
We processed data at 219.10 million rows/s, 17.59 GB/s.
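Those throughput figures follow directly from the row count, bytes scanned, and elapsed time; a quick sanity check:

```python
# Sanity-check the reported throughput: rows and bytes scanned,
# divided by the elapsed query time.
rows_scanned = 261.88e6   # rows processed by the query
gb_scanned = 21.03        # GB read for the query
elapsed_s = 1.195         # wall-clock query time in seconds

rows_per_s = rows_scanned / elapsed_s / 1e6
gb_per_s = gb_scanned / elapsed_s
print(f"{rows_per_s:.1f} million rows/s, {gb_per_s:.2f} GB/s")  # ≈ 219 million rows/s, ≈ 17.6 GB/s
```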

The first question is: if the whole dataset is 38 GB, why did we process only 21.03 GB?
Because Hydrolix uses columnar storage, we retrieve only the columns needed to execute the query.
The only column needed here is data.leaf_cert.all_domains.

Another example, this time using predicate pushdown: on top of the previous query I added another filter, data.update_type = "PrecertLogEntry".

As you can see, we process only 16.39 GB this time. Because we filter on data.update_type = "PrecertLogEntry", we fetch rows from the column data.leaf_cert.all_domains only when that condition is true. This reduces the amount of data we transfer even further.
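A toy model of that predicate pushdown, not Hydrolix internals: the cheap filter on the update_type column is evaluated first, and the larger all_domains column is only read for rows that pass it.

```python
# Columnar data as two parallel lists; each index is one row.
update_type = ["X509LogEntry", "PrecertLogEntry", "PrecertLogEntry"]
all_domains = [["a.com"], ["kafka.b.com"], ["c.com"]]

# Step 1: evaluate the pushed-down predicate on the small column only.
selected = [i for i, t in enumerate(update_type) if t == "PrecertLogEntry"]

# Step 2: read the bigger all_domains column only for surviving rows.
hits = [i for i in selected if any("kafka" in d for d in all_domains[i])]
print(hits)  # → [1]
```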

Let’s do another query; this time, let’s count the number of certificates grouped by certificate authority issuer:

Here the response is 252 rows and it took 1.137 s to send the response.
Hydrolix processed 262.34 million rows, which is 5.97 GB.
We processed data at 230.66 million rows/s, 5.25 GB/s.

Again, we only fetch the column we need to send the response.
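Conceptually, that GROUP BY with a count reads a single column, the issuer name, and tallies it, much like Python's `Counter` (the issuer names below are just examples):

```python
from collections import Counter

# The issuer column, read in isolation from the rest of the dataset.
issuers = ["Let's Encrypt", "DigiCert", "Let's Encrypt", "Sectigo"]

# Equivalent of: SELECT issuer, count(*) ... GROUP BY issuer
counts = Counter(issuers)
print(counts.most_common(2))  # → [("Let's Encrypt", 2), ("DigiCert", 1)]
```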

By default, Hydrolix uses the timestamp as the primary key to index the data.

Getting data from yesterday vs. from three weeks ago is the same process: we fetch the relevant partitions from S3.

In this example I can move through my 30 days of data as quickly as if we had one year or more.
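The idea can be sketched as time-based partition pruning: partitions are keyed by day, and a query only fetches the partitions overlapping its time range, regardless of how old they are. The bucket layout below is hypothetical, purely for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical layout: one partition per day, addressed by S3 path.
now = datetime(2021, 6, 24)
partitions = {now - timedelta(days=d): f"s3://ct-logs/day-{d}" for d in range(30)}

def prune(start, end):
    """Return only the partition paths whose day falls in [start, end]."""
    return [path for day, path in partitions.items() if start <= day <= end]

# Yesterday vs. three weeks ago: same lookup, same amount of work.
recent = prune(now - timedelta(days=1), now)
old = prune(now - timedelta(days=22), now - timedelta(days=21))
print(len(recent), len(old))  # → 2 2
```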

You can actually play with this dashboard and see for yourself here.

Hydrolix differentiates itself by leveraging cloud storage and fetching only the data needed to answer the query.
Combined with our compression algorithm, this delivers the blazing fast performance you saw earlier.

By retrieving the data quickly we avoid storing it locally; this is the network performance we achieved for this example:
