
Google Cloud Bigtable – Expect the Unexpected


This blog post demonstrates three pitfalls of working with Google Cloud Bigtable that exhibit quite counter-intuitive behaviour. I will show you how and when they occur and how to avoid the problems they cause.

1. If you define a maximum number of versions, you do not expect more versions

The first pitfall occurs when you define a maximum number of versions. For example, you create a new table with a column family cf and the garbage-collection policy maxversions=1. This means that only one version should be stored per cell.
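Such a table can be created with the Bigtable Java admin client. This is a minimal sketch; the project, instance and table names are placeholders you would replace with your own:

```java
import com.google.cloud.bigtable.admin.v2.BigtableTableAdminClient;
import com.google.cloud.bigtable.admin.v2.models.CreateTableRequest;
import com.google.cloud.bigtable.admin.v2.models.GCRules;

public class CreateTableExample {
    public static void main(String[] args) throws Exception {
        // Placeholder project and instance IDs -- adjust to your environment.
        try (BigtableTableAdminClient adminClient =
                 BigtableTableAdminClient.create("my-project", "my-instance")) {
            // Column family "cf" with a garbage-collection policy of at most one version.
            adminClient.createTable(
                CreateTableRequest.of("my-table")
                    .addFamily("cf", GCRules.GCRULES.maxVersions(1)));
        }
    }
}
```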

So, what happens if you write multiple times in the same cell and query the row afterwards?

In fact, all written versions will be returned, not just the latest. Older versions will be removed only when garbage collection occurs. This may take up to a week.
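You can observe this yourself with a sketch along these lines (table and row names are again placeholders, and a live Bigtable instance is assumed):

```java
import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.models.Row;
import com.google.cloud.bigtable.data.v2.models.RowCell;
import com.google.cloud.bigtable.data.v2.models.RowMutation;

public class MultipleVersionsExample {
    public static void main(String[] args) throws Exception {
        try (BigtableDataClient client =
                 BigtableDataClient.create("my-project", "my-instance")) {
            // Write the same cell twice; with maxversions=1 you would
            // expect only one version to survive.
            client.mutateRow(RowMutation.create("my-table", "row-1")
                .setCell("cf", "col", "value-1"));
            client.mutateRow(RowMutation.create("my-table", "row-1")
                .setCell("cf", "col", "value-2"));

            // Read back without a filter: until garbage collection runs,
            // BOTH versions are returned.
            Row row = client.readRow("my-table", "row-1");
            for (RowCell cell : row.getCells("cf", "col")) {
                System.out.println(
                    cell.getTimestamp() + " -> " + cell.getValue().toStringUtf8());
            }
        }
    }
}
```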

If you want to prevent these unexpected results, you have to add filters to your queries, e.g. in this case:
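With the Bigtable Java client, the fix is a cells-per-column limit filter, sketched here with placeholder names:

```java
import static com.google.cloud.bigtable.data.v2.models.Filters.FILTERS;

import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.models.Query;
import com.google.cloud.bigtable.data.v2.models.Row;

public class LatestVersionQuery {
    public static void main(String[] args) throws Exception {
        try (BigtableDataClient client =
                 BigtableDataClient.create("my-project", "my-instance")) {
            // Limit each column to its single most recent cell, regardless of
            // how many versions garbage collection has not yet removed.
            Query query = Query.create("my-table")
                .rowKey("row-1")
                .filter(FILTERS.limit().cellsPerColumn(1));
            for (Row row : client.readRows(query)) {
                System.out.println(row.getKey().toStringUtf8());
            }
        }
    }
}
```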

This applies to all types of garbage-collection policies. However, you do not have to add these filters if you are using the HBase API.

You can read more about garbage collection in the official Cloud Bigtable documentation.

2. If some functionality is not implemented, you expect the application to fail

…or at least throw an exception.

This pitfall is very common if you are used to HBase and communicate with Bigtable through the HBase API. Although the common perception is that HBase is just an open-source implementation of Google’s Bigtable, there are many features in the HBase API that are not available in Bigtable. An overview of the differences between HBase and Cloud Bigtable is provided in the official documentation. However, even this list is not complete, because there are also some minor differences in the runtime behaviour of some queries and filters.

But the really dangerous thing about using the HBase API with Bigtable is that your code will run flawlessly, even though you are using unimplemented features.

For instance, suppose you have implemented a reverse scan, which is unfortunately not supported in Bigtable:
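A reverse scan through the HBase API might look like this sketch (connection and table names are placeholders, assuming the bigtable-hbase client):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import com.google.cloud.bigtable.hbase.BigtableConfiguration;

public class ReverseScanExample {
    public static void main(String[] args) throws Exception {
        try (Connection connection =
                 BigtableConfiguration.connect("my-project", "my-instance");
             Table table = connection.getTable(TableName.valueOf("my-table"))) {
            Scan scan = new Scan();
            // Silently ignored by Bigtable: the scan runs in forward order anyway.
            scan.setReversed(true);
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result result : scanner) {
                    System.out.println(Bytes.toString(result.getRow()));
                }
            }
        }
    }
}
```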

If you run this code against Cloud Bigtable, you would expect the application to crash, throw an exception or at least log a warning. But… nothing happens. Bigtable simply ignores the functions which are not implemented and, in this case, runs an ordinary scan. This also applies to unsupported filters. So be careful if you are using the HBase API, and check the documentation.

3. You expect conditional writes to be atomic

This may be the most surprising pitfall in this list. The official documentation promises some mutations to be atomic, including conditional writes:

“[…] Mutations are then committed to specific columns in the row only when certain conditions, checked by the filter, are met. This process of checking and then writing is completed as a single, atomic action. […]”

But in fact, atomicity is not guaranteed for conditional writes!

I was able to implement a piece of code that leads to non-deterministic results.

The program executes two conditional writes to the same cell in parallel. The code should ensure that either of these writes is only applied if the currently stored version is lower than the version to be written. The following was implemented with version 1.18.0 of the Bigtable Java client and tested on a single-node Bigtable cluster (without replication):
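The idea can be sketched as follows. This is not the exact program from my tests, but a minimal reconstruction with placeholder names; note that Bigtable compares cell values as raw bytes, so the version strings are zero-padded to make the comparison meaningful:

```java
import static com.google.cloud.bigtable.data.v2.models.Filters.FILTERS;

import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.models.ConditionalRowMutation;
import com.google.cloud.bigtable.data.v2.models.Mutation;

public class ConditionalWriteRace {

    // Write the new version only if NO cell with a value >= the new version
    // exists yet, i.e. only if the currently stored version is lower.
    static void writeIfHigher(BigtableDataClient client, String version) {
        ConditionalRowMutation mutation =
            ConditionalRowMutation.create("my-table", "row-1")
                .condition(FILTERS.chain()
                    .filter(FILTERS.family().exactMatch("cf"))
                    .filter(FILTERS.qualifier().exactMatch("version"))
                    .filter(FILTERS.value().range().startClosed(version)))
                // "otherwise" fires when the condition does NOT match,
                // i.e. when no equal or higher version is present.
                .otherwise(Mutation.create().setCell("cf", "version", version));
        client.checkAndMutateRow(mutation);
    }

    public static void main(String[] args) throws Exception {
        try (BigtableDataClient client =
                 BigtableDataClient.create("my-project", "my-instance")) {
            // Two conditional writes to the same cell in parallel.
            Thread t1 = new Thread(() -> writeIfHigher(client, "0001"));
            Thread t2 = new Thread(() -> writeIfHigher(client, "0002"));
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // If the mutations were truly atomic, the cell would always end up
            // holding "0002". In practice, repeated runs occasionally end with "0001".
        }
    }
}
```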

If you run this code multiple times, in some cases a lower version value will overwrite a higher one, which of course should never happen. It also would not be possible if the write operations were executed atomically.

I contacted the official Google Cloud Bigtable support and they were able to reproduce this issue. Furthermore, they confirmed my assumption that atomicity cannot be guaranteed in Cloud Bigtable. Here is a statement from the support team:

“For ConditionalRowMutation we use single conditional row mutation (should be atomic) to acquire lock and also single mutation to keep it alive. However, in some conditions, like the cluster being overloaded, or the request taking longer than expected, locking in Bigtable will become non-exclusive.”

Nevertheless, Google Cloud Bigtable is a great tool for handling low-latency requests on huge amounts of data. But I recommend that you read the documentation carefully, test your code (if possible against real instances) and, above all, expect the unexpected!
