
Death of an ELK?


This blog article is more than 5 years old – its contents may be outdated.

This article guides you through updating the ELK stack from version 1.x to 2.x, taking into account the correct upgrade order of its components: Elasticsearch, Logstash and Kibana.

The ELK stack has become popular in recent years as a centralized logging solution. Based on open-source tools, ELK enables the collection, storage and analysis of log files, big-data style.

As part of the inovex operations team we run ELK at a customer’s site. It stores about 11 terabytes of log data from the last 30 days. The Elasticsearch cluster consists of four data nodes and one master node. Kibana runs on two systems behind a load balancer, with nginx proxying requests via proxy_pass. Logstash runs on ten systems, using Redis as a cache for incoming log events.

End of Life?

As this infrastructure was built last year it is now past its prime. We see the following versions deployed in production:

  • Elasticsearch 1.7.2
  • Logstash 1.5.3
  • Kibana 4.1.13

These versions are quite common today, as they were cutting edge when people started to adopt the ELK stack back in 2015. Yet there is a problem: the installed versions of both Logstash and Kibana are already close to their end-of-life dates, meaning no one will provide any updates after that point. Elasticsearch 1.7 will be maintained until January 2017.

In this article we will demonstrate how to upgrade the individual components of an ELK stack to their latest released versions and provide you with all the necessary links. Our target versions are:

  • Elasticsearch 2.4.1
  • Logstash 2.4.0
  • Kibana 4.6.1

While it’s not difficult to upgrade the software itself (e.g. yum update logstash on a CentOS/Red Hat system), there are some prerequisites to meet.


Kibana

Let’s start at the top of the stack – the front-end. As mentioned, we aim for the latest version, 4.6.1. Checking the support matrix page, we see this version only works with Elasticsearch 2.4. As there are no further requirements, the upgrade itself is easy – it just has to be scheduled after the Elasticsearch update.


Logstash

We want to update Logstash from 1.5.3 to 2.4.0 – the breaking changes are documented on the Elastic website. One major change affects the Elasticsearch output plugin, so before you work your update magic, change the Logstash config to something like this:
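A minimal sketch of the new output section – host names and the index pattern are placeholders for your own setup:

```conf
output {
  elasticsearch {
    # Logstash 2.0 replaced the single "host" option with a "hosts" array
    hosts => ["es-node1:9200", "es-node2:9200"]
    # the "protocol" option was removed; the plugin now always talks HTTP
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```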

After this it’s just a matter of letting your package manager upgrade to the new Logstash version. Keep in mind that Logstash is capable of working with any released Elasticsearch version, so it’s possible to upgrade it without any dependency on the Elasticsearch upgrade.


Elasticsearch

First of all, check the breaking changes doc. At this point it gets tricky, as we go from Elasticsearch 1.7 to 2.4, which means a lot of breaking changes.

There is a migration plugin that guides you through the migration from Elasticsearch 1.x to 2.x. This plugin, currently at version 1.18, checks the Elasticsearch configuration and the stored data. It can be installed via:
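A sketch using the Elasticsearch 1.x plugin tool – the release URL below is an assumption, so check the plugin’s project page for the current one:

```sh
# install under the name "migration" with the ES 1.x plugin tool
/usr/share/elasticsearch/bin/plugin -i migration \
  -u https://github.com/elastic/elasticsearch-migration/releases/download/v1.18/elasticsearch-migration-1.18.zip
```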

It will look similar to this.

Elasticsearch Migration Plugin shows that an upgrade isn’t possible yet.

Here are the main changes:

  • The Elasticsearch option network.publish_host was renamed – check the breaking changes doc for the new name.
  • Starting with 2.0, Elasticsearch binds to the loopback interface by default. In case you need the Elasticsearch API for local API calls, e.g. for monitoring purposes, while publishing on another interface, it is necessary to bind to localhost explicitly as well.

  • Starting with Elasticsearch 2.0, dots are no longer allowed in field names. If you need dots in field names, there is a specific start option that allows this: just start the Elasticsearch process with -Dmapper.allow_dots_in_name=true. You might need dots in field names if you ship log events from some application logs into an Elasticsearch index without content filtering. But keep in mind: if there are any dots in field names and this option is not set, Elasticsearch 2.x won’t start at all.

After applying the changes the migration plugin will go from red to green:

Elasticsearch Migration Plugin shows that an upgrade is possible.

Keep in mind: the migration plugin only checks the Elasticsearch config and the stored data. If other applications connect to your Elasticsearch cluster, you have to check them separately.
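To illustrate the dots-in-field-names check, here is a small, hypothetical helper (not part of the migration plugin) that recursively scans an index mapping for dotted field names:

```python
def find_dotted_fields(properties, path=""):
    """Collect field names containing dots from an Elasticsearch
    mapping's "properties" dict, recursing into nested objects."""
    hits = []
    for name, spec in properties.items():
        full = f"{path}.{name}" if path else name
        if "." in name:
            hits.append(full)
        # nested object fields carry their own "properties" sub-dict
        if isinstance(spec, dict) and "properties" in spec:
            hits.extend(find_dotted_fields(spec["properties"], full))
    return hits

mapping = {
    "host": {"type": "string"},
    "log.level": {"type": "string"},                 # illegal in ES 2.x
    "geo": {"properties": {"ip.addr": {"type": "ip"}}},
}
print(find_dotted_fields(mapping))  # → ['log.level', 'geo.ip.addr']
```

Any field the helper reports would have to be renamed (or the -Dmapper.allow_dots_in_name=true option used) before the upgrade.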

Full cluster restart

For the sake of this example let’s assume we run Elasticsearch on more than one host – say, a cluster of 3 nodes across which all shards are distributed, each shard with one replica. In this case it’s necessary to do a full cluster restart in order to upgrade all nodes. On a single node you can simply stop Elasticsearch, upgrade (e.g. yum update elasticsearch) and restart.

As we’ve got a cluster that will go from Elasticsearch 1.7.2 to 2.4.0, we must consider one constraint: there is no interoperability between these versions, so nodes of both versions cannot run in the same cluster. There is a thorough guide that shows and explains the necessary steps; here’s the gist of it:

  1. Disable shard allocation within your cluster. This reduces the time until the cluster is fully recovered afterwards.
  2. Perform a synced flush so that shards recover faster after the restart.
  3. Remove the migration plugin – Elasticsearch won’t start if it’s still installed:  /usr/share/elasticsearch/bin/plugin -r migration
  4. Stop all the nodes.
  5. Upgrade and start the nodes. Starting dedicated master nodes first will speed up the cluster start.
  6. Reactivate shard allocation.
  7. Wait until your cluster becomes green again.
  8. Don’t forget to update Kibana so that your ELK is usable again.
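The allocation and flush steps above can be sketched as curl calls against the cluster API – assuming a node reachable on localhost:9200, and re-enabling allocation only once all nodes are back up:

```sh
# 1. disable shard allocation before stopping the nodes
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'

# 2. synced flush to speed up shard recovery after the restart
curl -XPOST 'localhost:9200/_flush/synced'

# 6. after upgrading and restarting, re-enable shard allocation
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'

# 7. watch cluster health until the status is green again
curl 'localhost:9200/_cluster/health?pretty'
```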

What we learned the hard way

Despite all the reading and preparations there were some pitfalls we stepped right in. These were:

  • After the update our Elasticsearch snapshot repository wasn’t accessible to all cluster nodes, so we weren’t able to generate backups. To solve this we had to recreate the repository.
  • We use the kopf plugin to manage our Elasticsearch cluster. Oddly enough, its config had never been provided with the cluster name, which caused Elasticsearch to refuse to start on the new version.


To wrap it up:

  • Start by updating Logstash, as it’s compatible with newer Elasticsearch versions.
  • Check for breaking changes on your Elasticsearch cluster with the migration plugin.
  • Upgrade the Elasticsearch cluster.
  • Upgrade Kibana.

Get in touch

For all your Big Data needs visit our website, drop us an email or call +49 721 619 021-0.

We’re hiring!

Looking for a change? We’re hiring Big Data Systems Engineers who have mastered the ELK stack. Apply now!
