Articles about: logging



It’s been over a year since I started using the ELK stack for logging purposes. In the meantime, I was able to successfully introduce it in a few development teams, completely changing the way we work with application logs. Everything is working fine, and the only downside so far has been the need for periodic maintenance work. By maintenance, I mean removing old indices: if free disk space drops below a certain level, Elasticsearch stops working correctly. ... Read More
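The cleanup in question can be as small as a script that drops indices older than a retention period. Below is a minimal sketch, assuming daily Logstash indices named `logstash-YYYY.MM.dd` and an unauthenticated Elasticsearch node at `localhost:9200`; the retention period and URL are illustrative, and in practice a dedicated tool such as Elasticsearch Curator does the same job.

```python
# Minimal sketch: delete Logstash indices older than a retention period.
# Assumes daily indices named logstash-YYYY.MM.dd and an unauthenticated
# Elasticsearch node at localhost:9200 (both are assumptions, not givens).
from datetime import datetime, timedelta

import requests

ES_URL = "http://localhost:9200"  # assumed cluster address
KEEP_DAYS = 30                    # assumed retention period


def old_logstash_indices():
    """Yield names of daily Logstash indices older than the retention period."""
    cutoff = datetime.utcnow() - timedelta(days=KEEP_DAYS)
    for info in requests.get(f"{ES_URL}/_cat/indices?format=json").json():
        name = info["index"]
        if not name.startswith("logstash-"):
            continue
        try:
            day = datetime.strptime(name, "logstash-%Y.%m.%d")
        except ValueError:
            continue  # skip indices that don't follow the daily naming scheme
        if day < cutoff:
            yield name


for index in old_logstash_indices():
    # Deleting an index frees its disk space immediately.
    requests.delete(f"{ES_URL}/{index}").raise_for_status()
    print(f"deleted {index}")
```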


A few months ago I published the “Demystifying ELK stack” article, which summarizes my knowledge about setting up and configuring a system for collecting, processing, and presenting logs, based on Filebeat, Logstash, Kibana, and Elasticsearch. Since then I’ve learned a few new DevOps things that help me and my teammates work more effectively with ELK. I think they’re worth sharing.

... Read More


Let’s assume that your system consists of a few microservices. Everything must be highly available, so each microservice has at least two active instances on separate machines, and everything is multiplied by the number of testing and production environments. When a situation arises that requires log analysis, you have to jump from server to server looking for the file with the desired information. You browse each file with some kind of notepad-based editor, and when the files weigh hundreds of megabytes it’s quite a challenge. ... Read More


Over a year ago I heard about the ELK stack for the first time. Since then I’ve had the opportunity to help five teams implement ELK as part of their development process (one team is using it in production, the rest so far only in development environments). ELK stands for Elasticsearch-Logstash-Kibana, a set of services that improves productivity around logging, covering the collection, processing, storage, and presentation of log data. ... Read More