The Pitfalls of Docker Logging: Common Mistakes and Best Practices

Vikas Yadav
3 min read · Feb 6, 2024



Containers have revolutionized how we build, ship, and run applications, but they also bring their own set of challenges, especially when it comes to logging. There are some common mistakes many of us make when handling logs. Let's explore them, along with best practices to ensure our logs are safe, organized, and useful.


When using Docker containers, two common mistakes can make log management a nightmare:

  1. Logging to Files Inside Containers: It might seem straightforward to log directly to files within the container. However, this approach is risky because the logs are as ephemeral as the container itself. If the container goes away, so do your logs.
  2. Relying Solely on stdout Logs: Docker captures everything sent to stdout and stderr, making it tempting to use this as the sole logging method. However, once the container gets deleted, we lose these logs as well.
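The second pitfall is easy to reproduce. The sketch below (container and image names are illustrative) shows stdout logs disappearing together with the container:

```shell
# Run a throwaway container that writes one line to stdout
docker run --name demo-logger alpine echo "hello from the container"

# While the (stopped) container still exists, its logs are retrievable
docker logs demo-logger

# Remove the container...
docker rm demo-logger

# ...and the stdout logs are gone with it:
# "Error response from daemon: No such container: demo-logger"
docker logs demo-logger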

Logging is a crucial component of any application. We need to store logs and also ensure they are searchable, for purposes such as:

  • Debugging: investigating issues
  • Compliance: some applications must comply with standards that make it mandatory to retain logs for certain intervals.


To tackle these challenges, we propose a more robust approach to logging in Docker:

1: Mount a Volume for Log Files

By mounting a volume from the host to the container, logs are written to the host’s filesystem, ensuring persistence and safety. Here’s how you can do it:

docker run -d --name my-container -v /path/on/host:/path/in/container my-image

This command links a host directory to a container directory, preserving logs on the host. Just having the logs on the filesystem is not enough, though: make sure you implement log rotation as well, or the files will grow without bound.
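One way to rotate logs written to the mounted host directory is a standard logrotate rule on the host. This is a sketch assuming the container writes `.log` files under /path/on/host; the path and rotation policy are illustrative:

```
# /etc/logrotate.d/my-container  (illustrative logrotate rule)
/path/on/host/*.log {
    daily           # rotate once per day
    rotate 7        # keep seven rotated files
    compress        # gzip rotated logs
    missingok       # don't error if no log file exists yet
    notifempty      # skip rotation for empty files
    copytruncate    # copy then truncate, so the app keeps its file handle
}
```

`copytruncate` matters here: the process inside the container holds the log file open, so renaming the file out from under it would leave it writing to the rotated copy.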

2: Leverage the Syslog Driver

Directing stdout logs to syslog ensures the logs survive even if the container gets removed.

docker run -d --name my-container --log-driver=syslog my-image

You can also host a syslog server on a different node and provide its address to ship logs to it.
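For the remote case, Docker's syslog driver accepts a syslog-address log option. A sketch, assuming a syslog server is listening on 192.0.2.10:514 (the address is illustrative):

```shell
# Ship this container's stdout/stderr to a remote syslog server over TCP
docker run -d --name my-container \
  --log-driver=syslog \
  --log-opt syslog-address=tcp://192.0.2.10:514 \
  --log-opt tag="{{.Name}}" \
  my-image
```

The `tag` option stamps each message with the container name, which makes it much easier to tell containers apart once everything lands in one syslog stream.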

3: Centralized Logging with Export Tools

To achieve centralized logging, you can export stdout logs from containers to a centralized logging system using various tools. This approach allows for better log management, analysis, and retention policies. Here are some steps to integrate centralized logging:

  1. Choose a Centralized Logging Service: Select a centralized logging system that fits your needs, such as the ELK Stack (Elasticsearch, Logstash, Kibana), Fluentd, Loki, or Graylog. These services are designed to aggregate logs from various sources, including Docker containers.
  2. Configure Log Export: Use Docker’s logging drivers or third-party tools to forward logs from each container to your centralized logging system.
  3. Set Up Log Processing and Analysis: Once your logs are centralized, use the tools provided by your logging system to filter, search, and analyze log data. This can help you quickly identify issues, understand application behavior, and improve system performance.
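As a concrete example of step 2, Docker ships a fluentd logging driver that forwards container stdout to a Fluentd collector. A sketch, assuming Fluentd is listening on localhost:24224 (its default forward port):

```shell
# Forward this container's stdout/stderr to a local Fluentd instance
docker run -d --name my-container \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt fluentd-async=true \
  my-image
```

With `fluentd-async=true`, the container can start even if Fluentd is temporarily unreachable, at the cost of possibly dropping logs during the outage.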

My favorite tool for centralised logging is Loki. It is a lightweight log aggregation tool that uses very little memory and compute. It does the job every time and integrates neatly with Grafana.

Check out the instructions on how to use Loki with Docker in Grafana's documentation.
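As a rough sketch, Grafana publishes a Docker logging driver plugin for Loki; the example below assumes Loki is reachable at http://localhost:3100 (its default port):

```shell
# Install the Loki logging driver plugin (one-time, per host)
docker plugin install grafana/loki-docker-driver:latest \
  --alias loki --grant-all-permissions

# Run a container whose stdout is pushed straight to Loki
docker run -d --name my-container \
  --log-driver=loki \
  --log-opt loki-url="http://localhost:3100/loki/api/v1/push" \
  my-image
```

Once the container is running, its logs are queryable from Grafana's Explore view using LogQL.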


Effective log management is crucial for maintaining the health and performance of Dockerized applications. By avoiding common pitfalls and adopting best practices like volume mounting, syslog redirection, or centralized logging, we can ensure our logging infrastructure is robust, scalable, and reliable.