Last updated: 2026-03-30

Docker Container Won't Start? A Systematic Debugging Guide

Systematic approach to debugging Docker containers. Exit codes, logs, compose issues, permission problems — all explained with commands.

TL;DR

When a Docker container won't start, work through this sequence: check docker logs for application errors, use docker inspect to examine the container's state and exit code, verify port availability and volume permissions, and check disk space. Most startup failures come down to five things: application errors (exit code 1), missing binaries (127), out-of-memory kills (137), permission issues, or port conflicts. This guide walks through each scenario with concrete commands.

Step-by-Step Debugging Guide

1. Reading Container Logs

Always start here. Container logs capture everything written to stdout and stderr by your application's main process.

# View all logs from a container (including stopped ones)
docker logs my-container

# Show only the last 50 lines
docker logs --tail 50 my-container

# Follow logs in real-time (like tail -f)
docker logs --tail 50 -f my-container

# Show timestamps alongside each log line
docker logs -t my-container

# View logs since a specific time
docker logs --since 2025-01-01T10:00:00 my-container

If the container exited immediately, the logs often contain the exact error message. If the logs are empty, the problem is likely at the container runtime level (wrong entrypoint, missing binary, or permission denied on the executable).

2. Inspecting Container State

docker inspect gives you the full picture of what Docker knows about the container, including its configuration, state, and network settings.

# Full JSON output for a container
docker inspect my-container

# Get just the state (running, exited, exit code, error)
docker inspect --format '{{json .State}}' my-container | jq .

# Get the exit code directly
docker inspect --format '{{.State.ExitCode}}' my-container

# Check the OOM killer flag
docker inspect --format '{{.State.OOMKilled}}' my-container

# See the exact command being run
docker inspect --format '{{.Config.Cmd}}' my-container

# Check the entrypoint
docker inspect --format '{{.Config.Entrypoint}}' my-container

# View mount/volume configuration
docker inspect --format '{{json .Mounts}}' my-container | jq .

3. Understanding Exit Codes

The exit code tells you how the process died. This table covers the most common ones:

| Exit Code | Meaning | Typical Cause |
| --- | --- | --- |
| 0 | Success | The process completed normally. If the container "won't stay running," your entrypoint might be a script that finishes instead of blocking. |
| 1 | Application error | Generic failure. Check docker logs for the actual error message. |
| 126 | Permission denied | The entrypoint file exists but is not executable. |
| 127 | Command not found | The binary in CMD/ENTRYPOINT does not exist in the container. Common with multi-stage builds where you forgot to copy the binary. |
| 137 | SIGKILL (OOM) | The container exceeded its memory limit and was killed by the OOM killer. Confirm with docker inspect --format '{{.State.OOMKilled}}'. |
| 139 | SIGSEGV | Segmentation fault in the application. Often caused by running an amd64 binary on arm64 or vice versa. |
| 143 | SIGTERM | The process received a termination signal. Normal during docker stop, but unexpected during startup could mean an orchestrator is killing it. |
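A useful rule of thumb behind this table: exit codes above 128 mean the process was killed by a signal, and the signal number is the exit code minus 128 (137 = 128 + 9 = SIGKILL, 143 = 128 + 15 = SIGTERM). That logic can be sketched as a small helper; the function name and messages here are illustrative, not a Docker feature:

```shell
#!/bin/sh
# Decode a container exit code: codes > 128 usually mean 128 + signal number.
decode_exit_code() {
  code="$1"
  if [ "$code" -gt 128 ]; then
    echo "killed by signal $((code - 128))"
  elif [ "$code" -eq 127 ]; then
    echo "command not found"
  elif [ "$code" -eq 126 ]; then
    echo "found but not executable"
  elif [ "$code" -eq 0 ]; then
    echo "clean exit"
  else
    echo "application error (code $code)"
  fi
}

decode_exit_code 137   # killed by signal 9 (SIGKILL)
decode_exit_code 143   # killed by signal 15 (SIGTERM)
```

Feed it the value from docker inspect --format '{{.State.ExitCode}}' to get a first hypothesis before you dig into logs.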

4. Interactive Debugging with exec

If the container is running but misbehaving, drop into it:

# Open a shell inside a running container
docker exec -it my-container /bin/sh

# If sh isn't available, try bash
docker exec -it my-container /bin/bash

# Run a single diagnostic command
docker exec my-container cat /etc/os-release
docker exec my-container ls -la /app/

If the container exits immediately and you can't exec into it, override the entrypoint to keep it alive:

# Start the image with a shell instead of the normal entrypoint
docker run -it --entrypoint /bin/sh my-image

# Or keep it alive with a sleep so you can exec in
# (requires a sleep binary in the image; distroless images won't have one)
docker run -d --entrypoint sleep my-image 3600
docker exec -it $(docker ps -q -f ancestor=my-image) /bin/sh

5. Docker Compose Issues

Port Conflicts

If a service fails because the port is already in use:

# Find what is using a specific port
lsof -i :8080

# On Linux, alternatively:
ss -tlnp | grep 8080

# Kill the process if needed
kill -9 $(lsof -t -i :8080)

Network Problems

# List all Docker networks
docker network ls

# Inspect a specific network to see connected containers
docker network inspect my-app_default

# Test connectivity between containers
# (ping may not be installed in minimal images)
docker exec my-container ping -c 3 other-container

# Check DNS resolution inside a container
# (nslookup needs bind-tools on Alpine; getent hosts is a common fallback)
docker exec my-container nslookup other-container

depends_on vs. Healthchecks

depends_on only waits for the container to start, not for the application inside to be ready. A database container might take 10 seconds to initialize, but your app tries to connect immediately.

# docker-compose.yml with proper healthcheck
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  app:
    image: my-app
    depends_on:
      db:
        condition: service_healthy

Volume Permissions

# Check ownership of mounted volumes
docker exec my-container ls -la /data/

# Check which user the container process runs as
docker exec my-container id

# Fix ownership on the host
sudo chown -R 1000:1000 ./data/

6. Permission Problems

Permission issues are among the most frustrating Docker problems, especially with mounted volumes.

# Bad: files copied as root; the non-root user may be unable to read them
# (depending on umask) and certainly can't write to them
FROM node:20-alpine
COPY . /app
USER node
CMD ["node", "/app/server.js"]

# Good: set ownership explicitly
FROM node:20-alpine
COPY --chown=node:node . /app
WORKDIR /app
USER node
CMD ["node", "server.js"]

When bind-mounting host directories, the UID inside the container must match the file owner on the host:

# Check UID of the container user
docker exec my-container id
# uid=1000(node) gid=1000(node)

# Ensure host files match
ls -ln ./data/
# If mismatch, fix it:
sudo chown -R 1000:1000 ./data/
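Instead of chowning the host files, you can also run the container as your host UID/GID so newly created files in the bind mount are owned by you rather than root. A sketch, where my-image and the ./data path are placeholders:

```shell
#!/bin/sh
# Build a uid:gid string matching the current host user, so files written
# into the bind mount end up owned by you instead of root.
user_flag="$(id -u):$(id -g)"
echo "$user_flag"    # e.g. 1000:1000

# Pass it to docker run:
# docker run --user "$user_flag" -v "$PWD/data:/data" my-image
```

Note the trade-off: with --user, the UID may not exist in the container's /etc/passwd, so software that looks up a home directory or username can misbehave.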

7. Multi-Stage Build Issues

Multi-stage builds are a common source of "container starts but binary/file is missing" bugs.

# Common mistake: forgetting to copy from the build stage
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN go build -o /app/server .

FROM alpine:3.19
# BUG: forgot to copy the binary!
CMD ["/app/server"]
# Exit code 127: /app/server not found

# Fix:
FROM alpine:3.19
COPY --from=builder /app/server /app/server
CMD ["/app/server"]

Another common issue: building on a glibc-based image (Debian/Ubuntu) and running on a musl-based image (Alpine). The binary will segfault (exit code 139) or fail to find shared libraries. Either use the same base, or compile statically:

# For Go: build a fully static binary
CGO_ENABLED=0 go build -o /app/server .
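Putting the two fixes together, a multi-stage Dockerfile that avoids both the missing-binary bug and the glibc/musl mismatch might look like this (paths and versions are illustrative):

```dockerfile
# Stage 1: build a fully static binary (no glibc/musl dependency)
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app/server .

# Stage 2: copy the binary into the minimal runtime image
FROM alpine:3.19
COPY --from=builder /app/server /app/server
CMD ["/app/server"]
```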

8. Disk Space Issues

Docker can silently consume enormous amounts of disk space. When the disk is full, containers may fail to start with cryptic errors.

# Check Docker's disk usage with a breakdown
docker system df

# Detailed view
docker system df -v

# Remove unused data (stopped containers, dangling images, unused networks)
docker system prune

# Also remove unused volumes (CAUTION: this deletes data)
docker system prune --volumes

# Remove all unused images, not just dangling ones
docker system prune -a

# Check host disk space
df -h /var/lib/docker
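It is cheaper to catch this before containers start failing. A sketch of a disk-usage check you could run from cron; the 90% threshold and the /var/lib/docker path are arbitrary choices, not defaults:

```shell
#!/bin/sh
# Warn when the filesystem holding Docker's data is nearly full.
check_disk() {
  path="$1"; threshold="$2"
  # POSIX df -P: usage percent is the 5th column of the second output line
  used=$(df -P "$path" | awk 'NR==2 {gsub("%",""); print $5}')
  if [ "$used" -ge "$threshold" ]; then
    echo "WARNING: $path is ${used}% full"
    return 1
  fi
  echo "OK: $path is ${used}% full"
}

# check_disk /var/lib/docker 90
```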

9. Environment Variable Debugging

Misconfigured environment variables are a silent killer. The app starts but connects to the wrong database, uses the wrong API key, or falls back to defaults.

# List all env vars inside a running container
docker exec my-container env

# Check a specific variable
docker exec my-container printenv DATABASE_URL

# View env vars from docker inspect (works on stopped containers too)
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' my-container

# Check if .env file is being loaded in Compose
docker compose config

docker compose config is especially useful: it renders the final resolved YAML with all variable substitutions applied, so you can verify that every ${VAR} reference resolved to the value you expected.
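Substitution defaults are a common source of silent misconfiguration. Compose supports ${VAR:-default} (fall back when unset or empty) and ${VAR:?message} (fail fast with an error). A fragment with illustrative variable names:

```yaml
# docker-compose.yml fragment:
# ${VAR:?message} aborts `docker compose up` if VAR is missing,
# ${VAR:-default} substitutes a fallback instead of an empty string.
services:
  app:
    image: my-app
    environment:
      DATABASE_URL: ${DATABASE_URL:?DATABASE_URL must be set}
      LOG_LEVEL: ${LOG_LEVEL:-info}
```

Preferring :? over :- for anything security- or data-critical turns a "connects to the wrong database" bug into an immediate, visible startup error.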

Troubleshooting Quick Reference

| Symptom | First Command | Likely Cause |
| --- | --- | --- |
| Container exits immediately | docker logs my-container | Application crash or entrypoint exiting |
| Exit code 137, no error in logs | docker inspect --format '{{.State.OOMKilled}}' | Memory limit exceeded |
| "port already in use" | lsof -i :PORT | Another process or container on that port |
| "permission denied" | docker exec my-container ls -la /path | UID mismatch between container user and file owner |
| "no such file or directory" | docker exec my-container ls /app/ | Missing COPY in Dockerfile or wrong path |
| Container healthy but app unreachable | docker network inspect | Network misconfiguration or wrong published port |
| "no space left on device" | docker system df | Docker images/volumes consuming all disk |
| Compose services can't find each other | docker compose config | Services on different networks or typo in service name |


Need Expert Help?

Can't find the bug? I debug it live via screen share. €49, 30 min.

Book Now — €49

100% money-back guarantee


Harald Roessler

Infrastructure Engineer with 20+ years experience. Founder of DSNCON GmbH.