Containers, images, Compose, multi-stage builds, networking, security, and debugging.
Welcome everyone! Today we're diving into Docker containerization, one of the most transformative technologies in modern software development. Docker lets us package our applications with all their dependencies into lightweight containers that can run consistently anywhere — whether that's your laptop, a staging server, or production in the cloud. By the end of this talk, you'll understand how to build, ship, and run containerized applications with confidence. Let's get started.
So what exactly is Docker? Let's break down the four core concepts you'll be working with. First, containers themselves are lightweight, isolated environments that share the host operating system kernel, making them much more efficient than traditional virtual machines. Second, images are read-only templates with your application, dependencies, and configuration baked in — think of them as the blueprint for your containers. Third, registries like Docker Hub store and distribute these images so teams can share them. And finally, Docker Compose lets you define entire multi-container applications using a single YAML file, making complex architectures manageable.
Looking at this architecture diagram, you can see how everything fits together. At the base, we have the host operating system running the Docker Engine. The Docker Engine then manages multiple containers — here we have a Node.js API, a PostgreSQL database, a Redis cache, and an Nginx proxy. Notice how all these containers connect through a shared network bridge, allowing them to communicate with each other. Finally, port mapping connects services from the bridge to the host, making them accessible from outside. This shared-kernel architecture is what makes containers so lightweight compared to virtual machines — each container doesn't need its own full operating system.
Now let's look at writing a production-ready Dockerfile. This is a multi-stage build — notice we have two FROM statements. In stage one, we use a Node Alpine image as our builder, install all dependencies including dev dependencies, copy our TypeScript source code, and compile it. Then in stage two, our production stage, we start fresh with a clean Alpine image. We create a non-root user for security, copy only the compiled output and production dependencies from the builder stage, and configure health checks. This approach gives us a much smaller final image since we're leaving behind all the build tools and source code. The final container runs as a non-root user and exposes port 3000 with automatic health monitoring.
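A minimal sketch of the two-stage Dockerfile just described — the Node version, the `dist/` output directory, and the `/health` endpoint are illustrative assumptions, not taken from the slide:

```dockerfile
# Stage 1: builder — full toolchain, dev dependencies included
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build                  # compile TypeScript into dist/

# Stage 2: production — clean image, non-root user, compiled output only
FROM node:20-alpine
WORKDIR /app
RUN addgroup -S app && adduser -S app -G app
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
USER app
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/index.js"]
```

Because the second FROM starts from a clean base, nothing from the builder stage reaches the final image except what is explicitly copied over.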
Let's walk through the essential Docker commands you'll use daily. Looking at this terminal output, we start by building an image tagged as myapp version 1.0 — that takes about twelve seconds. Next, docker images shows us the resulting image is 148 megabytes. We then run the container in detached mode with port 3000 mapped, and docker ps confirms it's up and running. Finally, docker logs shows us the last five log entries — we can see the server started, connected to the database and Redis, passed its health check, and is ready to accept connections. These commands form the core workflow for building, running, and monitoring containerized applications.
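The workflow narrated above maps to commands along these lines — `myapp:1.0` matches the example on the slide, the rest of the flags are standard:

```shell
docker build -t myapp:1.0 .                    # build the image from the Dockerfile
docker images                                  # list local images with their sizes
docker run -d -p 3000:3000 --name myapp myapp:1.0   # run detached, map port 3000
docker ps                                      # confirm the container is up
docker logs --tail 5 myapp                     # show the last five log lines
```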
Containers are ephemeral by default — when they're removed, their data disappears. That's where volumes come in. Looking at the diagram, we have three types of storage. Named volumes are managed by Docker and can be shared between containers — perfect for databases. Bind mounts map host directories directly into containers — ideal for development when you want live code syncing. And tmpfs mounts exist only in memory. Now looking at this docker-compose file, we're using a named volume called pgdata for PostgreSQL to ensure our database survives container restarts. For the app service, we're bind-mounting the source directory for hot-reloading during development, while using an anonymous volume to preserve the node_modules from the image. The callout here emphasizes the key difference: named volumes persist across container lifecycles, bind mounts sync with your host, and tmpfs is temporary.
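The compose file described above might look roughly like this — service names and paths are assumptions for illustration:

```yaml
services:
  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data   # named volume: survives container removal
  app:
    build: .
    volumes:
      - ./src:/app/src                    # bind mount: live code sync with the host
      - /app/node_modules                 # anonymous volume: keep the image's node_modules

volumes:
  pgdata:
```

The anonymous volume on `/app/node_modules` prevents the bind mount from shadowing the dependencies installed during the image build.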
Docker provides several networking modes for different use cases. We have four cards here explaining each type. Bridge networks are the default — containers on the same bridge can reach each other by name, which is perfect for most applications. Host networks share the host's network stack entirely, offering maximum performance but less isolation. Overlay networks span multiple Docker hosts, which you'll need for Docker Swarm or multi-node deployments. And none networks provide complete isolation with no network access at all. Looking at the terminal commands, we create a custom bridge network called app-net, then start a PostgreSQL container and our API container on that network. The ping command at the bottom demonstrates service discovery — the API container can reach the database by its container name, and Docker resolves that to the internal IP address automatically.
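The terminal sequence from the slide, sketched out — the network and container names follow the example, and the Postgres password is a placeholder:

```shell
docker network create app-net                      # custom bridge network
docker run -d --name db --network app-net \
  -e POSTGRES_PASSWORD=secret postgres:16-alpine
docker run -d --name api --network app-net \
  -p 3000:3000 myapp:1.0
docker exec api ping -c 1 db                       # "db" resolves to the container's internal IP
```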
Multi-stage builds are crucial for creating lean production images. Looking at this Dockerfile, each FROM statement starts a completely fresh image layer. In stage one, we install all dependencies including dev dependencies. Stage two uses that first stage as a base and compiles our TypeScript application. But here's the magic — in stage three, our production stage, we start fresh again and only install production dependencies. Then we copy just the compiled output from the builder stage using COPY with the --from flag. Everything else — the TypeScript compiler, source files, dev dependencies — gets left behind and doesn't make it into the final image. As the callout explains, this approach typically results in images that are five to ten times smaller than single-stage builds. You're shipping only what you need to run, nothing more.
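The three-stage structure can be sketched like this — stage names and paths are assumed for illustration:

```dockerfile
# Stage 1: install all dependencies, dev included
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci

# Stage 2: compile TypeScript, building on stage 1
FROM deps AS builder
COPY . .
RUN npm run build

# Stage 3: production — fresh base, prod deps + compiled output only
FROM node:20-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]
```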
Docker Compose really shines when orchestrating multi-container applications. Looking at this compose file, we're defining three services. The API service builds using a specific multi-stage target, exposes port 3000, and importantly uses depends_on with a health check condition to ensure the database is fully ready before starting. We're also setting resource limits — half a CPU core and 512 megabytes of memory maximum. The database service uses a named volume for persistence and includes a health check that runs every five seconds. We're also mounting an initialization SQL script that runs on first startup. The cache service is Redis configured with memory limits and an eviction policy. Now looking at the terminal output, when we run docker compose up, it builds our API, waits for the database health check to pass, then starts all three services. Docker compose ps shows everything running with the correct port mappings.
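A sketch of the compose file being described — target name, credentials, and the Redis memory settings are illustrative assumptions:

```yaml
services:
  api:
    build:
      context: .
      target: production            # the multi-stage target mentioned on the slide
    ports:
      - "3000:3000"
    depends_on:
      db:
        condition: service_healthy  # wait for the DB health check to pass
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 512M
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql  # runs on first startup only
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  cache:
    image: redis:7-alpine
    command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru

volumes:
  pgdata:
```

Launching it with `docker compose up -d --build` produces the startup sequence described: build, health-check wait, then all three services come up.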
Security should never be an afterthought when working with containers. These four cards highlight the essentials. First and most important — never run containers as root. Create a dedicated user with limited privileges to minimize damage if a container is compromised. Second, mount the container filesystem as read-only whenever possible, using tmpfs for directories that need write access. Third, scan your images for vulnerabilities in your CI pipeline before deploying to production — catch known CVEs in base images and dependencies early. And fourth, use minimal base images like Alpine or distroless. Fewer packages means a smaller attack surface and significantly smaller image sizes. Each of these practices adds a layer of defense, and together they dramatically improve your container security posture.
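The first three practices can be applied directly at run time — a sketch, assuming a UID of 1000 and that the app only needs to write to /tmp:

```shell
# Non-root user, read-only root filesystem, in-memory scratch space,
# and all Linux capabilities dropped
docker run -d \
  --user 1000:1000 \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  -p 3000:3000 myapp:1.0

# Vulnerability scanning in CI — Docker Scout is one option, if installed
docker scout cves myapp:1.0
```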
When things go wrong, you need to know how to debug containers effectively. Looking at these terminal commands, docker top shows us the processes running inside a container, much like running ps against it. Docker exec with the interactive flag gives you a shell inside a running container so you can explore the filesystem and test commands. Docker stats displays real-time CPU, memory, and network metrics for all your containers — here we see our API using about 87 megabytes of its 512 megabyte limit. Docker inspect dumps all the metadata and configuration in JSON format. And docker cp lets you extract files from a container for local analysis. The callout here is important — even when a container has crashed and stopped, its filesystem still exists. You can still pull logs, copy files out, or even commit the stopped container to a new image for deeper investigation.
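The debugging toolkit above, as concrete commands — the container name and log path are placeholders:

```shell
docker top myapp                        # processes inside the container
docker exec -it myapp sh                # interactive shell (sh on Alpine images)
docker stats --no-stream                # one-shot CPU/memory/network snapshot
docker inspect myapp                    # full metadata and config as JSON
docker cp myapp:/app/error.log .        # copy a file out for local analysis

# Even after a crash, the stopped container's filesystem is intact:
docker logs myapp                       # logs survive the crash
docker commit myapp myapp:debug         # snapshot it as an image to dig deeper
```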
Image size directly impacts deployment speed and storage costs, so optimization matters. This table shows six key techniques. Alpine base images are five to ten times smaller than standard images. Multi-stage builds drop all your build tools from the final image. Smart layer ordering with package files copied before source code means faster rebuilds when only code changes. A .dockerignore file excludes unnecessary files from the build context. Combining RUN commands reduces layer count. And crucially, cleaning up in the same layer where you install packages prevents cache from lingering. Looking at the .dockerignore example, we're excluding node_modules, git history, and documentation. The stats at the bottom tell the story — a typical Node image is nearly a gigabyte, but with Alpine and multi-stage builds, we can get that same application down to just 52 megabytes. For compiled languages like Go using distroless images, you can hit 18 megabytes.
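Two of those techniques, sketched out. First, a .dockerignore along the lines of the slide's example:

```
# .dockerignore — keep the build context lean
node_modules
.git
*.md
docs
.env
```

And install-plus-cleanup combined into a single layer, so the package cache never becomes part of the image (shown here with Alpine's apk, as an example):

```dockerfile
RUN apk add --no-cache curl \
 && rm -rf /var/cache/apk/*
```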
Let's wrap up with a quick reference of the commands you'll use most often. Docker build creates images from Dockerfiles, and you'll typically use the tag flag to name your image. Docker run creates and starts containers, with flags for detached mode, port mapping, and volume mounts. Docker compose up starts your entire multi-container application, often with the detached and build flags. Docker exec runs commands inside running containers — the interactive terminal flags are essential for getting a shell. Docker logs shows container output, with options to follow logs in real-time or show just the last few lines. Docker stop gracefully shuts down containers, sending a termination signal before forcing shutdown. And docker system prune cleans up unused images, containers, and networks — use the all flag to be aggressive about reclaiming disk space. Keep this reference handy as you build your Docker workflow.
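As a compact version of that reference, with the flags spelled out — container names are placeholders:

```shell
docker build -t myapp:1.0 .           # build and tag an image
docker run -d -p 3000:3000 -v data:/data myapp:1.0   # detached, port map, volume
docker compose up -d --build          # start the whole stack, rebuilding as needed
docker exec -it <container> sh        # interactive shell inside a running container
docker logs -f --tail 50 <container>  # follow output from the last 50 lines
docker stop <container>               # SIGTERM, then SIGKILL after a grace period
docker system prune -a                # reclaim disk: unused images, containers, networks
```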
Hands-on implementation guides with detailed code examples, step-by-step instructions, and expanded explanations for each topic.