Everything comes together here. The goal: an environment so well-defined you can hand it to a colleague or recreate it in 5 minutes.
Containers with Docker
Why Docker? Your simulation runs identically everywhere. Dependencies are frozen. You share your setup via a Dockerfile, not a wiki page of instructions.
Core Commands
docker run hello-world                                                # verify the installation works
docker run -it ubuntu:22.04 bash                                      # interactive shell in a throwaway container
docker run -it -v $(pwd):/workspace -w /workspace python:3.12 bash    # mount the current directory into the container
docker ps                                                             # list running containers
docker ps -a                                                          # list all containers, including stopped ones
docker stop <id>                                                      # stop a running container
docker rm <id>                                                        # remove a stopped container
docker images                                                         # list images stored locally
Dockerfile
A Dockerfile defines your environment as code:
FROM python:3.12-slim
WORKDIR /app
RUN apt-get update && apt-get install -y gcc libhdf5-dev && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "simulate.py"]
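Because the Dockerfile above ends with COPY . ., everything in the project directory becomes part of the build context. A .dockerignore file keeps the context small and stops local junk from ending up in the image; the entries below are typical examples, so adjust them to your own project:

```
# .dockerignore: paths excluded from the Docker build context
.git
venv/
__pycache__/
results/
*.log
```

Place it next to the Dockerfile; Docker reads it automatically on every build.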
Build and run:
docker build -t my-simulation .
docker run my-simulation
docker run -v $(pwd)/results:/app/results my-simulation
docker compose
For multi-container setups, use docker compose:
version: '3.8'
services:
  simulation:
    build: .
    volumes:
      - ./data:/app/data
      - ./results:/app/results
  database:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: mysecret
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
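The compose file above hard-codes POSTGRES_PASSWORD, which is fine for local experiments but should not be committed with a real credential. Compose can substitute values from an .env file sitting next to the compose file; a sketch, reusing the variable name from the example above:

```
# .env (keep this file out of version control)
POSTGRES_PASSWORD=mysecret
```

Then reference it in the compose file as `POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}`, and Compose fills the value in at startup.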
docker compose up        # start all services in the foreground
docker compose up -d     # start in the background (detached)
docker compose down      # stop and remove the containers and networks
docker compose logs -f   # follow logs from all services
Key takeaway: A Dockerfile is the gold standard for reproducible environments. If it's not in a Dockerfile, it's not reproducible.
Build Tools: make and cmake
make (Makefile)
A Makefile gives your project a standard interface — anyone can run make build without reading your setup docs:
# Note: recipe lines in a Makefile must be indented with a real tab, not spaces.
build:
	mkdir -p build
	cd build && cmake .. && make

run: build
	./build/simulate --config config.ini

clean:
	rm -rf build/

setup:
	python3 -m venv venv
	venv/bin/pip install -r requirements.txt

.PHONY: build run clean setup
Tip: Even for Python projects, a Makefile with make test, make lint, and make run is clean, executable documentation of how to work with your project.
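A sketch of such a Makefile for a Python project. It assumes the venv created by the setup target above, and uses pytest and ruff purely as example tools; swap in whatever your project actually uses:

```makefile
.PHONY: test lint run

test:
	venv/bin/python -m pytest

lint:
	venv/bin/python -m ruff check .

run:
	venv/bin/python simulate.py
```

Now `make test` works the same for every contributor, regardless of how their shell is configured.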
cmake
For C/C++ projects, cmake generates platform-specific build files:
mkdir build && cd build   # out-of-source build: keep generated files out of the source tree
cmake ..                  # generate platform-specific build files
make -j4                  # compile with 4 parallel jobs
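The commands above assume a CMakeLists.txt at the project root. A minimal one for a hypothetical C simulation; the project name and source path are placeholders:

```cmake
cmake_minimum_required(VERSION 3.16)
project(simulate C)

# One executable built from a single source file
add_executable(simulate src/main.c)

# Link the math library, often needed by numerical code
target_link_libraries(simulate PRIVATE m)
```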
System Monitoring
Before, during, and after running jobs, keep an eye on your system:
df -h                    # free disk space on all mounted filesystems
du -sh ./results/        # total size of the results directory
free -h                  # memory and swap usage
htop                     # interactive process viewer
vmstat 1                 # CPU, memory, and I/O statistics, refreshed every second
tail -f simulation.log   # follow a log file as it grows
journalctl -f            # follow the system journal
Warning: Always check df -h before running jobs that produce large output. Running out of disk mid-simulation is a painful way to lose hours of compute.
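That check can be scripted as a pre-flight step in your job script. A minimal sketch wrapping df in a POSIX-shell function; the function name and the 10 GB threshold are illustrative, and `df --output` assumes GNU coreutils (standard on Linux):

```shell
# check_disk DIR REQUIRED_KB
# Succeeds only if DIR's filesystem has at least REQUIRED_KB kilobytes free.
check_disk() {
    # 'df --output=avail -k' prints a header line, then the free space in KB
    avail_kb=$(df --output=avail -k "$1" | tail -n 1 | tr -d ' ')
    [ "$avail_kb" -ge "$2" ]
}

# Example: require ~10 GB free in the current directory before a big run
if check_disk . $((10 * 1024 * 1024)); then
    echo "enough disk space, starting simulation"
else
    echo "low disk space, aborting" >&2
fi
```

Call the function at the top of your job script and exit on failure, so a run never starts on a nearly full disk.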
VS Code Dev Containers
Dev Containers let your entire team develop inside identical Docker environments. Create .devcontainer/devcontainer.json:
{
  "name": "Simulation Dev",
  "build": { "dockerfile": "../Dockerfile" },
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  },
  "postCreateCommand": "pip install -r requirements.txt"
}
When a teammate opens the project in VS Code, they get the exact same environment automatically.
AI in the Terminal
AI tools are becoming part of the engineering workflow. Use them wisely:
gh copilot suggest "how do I rsync only files newer than 7 days"
gh copilot explain "find . -name '*.log' -mtime +30 -delete"
cat simulation.log | llm "what does this error mean"
Tip: The real skill with AI tools is understanding the environment well enough to know whether AI output is correct and safe. Everything in this course builds that understanding.
Try It Yourself
Containerize a Python script from scratch.
- Create a file called simulate.py:
  import platform
  print(f"Running on: {platform.system()}")
  print("Simulation complete.")
- Create requirements.txt (it can be empty for now).
- Write a Dockerfile based on the example above.
- Build: docker build -t my-simulation .
- Run: docker run my-simulation
- Observe that it says "Running on: Linux", even on Mac or Windows.
Hint
Install Docker Desktop from docker.com if you haven't already. It works on Mac, Windows, and Linux.
Quick Quiz
What is the key advantage of a Docker container vs. running directly on the server?
Answer
A container packages your code together with all its dependencies, so it runs identically regardless of the host system's software configuration.