
Container-Only Philosophy for Software Engineers and AI Agents

Modern software development is still surprisingly dependent on the developer’s machine. Installing Node, Java, PostgreSQL, Redis, Kafka, specific SDKs, specific compilers, and then trying to align versions across teams is one of the most persistent sources of friction in engineering. The result is predictable: broken environments, “works on my machine” bugs, dependency conflicts, and onboarding processes that take hours or days.

Container Philosophy

The container-only philosophy: host stays clean, everything runs in containers.

The container-only philosophy proposes a simple alternative: do not install development dependencies on the host machine at all. Instead, run everything inside containers. The host machine becomes only a runtime for containers. The development environment becomes reproducible, disposable, and consistent.

This approach is not merely a tooling choice. It is a mindset about isolation, reproducibility, and simplicity.


Core Principles

The philosophy can be summarized in a few rules.

1. The host machine should stay clean. The only essential dependency is a container runtime (for example Docker or Podman). Everything else runs inside containers.

2. Every project defines its environment explicitly. The environment is described in version-controlled files such as Dockerfile or docker-compose.yml.

3. Development and runtime environments should match. The same container used locally should be close to what runs in production.

4. Environments must be disposable. Containers should be easy to recreate, destroy, or upgrade without impacting the host machine.

5. Version conflicts must be impossible. Different projects can use different runtime versions without interfering with each other.


Running Tools Without Installing Them

One of the simplest demonstrations of this philosophy is running common development tools without installing them locally.

Example: Running Node and npm

Instead of installing Node on the machine:

docker run -it --rm -v "$(pwd)":/app -w /app node:20 npm install

This command runs the official Node container, mounts the current directory, and executes npm install.

Running tests:

docker run -it --rm -v "$(pwd)":/app -w /app node:20 npm test

Running a development server:

docker run -it --rm -p 3000:3000 -v "$(pwd)":/app -w /app node:20 npm run dev

Node never touches the host machine. The project controls its runtime version through the container image.

This completely eliminates problems like:

  • Node version mismatches across projects
  • conflicting global npm packages
  • corrupted local environments
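Typing the full docker run incantation each time gets tedious, so one option is a small shell wrapper. This is a sketch for something like ~/.bashrc; the function name npmc is an illustrative assumption, not a standard tool:

```shell
# Illustrative wrapper (e.g. in ~/.bashrc); the name "npmc" is an assumption.
npmc() {
  # Mount the current project and run npm inside the official Node 20 image.
  docker run -it --rm -v "$(pwd)":/app -w /app node:20 npm "$@"
}
```

After sourcing it, npmc install or npmc test behave like local npm invocations, with Node never installed on the host.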

Running Databases Without Installation

Databases are another common source of setup complexity. Installing PostgreSQL locally often leads to port conflicts, version mismatches, and configuration drift.

With containers, running PostgreSQL becomes trivial.

docker run -d \
  --name postgres-dev \
  -e POSTGRES_PASSWORD=devpass \
  -p 5432:5432 \
  postgres:16

A developer now has a fully functional PostgreSQL instance.

When the container is removed, the environment disappears cleanly.

Stopping it:

docker stop postgres-dev
docker rm postgres-dev

Starting again is deterministic and fast.
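The client tooling can stay containerized as well. As a sketch, a helper that opens psql inside the running container (the name pg_shell is illustrative, and it assumes the postgres-dev container started above):

```shell
# Illustrative helper; assumes the "postgres-dev" container from above is running.
pg_shell() {
  # Reuse the psql client bundled in the container image; nothing installed locally.
  docker exec -it postgres-dev psql -U postgres "$@"
}
```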


Complex Infrastructure With Docker Compose

Real applications rarely depend on a single service. Modern systems often include multiple components:

  • Application service
  • Database
  • Cache
  • Messaging system
  • Background workers

Container orchestration tools such as Docker Compose make this trivial to manage locally.

Example:


services:

  app:
    image: openjdk:21
    working_dir: /app
    volumes:
      - .:/app
    command: ./gradlew bootRun
    ports:
      - "8080:8080"
    depends_on:
      - postgres
      - redis
      - kafka

  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: devpass
    ports:
      - "5432:5432"

  redis:
    image: redis:7
    ports:
      - "6379:6379"

  kafka:
    image: bitnami/kafka:3.7
    environment:
      ALLOW_PLAINTEXT_LISTENER: "yes"
    ports:
      - "9092:9092"

Running the entire infrastructure becomes a single command:

docker compose up

This spins up:

  • PostgreSQL
  • Redis
  • Kafka
  • The application

A developer can destroy everything just as easily:

docker compose down

No configuration files remain on the host system. No services remain running in the background.
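One caveat worth knowing: named volumes declared in a compose file survive docker compose down, so data can persist between runs. A sketch of a full reset as a shell helper (the name compose_reset is illustrative):

```shell
# Illustrative helper: tear everything down, including named volumes and their data.
compose_reset() {
  docker compose down -v   # -v also removes the project's named volumes
}
```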


Advantages of the Container-Only Approach

Reproducibility

Every developer runs the same environment.

A project checked out from Git contains everything required to start development.

Fast Onboarding

New developers do not need to install multiple tools.

Typical onboarding becomes:

git clone repo
docker compose up

Version Isolation

One project may use:

  • Node 16
  • PostgreSQL 12

Another may use:

  • Node 20
  • PostgreSQL 16

Containers isolate these environments completely.

Easy Upgrades

Upgrading infrastructure becomes changing one line:

postgres:16 → postgres:17

Then recreating containers.
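The full cycle can be sketched as a two-step helper, assuming a compose project in the current directory (the name upgrade_stack is illustrative):

```shell
# Illustrative helper: fetch newly pinned image tags and recreate changed containers.
upgrade_stack() {
  docker compose pull    # pull the images referenced by the updated compose file
  docker compose up -d   # recreate only the containers whose images changed
}
```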

Clean System

The host machine stays free from:

  • SDKs
  • runtime conflicts
  • development databases
  • build tools

Philosophy for AI Agents

AI agents assisting software engineers should follow the same philosophy.

An AI agent generating development instructions should prefer containerized workflows instead of system installations.

For example:

Instead of suggesting:

brew install node
apt install postgresql

The agent should generate container commands.

Example guidance for AI agents:

  1. Prefer containerized runtimes for development tools.
  2. Generate Dockerfile or docker-compose.yml when describing project setups.
  3. Avoid instructions that modify the host machine.
  4. Assume environments are ephemeral and reproducible.
  5. Ensure development and deployment environments remain close.
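Applying rule 2, an agent asked to set up a Node web project might emit a minimal compose file rather than install instructions. A sketch, where the service name, image tag, and port are illustrative assumptions:

```yaml
# Illustrative compose file an agent might generate; names and ports are assumptions.
services:
  app:
    image: node:20
    working_dir: /app
    volumes:
      - .:/app          # mount the project source into the container
    command: npm run dev
    ports:
      - "3000:3000"     # expose the dev server on the host
```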

For instance, when a user asks how to run Redis locally, the AI should suggest:

docker run --rm -p 6379:6379 redis:7

Instead of guiding the user through OS-specific installation steps.

This approach dramatically reduces environment-related friction and aligns with modern DevOps practices.


Limitations and Trade-offs

The container-only philosophy is powerful, but it also introduces some considerations.

Performance overhead can exist for file mounts in certain environments, especially on macOS.

Container orchestration complexity may increase as systems grow larger.

Debugging inside containers sometimes requires additional tooling.

However, in most modern development environments these trade-offs are small compared to the benefits of reproducibility and simplicity.


Conclusion

The container-only philosophy reframes how software engineers approach local environments. Instead of treating the developer machine as a mutable workstation full of installed dependencies, the machine becomes simply a container host.

Projects define their environments explicitly. Developers run tools without installing them. Infrastructure becomes reproducible and disposable.

For both engineers and AI agents, the guiding principle is simple:

If a tool is needed, run it in a container. Do not install it.

This single rule removes an entire class of development problems and creates environments that are consistent, portable, and easy to maintain.

This post is licensed under CC BY 4.0 by the author.