Docker for Development: My First Month
We’ve been using Vagrant for development environments for the past year. It works, but it’s slow - VMs take minutes to start, and they consume 2GB+ of RAM each.
I spent the last month migrating our team to Docker. Here’s what I learned about using Docker for local development.
Why Docker Over Vagrant?
Our Vagrant setup had problems:
- Slow startup - 2-3 minutes to boot a VM
- Heavy resource usage - Each VM uses 2GB RAM minimum
- Inconsistent state - Developers’ VMs drift from production
- Slow file sync - NFS shares are laggy on Mac
Docker promises to solve these:
- Containers start in seconds
- Much lighter weight (share host kernel)
- Immutable images ensure consistency
- Native file mounting (on Linux)
Installing Docker
I’m on Mac, so I installed Docker Toolbox (Docker for Mac doesn’t exist yet):
# Download from docker.com/toolbox
# Installs VirtualBox, Docker, Docker Compose
# Create default machine
docker-machine create --driver virtualbox default
# Set environment
eval $(docker-machine env default)
# Test
docker run hello-world
On Mac, Docker still uses a VM (via VirtualBox), but it’s managed automatically. Much better than Vagrant.
First Dockerfile
Here’s a Dockerfile for our Python Flask app:
FROM python:2.7
# Set working directory
WORKDIR /app
# Install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy application
COPY . .
# Expose port
EXPOSE 5000
# Run application
CMD ["python", "app.py"]
Build and run:
docker build -t myapp .
docker run -p 5000:5000 myapp
The app is now accessible at http://$(docker-machine ip default):5000.
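For reference, app.py is just a standard Flask entry point. Here's a minimal sketch (the real file has routes, models, and config; treat this as illustrative):
# app.py - minimal sketch of the entry point (the real file has more)
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from Docker!'

if __name__ == '__main__':
    # Bind to 0.0.0.0 so the app is reachable from outside the container
    app.run(host='0.0.0.0', port=5000)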
Development Workflow
For development, I mount the source code as a volume:
docker run -p 5000:5000 -v $(pwd):/app myapp
Now code changes are reflected immediately without rebuilding the image.
But there’s a problem: on Mac, file changes don’t trigger Flask’s auto-reload. This is because Docker Toolbox uses VirtualBox shared folders, which don’t send file change notifications.
Workaround - use polling:
# app.py
if __name__ == '__main__':
    # Werkzeug's stat reloader polls file modification times instead of
    # relying on filesystem events, which VirtualBox shared folders don't send
    app.run(host='0.0.0.0', port=5000, debug=True, use_reloader=True,
            reloader_type='stat')
Not ideal, but it works.
Docker Compose for Multi-Container Apps
Our app needs PostgreSQL and Redis. Docker Compose makes this easy:
# docker-compose.yml
version: '2'

services:
  db:
    image: postgres:9.5
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: secret
    volumes:
      - postgres-data:/var/lib/postgresql/data

  redis:
    image: redis:3.2-alpine

  web:
    build: .
    command: python app.py
    volumes:
      - .:/app
    ports:
      - "5000:5000"
    depends_on:
      - db
      - redis
    environment:
      DATABASE_URL: postgresql://myapp:secret@db/myapp
      REDIS_URL: redis://redis:6379

volumes:
  postgres-data:
Start everything:
docker-compose up
This starts PostgreSQL, Redis, and the web app. They can communicate via service names (db, redis).
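For illustration, here's roughly how the app picks up those URLs. This is a sketch assuming SQLAlchemy and redis-py (plus psycopg2 for Postgres) are in requirements.txt, not our exact config:
# config.py - sketch; assumes sqlalchemy, psycopg2, and redis are installed
import os

import redis
from sqlalchemy import create_engine

# Inside the Compose network, the hostnames "db" and "redis" resolve to the
# corresponding service containers
DATABASE_URL = os.environ.get('DATABASE_URL', 'postgresql://myapp:secret@db/myapp')
REDIS_URL = os.environ.get('REDIS_URL', 'redis://redis:6379')

engine = create_engine(DATABASE_URL)
redis_client = redis.StrictRedis.from_url(REDIS_URL)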
Database Migrations
For database setup, I added a migration service:
# docker-compose.yml
services:
  # ... existing services ...

  migrate:
    build: .
    command: python manage.py db upgrade
    depends_on:
      - db
    environment:
      DATABASE_URL: postgresql://myapp:secret@db/myapp
Run migrations:
docker-compose run migrate
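The manage.py referenced above uses Flask-Script and Flask-Migrate. If you're setting this up from scratch, it looks roughly like this (a sketch, assuming the Flask app and SQLAlchemy db object live in app.py):
# manage.py - sketch; assumes Flask-Script and Flask-Migrate are installed
from flask_script import Manager
from flask_migrate import Migrate, MigrateCommand

from app import app, db  # hypothetical layout: app and db defined in app.py

migrate = Migrate(app, db)
manager = Manager(app)
manager.add_command('db', MigrateCommand)

if __name__ == '__main__':
    manager.run()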
Handling Dependencies
When I add a new Python package, I need to rebuild the image:
# Add package to requirements.txt
echo "requests==2.10.0" >> requirements.txt
# Rebuild
docker-compose build web
# Restart
docker-compose up -d web
To speed this up, I cache the dependencies layer:
FROM python:2.7
WORKDIR /app
# Copy only requirements first (cached layer)
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy application (changes frequently)
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
Now rebuilds are fast when only the application code changes, because the dependency layer is reused from the build cache.
Running Tests
I added a test service:
# docker-compose.yml
services:
  test:
    build: .
    command: python -m pytest tests/
    volumes:
      - .:/app
    depends_on:
      - db
      - redis
    environment:
      DATABASE_URL: postgresql://myapp:secret@db/myapp_test
      REDIS_URL: redis://redis:6379
Run tests:
docker-compose run test
This ensures tests run in the same containerized environment, with the same dependencies, as the rest of the stack.
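Even a trivial test proves the wiring works. A minimal example (assuming the Flask app object is importable from app.py):
# tests/test_app.py - minimal sketch; assumes "from app import app" works
from app import app

def test_index_responds():
    client = app.test_client()
    response = client.get('/')
    assert response.status_code == 200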
Debugging
Debugging inside a container is trickier than debugging directly on the host. I use pdb:
# In code
import pdb; pdb.set_trace()
Then run with stdin attached:
docker-compose run --service-ports web
docker-compose run attaches stdin and a TTY by default, so the pdb prompt works; the --service-ports flag additionally publishes the service's ports, which run otherwise skips.
Production-Like Environment
For staging, I use a production-like Dockerfile:
# Dockerfile.prod
FROM python:2.7
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Use Gunicorn for production
RUN pip install gunicorn
EXPOSE 5000
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:5000", "app:app"]
And a separate compose file:
# docker-compose.prod.yml
version: '2'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    ports:
      - "5000:5000"
    depends_on:
      - db
      - redis
    environment:
      DATABASE_URL: postgresql://myapp:secret@db/myapp
      REDIS_URL: redis://redis:6379
Run staging:
docker-compose -f docker-compose.prod.yml up
Cleaning Up
Docker images and containers accumulate quickly:
# Remove stopped containers
docker rm $(docker ps -aq)
# Remove unused images
docker rmi $(docker images -q -f dangling=true)
# Remove volumes
docker volume rm $(docker volume ls -q -f dangling=true)
# Or use docker-compose
docker-compose down -v # Remove containers and volumes
I run these commands weekly to free up disk space.
Team Adoption
Getting the team to switch from Vagrant was challenging:
Resistance:
- “Vagrant works fine for me”
- “I don’t want to learn new tools”
- “Docker is just hype”
What helped:
- Show, don’t tell - Demo the faster startup
- Document everything - Write clear setup instructions
- Pair with teammates - Help them through first setup
- Fix issues quickly - Be responsive to problems
After two weeks, most of the team was on board. The speed improvement sold them.
Challenges
1. File permissions
On Linux, files the container creates in the mounted volume show up on the host owned by root. Workaround:
# Create user with same UID as host
ARG USER_ID=1000
RUN useradd -m -u ${USER_ID} appuser
USER appuser
Build with:
docker build --build-arg USER_ID=$(id -u) -t myapp .
2. Networking on Mac
Docker Toolbox runs Docker inside a VM, so localhost doesn't reach published ports. You have to use the VM's IP:
docker-machine ip default
I created an alias:
# ~/.bashrc
alias docker-ip='docker-machine ip default'
3. Slow file sync on Mac
VirtualBox shared folders are slow. No good solution yet. Docker for Mac (coming soon) promises to fix this.
4. Database persistence
The first time I ran docker-compose down, I lost all the database data! Now I always use named volumes:
volumes:
  postgres-data: # Named volume persists
Comparison with Vagrant
After one month:
| Feature | Vagrant | Docker |
|---|---|---|
| Startup time | 2-3 min | 5-10 sec |
| RAM usage | 2GB+ per VM | 100MB+ per container |
| Disk usage | 5GB+ per VM | 500MB+ per image |
| File sync | Slow (NFS) | Fast (native on Linux) |
| Learning curve | Easy | Moderate |
Docker is clearly faster and lighter. But Vagrant is simpler for beginners.
What I’d Do Differently
- Wait for Docker for Mac - Docker Toolbox is a stopgap
- Use Alpine images - Smaller and faster
- Set up CI earlier - Docker makes CI/CD much easier
- Document gotchas - Save teammates from same issues
Future Plans
Next steps:
- CI/CD with Docker - Build images in Jenkins
- Docker in production - Deploy containers to EC2
- Docker Swarm - Orchestration for multiple hosts
- Private registry - Host our own images
Conclusion
Docker has transformed our development workflow. Containers start in seconds, use minimal resources, and ensure consistency across environments.
The learning curve is real, but worth it. After one month, I can’t imagine going back to Vagrant.
Key takeaways:
- Use Docker Compose for multi-container apps
- Mount code as volumes for development
- Use separate Dockerfiles for dev and prod
- Cache dependency layers for faster builds
- Document everything for your team
If you’re still using Vagrant, give Docker a try. The speed improvement alone is worth it.
Docker is the future of development environments. I’m excited to see where it goes.