We’ve been building microservices with Go for about a year now. Here’s what we learned.

Why Go for Microservices?

  • Fast compilation: Builds in seconds, not minutes
  • Small binaries: 10-20MB vs 100MB+ for Java
  • Low memory footprint: 20-50MB vs 500MB+ for JVM
  • Built-in concurrency: Goroutines make handling concurrent requests easy
  • Static typing: Catches errors at compile time

Our Stack

  • Framework: Standard library + gorilla/mux
  • Database: PostgreSQL with lib/pq
  • Caching: Redis with go-redis
  • Messaging: RabbitMQ with amqp
  • Logging: logrus
  • Metrics: Prometheus client

Service Structure

We settled on this structure:

myservice/
├── cmd/
│   └── server/
│       └── main.go
├── internal/
│   ├── handler/
│   ├── service/
│   ├── repository/
│   └── model/
├── pkg/
│   └── client/
├── Dockerfile
└── go.mod

  • cmd/: Entry points
  • internal/: Private code
  • pkg/: Public libraries
  • Dockerfile: For containerization
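
To make the layering concrete, here is roughly how a service's entry point wires everything together. This is a sketch: the constructor names (NewUserRepository, NewUserService, NewUserHandler), the config package, and the RegisterRoutes helper are illustrative, not our exact code.

// cmd/server/main.go (sketch): handlers depend on services, services on repositories.
func main() {
    cfg := config.LoadConfig()

    db, err := sql.Open("postgres", cfg.DatabaseURL)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    repo := repository.NewUserRepository(db) // internal/repository
    svc := service.NewUserService(repo)      // internal/service
    h := handler.NewUserHandler(svc)         // internal/handler

    r := mux.NewRouter()
    h.RegisterRoutes(r)

    // ... then build the *http.Server with timeouts, as shown in the next section.
}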

HTTP Server Setup

We use gorilla/mux for routing:

func main() {
    r := mux.NewRouter()
    
    // Health check
    r.HandleFunc("/health", healthHandler).Methods("GET")
    
    // API routes
    api := r.PathPrefix("/api/v1").Subrouter()
    api.HandleFunc("/users", listUsersHandler).Methods("GET")
    api.HandleFunc("/users/{id}", getUserHandler).Methods("GET")
    api.HandleFunc("/users", createUserHandler).Methods("POST")
    
    // Middleware
    r.Use(loggingMiddleware)
    r.Use(recoveryMiddleware)
    
    srv := &http.Server{
        Addr:         ":8080",
        Handler:      r,
        ReadTimeout:  15 * time.Second,
        WriteTimeout: 15 * time.Second,
        IdleTimeout:  60 * time.Second,
    }
    
    log.Fatal(srv.ListenAndServe())
}
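
loggingMiddleware and recoveryMiddleware are referenced above but not shown; a minimal sketch of each (assuming logrus for the structured fields, as in the Logging section below) could look like this:

// loggingMiddleware logs the method, path, and duration of every request.
func loggingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        next.ServeHTTP(w, r)
        logrus.WithFields(logrus.Fields{
            "method":   r.Method,
            "path":     r.URL.Path,
            "duration": time.Since(start).String(),
        }).Info("request handled")
    })
}

// recoveryMiddleware turns a panic in a handler into a 500 response
// instead of crashing the whole process.
func recoveryMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        defer func() {
            if rec := recover(); rec != nil {
                logrus.WithField("panic", rec).Error("handler panicked")
                http.Error(w, "internal server error", http.StatusInternalServerError)
            }
        }()
        next.ServeHTTP(w, r)
    })
}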

Graceful Shutdown

Always handle shutdown gracefully:

func main() {
    srv := &http.Server{Addr: ":8080", Handler: handler}
    
    go func() {
        if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            log.Fatalf("listen: %s\n", err)
        }
    }()
    
    // Wait for interrupt signal
    quit := make(chan os.Signal, 1)
    signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
    <-quit
    
    log.Println("Shutting down server...")
    
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()
    
    if err := srv.Shutdown(ctx); err != nil {
        log.Fatal("Server forced to shutdown:", err)
    }
    
    log.Println("Server exiting")
}

This ensures in-flight requests complete before shutdown.

Error Handling

We use a custom error type:

type AppError struct {
    Code    int
    Message string
    Err     error
}

func (e *AppError) Error() string {
    if e.Err != nil {
        return fmt.Sprintf("%s: %v", e.Message, e.Err)
    }
    return e.Message
}

// Usage
func getUser(id string) (*User, error) {
    user, err := db.FindUser(id)
    if errors.Is(err, sql.ErrNoRows) {
        return nil, &AppError{
            Code:    404,
            Message: "User not found",
            Err:     err,
        }
    }
    if err != nil {
        return nil, &AppError{
            Code:    500,
            Message: "Database error",
            Err:     err,
        }
    }
    return user, nil
}
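
Handlers then translate an AppError into an HTTP response. Here is a sketch of a helper that does this with errors.As (our real handlers may differ):

// writeError maps an *AppError to its HTTP status code and a JSON body;
// any other error becomes a generic 500.
func writeError(w http.ResponseWriter, err error) {
    var appErr *AppError
    if errors.As(err, &appErr) {
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(appErr.Code)
        json.NewEncoder(w).Encode(map[string]string{"error": appErr.Message})
        return
    }
    http.Error(w, "internal server error", http.StatusInternalServerError)
}

Giving AppError an Unwrap() error method that returns e.Err also lets callers use errors.Is against the underlying error (e.g. sql.ErrNoRows).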

Database Access

We use the repository pattern:

type UserRepository interface {
    FindByID(id string) (*User, error)
    Create(user *User) error
    Update(user *User) error
    Delete(id string) error
}

type postgresUserRepository struct {
    db *sql.DB
}

func (r *postgresUserRepository) FindByID(id string) (*User, error) {
    var user User
    err := r.db.QueryRow(
        "SELECT id, name, email FROM users WHERE id = $1",
        id,
    ).Scan(&user.ID, &user.Name, &user.Email)
    
    if err != nil {
        return nil, err
    }
    return &user, nil
}

This makes testing easier - we can mock the repository.
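
For example, a hand-rolled in-memory mock of UserRepository might look like the sketch below (the createCallCount field matches the assertion in the Testing section; this isn't our exact mock):

// mockUserRepository is an in-memory UserRepository for unit tests.
type mockUserRepository struct {
    users           map[string]*User
    createCallCount int
}

func (m *mockUserRepository) FindByID(id string) (*User, error) {
    if u, ok := m.users[id]; ok {
        return u, nil
    }
    return nil, sql.ErrNoRows
}

func (m *mockUserRepository) Create(user *User) error {
    m.createCallCount++
    if m.users == nil {
        m.users = make(map[string]*User)
    }
    m.users[user.ID] = user
    return nil
}

func (m *mockUserRepository) Update(user *User) error { return nil }
func (m *mockUserRepository) Delete(id string) error  { return nil }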

Configuration

We use environment variables:

type Config struct {
    Port        string
    DatabaseURL string
    RedisURL    string
    LogLevel    string
}

func LoadConfig() *Config {
    return &Config{
        Port:        getEnv("PORT", "8080"),
        DatabaseURL: getEnv("DATABASE_URL", ""),
        RedisURL:    getEnv("REDIS_URL", ""),
        LogLevel:    getEnv("LOG_LEVEL", "info"),
    }
}

func getEnv(key, defaultValue string) string {
    value := os.Getenv(key)
    if value == "" {
        return defaultValue
    }
    return value
}

Logging

We use structured logging with logrus:

log := logrus.New()
log.SetFormatter(&logrus.JSONFormatter{})
log.SetLevel(logrus.InfoLevel)

log.WithFields(logrus.Fields{
    "user_id": userID,
    "action":  "create_order",
}).Info("Order created successfully")

This makes logs easy to parse and search.

Metrics

We expose Prometheus metrics:

var (
    httpRequestsTotal = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "http_requests_total",
            Help: "Total number of HTTP requests",
        },
        []string{"method", "endpoint", "status"},
    )
    
    httpRequestDuration = prometheus.NewHistogramVec(
        prometheus.HistogramOpts{
            Name: "http_request_duration_seconds",
            Help: "HTTP request duration in seconds",
        },
        []string{"method", "endpoint"},
    )
)

func init() {
    prometheus.MustRegister(httpRequestsTotal)
    prometheus.MustRegister(httpRequestDuration)
}

// Middleware to record metrics
func metricsMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        
        // Wrap response writer to capture status code
        wrapped := &responseWriter{ResponseWriter: w, statusCode: http.StatusOK}
        
        next.ServeHTTP(wrapped, r)
        
        duration := time.Since(start).Seconds()
        
        // Label by the matched route template (e.g. /api/v1/users/{id}), not the
        // raw path, so label cardinality stays bounded.
        endpoint := r.URL.Path
        if route := mux.CurrentRoute(r); route != nil {
            if tmpl, err := route.GetPathTemplate(); err == nil {
                endpoint = tmpl
            }
        }
        
        httpRequestsTotal.WithLabelValues(
            r.Method,
            endpoint,
            strconv.Itoa(wrapped.statusCode),
        ).Inc()
        
        httpRequestDuration.WithLabelValues(
            r.Method,
            endpoint,
        ).Observe(duration)
    })
}
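
The responseWriter wrapper just records the status code written by the handler, and the metrics are scraped from a /metrics route served by promhttp. A sketch:

// responseWriter captures the status code so the middleware can label metrics with it.
type responseWriter struct {
    http.ResponseWriter
    statusCode int
}

func (rw *responseWriter) WriteHeader(code int) {
    rw.statusCode = code
    rw.ResponseWriter.WriteHeader(code)
}

// In the router setup, expose the scrape endpoint
// (promhttp is github.com/prometheus/client_golang/prometheus/promhttp):
r.Handle("/metrics", promhttp.Handler())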

Testing

We write tests at multiple levels:

// Unit test
func TestUserService_Create(t *testing.T) {
    mockRepo := &mockUserRepository{}
    service := NewUserService(mockRepo)
    
    user := &User{Name: "John", Email: "john@example.com"}
    err := service.Create(user)
    
    assert.NoError(t, err)
    assert.Equal(t, 1, mockRepo.createCallCount)
}

// Integration test
func TestUserAPI_Create(t *testing.T) {
    // Set up test database
    db := setupTestDB(t)
    defer db.Close()
    
    // Create test server
    server := setupTestServer(db)
    defer server.Close()
    
    // Make request
    resp, err := http.Post(
        server.URL+"/api/v1/users",
        "application/json",
        strings.NewReader(`{"name":"John","email":"john@example.com"}`),
    )
    
    assert.NoError(t, err)
    assert.Equal(t, 201, resp.StatusCode)
}
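
setupTestDB and setupTestServer are our own helpers. For reference, a minimal setupTestServer can wrap the real router in httptest (NewRouter here is a hypothetical constructor that wires the handlers to the test database):

// setupTestServer runs the real HTTP stack against the test database.
func setupTestServer(db *sql.DB) *httptest.Server {
    return httptest.NewServer(NewRouter(db)) // NewRouter is illustrative
}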

Deployment

We deploy as Docker containers:

FROM golang:1.21-alpine AS builder
WORKDIR /app
# Cache module downloads separately from source changes
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o server ./cmd/server

FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/server .
EXPOSE 8080
CMD ["./server"]

Multi-stage builds keep images small (< 20MB).

Lessons Learned

  1. Keep services small: One service, one responsibility
  2. Use interfaces: Makes testing easier
  3. Handle errors explicitly: Don’t ignore errors
  4. Always set timeouts: For HTTP clients, database connections, etc. (see the sketch after this list)
  5. Monitor everything: Metrics, logs, traces
  6. Use context: For cancellation and timeouts
  7. Graceful shutdown: Always
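
For points 4 and 6, the pattern we mean looks roughly like this (the values are examples, not our production settings, and FindByIDCtx is an illustrative variant of the repository method):

// An HTTP client without a timeout can hang forever; always set one.
var httpClient = &http.Client{Timeout: 5 * time.Second}

// Pass a context with a deadline through to the database so slow
// queries get cancelled instead of piling up.
func (r *postgresUserRepository) FindByIDCtx(ctx context.Context, id string) (*User, error) {
    ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
    defer cancel()

    var user User
    err := r.db.QueryRowContext(ctx,
        "SELECT id, name, email FROM users WHERE id = $1", id,
    ).Scan(&user.ID, &user.Name, &user.Email)
    if err != nil {
        return nil, err
    }
    return &user, nil
}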

Performance

Our Go services handle 10,000 requests/second on a single m4.large instance. Memory usage stays under 50MB. CPU usage is around 30%.

Compare that to our Java services: 1,000 requests/second, 500MB memory, 60% CPU.

Would We Choose Go Again?

Absolutely. The productivity, performance, and simplicity make it perfect for microservices.

Questions? Ask away!