I’ve been writing Go for about 8 months now, and goroutines are still my favorite feature. But the concurrency patterns took me a while to grok. Here are the ones I actually use in production.

Pattern 1: Worker Pool

This is my go-to for processing a queue of tasks with limited concurrency:

func workerPool(tasks <-chan Task, results chan<- Result, numWorkers int) {
    var wg sync.WaitGroup
    
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for task := range tasks {
                result := processTask(task)
                results <- result
            }
        }()
    }
    
    wg.Wait()
    close(results)
}

We use this for processing uploaded files. Instead of spawning a goroutine for each file (which could be thousands), we limit it to 10 workers. Much more predictable resource usage.

Pattern 2: Fan-Out, Fan-In

When you need to distribute work across multiple goroutines and collect the results:

func fanOut(input <-chan int, numWorkers int) []<-chan int {
    channels := make([]<-chan int, numWorkers)
    
    for i := 0; i < numWorkers; i++ {
        ch := make(chan int)
        channels[i] = ch
        
        go func(out chan<- int) {
            defer close(out)
            for n := range input {
                out <- n * 2  // Do some work
            }
        }(ch)
    }
    
    return channels
}

func fanIn(channels ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup
    
    for _, ch := range channels {
        wg.Add(1)
        go func(c <-chan int) {
            defer wg.Done()
            for n := range c {
                out <- n
            }
        }(ch)
    }
    
    go func() {
        wg.Wait()
        close(out)
    }()
    
    return out
}

This pattern is great for parallel processing. We use it for image resizing - fan out to multiple workers, each resizes images, then fan in to collect results.

Pattern 3: Timeout Pattern

Don’t let goroutines hang forever:

func doWorkWithTimeout(timeout time.Duration) (Result, error) {
    resultCh := make(chan Result, 1) // buffered, so the worker can still send and exit if we time out
    
    go func() {
        result := doExpensiveWork()
        resultCh <- result
    }()
    
    select {
    case result := <-resultCh:
        return result, nil
    case <-time.After(timeout):
        return Result{}, errors.New("timeout")
    }
}

We had a production issue where an external API call hung indefinitely. This pattern saved us. Now all our external calls have timeouts.

Pattern 4: Pipeline

Chain operations together:

func generator(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for _, n := range nums {
            out <- n
        }
    }()
    return out
}

func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            out <- n * n
        }
    }()
    return out
}

func main() {
    // Pipeline: generate -> square -> print
    for n := range square(generator(1, 2, 3, 4)) {
        fmt.Println(n)
    }
}

Clean and composable. Each stage is independent and testable.

Common Mistakes I Made

1. Forgetting to Close Channels

// Bad - goroutine leak: ch is never closed, so a receiver ranging over it never exits
go func() {
    for {
        ch <- getValue()  // If nobody reads, this blocks forever
    }
}()

// Good
go func() {
    defer close(ch)
    for i := 0; i < 10; i++ {
        ch <- getValue()
    }
}()

2. Not Using Buffered Channels When Needed

// This deadlocks - an unbuffered send blocks until a receiver is ready
ch := make(chan int)
ch <- 1  // Blocks forever if nobody's reading

// Better
ch := make(chan int, 1)
ch <- 1  // Doesn't block

3. Sharing Memory Instead of Communicating

// Bad - race condition
var counter int
for i := 0; i < 10; i++ {
    go func() {
        counter++  // Race!
    }()
}

// Good - one goroutine owns the count, others communicate with it
ch := make(chan int)
done := make(chan int)
go func() {
    count := 0
    for range ch {
        count++
    }
    done <- count  // Report the total once ch is closed
}()
for i := 0; i < 10; i++ {
    ch <- 1
}
close(ch)
total := <-done  // total == 10

Performance Notes

Goroutines are cheap (they start with a 2KB stack and grow as needed), but they’re not free. I’ve seen people spawn millions of goroutines and wonder why their app crashes. Use worker pools to limit concurrency.

Also, channel operations have overhead. For simple cases, a mutex might be faster:

// For a simple shared counter, a mutex has less overhead than a channel
var mu sync.Mutex
var counter int

mu.Lock()
counter++
mu.Unlock()

Debugging Concurrent Code

Use the race detector:

go run -race main.go
go test -race ./...

It’s saved me countless times. Catches race conditions that are nearly impossible to find otherwise.

What I’m Still Learning

  • Context package for cancellation (coming in Go 1.7)
  • Advanced select patterns
  • Lock-free data structures

Concurrency in Go is powerful but takes practice. Start with simple patterns and build up. Don’t try to be clever - clear code is better than clever code.

Anyone have other patterns they use? I’m always looking to learn more.