2025-06-14

Go's Concurrency and Parallelism Model and Goroutine Scheduling

Understand Go concurrency vs. parallelism. Covers goroutine scheduling, GOMAXPROCS, the GMP model, OS threads vs. goroutines, and how preemption works with sequence diagram examples.


Overview

The Go language has strong support for concurrency through lightweight goroutines and its runtime mechanisms. Since Go 1.5, GOMAXPROCS defaults to the number of available CPU cores, so parallel execution is enabled out of the box and can be tuned by configuring it explicitly. This article organizes goroutine scheduling, multi-core utilization for CPU-bound processing, and the relationship between OS processes, threads, and goroutines.

Difference Between Concurrency and Parallelism

What you write directly in Go is primarily concurrency: multiple tasks whose execution overlaps in time, handled with goroutines. True parallelism additionally requires an execution environment with multiple CPU cores and GOMAXPROCS set to 2 or more.

Concurrency from a Time Axis Perspective

This shows how tasks overlap on a single core by dividing time between them. The real scheduler switches at preemption points or on I/O completion notifications, so the exact switching timing is not deterministic, but goroutines are switched dynamically while waiting on I/O or when the runtime preempts them.

```mermaid
sequenceDiagram
    participant Core as Core
    participant TaskA as Task A
    participant TaskB as Task B
    Note over Core: Concurrency
    TaskA->>Core: Execution time slice 1
    Note right of Core: Task A running
    Core-->>TaskA: Interrupt (waiting for I/O or preemption)
    TaskB->>Core: Execution time slice 1
    Note right of Core: Task B running
    Core-->>TaskB: Interrupt (waiting for I/O or preemption)
    TaskA->>Core: Execution time slice 2
    Note right of Core: Task A resumes
    Core-->>TaskA: Interrupt
    TaskB->>Core: Execution time slice 2
    Note right of Core: Task B resumes
    Core-->>TaskB: Interrupt
```

Parallelism from a Time Axis Perspective

This shows tasks physically executing at the same instant on multiple cores, which is possible when the machine has multiple cores and GOMAXPROCS is set to 2 or more.

```mermaid
sequenceDiagram
    participant Core1 as Core 1
    participant Core2 as Core 2
    participant TaskA as Task A
    participant TaskB as Task B
    Note over Core1,Core2: Parallelism
    par Simultaneous task execution
        TaskA->>Core1: Simultaneous execution
    and
        TaskB->>Core2: Simultaneous execution
    end
```

What is a Goroutine?

A goroutine is a lightweight unit of execution managed by the Go runtime rather than by the OS. It starts with a small stack (a few kilobytes) that grows and shrinks as needed, so a single process can host a very large number of goroutines.

M-P-G Model (Machine, Processor, Goroutine)

Core concepts of the Go runtime:

  - M (Machine): an OS thread created and managed by the runtime.
  - P (Processor): a logical processor holding a local run queue of goroutines; the number of Ps equals GOMAXPROCS.
  - G (Goroutine): a lightweight unit of work scheduled onto Ms via Ps.

Flow of G → P → M

  1. When a goroutine is created, it is placed in a P's local run queue or in the global queue.
  2. An idle M acquires a P and takes a goroutine from the queue to execute.
  3. When the goroutine completes, or is suspended by an I/O wait or preemption, the M picks up another runnable goroutine in the same way.

Goroutines can be created, suspended, and resumed cheaply, which enables high concurrency and parallelism. However, creating very large numbers of them can incur scheduling overhead and stack-growth costs, so appropriate granularity and verification through profiling are important.

Deep Dive into the M-P-G Model and Reference Articles

Refer to the following articles for a deeper understanding; they use diagrams and concrete code examples:

  - GOMAXPROCS and Parallel Execution
  - Details of Scheduling
  - Runnable Queue and Work Stealing
  - Preemptive Scheduling
  - Behavior During Blocking Operations
  - System Call and Thread Management
  - Stack Management and Memory Efficiency
  - CPU-Bound Processing and Parallel Utilization
  - Affinity with I/O-Bound Processing
  - Scheduler Tuning

Preemption Mechanism (Go 1.14 and Later)

```go
func busyLoop() {
	for i := 0; i < 1e9; i++ {
		// CPU-bound work with no function calls
		_ = i * i
		// Since Go 1.14, the runtime can asynchronously preempt this loop
		// (signal-based preemption), letting other goroutines run
	}
}
```

Preemption happens automatically inside the runtime and needs no explicit code. Still, knowing that loops and function calls can serve as safe points makes it easier to preserve responsiveness even in code that occupies the CPU for long stretches.

Relationship Between OS Processes, Threads, and Goroutines

[OS Process]
    ├─ Go runtime starts → Generates and manages multiple OS threads (M)
    ├─ Prepares multiple P within the Go runtime (GOMAXPROCS)
    └─ Goroutines (G) are generated at the user level and placed in the runnable queue of P
       └─ An idle M retrieves P and takes out G from the queue to execute

Understanding this mechanism, which achieves high concurrency and parallelism at the same time, and combining that understanding with benchmarks and profiling, lets you optimize performance.

Conclusion

The Go runtime provides lightweight goroutine creation and a sophisticated scheduling mechanism based on the M-P-G model, supporting both concurrency and parallelism naturally while keeping the two clearly distinct. Developers can optimize performance and improve throughput by understanding GOMAXPROCS settings, goroutine granularity, profiling, and synchronization techniques.

