
False Sharing
False sharing happens when independent threads update different variables that happen to live on the same CPU cache line. They are not logically sharing data, but the hardware still has to bounce ownership of that line between cores. The result is terrible scaling in code that looks embarrassingly parallel on paper.
Minimal Example
#include <atomic>
#include <cstddef>

// Eight hot counters packed back to back: each one is only 8 bytes,
// so several counters share a single 64-byte cache line even though
// every worker only ever touches its own slot.
struct Counter {
    std::atomic<long> value{0};
};
Counter workerCounters[8];

void record(std::size_t workerId) {
    workerCounters[workerId].value.fetch_add(1, std::memory_order_relaxed);
}
// The fix is layout, not logic: declaring the struct alignas(64)
// gives each counter its own cache line.
What It Solves
- Explains why multicore throughput can flatten or regress even when lock contention looks low.
- Pushes you to think about memory layout, not just algorithmic independence.
- Makes per-thread counters, queues, and ring buffers much easier to reason about.
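Per-thread counters become easy to reason about once each slot owns a full cache line. A minimal sketch of that layout, assuming a 64-byte line (the `PaddedCounter` and `run_workers` names are illustrative, not from the example above):

```cpp
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// alignas(64) pads each slot to a full cache line, so no two workers
// ever write the same line. The 64-byte figure is an assumption;
// C++17's std::hardware_destructive_interference_size is the portable
// constant where the implementation provides it.
struct alignas(64) PaddedCounter {
    std::atomic<long> value{0};
};

// Each worker hammers only its own padded slot; subtotals are summed
// once after all threads join.
long run_workers(std::size_t nThreads, long incrementsPerThread) {
    std::vector<PaddedCounter> counters(nThreads);
    std::vector<std::thread> threads;
    for (std::size_t t = 0; t < nThreads; ++t)
        threads.emplace_back([&counters, t, incrementsPerThread] {
            for (long i = 0; i < incrementsPerThread; ++i)
                counters[t].value.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& th : threads) th.join();
    long total = 0;
    for (auto& c : counters) total += c.value.load();
    return total;
}
```

Swapping PaddedCounter for an unpadded struct leaves the result identical but typically several times slower at high thread counts, which is exactly the trap this section describes.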
Failure Modes
- Packing hot counters into dense arrays without padding or per-core partitioning.
- Benchmarking only one thread and missing the coherence penalty entirely.
- Optimizing atomic instructions while ignoring the cache-line movement underneath them.
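The single-thread benchmarking trap is cheap to avoid: time the update loop at several thread counts, with the spacing between hot slots as a parameter. A sketch, assuming 64-byte lines so a stride of 8 puts each 8-byte long on its own line (`time_updates` and the stride values are illustrative):

```cpp
#include <atomic>
#include <chrono>
#include <cstddef>
#include <thread>
#include <vector>

// Stride 1 packs the hot atomics onto one cache line (false sharing);
// stride 8 gives each 8-byte long its own 64-byte line. Returns the
// wall-clock seconds the update loops took.
template <std::size_t Stride>
double time_updates(unsigned nThreads, long itersPerThread) {
    std::vector<std::atomic<long>> slots(nThreads * Stride);
    auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> threads;
    for (unsigned t = 0; t < nThreads; ++t)
        threads.emplace_back([&slots, t, itersPerThread] {
            for (long i = 0; i < itersPerThread; ++i)
                slots[t * Stride].fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& th : threads) th.join();
    return std::chrono::duration<double>(
               std::chrono::steady_clock::now() - start)
        .count();
}
```

With one thread the two instantiations time about the same; the gap only opens once several cores fight over the line, which is why single-thread numbers miss the bug entirely.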
Production Checklist
- Pad or align frequently written per-thread data to cache-line boundaries.
- Benchmark with realistic core counts and inspect hardware counters when possible.
- Prefer local accumulation plus batched merge over constant shared updates.
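The last checklist item, local accumulation plus a batched merge, can be sketched like this: each thread counts in a plain stack variable and touches shared state exactly once (`grandTotal`, `worker`, and `run` are illustrative names):

```cpp
#include <atomic>
#include <thread>
#include <vector>

std::atomic<long> grandTotal{0};  // the only shared, contended word

// Accumulate in a thread-private local, then merge once at the end:
// the shared cache line moves O(threads) times instead of O(updates).
void worker(long iterations) {
    long local = 0;
    for (long i = 0; i < iterations; ++i)
        local += 1;  // stands in for real per-event bookkeeping
    grandTotal.fetch_add(local, std::memory_order_relaxed);
}

long run(unsigned nThreads, long itersPerThread) {
    grandTotal.store(0);
    std::vector<std::thread> threads;
    for (unsigned t = 0; t < nThreads; ++t)
        threads.emplace_back(worker, itersPerThread);
    for (auto& th : threads) th.join();
    return grandTotal.load();
}
```

The trade-off is freshness: intermediate totals lag until workers merge, which is usually acceptable for metrics and counters.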
Closing
False sharing is a layout bug disguised as a concurrency bug. If independent workers still touch the same cache line, the CPU will make them pay for it.