Commit fcc16882 authored by Stephen Boyd, committed by Linus Torvalds

lib: atomic64: Initialize locks statically to fix early users

The atomic64 library uses a handful of static spin locks to implement
atomic 64-bit operations on architectures without support for atomic
64-bit instructions.

Unfortunately, the spinlocks are initialized in a pure initcall and that
is too late for the vfs namespace code which wants to use atomic64
operations before the initcall is run.

This became a problem as of commit 8823c079: "vfs: Add setns support
for the mount namespace".

This leads to BUG messages such as:

  BUG: spinlock bad magic on CPU#0, swapper/0/0
   lock: atomic64_lock+0x240/0x400, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0

coming out early on during boot when spinlock debugging is enabled.

Fix this by initializing the spinlocks statically at compile time.
Reported-and-tested-by: Vaibhav Bedia <>
Tested-by: Tony Lindgren <>
Cc: Eric W. Biederman <>
Signed-off-by: Stephen Boyd <>
Signed-off-by: Linus Torvalds <>
parent 787314c3
@@ -31,7 +31,11 @@
 static union {
 	raw_spinlock_t lock;
 	char pad[L1_CACHE_BYTES];
-} atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;
+} atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp = {
+	[0 ... (NR_LOCKS - 1)] = {
+		.lock =  __RAW_SPIN_LOCK_UNLOCKED(atomic64_lock.lock),
+	},
+};
 
 static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
 {
@@ -173,14 +177,3 @@ int atomic64_add_unless(atomic64_t *v, long long a, long long u)
 	return ret;
 }
 EXPORT_SYMBOL(atomic64_add_unless);
-
-static int init_atomic64_lock(void)
-{
-	int i;
-
-	for (i = 0; i < NR_LOCKS; ++i)
-		raw_spin_lock_init(&atomic64_lock[i].lock);
-	return 0;
-}
-
-pure_initcall(init_atomic64_lock);