Commit 78732f2

wildea01 authored and Alex Shi committed
arm64: spinlock: fix spin_unlock_wait for LSE atomics
Commit d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against
concurrent lockers") fixed spin_unlock_wait for LL/SC-based atomics under
the premise that the LSE atomics (in particular, the LDADDA instruction)
are indivisible. Unfortunately, these instructions are only indivisible
when used with the -AL (full ordering) suffix and, consequently, the same
issue can theoretically be observed with LSE atomics, where a later (in
program order) load can be speculated before the write portion of the
atomic operation.

This patch fixes the issue by performing a CAS of the lock once we've
established that it's unlocked, in much the same way as the LL/SC code.

Fixes: d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 3a5facd09da848193f5bcb0dea098a298bc1a29d)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
1 parent b1c2728 commit 78732f2

1 file changed

Lines changed: 7 additions & 3 deletions

arch/arm64/include/asm/spinlock.h

@@ -43,13 +43,17 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 "2:	ldaxr	%w0, %2\n"
 "	eor	%w1, %w0, %w0, ror #16\n"
 "	cbnz	%w1, 1b\n"
+	/* Serialise against any concurrent lockers */
 	ARM64_LSE_ATOMIC_INSN(
 	/* LL/SC */
 "	stxr	%w1, %w0, %2\n"
-"	cbnz	%w1, 2b\n", /* Serialise against any concurrent lockers */
-	/* LSE atomics */
 "	nop\n"
-"	nop\n")
+"	nop\n",
+	/* LSE atomics */
+"	mov	%w1, %w0\n"
+"	cas	%w0, %w0, %2\n"
+"	eor	%w1, %w1, %w0\n")
+"	cbnz	%w1, 2b\n"
 	: "=&r" (lockval), "=&r" (tmp), "+Q" (*lock)
 	:
 	: "memory");
