smr: Fix synchronization in smr_enter()
ClosedPublic

Authored by markj on Aug 26 2022, 11:30 PM.

Details

Summary

smr_enter() must publish its observed read sequence number before
issuing any subsequent memory operations. The ordering provided by
atomic_add_acq_int() is insufficient on some platforms, at least on
arm64, because it permits reordering of subsequent loads with the store
to c_seq.

Thus, use atomic_thread_fence_seq_cst() to issue a store-load barrier
after publishing the read sequence number.

On x86, take advantage of the fact that memory operations are not
reordered with locked instructions to improve code density: we can store
the observed read sequence and provide a store-load barrier with a
single operation.
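
For illustration, a minimal sketch of what the barrier portion of smr_enter() looks like under this approach; the surrounding critical-section and bookkeeping code is elided, and the c_seq field and smr_shared_current() helper are assumed from the existing sys/sys/smr.h, so treat this as a sketch rather than the committed diff:

#if defined(__amd64__) || defined(__i386__)
	/*
	 * A locked RMW instruction is a full barrier on x86, so a single
	 * atomic add both publishes the observed read sequence number in
	 * c_seq and orders the store before any subsequent loads.
	 * c_seq is assumed to be 0 on entry, so the add amounts to a
	 * store of the sequence number.
	 */
	atomic_add_acq_int(&smr->c_seq, smr_shared_current(smr->c_shared));
#else
	/*
	 * Elsewhere, publish the sequence number with a plain store and
	 * then issue an explicit store-load barrier so that subsequent
	 * loads cannot be reordered before the store.
	 */
	atomic_store_int(&smr->c_seq, smr_shared_current(smr->c_shared));
	atomic_thread_fence_seq_cst();
#endif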

Based on a patch from Pierre Habouzit <pierre@habouzit.net>.

PR: 265974

Diff Detail

Repository
rG FreeBSD src repository

Event Timeline

I think this wants atomic_thread_fence_seq_cst_after_atomic or similar.

Looking at atomic_add_int, I don't think it has the intended effect; see https://lore.kernel.org/lkml/20190613134933.141230706@infradead.org/ for the same bug in Linux.

sys/sys/smr.h:141

This loses the compiler barrier that is included by the acq variant.

sys/sys/smr.h:141

The patch was written with the assumption that atomic ops on amd64 provide a full barrier as is.

As it happens, several of the ops lack the memory clobber, so from the compiler's point of view they are not barriers. I'm going to post a review to patch that.
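
To make the compiler-barrier point concrete, here is a generic illustration of the difference a "memory" clobber makes in GCC-style inline asm; this is not the actual sys/amd64/include/atomic.h code, just a sketch of the issue being described:

/* Illustrative only -- not the real amd64 atomic.h implementation. */
static inline void
add_without_clobber(volatile unsigned int *p, unsigned int v)
{
	/*
	 * The lock prefix is a full barrier in hardware, but without a
	 * "memory" clobber the compiler is still free to reorder or cache
	 * surrounding memory accesses across this statement.
	 */
	__asm __volatile("lock; addl %1,%0" : "+m" (*p) : "ir" (v) : "cc");
}

static inline void
add_with_clobber(volatile unsigned int *p, unsigned int v)
{
	/*
	 * Adding "memory" to the clobber list makes this a compiler
	 * barrier as well, matching the ordering the hardware provides.
	 */
	__asm __volatile("lock; addl %1,%0" : "+m" (*p) : "ir" (v)
	    : "memory", "cc");
}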

markj marked 2 inline comments as done.

Use atomic_add_int() with acquire semantics on x86.

sys/sys/smr.h:141

I thought that our unqualified atomic operations have the same semantics as relaxed atomics in C11, in which case compiler reordering is permitted. On amd64, the cmpset and fcmpset implementations do not follow this convention.
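
For reference, the C11 behavior being described: a relaxed store does not stop the compiler (or CPU) from hoisting a later load above it, whereas a seq_cst fence after the store does. The names below (c_seq, guarded) are purely illustrative stand-ins, not the smr.h fields:

#include <stdatomic.h>

extern _Atomic unsigned int c_seq;	/* illustrative stand-ins */
extern _Atomic unsigned int guarded;

unsigned int
enter_relaxed(void)
{
	/* Relaxed store: the following load may be reordered before it. */
	atomic_store_explicit(&c_seq, 1, memory_order_relaxed);
	return (atomic_load_explicit(&guarded, memory_order_relaxed));
}

unsigned int
enter_fenced(void)
{
	/*
	 * The seq_cst fence provides the store-load ordering that the
	 * relaxed store alone does not.
	 */
	atomic_store_explicit(&c_seq, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);
	return (atomic_load_explicit(&guarded, memory_order_relaxed));
}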

sys/sys/smr.h:141

Me too.

sys/sys/smr.h:128

because you start with "... to both ..."

markj marked an inline comment as done.

Fix wording in the smr_enter() comment.

This revision is now accepted and ready to land. Sep 24 2022, 5:18 AM
This revision was automatically updated to reflect the committed changes.