
wg: Use a mutex to close races around callout scheduling.
Needs Review · Public

Authored by jhb on Aug 17 2022, 12:09 AM.
Tags
None

Details

Reviewers
markj
Summary

This replaces the use of NET_EPOCH to handle races between
callout_reset() and callout_stop() with the more standard
callout_init_mtx(). It also avoids the need for atomics when dealing
with timer-related fields like p_enabled. The p_handshake_mtx is now
subsumed into the new p_timers_mtx.
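
For context, a minimal sketch of the callout_init_mtx() pattern the summary describes. The names here (struct wg_timers, t_mtx, t_retry, t_enabled) are illustrative stand-ins, not the actual if_wg.c fields:

```c
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/callout.h>

struct wg_timers {
	struct mtx	t_mtx;		/* stand-in for the new p_timers_mtx */
	struct callout	t_retry;
	bool		t_enabled;	/* plain bool; t_mtx protects it, no atomics */
};

static void	retry_handshake(void *);

static void
timers_init(struct wg_timers *t)
{
	mtx_init(&t->t_mtx, "wg timers", NULL, MTX_DEF);
	/*
	 * Tie the callout to the mutex: the callout code acquires
	 * t_mtx before running retry_handshake(), so a concurrent
	 * callout_stop() under t_mtx cannot race with a handler that
	 * is about to reschedule itself.
	 */
	callout_init_mtx(&t->t_retry, &t->t_mtx, 0);
}

static void
retry_handshake(void *arg)
{
	struct wg_timers *t = arg;	/* t_mtx is held on entry */

	if (t->t_enabled)
		callout_reset(&t->t_retry, 5 * hz, retry_handshake, t);
}

static void
timers_disable(struct wg_timers *t)
{
	mtx_lock(&t->t_mtx);
	t->t_enabled = false;
	callout_stop(&t->t_retry);	/* race-free: handler runs under t_mtx */
	mtx_unlock(&t->t_mtx);
}
```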

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint
Lint Passed
Unit
No Test Coverage
Build Status
Buildable 46991
Build 43880: arc lint + arc unit

Event Timeline

jhb requested review of this revision. Aug 17 2022, 12:09 AM

This is an alternative to D35990.

As noted in the comment, this does result in dropping and reacquiring the mutex in a few places, which seems like wasted work. The driver already does this in at least one other place: the rw lock for the endpoint structure is in one place write-locked to clear the source and then immediately read-locked to read it again. In fact, if we were to push the lock up a bit to the callers, we could even subsume that rwlock with a single mutex whose footprint would probably not be much larger than this change alone (see the sketch below). I suspect that holding the lock while the timers run doesn't really matter in practice, as timers aren't used for regular traffic but for keepalives (off by default, and when enabled only sent once every N seconds) and the handshake (if that is happening often, performance is already shot IMO).
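
To illustrate the unlock/lock pattern under discussion (hypothetical names throughout, not the actual driver functions): today each timer hook takes and drops the lock itself, so two adjacent hook calls pay for two lock round trips, whereas pushing the lock up lets the caller pay once:

```c
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

struct wg_timers {
	struct mtx	t_mtx;
};

static void
timers_event_data_sent_locked(struct wg_timers *t)
{
	mtx_assert(&t->t_mtx, MA_OWNED);
	/* ... reschedule the retransmit callout ... */
}

static void
timers_event_keepalive_locked(struct wg_timers *t)
{
	mtx_assert(&t->t_mtx, MA_OWNED);
	/* ... reschedule the persistent-keepalive callout ... */
}

/* As it stands: each hook locks and unlocks on its own, so calling
 * two hooks back to back drops the lock only to re-acquire it. */
static void
timers_event_data_sent(struct wg_timers *t)
{
	mtx_lock(&t->t_mtx);
	timers_event_data_sent_locked(t);
	mtx_unlock(&t->t_mtx);
}

/* Pushed up: the caller holds the lock once across both hooks. */
static void
on_packet_sent(struct wg_timers *t)
{
	mtx_lock(&t->t_mtx);
	timers_event_data_sent_locked(t);
	timers_event_keepalive_locked(t);
	mtx_unlock(&t->t_mtx);
}
```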

sys/dev/wg/if_wg.c
1046

This can recurse, hence the use of MTX_RECURSE. Mostly this is because the actual callout runner for keepalive and initiation packets holds the mutex for the duration of the callout, which includes sending a packet, which in turn executes this hook. Fixing this to not recurse and instead requiring the lock in all of the callers would be a larger change, though it would perhaps avoid several places that drop the lock only to immediately re-acquire it. (For example, this hook is always called adjacent to one of the above hooks for a packet sent or received.) A sketch of the recursion path follows.
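
A sketch of that call chain, again with hypothetical names rather than the real if_wg.c code. The callout handler is entered with the mutex held (because of callout_init_mtx()), sends a packet, and the transmit path calls back into a hook that locks the same mutex:

```c
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/callout.h>

struct wg_timers {
	struct mtx	t_mtx;
	struct callout	t_keepalive;
};

static void	send_keepalive_callout(void *);
static void	timers_event_data_sent(struct wg_timers *);

static void
timers_init(struct wg_timers *t)
{
	/* MTX_RECURSE: the callout handler re-enters the lock below. */
	mtx_init(&t->t_mtx, "wg timers", NULL, MTX_DEF | MTX_RECURSE);
	callout_init_mtx(&t->t_keepalive, &t->t_mtx, 0);
	callout_reset(&t->t_keepalive, hz, send_keepalive_callout, t);
}

/* Callout handler, entered with t_mtx held via callout_init_mtx(). */
static void
send_keepalive_callout(void *arg)
{
	struct wg_timers *t = arg;

	/* ... build the keepalive and hand it to the tx path ... */
	timers_event_data_sent(t);	/* the tx path runs this hook */
}

/* Per-packet hook run from the transmit path. */
static void
timers_event_data_sent(struct wg_timers *t)
{
	mtx_lock(&t->t_mtx);	/* recurses when reached from the callout */
	/* ... reschedule timers ... */
	mtx_unlock(&t->t_mtx);
}
```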

I have to ask: do you happen to have a profile from pushing traffic over wg? I see a lot of locks in the old code in the diff, and even more added afterwards. At least some of it should be immediately patchable with seqc, like wg_timers_get_last_handshake().
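
For reference, roughly what the seqc suggestion would look like, assuming the sys/seqc.h API; the struct layout and function names are illustrative. The writer bumps the sequence counter around the update; lock-free readers retry until they observe a consistent snapshot:

```c
#include <sys/param.h>
#include <sys/seqc.h>
#include <sys/time.h>

struct wg_timers {
	seqc_t		t_handshake_seqc;
	struct timespec	t_last_handshake;
};

/*
 * Writers must still be serialized against each other (e.g. by an
 * existing lock); seqc only lets readers go lockless.
 */
static void
timers_set_last_handshake(struct wg_timers *t, const struct timespec *ts)
{
	seqc_write_begin(&t->t_handshake_seqc);
	t->t_last_handshake = *ts;
	seqc_write_end(&t->t_handshake_seqc);
}

static struct timespec
timers_get_last_handshake(struct wg_timers *t)
{
	struct timespec ts;
	seqc_t seqc;

	do {
		seqc = seqc_read(&t->t_handshake_seqc);
		ts = t->t_last_handshake;
	} while (!seqc_consistent(&t->t_handshake_seqc, seqc));
	return (ts);
}
```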

Compared to what was there before, these changes do not seem to measurably impact performance in the same tests I've used previously (iperf over a tunnel between two jails over epair, with 1, 2, 4, and 8 UDP and TCP streams), in terms of either CPU usage or bandwidth. The same tests did show measurable improvement for other changes (the largest came from fixing overscheduling of encryption/decryption tasks, with a smaller bump from using ossl(4) for data-path crypto).

wg_get_last_handshake() is only used in an ioctl handler to return stats to userland (e.g. when running wg show); it is not in a hot path. Some of the other functions changed here are not hot paths either, as keepalives are not enabled by default and are not recommended for normal use.

What should we do with this?

Hmm, I had originally written this for markj@ to look at as an alternative to the use of atomics / epoch. I do think this is probably easier to reason about, at least. If we did go this route, it would probably benefit from a followup to push the locking up a bit so that there are fewer 'unlock/lock' patterns.