
tcp - hpts timing is off when we are above 1200 connections.
Closed, Public

Authored by rrs on Apr 14 2022, 5:38 PM.
Details

Summary

HPTS timing begins to drift once we cross the connection threshold (1200 by default),
at which point a returning syscall or LRO stops looking for the oldest HPTS thread that
has not yet run and instead uses the HPTS thread of the CPU it is on. This leads to many
cases where HPTS threads may not run for extended periods of time, which is painful if
you are pacing in TCP. On top of that, AMD's podded L3 cache may have sets of 8 CPUs
that share an L3; HPTS is unaware of this, so on AMD you can generate a lot of cache
misses.

To fix this, we get rid of the CPU mode and always use the oldest-thread selection, but
also make HPTS aware of the CPU topology and restrict the "oldest" search to CPUs within
the same L3 cache. This also works nicely for NUMA, coupled with Drew's earlier NUMA
changes.
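
To illustrate the selection policy described above, here is a minimal stand-alone sketch
(not the kernel code in this diff) that picks the least-recently-run HPTS thread among
the CPUs sharing the caller's L3 cache instead of simply using the current CPU. Names
such as struct hpts_slot, l3_group and hpts_pick_oldest_in_l3() are hypothetical.

	#include <stdint.h>
	#include <stddef.h>

	struct hpts_slot {
		int		cpu;		/* CPU this HPTS thread is bound to */
		int		l3_group;	/* index of the L3 cache domain for that CPU */
		uint64_t	last_run;	/* when this HPTS thread last ran */
	};

	/*
	 * Return the slot within the caller's L3 group whose HPTS thread has
	 * gone the longest without running, or NULL if the group is empty.
	 */
	static struct hpts_slot *
	hpts_pick_oldest_in_l3(struct hpts_slot *slots, size_t nslots, int cur_l3)
	{
		struct hpts_slot *oldest = NULL;
		size_t i;

		for (i = 0; i < nslots; i++) {
			if (slots[i].l3_group != cur_l3)
				continue;
			if (oldest == NULL || slots[i].last_run < oldest->last_run)
				oldest = &slots[i];
		}
		return (oldest);
	}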

Test Plan

Use the hpts_monitor functionality to verify that, once the changes are in,
we see a more even distribution regardless of load.

Diff Detail

Repository
rG FreeBSD src repository
Lint
Lint Not Applicable
Unit
Tests Not Applicable