Address the probable root cause of cache_count showing
strange ("negative") values by replacing unprotected
arithmetic on that global counter with a proper
counter_u64_t.
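For reference, the counter(9) API the summary refers to is used roughly as follows. This is a minimal sketch of the pattern, not the actual diff; the helper function names are hypothetical, and as kernel code it is not compilable standalone.

```c
#include <sys/types.h>
#include <sys/systm.h>
#include <sys/counter.h>
#include <sys/malloc.h>

/* Replaces the plain integer global whose unlocked updates raced. */
static counter_u64_t cache_count;

/* Hypothetical init hook: counters must be allocated before use. */
static void
hc_count_init(void)
{
	cache_count = counter_u64_alloc(M_WAITOK);
}

/* Updates are MPSAFE from any CPU; each CPU writes its own slot. */
static void
hc_entry_added(void)
{
	counter_u64_add(cache_count, 1);
}

static void
hc_entry_removed(void)
{
	counter_u64_add(cache_count, -1);
}

/* Reads sum all per-CPU slots, so they are comparatively expensive. */
static uint64_t
hc_count_fetch(void)
{
	return (counter_u64_fetch(cache_count));
}
```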
Details
- Reviewers: tuexen
- Group Reviewers: transport
- Commits:
  - rGd20563819b92: tcp: Make hostcache.cache_count MPSAFE by using a counter_u64_t
  - rGc0f8ed6ff812: tcp: Make hostcache.cache_count MPSAFE by using a counter_u64_t
  - rG632e3363087c: tcp: Make hostcache.cache_count MPSAFE by using a counter_u64_t
  - rG95e56d31e348: tcp: Make hostcache.cache_count MPSAFE by using a counter_u64_t
Diff Detail
- Repository: rG FreeBSD src repository
- Lint: Not Applicable
- Unit Tests: Not Applicable
Event Timeline
FWIW, I disagree with this change. I think we should instead use atomic operations here.
Counters have the property that they are cheap to update, but expensive to read. Moreover, if you are doing a lot of reads, you will be pulling in cachelines from the per-CPU space into other CPUs (which is fine, but uses cache space).
For something which is written often but only read occasionally, counters are a big performance win. (Statistics are a prime example of something for which counters are ideal.) For things which are read with any regularity (and, especially, from critical paths), atomics should perform better.
I suggest reverting this change and replacing it with atomic operations.
Thanks for the explanation. I actually suggested to Richard to use the counter API instead of atomic operations. So it is my mistake...