Details
- Reviewers: alc, kib, jeff, cem
- Commits: rS347950: Use M_NEXTFIT in memguard(9).
Diff Detail
- Lint: Lint Passed
- Unit: No Test Coverage
- Build Status: Buildable 20089; Build 19587: arc lint + arc unit
Event Timeline
My comments are not about this patch, but about other issues with memguard.
sys/vm/memguard.c:317 ↗ (On Diff #48182)
If guard pages are enabled, then this significantly overestimates the amount of physical memory being used. Moreover, if guard pages are enabled, we should block the allocation of superpage reservations in kmem_back(), because the reservations can never be fully populated or mapped as superpages. (Do we really need unmapped guard pages given the "buffer zone" around the returned memory?)
Don't fall back to vmem_xalloc(). Instead, factor out handling of
resource shortages into vmem_try_fetch() and use that if the nextfit
search fails. This way we ensure that the cursor is updated on all
M_NEXTFIT allocations, so the policy is applied strictly.
Wrong review. I will update this review with the correct diff plus some
local modifications that I've made.
- Remove some stale comments referencing the use of a vm_map to manage memguard KVA.
- Reimplement the mapused sysctl so that we can see how much KVA is consumed at a given point in time.
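A minimal sketch of how such a sysctl could be wired up, assuming memguard's KVA is managed by a vmem(9) arena and that the vm.memguard sysctl node already exists; the names memguard_arena and memguard_sysctl_mapused are hypothetical and not necessarily those in the actual diff.

```c
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>
#include <sys/vmem.h>

/* Hypothetical: the arena backing memguard KVA, created elsewhere. */
static vmem_t *memguard_arena;

static int
memguard_sysctl_mapused(SYSCTL_HANDLER_ARGS)
{
	unsigned long size;

	/* vmem_size(9) reports how much of the arena is currently allocated. */
	size = vmem_size(memguard_arena, VMEM_ALLOC);
	return (sysctl_handle_long(oidp, &size, 0, req));
}
SYSCTL_PROC(_vm_memguard, OID_AUTO, mapused,
    CTLTYPE_ULONG | CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, 0,
    memguard_sysctl_mapused, "LU",
    "Amount of memguard KVA currently in use");
```

Computing the value in the handler, rather than maintaining a counter, means the sysctl reflects the arena's state at the moment it is read.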
sys/vm/memguard.c:205 ↗ (On Diff #48937)
I would suggest: "... of kernel address space that is managed by a vmem arena."

sys/vm/memguard.c:302 ↗ (On Diff #48937)
"... so that we use a consistent value throughout this function."

sys/vm/memguard.c:336 ↗ (On Diff #48937)
As an aside, when do_guard is true, it would make sense to disable reservation-based allocation, since the guards will block promotion of the mapping.
sys/vm/memguard.c:336 ↗ (On Diff #48937)
Hmm, do we have a mechanism to do that here?
sys/vm/memguard.c:336 ↗ (On Diff #48937)
Not on an allocation-by-allocation basis, as opposed to the whole object.