For use in pmap_change_attr_locked(), where we might need to demote L1
pages in the DMAP.
Details
- Reviewers: markj
- Group Reviewers: riscv
- Commits: rG46bc4963e2de: riscv: implement pmap_demote_l1()

Diff Detail
- Repository: rG FreeBSD src repository
Event Timeline
sys/riscv/riscv/pmap.c
- 285–287: It doesn't really need to be a counter(9), but the existing counters could be.
- 2812: Worth asserting that VIRT_IN_DMAP(va) is true?
- 2827–2828: I do not understand the requirements around PTE_A/PTE_D in this context. amd64 asserts for the presence of both accessed and dirty/modified:
  ```
  KASSERT((oldpdpe & PG_A) != 0,
      ("pmap_demote_pdpe: oldpdpe is missing PG_A"));
  KASSERT((oldpdpe & (PG_M | PG_RW)) != PG_RW,
      ("pmap_demote_pdpe: oldpdpe is missing PG_M"));
  ```
- 2831–2832: arm64 handles this case by allocating a temporary page. For now I am hoping it can be avoided.
sys/riscv/riscv/pmap.c
- 2812: Yes please.
- 2827–2828: pmap_demote_pdpe() only operates on 1GB mappings in the direct map. Such mappings are always accessed and dirty. It's not a requirement per se, but rather a simplifying assumption. For example, when demoting, you want the demoted PDEs to have the same accessed and dirty flags as the original PDPE. If the mapping is writeable and clean, then PG_M can be set by the CPU at any time, so you need to atomically check for PG_M and destroy the mapping, which is more complicated than what this function does.
- 2831–2832: I believe the only reason for that extra contortion is arm64's architectural break-before-make requirement for TLB invalidation, which doesn't apply here.
- 2856: Don't we need an sfence_vma here?
sys/riscv/riscv/pmap.c
- 2827–2828: Understood, thank you!
- 2856: I believe not. The translations cached in the TLB are outdated but still correct for the demoted range. pmap_demote_l2_locked() doesn't call sfence_vma() either, for the same reason. The TLB will eventually be flushed by the caller, after it has modified memory types; see the usage in D45471. I will include a brief comment to this effect with the next revision.