
amd64/arm64 pmap: Stop requiring the accessed bit for superpage promotion
Closed, Public

Authored by alc on Jun 9 2023, 6:04 PM.
Details

Summary

Stop requiring all of the PTEs to have the accessed bit set for superpage promotion to occur. Given that change, add support for promotion to pmap_enter_quick(), which does not set the accessed bit in the PTE that it creates.

Since the final mapping within a superpage-aligned and -sized region of a memory-mapped file is typically created by a call to pmap_enter_quick(), we now achieve promotions in circumstances where they did not occur before; for example, this mapping within the X server's address space:

1745        0x844400000        0x84766f000 r-- 12911 26232   4   2 CNS-- vn /usr/local/llvm15/lib/libLLVM-15.so

See also https://www.usenix.org/system/files/atc20-zhu-weixi_0.pdf
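For illustration, here is a simplified sketch (not the committed pmap_promote_pde() code) of the relaxed scan over the constituent PTEs: the accessed bit is accumulated rather than required, so a clear PG_A no longer aborts the promotion. The types and constants (pt_entry_t, PG_FRAME, NPTEPG, PAGE_SIZE) are the usual amd64 pmap definitions; sketch_can_promote() is a hypothetical helper, and alignment and attribute checks are omitted for brevity.

/*
 * Simplified sketch: scan the PTEs backing a 2MB-aligned region.  Every PTE
 * must be valid and map the expected, physically contiguous page, but PG_A is
 * only accumulated; the caller can still promote when some constituent page
 * has not yet been accessed.
 */
static bool
sketch_can_promote(pt_entry_t *firstpte, pt_entry_t PG_A, pt_entry_t PG_V,
    bool *allpte_PG_A_set)
{
	pt_entry_t allpte_PG_A, oldpte, pa;
	pt_entry_t *pte;

	oldpte = *firstpte;
	if ((oldpte & PG_V) == 0)
		return (false);			/* first PTE must be valid */
	allpte_PG_A = oldpte & PG_A;
	pa = oldpte & PG_FRAME;
	for (pte = firstpte + 1; pte < firstpte + NPTEPG; pte++) {
		oldpte = *pte;
		pa += PAGE_SIZE;
		if ((oldpte & (PG_FRAME | PG_V)) != (pa | PG_V))
			return (false);		/* hole or non-contiguous PTE */
		allpte_PG_A &= oldpte & PG_A;	/* accumulate, don't require */
	}
	*allpte_PG_A_set = (allpte_PG_A != 0);
	return (true);
}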

Diff Detail

Repository: rG FreeBSD src repository
Lint: Not Applicable
Unit Tests: Not Applicable

Event Timeline

alc requested review of this revision. Jun 9 2023, 6:04 PM
markj added inline comments.
sys/amd64/amd64/pmap.c
4073

There should be a semicolon instead of a comma after "PG_A".

4080

I'd perhaps assert that promoted || !allpte_PG_A_set is true (or use a tri-state variable instead of two bools).
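A hypothetical rendering of the suggested assertion, using the variable names from the comment above rather than anything in the diff:

	/* Illustrative only: if every PTE had PG_A set, promotion must have succeeded. */
	KASSERT(promoted || !allpte_PG_A_set,
	    ("pmap_promote_pde: all PG_A bits set but promotion failed"));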

sys/arm64/arm64/pmap.c
4026

There should be a semicolon after ATTR_AF.

This revision is now accepted and ready to land. Jun 9 2023, 9:21 PM
sys/amd64/amd64/pmap.c
4082

IMO it would be useful to document the special use of the mpte valid field and enumerate the possible values.

7756

Could this block and the same block from pmap_enter() be made into a function?

alc marked 3 inline comments as done. Jun 10 2023, 6:00 AM
sys/amd64/amd64/pmap.c
4082

I'm happy to, but where, vm_page.h? The above comment already describes what values are assigned to the PTP's valid field under different circumstances.

7756

Each of the three instances has a slightly different body. This one may have to calculate the PDE. The others do not. The EPT instance increments an EPT-specific counter.

(The EPT instance could also diverge further than it does now: it currently pointlessly tests mpte == NULL and, before that, va < VM_MAXUSER_ADDRESS.)
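To make the block under discussion concrete, here is a hypothetical helper (not part of this revision) capturing the condition that guards each promotion attempt; as noted above, the call sites would still differ in how they obtain the PDE and in EPT-specific accounting, which is why the three instances remain separate.

/*
 * Hypothetical, for illustration: the common test performed before each call
 * to pmap_promote_pde().
 */
static bool
pmap_promotion_candidate(vm_page_t mpte, vm_page_t m)
{
	return ((mpte == NULL || mpte->ref_count == NPTEPG) &&
	    (m->flags & PG_FICTITIOUS) == 0 &&
	    vm_reserv_level_iffullpop(m) == 0);
}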

kib added inline comments.
sys/amd64/amd64/pmap.c
4082

It is MD, so it should be somewhere in pmap.h or pmap.c, perhaps together with the explanation of the pindex use for PTPs and of wirings. Currently all of those comments are spread over pmap.c, e.g., at pmap_unwire_ptp() and pmap_allocpte(_nosleep()).

7756

I might try to look at this after your commit.

Address Mark's comments.

In pmap_enter_quick_locked(), when promotion is attempted, return NULL, so that the unmapped PTP isn't passed to pmap_enter_quick_locked() by pmap_enter_object().

Add an EPT-related KASSERT to pmap_promote_pde().
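A rough sketch of the pmap_enter_quick_locked() tail described above (illustrative only; "promotable" is a hypothetical stand-in for the real conditions, and the committed code differs in detail):

	/*
	 * Once a promotion has been attempted, the PTP may no longer be
	 * mapped, so return NULL rather than handing a stale mpte back to
	 * pmap_enter_object() for the next iteration.
	 */
	if (promotable) {
		if (pde == NULL)
			pde = pmap_pde(pmap, va);
		pmap_promote_pde(pmap, pde, va, mpte, lockp);
		return (NULL);
	}
	return (mpte);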

This revision now requires review to proceed. Jun 11 2023, 8:19 AM
sys/amd64/amd64/pmap.c
7756

Since we expect pmap_ps_enabled(pmap) to be true, I think that the following makes sense.

diff --git a/sys/amd64/amd64/pmap.c b/sys/amd64/amd64/pmap.c
index 3cb02a4f9daa..a36bbe16bee8 100644
--- a/sys/amd64/amd64/pmap.c
+++ b/sys/amd64/amd64/pmap.c
@@ -436,7 +436,7 @@ pt_entry_t pg_nx;
 static SYSCTL_NODE(_vm, OID_AUTO, pmap, CTLFLAG_RD | CTLFLAG_MPSAFE, 0,
     "VM/pmap parameters");
 
-static int pg_ps_enabled = 1;
+static int __read_frequently pg_ps_enabled = 1;
 SYSCTL_INT(_vm_pmap, OID_AUTO, pg_ps_enabled, CTLFLAG_RDTUN | CTLFLAG_NOFETCH,
     &pg_ps_enabled, 0, "Are large page mappings enabled?");
 
@@ -6865,6 +6865,10 @@ pmap_promote_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va, vm_page_t mpte,
        pt_entry_t allpte_PG_A, PG_A, PG_G, PG_M, PG_PKU_MASK, PG_RW, PG_V;
        int PG_PTE_CACHE;
 
+       PMAP_LOCK_ASSERT(pmap, MA_OWNED);
+       if (!pmap_ps_enabled(pmap))
+               return;
+
        PG_A = pmap_accessed_bit(pmap);
        PG_G = pmap_global_bit(pmap);
        PG_M = pmap_modified_bit(pmap);
@@ -6873,8 +6877,6 @@ pmap_promote_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va, vm_page_t mpte,
        PG_PKU_MASK = pmap_pku_mask_bit(pmap);
        PG_PTE_CACHE = pmap_cache_mask(pmap, 0);
 
-       PMAP_LOCK_ASSERT(pmap, MA_OWNED);
-
        /*
         * Examine the first PTE in the specified PTP.  Abort if this PTE is
         * ineligible for promotion due to hardware errata, invalid, or does
@@ -7391,7 +7393,6 @@ pmap_enter(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t prot,
         * populated, then attempt promotion.
         */
        if ((mpte == NULL || mpte->ref_count == NPTEPG) &&
-           pmap_ps_enabled(pmap) &&
            (m->flags & PG_FICTITIOUS) == 0 &&
            vm_reserv_level_iffullpop(m) == 0)
                pmap_promote_pde(pmap, pde, va, mpte, &lock);
@@ -7782,7 +7783,6 @@ pmap_enter_quick_locked(pmap_t pmap, vm_offset_t va, vm_page_t m,
         * attempt promotion.
         */
        if ((mpte == NULL || mpte->ref_count == NPTEPG) &&
-           pmap_ps_enabled(pmap) &&
            (m->flags & PG_FICTITIOUS) == 0 &&
            vm_reserv_level_iffullpop(m) == 0) {
                if (pde == NULL)
@@ -10359,7 +10359,6 @@ pmap_emulate_accessed_dirty(pmap_t pmap, vm_offset_t va, int ftype)
        m = PHYS_TO_VM_PAGE(*pte & PG_FRAME);
 
        if ((mpte == NULL || mpte->ref_count == NPTEPG) &&
-           pmap_ps_enabled(pmap) &&
            (m->flags & PG_FICTITIOUS) == 0 &&
            vm_reserv_level_iffullpop(m) == 0) {
                pmap_promote_pde(pmap, pde, va, mpte, &lock);
sys/amd64/amd64/pmap.c
7756

Looks good to me.

This revision is now accepted and ready to land. Jun 12 2023, 1:27 PM
sys/amd64/amd64/pmap.c
7756

I will prepare a follow-up patch with this change and a similar one for arm64.