When recursing in pmap_change_props_locked we may fail because there is
no pte. This shouldn't be considered a failure, as it can legitimately
happen in a few cases, e.g. when there are multiple normal memory ranges
with device memory between them.
Diff Detail
- Repository: rG FreeBSD src repository
Event Timeline
This is an updated version, so it might break things again (but hopefully not). It now skips over unmapped memory rather than returning early.
sys/arm64/arm64/pmap.c
- Line 555: The last sentence of this comment isn't accurate now.
- Line 558: There is a very similar function in arm64/iommu/iommu_pmap.c that still has the old behaviour w.r.t. setting *level when a PTE is missing. I think it would be better to keep them consistent.
- Line 587: The assertion lvl == 3 in pmap_qremove() doesn't catch some erroneous cases now.
sys/arm64/arm64/pmap.c
- Line 558: I also think that the code was correct, and that the proposed changes to pmap_pte() should be undone.
So for the case of setting memory as uncacheable (when wbinv_range() is called), do you end up calling this function on an unmapped range? Does that work on arm64?
We only ever call wbinv_range when ptep != NULL, so a mapping will exist for the current virtual address. If the DMAP is unmapped, the caller will perform the cache management. If the DMAP is mapped, cpu_dcache_wbinv_range will be called twice: once for the non-DMAP memory and once for the DMAP memory.