Fix /dev/mem access lockup
Accepted · Public

Authored by gonzo on Dec 10 2020, 12:12 AM.

Details

Summary

The VIRT_IN_DMAP macro only checks that a virtual address falls into
the [dmap_virt_base, dmap_virt_max] region. This region is not guaranteed
to be contiguous: it may contain physical addresses that are not part of
the hardware memory segment list. When such an address is accessed
through /dev/mem, the kernel enters a permanent pmap_fault loop and the
process that attempted the access becomes unkillable.

To fix this behavior, check whether the fault address is in the DMAP.
Such addresses should not fault by definition; if one does, it is an
indication of an error.
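
For illustration, here is a minimal sketch of the difference between the two checks, using the dmap_virt_base/dmap_virt_max names from the summary. The helper dmap_addr_is_backed() and the segment-lookup function it calls are hypothetical and only show the idea; the real macros live in sys/arm64/include/vmparam.h and differ in detail:

    /*
     * Range-only check (what VIRT_IN_DMAP effectively does): true for
     * any address inside the DMAP window, even if no physical memory
     * segment backs that part of the window.
     */
    #define VIRT_IN_DMAP_RANGE(va) \
            ((va) >= dmap_virt_base && (va) < dmap_virt_max)

    /*
     * Hypothetical stricter check: the address must also translate to
     * a physical address that belongs to one of the hardware memory
     * segments, i.e. it is actually backed by RAM.
     */
    static bool
    dmap_addr_is_backed(vm_offset_t va)
    {
            if (!VIRT_IN_DMAP_RANGE(va))
                    return (false);
            return (phys_seg_contains(DMAP_TO_PHYS(va)));   /* hypothetical */
    }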

Test Plan
  • Boot the device in verbose mode.
  • Find an address that is lower than the maximum available physical address but not in the available memory list.
  • Try accessing it with: dd if=/dev/mem of=/dev/null bs=16 count=1 skip=$((addr/16))
  • Without the fix, dd gets stuck in an unkillable state.
  • With the fix, dd fails with the diagnostic: dd: /dev/mem: Bad address

Diff Detail

Repository: rS FreeBSD src repository - subversion
Lint: Lint Passed
Unit: No Test Coverage
Build Status: Buildable 35408 / Build 32322: arc lint + arc unit

Event Timeline

What will happen if we try to access a DMAP page during the break-before-make sequence while it is being demoted?

This is where my understanding of the issue gets vague. As far as I can tell from the code, promotion/demotion happens when mappings are added or removed. My assumption was that this should never happen for the DMAP, since its mappings are populated early in the pmap bootstrap process and never touched afterwards.

All mappings that point to the same physical address need to use the same memory type. Because of this, we will demote mappings in the DMAP when needed to make that possible: e.g. when a page is marked uncached, the DMAP mapping of the same memory is also marked uncached, which includes demoting it if it is currently part of a larger block mapping.
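
As a hedged illustration of that flow (make_page_uncached() is a hypothetical wrapper, not code from this review): a driver that needs an uncached view of a page it owns goes through the machine-independent memattr call, and on arm64 the pmap then updates the DMAP alias of the same physical page, demoting a block mapping if necessary:

    /*
     * Hypothetical helper, for illustration only.  Changing a page's
     * memory type via pmap_page_set_memattr() also causes the arm64
     * pmap to update the DMAP mapping of the same physical page so
     * that all aliases share one memory type, which may require
     * demoting a large DMAP block mapping to smaller pages.
     */
    static void
    make_page_uncached(vm_page_t m)
    {
            pmap_page_set_memattr(m, VM_MEMATTR_UNCACHEABLE);
    }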

Thanks for the explanation; it's more complex than I expected. So far this issue has only been relevant for the /dev/mem scenario, where acpidump -dt was getting stuck in an unkillable state. If /dev/mem can avoid this situation, then the fix is not strictly required.

Still, if there are other code paths where an unmapped DMAP area is accessed, it will manifest as a page-fault loop instead of propagating the fault to a panic/debugger, which may be more difficult to troubleshoot than an immediate failure.

Remove the DMAP check from pmap_kextract so that it can handle
faults in the missing regions of the sparse DMAP.
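
Roughly, the removed code has the following shape (a simplified sketch under a hypothetical name, not the actual diff; the real pmap_kextract() lives in sys/arm64/arm64/pmap.c and differs in detail):

    vm_paddr_t
    pmap_kextract_sketch(vm_offset_t va)    /* hypothetical name */
    {
            /*
             * Fast path being removed: assume any address in the DMAP
             * window is directly mapped and convert it arithmetically.
             * With a sparse DMAP this can produce a physical address
             * that no memory segment backs.
             */
            if (VIRT_IN_DMAP(va))
                    return (DMAP_TO_PHYS(va));

            /* Otherwise resolve the address by walking the page tables. */
            return (pmap_extract(kernel_pmap, va));
    }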

andrew added a reviewer: markj.

You might want to change the commit message to say you're removing the DMAP check from pmap_kextract, as it was only correct when the DMAP region was contiguous, which is no longer true.

This revision is now accepted and ready to land. Dec 15 2020, 9:17 AM

Most users of pmap_kextract() know that the mapping is valid. For example, UMA uses pmap_kextract() very frequently to look up the NUMA domain of a given item so that it can maintain per-domain caches, and most items that it manages are mapped in the direct map. Now we have to do a full page table walk each time, and I suspect that will add measurable overhead to some workloads. It would probably be better to modify memrw() to use pmap_extract().
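
A hedged sketch of that alternative, assuming memrw() checks the DMAP alias of the requested physical address with pmap_extract() before touching it (simplified; the real memrw() iterates over the uio and handles device memory and other cases, and D27621 may differ):

    /*
     * Sketch of one chunk of the /dev/mem read/write path; v, cnt and
     * uio are assumed to come from the surrounding memrw() loop.
     */
    vm_offset_t va;
    int error;

    va = PHYS_TO_DMAP(v);                   /* v: requested physical address */
    if (pmap_extract(kernel_pmap, va) == 0)
            return (EFAULT);                /* not backed by RAM: fail the I/O */
    error = uiomove((void *)va, cnt, uio);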

How about something like D27621?

How is it guaranteed that an access of an invalid physical address will raise one of the exceptions handled by pmap_fault()?