There is a mismatch between the number of bounce pages counted by
_bus_dmamap_count_pages() and the number consumed by
bounce_bus_dmamap_load_buffer().
The problem has been observed on the RISC-V VisionFive v2 SoC, which has
memory physically addressed above 4GB and therefore requires some
bouncing for the dwmmc driver. That driver has a maximum segment size of
2048 bytes.
When attempting to load a page-aligned 4-page buffer that requires
bouncing, we can end up counting 4 bounce pages for an 8-segment
transfer. These pages will be incorrectly configured to cover only the
first half of the transfer (4 x 2048 bytes). With this change, 8 bounce
pages are allocated and set up.
Note that _bus_dmamap_count_phys() does not appear to have this problem,
as it clamps the segment size to dmat->common.maxsegsz.
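To make the arithmetic concrete, here is a stand-alone sketch (not the
actual busdma code; PAGE_SIZE, MAXSEGSZ and BUFLEN are illustrative
constants matching the dwmmc case above, and every page of the buffer is
assumed to need bouncing) contrasting page-granular counting with
maxsegsz-clamped consumption:

/*
 * Illustrative sketch only -- not the kernel's busdma code.  It models
 * the two counting strategies for a page-aligned 16 KiB buffer
 * (4 x 4 KiB pages) with a 2048-byte maximum segment size.
 */
#include <stdio.h>

#define PAGE_SIZE   4096u
#define MAXSEGSZ    2048u           /* dwmmc maximum segment size */
#define BUFLEN      (4 * PAGE_SIZE) /* page-aligned 4-page buffer */

int
main(void)
{
    unsigned int len, pages;

    /*
     * Page-granular count, as in _bus_dmamap_count_pages(): one bounce
     * page per physical page touched by the buffer.
     */
    pages = 0;
    for (len = BUFLEN; len != 0; len -= PAGE_SIZE)
        pages++;
    printf("bounce pages counted:  %u\n", pages);  /* 4 */

    /*
     * Segment-granular consumption, as in
     * bounce_bus_dmamap_load_buffer(): each segment is clamped to the
     * maximum segment size and uses its own bounce page, so the load
     * needs one bounce page per 2048-byte segment.
     */
    pages = 0;
    for (len = BUFLEN; len != 0; len -= MAXSEGSZ)
        pages++;
    printf("bounce pages consumed: %u\n", pages);  /* 8 */

    return (0);
}

Clamping the per-page step to the maximum segment size in the counting
loop, as _bus_dmamap_count_phys() already does, makes the two numbers
agree.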
Transactions must meet the following conditions in order for the
miscalculation to manifest:
1. Maximum segment size smaller than 1 page
2. Transfer size exceeding 1 segment
3. Buffer requires bouncing
4. Driver uses _bus_dmamap_load_buffer(), not _bus_dmamap_load_phys() or
   other variations
It seems unusual but not inconceivable that this exact combination has
not been encountered, or has gone unnoticed, on other architectures,
which also lack this check on the maximum segment size. For example, the
rockpro64 uses the dwmmc driver, but fails to meet condition 3, as its
memory is physically addressed below 4GB. Some other mmc drivers appear
to fail condition 1, etc.