
vtblk: Use busdma
ClosedPublic

Authored by cperciva on Sep 22 2022, 7:01 AM.
Details

Summary

We assume that bus addresses from busdma are the same thing as
"physical" addresses in the Virtio specification; this seems to
be true for the platforms on which Virtio is currently supported.

For block devices which are limited to a single data segment per
I/O, we request PAGE_SIZE alignment from busdma; this is necessary
in order to support unaligned I/Os from userland, which may cross
a boundary between two non-physically-contiguous pages. On devices
which support more data segments per I/O, we retain the existing
behaviour of limiting I/Os to (max data segs - 1) * PAGE_SIZE.

Diff Detail

Repository
rG FreeBSD src repository
Lint
Lint Passed
Unit
No Test Coverage
Build Status
Buildable 47678
Build 44565: arc lint + arc unit

Event Timeline

The first three commits in this stack (D36664, D36665, D36666) are purely refactoring and shouldn't result in any functional changes. This commit does the actual work of pushing requests through busdma.

bryanv added inline comments.
sys/dev/virtio/block/virtio_blk.c
1121

Nit: there may be enough repeated error checks after here that it is clearer to put the happy path in its own block, something like

if (error == 0) {
    if (ordered) {
        ...
    }

    if (req->vbr_busdma_wait) {
        ...
    }
    req->vbr_error = 0;
    return;
}

out:
...
req->vbr_error = error;
1276

0 -> BUS_DMA_WAITOK ?

This revision is now accepted and ready to land. Oct 3 2022, 3:31 AM
This revision now requires review to proceed. Oct 3 2022, 6:21 AM
sys/dev/virtio/block/virtio_blk.c
1276

Thanks! I didn't know that existed... it's not mentioned in bus_dma.9...
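For context, BUS_DMA_WAITOK is the named spelling of flag value 0 in FreeBSD's busdma interface, so passing it instead of a literal 0 is purely cosmetic. A non-runnable kernel-code sketch of the kind of load call being discussed (the softc/request field names here are assumptions for illustration, not copied from the patch):

```
/* Sketch only: BUS_DMA_WAITOK is the symbolic name for flag 0. */
error = bus_dmamap_load_bio(sc->vtblk_dma_tag, req->vbr_mapp,
    bp, vtblk_request_execute_cb, req, BUS_DMA_WAITOK);
```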

This revision is now accepted and ready to land. Oct 10 2022, 6:33 PM
This revision was automatically updated to reflect the committed changes.

@cperciva this change caused a regression on powerpc64 and powerpc64le (mountroot can't mount the filesystem).

Reverting this block to the old behavior makes powerpc64le work again:

diff --git a/sys/dev/virtio/block/virtio_blk.c b/sys/dev/virtio/block/virtio_blk.c
index 41229bb6da2c..846465f4a5fa 100644
--- a/sys/dev/virtio/block/virtio_blk.c
+++ b/sys/dev/virtio/block/virtio_blk.c
@@ -1069,18 +1069,11 @@ vtblk_request_execute_cb(void * callback_arg, bus_dma_segment_t * segs,
        sglist_append(sg, &req->vbr_hdr, sizeof(struct virtio_blk_outhdr));
 
        if (bp->bio_cmd == BIO_READ || bp->bio_cmd == BIO_WRITE) {
-               /*
-                * We cast bus_addr_t to vm_paddr_t here; this isn't valid on
-                * all platforms, but we believe it works on all platforms
-                * which are supported by virtio.
-                */
-               for (i = 0; i < nseg; i++) {
-                       error = sglist_append_phys(sg,
-                           (vm_paddr_t)segs[i].ds_addr, segs[i].ds_len);
-                       if (error || sg->sg_nseg == sg->sg_maxseg) {
-                               panic("%s: bio %p data buffer too big %d",
-                                   __func__, bp, error);
-                       }
+               error = sglist_append_bio(sg, bp);
+               if (error || sg->sg_nseg == sg->sg_maxseg) {
+                       (void)i;
+                       panic("%s: bio %p data buffer too big %d",
+                                       __func__, bp, error);
                }
 
                /* Special handling for dump, which bypasses busdma. */

Could you please take a look?

@cperciva
Since this affects running CURRENT in a VM, can you take a look at that issue?

Fixed in 9af32ef5643bf35e1a2687269f2339dfb007ef36 (2023-01-11).

Ah, thanks for the info!
I didn't want to upgrade my VMs for fear that I wouldn't be able to use them...

Now if I could only understand why qemu's vtblk devices are still misbehaving, I'd be set.