
bhyve: [snapshot] Use pci_next() to save/restore pci devices
Closed, Public

Authored by gusev.vitaliy_gmail.com on May 15 2023, 2:08 PM.

Details

Summary

Related to multiple devices support.

The current snapshot implementation doesn't support multiple devices of the same type, for example two virtio-blk devices, two CD-ROMs, etc.

So the following configuration cannot be restored:

bhyve -s 3:0,virtio-blk,disk.img -s 4,virtio-blk,disk2.img
In some cases the restore completes silently but the guest doesn't work; in other cases it fails during the restore stage.

This commit fixes that issue.

Sponsored by: vStack

Test Plan

The .meta file will then look like this, for instance:

{
  "basic metadata": {
    "ncpus": 2,
    "vmname": "ubuntu22",
    "memsize": 1073741824,
    "memflags": 0
  },
  "kern_structs": [
    {
      "device": "vhpet",
      "size": 284,
      "file_offset": 0
    },
    {
      "device": "vm",
      "size": 400,
      "file_offset": 284
    },
    {
      "device": "vioapic",
      "size": 388,
      "file_offset": 684
    },
    {
      "device": "vlapic",
      "size": 8338,
      "file_offset": 1072
    },
    {
      "device": "vmcx",
      "size": 1328,
      "file_offset": 9410
    },
    {
      "device": "vatpit",
      "size": 172,
      "file_offset": 10738
    },
    {
      "device": "vatpic",
      "size": 118,
      "file_offset": 10910
    },
    {
      "device": "vpmtmr",
      "size": 4,
      "file_offset": 11028
    },
    {
      "device": "vrtc",
      "size": 140,
      "file_offset": 11032
    }
  ],
  "devices": [
    {
      "device": "hostbridge@pci.0.0.0",
      "size": 452,
      "file_offset": 11172
    },
    {
      "device": "virtio-net@pci.0.4.0",
      "size": 66197,
      "file_offset": 11624
    },
    {
      "device": "ahci@pci.0.5.0",
      "size": 1531,
      "file_offset": 77821
    },
    {
      "device": "ahci@pci.0.5.1",
      "size": 1531,
      "file_offset": 79352
    },
    {
      "device": "lpc@pci.0.31.0",
      "size": 620,
      "file_offset": 80883
    },
    {
      "device": "fbuf@pci.2.7.0",
      "size": 16777668,
      "file_offset": 81503
    },
    {
      "device": "atkbdc",
      "size": 141,
      "file_offset": 16859171
    }
  ]
}

Diff Detail

Repository
rG FreeBSD src repository
Lint
Lint Not Applicable
Unit
Tests Not Applicable

Event Timeline

small nits - other than that, looks good

usr.sbin/bhyve/pci_emul.c
2396–2397

EPRINTLN or even drop the warning?

2411–2412

EPRINTLN or even drop the warning?

usr.sbin/bhyve/snapshot.c
894

this comment doesn't add much value

901

style nit - wrap return value in parens

913

EPRINTLN?

930

EPRINTLN?

1085

EPRINTLN

1131

comment doesn't add any value

usr.sbin/bhyve/snapshot.c
930

Don't you think the whole codebase should be moved to the err(3) family? Right now the bhyve code mixes warnx(), EPRINTLN, etc., but warnx() is the more native way.

Thanks

usr.sbin/bhyve/snapshot.c
930

It's preferable to use err(3) or warn(3) instead of hand-rolling them, this sentiment is dictated by style(9).

With that said, EPRINTLN() handles raw mode, and I believe it is only used in code that needs to print error/warning messages after the bhyve process is already up and running. I'm not certain on this, but that's my take on it.

D22657 explains briefly why EPRINTLN() was brought in.

This revision is now accepted and ready to land. May 16 2023, 4:27 PM