
riscv vmm: implement SBI RFNC
Closed, Public

Authored by br on Mon, Jan 13, 1:44 PM.

Details

Summary

This adds support for the SBI remote fence (RFNC) extension.

Test Plan

Tested on the EIC7700, although it is not clear whether the SoC actually needs it: the guest works fine with and without this support included.

Diff Detail

Lint: Skipped
Unit: Tests Skipped

Event Timeline

br requested review of this revision. Mon, Jan 13, 1:44 PM

> This adds remote fence extension support.

Some further explanation, either here or in a comment, would be helpful.

sys/riscv/vmm/riscv.h
71

_INVALID would be a bit clearer as a name here IMO, assuming that "UNK" means "unknown".

sys/riscv/vmm/vmm_fence.c
133

atomic_readandclear_32() is a bit clearer here IMO.

171

Probably this should be atomic_store_32 instead of set_32. The latter is an atomic OR, but it doesn't seem necessary.

194

Why is the mb needed?

sys/riscv/vmm/vmm_riscv.c
750

Need to destroy the mutex here too I believe.

sys/riscv/vmm/vmm_sbi.c
120

Why atomic? This bitset is on the stack.

br marked 5 inline comments as done. Tue, Jan 14, 9:31 PM
br added inline comments.
sys/riscv/vmm/vmm_fence.c
194

To ensure that the request is enqueued before the state of the vcpu is obtained on out-of-order CPUs?
vcpu_get_state() returns with the vcpu unlocked, so the vcpu could change its state before we even return from vcpu_get_state()?
If these assumptions are correct, then the vcpu could change from SLEEPING to RUNNING after the return from vcpu_get_state(). And if the two operations are reordered, an issue arises (i.e., we fail to interrupt a CPU that we need to interrupt)?

sys/riscv/vmm/riscv.h
104–105

Pack these two as a bitmask of the desired requests?

sys/riscv/vmm/vmm_fence.c
45

bool?

53

Normally one dequeues from the head and enqueues to the tail

55

What's wrong with *new_fence = *fence?

57–59
70

Same comments as dequeue

112

Why the parens?

149

bool

sys/riscv/vmm/vmm_sbi.c
100

Unused

224

... and the return value isn't even checked for these

sys/riscv/vmm/vmm_fence.c
197

Here too, I don't see why this set needs to be atomic.

208

running_vcpus is a bitset of vcpu IDs, but smp_rendezvous() takes a bitset of physical CPU IDs, so this looks wrong. There is no reason to expect vCPU 0 to be running on CPU 0.

sys/riscv/vmm/vmm_fence.c
208

(There is a vm_smp_rendezvous() on amd64 for this purpose, but it was not ported. I will eventually merge this into sys/dev/vmm, but would like to finish with D48270 first.)

br marked 13 inline comments as done.

Address @jrtc27 and @markj comments (Thanks!)

sys/riscv/vmm/vmm_sbi.c
100

It was used as a return value to the guest (instead of the C function return code).
I've fixed it to use the C return code, and will fix the rest of the SBI requests in a separate commit.

This revision was not accepted when it landed; it landed in state Needs Review. Tue, Jan 21, 10:36 AM
This revision was automatically updated to reflect the committed changes.
sys/riscv/vmm/vmm_fence.c
208

This is still wrong, just less wrong than before. Once the vcpu is unlocked, there is no guarantee about which host CPU the vcpu is running on. That's why vm_smp_rendezvous() uses vcpu_notify_event().

sys/riscv/vmm/vmm_fence.c
208

I understand, but once we have the request enqueued we don't care if the vcpu changes its host CPU, because in that case it will process the enqueued SBI request on entry to the guest anyway. In this scenario we may indeed interrupt the wrong CPU for no reason, but we don't lose instruction/data synchronization in the guest.