
gve: Add DQO QPL support
Accepted · Public

Authored by shailend_google.com on Sep 17 2024, 6:54 PM.
Tags
None

Details

Reviewers
markj
emaste
delphij
lwhsu
kibab
kib
Group Reviewers
network
Summary

DQO is the descriptor format for our next-generation virtual NIC.
It is necessary to make full use of the hardware bandwidth on many
newer GCP VM shapes.

This patch extends the previously introduced DQO descriptor format
with a "QPL" mode. QPL stands for Queue Page List and refers to
the fact that the hardware cannot access arbitrary regions of host
memory and instead expects a fixed bounce buffer made up of a list
of pages.

The QPL aspects are similar to the existing GQI queue format: in
the Rx path, the mbufs handed up the stack have external storage
in the form of vm pages attached to them, and in the Tx path the
mbuf payload is always copied into QPL pages.

Signed-off-by: Shailend Chand <shailend@google.com>

Diff Detail

Repository
rG FreeBSD src repository
Lint
Lint Skipped
Unit
Tests Skipped
Build Status
Buildable 60023
Build 56907: arc lint + arc unit

Event Timeline

shailend_google.com retitled this revision from "[RFC] gve: Add DQO QPL support" to "gve: Add DQO QPL support". Tue, Oct 15, 8:25 PM
shailend_google.com added a reviewer: kib.

Removed the RFC prefix from the commit message
Rebased onto parent

How exactly does the receive path compare to GQI mode? Do I understand correctly that there is a fixed pool of pages that can be used to receive data, and that's it? If so, what happens when the pool runs out? In the GQI case it looks like the driver falls back to allocating regular mbufs if it can't flip an external gve mbuf, but I'm not sure.

sys/dev/gve/gve_rx_dqo.c
224

There should be an extra newline after variable declarations.

sys/dev/gve/gve_tx_dqo.c
873

Why not use atomic_cmpset_rel_32() instead of having a separate fence?
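
For context, atomic_cmpset_rel_32() folds release ordering into the compare-and-set itself, whereas a standalone fence such as atomic_thread_fence_rel() orders prior stores independently of any particular store. A minimal sketch of the two patterns, using a hypothetical flag word rather than anything taken from the driver:

#include <sys/types.h>
#include <machine/atomic.h>

/* Hypothetical flag word; not a field from the gve driver. */
static volatile uint32_t flags;

/* Pattern 1: separate release fence, then a plain compare-and-set. */
static int
publish_with_fence(uint32_t oldval, uint32_t newval)
{

	atomic_thread_fence_rel();	/* order prior stores before the update */
	return (atomic_cmpset_32(&flags, oldval, newval));
}

/* Pattern 2: release ordering folded into the compare-and-set. */
static int
publish_with_rel_cas(uint32_t oldval, uint32_t newval)
{

	return (atomic_cmpset_rel_32(&flags, oldval, newval));
}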

shailend_google.com marked 2 inline comments as done.

Address Mark's review comments

This is similar to GQI_QPL in that the NIC only recognizes a fixed set of pre-negotiated pages (QPL: queue page list) as valid buffers. So in the Tx path, every output mbuf's contents are copied into the QPL. In GQI, the QPL pages are mapped into a contiguous memory region ("fifo"); this is not possible in DQO_QPL because the completions for Tx packets might arrive out of order. So the page fragments used for each packet are explicitly marked as "in use" (pending_pkt) and reaped when a completion arrives.
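
A rough sketch of that bookkeeping, with illustrative names (the structure layout and free_qpl_fragment() are hypothetical, not the driver's actual definitions):

#include <sys/param.h>
#include <sys/mbuf.h>

/*
 * Illustrative only: each in-flight Tx packet records which QPL page
 * fragments hold the copy of its payload, so those fragments can be
 * returned whenever its completion arrives, regardless of send order.
 */
struct pending_pkt {
	int	qpl_frag[8];	/* QPL fragments holding the payload copy */
	int	nfrags;
	bool	in_use;
};

static void	free_qpl_fragment(int frag);	/* hypothetical helper */

static void
reap_tx_completion(struct pending_pkt *pkts, int compl_tag)
{
	struct pending_pkt *pkt = &pkts[compl_tag];
	int i;

	for (i = 0; i < pkt->nfrags; i++)
		free_qpl_fragment(pkt->qpl_frag[i]);
	pkt->in_use = false;
}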

In Rx, GQI_QPL has the same number of pages as ring entries. On receiving a packet fragment in a ring entry, if the other half of the page is not usable (the TCP/IP stack or a socket is not yet done with it), the driver copies the contents out of the page into a freshly allocated cluster mbuf and posts the same half-page back to the NIC. DQO_QPL is similar in that if there are no QPL pages to post to the NIC, it too resorts to copying the contents of the arriving buffers into cluster mbufs and posting those buffers back. It differs from GQI_QPL in its out-of-order nature (there is a distinct completion ring) and in having more pages than ring entries to reduce the incidence of copying: a particular QPL page is not tied to a ring index as in GQI_QPL.
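
A rough sketch of the Rx fallback just described, assuming hypothetical helpers (struct qpl_buf, wrap_qpl_buf(), repost_qpl_buf()) that do not match the driver's names; only m_getcl(), mtod() and memcpy() are standard kernel interfaces:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>

struct qpl_buf {			/* hypothetical wrapper */
	void	*va;			/* kernel VA of the QPL buffer */
};

/* Hypothetical helpers standing in for the driver's real routines. */
static struct mbuf	*wrap_qpl_buf(struct qpl_buf *buf, int off, int len);
static void		repost_qpl_buf(struct qpl_buf *buf);

static struct mbuf *
rx_build_mbuf(struct qpl_buf *buf, int off, int len, bool have_spare_buf)
{
	struct mbuf *m;

	if (have_spare_buf) {
		/* Zero copy: hand the QPL buffer up as external storage. */
		return (wrap_qpl_buf(buf, off, len));
	}

	/*
	 * Fallback: copy into a cluster mbuf (assumes len <= MCLBYTES)
	 * so the QPL buffer can be posted back to the NIC immediately.
	 */
	m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR);
	if (m == NULL)
		return (NULL);
	memcpy(mtod(m, void *), (char *)buf->va + off, len);
	m->m_len = len;
	m->m_pkthdr.len = len;
	repost_qpl_buf(buf);
	return (m);
}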

sys/dev/gve/gve_tx_dqo.c
873

Thanks!

markj added inline comments.
sys/dev/gve/gve_tx_dqo.c
871

Some comment explaining the barriers and what they synchronize would be useful.

This revision is now accepted and ready to land. Tue, Nov 5, 3:32 PM

Add a couple of comments suggested by Mark

This revision now requires review to proceed. Tue, Nov 5, 5:33 PM

If you mail me git-formatted patches, I'm happy to apply them.

This revision is now accepted and ready to land. Tue, Nov 5, 6:37 PM