
Support for NS_MOREFRAG in Netmap bridge utility
Abandoned · Public

Authored by rajesh1.kumar_amd.com on Jan 1 2021, 12:16 PM.

Details

Summary

The current bridge utility doesn't seem to support packets that have been split into multiple fragments.

The presence of multiple fragments is indicated by the NS_MOREFRAG flag.

Test Plan

Can be tested with any driver that splits packets into multiple fragments (e.g., when the driver supports split header).
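
For reference, the netmap bridge tool is typically run between two netmap ports, for example (the interface names here are placeholders):

    bridge -i netmap:ax0 -i netmap:ax1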

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint
Lint Passed
Unit
No Test Coverage
Build Status
Buildable 35834
Build 32723: arc lint + arc unit

Event Timeline

Good catch, thanks.
I think the existing line 113 (copying the NS_MOREFRAG flag) was never tested, because it copies the flag the other way around.

However, I think the change you proposed for the copy case (no zerocopy) is not correct, because there is no guarantee that you have space in the destination netmap buffer (txbuf). You could add a boundary check, but in any case this would change the program's semantics and add complexity.

I prepared a much simpler patch that should fix your problem:
https://reviews.freebsd.org/D27980
Could you please give it a try (for both the zerocopy and copy cases)?
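
For illustration, here is a minimal sketch of the forwarding step with the flag propagated in the right direction. This is an assumption-laden sketch, not the actual D27980 patch; the function name forward_slot is hypothetical, while the slot fields, NETMAP_BUF() and nm_pkt_copy() come from net/netmap_user.h:

    /*
     * Hypothetical sketch, NOT the actual D27980 patch: forward one
     * slot from an RX ring to a TX ring, propagating NS_MOREFRAG from
     * source to destination so multi-slot packets survive the bridge.
     */
    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>

    static void
    forward_slot(struct netmap_ring *rxring, struct netmap_ring *txring,
        uint32_t rxcur, uint32_t txcur, int zerocopy)
    {
        struct netmap_slot *rs = &rxring->slot[rxcur];
        struct netmap_slot *ts = &txring->slot[txcur];

        if (zerocopy) {
            /* Swap buffer indices instead of copying the payload. */
            uint32_t idx = ts->buf_idx;
            ts->buf_idx = rs->buf_idx;
            rs->buf_idx = idx;
            /* Both slots now point at different buffers. */
            ts->flags |= NS_BUF_CHANGED;
            rs->flags |= NS_BUF_CHANGED;
        } else {
            /*
             * Copy path: this assumes the destination buffer is big
             * enough, which is exactly the guarantee discussed above.
             */
            nm_pkt_copy(NETMAP_BUF(rxring, rs->buf_idx),
                NETMAP_BUF(txring, ts->buf_idx), rs->len);
        }
        ts->len = rs->len;
        /* Propagate NS_MOREFRAG rx -> tx, not the other way around. */
        if (rs->flags & NS_MOREFRAG)
            ts->flags |= NS_MOREFRAG;
        else
            ts->flags &= ~NS_MOREFRAG;
    }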

@vmaffione, thanks for the simplified patch. Network bridging with and without zerocopy is working properly (it needs the D27799 netmap_txsync changes).

Actually, the changes to the non-zerocopy case were made before the iflib changes in netmap_txsync, so we retained them. Anyway, we can switch to your simplified change.

Will you be pushing that change?

Just a question regarding the destination netmap buffer size. In my case, I verified these copy-case changes with the axgbe driver with split header enabled. The fragments in this case will sum up to at most the receive buffer size (which is by default 2048 when the MTU is 1500 and 4096 when the MTU is 9000). Is it possible for other drivers to have a sum of all fragments larger than the receive buffer size? If not, we may not run into a situation where the destination buffer doesn't suffice, right?

The change at https://reviews.freebsd.org/D27980 is necessary independently of D27799, although of course you need the latter to perform your tests.

> Will you be pushing that change?

Done. We should therefore close this review.

> Is it possible for other drivers to have a sum of all fragments larger than the receive buffer size?

When supported by the driver, NS_MOREFRAG allows your netmap application to send or receive packets spanning multiple netmap slots. One of the use cases is supporting large MTUs, say up to 64K (if the NIC supports it), while always using 2KB netmap buffers. This is something independent of the driver: once a driver supports NS_MOREFRAG (e.g., NAF_MOREFRAG is set in its struct netmap_adapter), the application can do that.
So yes, it is always possible for the sum of all fragments to be larger than a single receive buffer.
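
To make the multi-slot model concrete, a hedged sketch of the receive side follows: NS_MOREFRAG stays set on every slot of a packet except the last, so a reader keeps consuming slots until it clears. read_packet is a hypothetical helper; the ring fields, NETMAP_BUF() and nm_ring_next() come from net/netmap_user.h:

    /*
     * Hypothetical sketch: gather one packet that may span several
     * netmap slots into a caller-supplied buffer.
     */
    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>
    #include <string.h>

    static int
    read_packet(struct netmap_ring *ring, char *pkt, size_t pktlen)
    {
        uint32_t cur = ring->cur;
        size_t off = 0;

        while (cur != ring->tail) {
            struct netmap_slot *slot = &ring->slot[cur];

            if (off + slot->len > pktlen)
                return -1;        /* caller's buffer too small */
            memcpy(pkt + off, NETMAP_BUF(ring, slot->buf_idx), slot->len);
            off += slot->len;
            cur = nm_ring_next(ring, cur);
            if ((slot->flags & NS_MOREFRAG) == 0)
                break;            /* last fragment of this packet */
            /*
             * If we reach ring->tail with NS_MOREFRAG still set, a
             * real application would poll for the missing fragments.
             */
        }
        /* Return the consumed slots to the kernel. */
        ring->head = ring->cur = cur;
        return (int)off;
    }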


> Done. We should therefore close this review.

Sure. I am abandoning this revision. Let me know if any further action is needed.

> So yes, it is always possible for the sum of all fragments to be larger than a single receive buffer.

Thanks for the explanation, @vmaffione.

Sorry about the confusion here. The fragmentation I meant is the split-header case. I understand that the sum of all fragments split based on the MTU size/receive buffer size will be more than a single receive buffer size. But I meant a single fragment being split again in the split-header case. For example: if the receive buffer is configured as 4K, I may receive a 9K packet split into 3 fragments (roughly 4K, 4K and 1K). But each of these fragments can again be split into two fragments in the split-header case (roughly 256B header + rest of the 4K data, 0B header + 4K data, 0B header + 1K data). I meant this fragmentation.

In my netmap bridging test, the fragments I am handling are the split-header fragments. So I thought the sum of these fragments would not exceed the receive buffer size and made those "copy" path changes. So, can we use NS_MOREFRAG for these fragments, or should they be handled at the iflib/driver level?

Since I am abandoning this revision, shall we take this discussion to the D27799 thread?

Abandoning this revision, as the necessary fix was done as part of D27980.

Thanks @vmaffione for the review and fix.

> So, can we use NS_MOREFRAG for these fragments, or should they be handled at the iflib/driver level?
Ok, but "bridge" is a generic program that cannot do such assumptions.

You are always free to write/fork a custom applications which does
application-specific, or even driver-specific assumptions.

> Since I am abandoning this revision, shall we take this discussion to the D27799 (https://reviews.freebsd.org/D27799) thread?

Yes, D27799 brings NS_MOREFRAG support to iflib, and it is therefore useful (once the panics are gone), irrespective of axgbe.
On the other hand, I understood you don't really need NS_MOREFRAG on axgbe because you plan to disable split header.
