Differential D17883
cxgbe: Flush transmitted packets more regularly in netmap mode
Authored by markj on Nov 7 2018, 7:54 AM.

Details

Previously, when transmitting short runs of packets via cxgbe_nm_tx(),
we would wait until a large number of packets were buffered before
scheduling a task to clean transmit buffers.

Obtained from: np
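As a reading aid, here is a small self-contained sketch of the heuristic this review discusses: instead of deferring reclamation until many packets are buffered, a flush is requested whenever a fixed number of descriptors have accumulated since the last request. The names idxdiff, FLUSH_THRESH, and the driving loop are illustrative, not the driver's code; idxdiff is modeled on the driver's NMIDXDIFF ring-distance macro, and 64 is the hardcoded threshold mentioned in the comments below.

	/*
	 * Illustrative sketch of the lazy-flush-with-threshold idea.
	 * Hypothetical names; not the committed t4_netmap.c change.
	 */
	#include <stdio.h>

	#define RING_SIZE	512
	#define FLUSH_THRESH	64	/* mirrors the hardcoded 64 below */

	/* Ring distance from 'tail' to 'head', modulo the ring size. */
	static unsigned
	idxdiff(unsigned head, unsigned tail)
	{
		return (head >= tail ? head - tail : RING_SIZE - tail + head);
	}

	int
	main(void)
	{
		unsigned pidx = 0;	/* producer index */
		unsigned equeqidx = 0;	/* index at the last flush request */
		int pkt;

		for (pkt = 0; pkt < 1000; pkt++) {
			pidx = (pidx + 1) % RING_SIZE;	/* "transmit" one packet */
			if (idxdiff(pidx, equeqidx) >= FLUSH_THRESH) {
				/* Enough credits accumulated; ask for a flush. */
				printf("flush requested at pidx %u\n", pidx);
				equeqidx = pidx;
			}
		}
		return (0);
	}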
Event Timeline

@brd what kind of workload do you see the improvements with?

There are no updates here, but I discussed this (and a couple of other
t4_netmap.c changes) with the author and submitter at vBSDCon 2019, and we
thought it best to leave this as a private change in their repo at that time.
If this helps other users too, we could replace the on/off style
lazy_tx_credit_flush with a threshold at which to flush the credits, instead
of hardcoding it to 64.

I believe Brad was just hoping to see this committed on behalf of the
submitter. Apparently this patch has been used in production for several
years now.
Is your suggestion to add a new lazy_tx_credit_flush_thresh sysctl, used as
follows?

	if (npkt == 0 && npkt_remaining == 0) {
		/* All done. */
		if (lazy_tx_credit_flush == 0) {
			wr->equiq_to_len16 |= htobe32(F_FW_WR_EQUEQ |
			    F_FW_WR_EQUIQ);
			nm_txq->equeqidx = nm_txq->pidx;
			nm_txq->equiqidx = nm_txq->pidx;
		} else if (NMIDXDIFF(nm_txq, equeqidx) >=
		    lazy_tx_credit_flush_thresh) {
			wr->equiq_to_len16 |= htobe32(F_FW_WR_EQUEQ);
			nm_txq->equeqidx = nm_txq->pidx;
		}
		ring_nm_txq_db(sc, nm_txq);
		return;
	}
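For reference, a minimal sketch of how such a tunable might be declared,
assuming the new knob would sit beside the existing lazy_tx_credit_flush
tunable under the driver's hw.cxgbe sysctl tree; the name
lazy_tx_credit_flush_thresh and the default of 64 come from the discussion
above, and this is not a committed form:

	/*
	 * Sketch only -- assumes the hw.cxgbe sysctl node declared by the
	 * driver; the default of 64 matches the hardcoded value discussed
	 * above.
	 */
	#include <sys/param.h>
	#include <sys/kernel.h>
	#include <sys/sysctl.h>

	SYSCTL_DECL(_hw_cxgbe);		/* the hw.cxgbe node */

	static int lazy_tx_credit_flush_thresh = 64;
	SYSCTL_INT(_hw_cxgbe, OID_AUTO, lazy_tx_credit_flush_thresh,
	    CTLFLAG_RWTUN, &lazy_tx_credit_flush_thresh, 0,
	    "#tx credits to accumulate before requesting an EQ credit flush");

Since CTLFLAG_RWTUN makes it both a sysctl and a loader tunable, such a knob
could then be adjusted at runtime with
sysctl hw.cxgbe.lazy_tx_credit_flush_thresh=N or set from loader.conf.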