[vortex-bug] tbusy and tx_full
Bogdan Costescu
Bogdan.Costescu@IWR.Uni-Heidelberg.De
Thu, 18 May 2000 18:29:10 +0200 (CEST)
On Thu, 18 May 2000, Donald Becker wrote:
> The dev->tbusy flag isn't just for the driver to lock its transmit routine.
> It is used to tell the software queue layer that the device cannot accept
> another packet to transmit. The driver should leave dev->tbusy set when it
> cannot accept more packets, and clear it in the interrupt routine as Tx
> space becomes available.
Errr... then why do we check for tbusy at the beginning of start_xmit? If
the upper layer knows that it cannot queue another packet, it should not
call start_xmit. Or does this still happen (for reasons that I don't
quite understand)?
And what about "return 1;" in this case and after testing tx_full ?
If the upper level still calls start_xmit and start_xmit returns 1, the
upper level will retry, right? So why don't we do it in every case that we
cannot handle ?
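For reference, the start_xmit side of the handshake looks roughly like
this in the 2.2-era drivers (a sketch from memory, not the exact 3c59x
source; field names are the usual ones):

    static int vortex_start_xmit(struct sk_buff *skb, struct device *dev)
    {
        struct vortex_private *vp = (struct vortex_private *)dev->priv;
        int entry;

        /* tbusy doubles as the re-entry lock: a nonzero return
           asks the queue layer to keep the skb and retry later */
        if (test_and_set_bit(0, (void *)&dev->tbusy) != 0)
            return 1;

        entry = vp->cur_tx % TX_RING_SIZE;
        /* ... fill tx_ring[entry] and start the download ... */
        vp->cur_tx++;

        if (vp->cur_tx - vp->dirty_tx >= TX_RING_SIZE - 1)
            vp->tx_full = 1;            /* leave tbusy set */
        else
            clear_bit(0, (void *)&dev->tbusy);

        dev->trans_start = jiffies;
        return 0;                       /* packet accepted */
    }

So "return 1" only helps if the queue layer actually honours it and
requeues the packet, which is exactly what I'm asking about.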
> Because dev->tbusy is used by the queue layer, and the private lp->tx_full
> is used to tell the interrupt handler that it is safe to clear dev->tbusy.
That's one of the mysteries for me. You said tbusy was only there to
allow locking of start_xmit WRT re-entry. Now you also say that it is
used by the upper level to NOT call start_xmit, and that the upper level
should be informed when this condition is gone. Then why does the upper
level do the checking on tbusy, when this could be done on tx_full
instead and tbusy kept as a start_xmit lock only?
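And the interrupt-handler side, as I read it, then does something like
this when Tx space becomes available (again a sketch; the "- 4" slack is
precisely the hysteresis under discussion):

    /* in the interrupt handler, after advancing vp->dirty_tx */
    if (vp->tx_full && vp->cur_tx - vp->dirty_tx < TX_RING_SIZE - 4) {
        vp->tx_full = 0;                   /* private: ring has room */
        clear_bit(0, (void *)&dev->tbusy); /* tell the queue layer   */
        mark_bh(NET_BH);                   /* and kick it to refill  */
    }

If tx_full alone were what the upper level looked at, tbusy could stay a
pure start_xmit lock.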
> Triggering the BH to run is somewhat expensive. With hysteresis you allow
> the queue layer to do a chunk of work at once, rather than constantly
> cycling from the interrupt handler to the queue layer refill.
Yes, and you increase the latency. You know, the Beowulf guys are
sensitive to this...
So, you increase the latency and reduce the system load. This is a
fair trade IMHO (like the interrupt mitigation), but couldn't this (the
hysteresis) be a run-time tunable parameter?
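Something along these lines would be enough (tx_hysteresis is a
hypothetical knob name; MODULE_PARM is the standard mechanism, the rest
is a sketch):

    /* hypothetical load-time tunable for the refill hysteresis */
    static int tx_hysteresis = 4;
    MODULE_PARM(tx_hysteresis, "i");

    /* then, in the interrupt handler: */
    if (vp->tx_full &&
        vp->cur_tx - vp->dirty_tx < TX_RING_SIZE - tx_hysteresis) {
        ...
    }

Latency-sensitive people could then do e.g.
"insmod 3c59x.o tx_hysteresis=1" without recompiling the driver.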
And what about the M-VIA and S-Core guys who are using the kernel drivers
as a starting point for their low-latency communication software; do they
know about this "feature"?
> Occasionally (hmmm, three years ago -- time to re-test) I do tests to make
> certain that the various driver constants are still good choices. One test
> I run is with the Tx packet queue length. With 100Mbps Ethernet and typical
> TCP/IP traffic, the driver sees diminishing returns once the driver queue
> length reaches 5-7 entries. More than 10 entries almost never increases
> performance.
Is this with or without interrupt mitigation?
> Yes, the current 3c59x interrupt handler has the very bad behavior of
> doing the following while cleaning up the Tx queue
> if (inl(ioaddr + DownListPtr) == virt_to_bus(&vp->tx_ring[entry]))
> This is an expensive PCI read for every packet free. It mostly made sense
> with the 3c905, but it's not needed at all with the 3c905B/C.
That's why I proposed two methods for avoiding this:
1. Use the PktID field in the DPD FSH as the index of the DPD in tx_ring.
This limits the ring size to 256 entries (which is more than enough
anyway IMHO). You read a single register which gives you the PktID of the
DPD currently being processed, then free all DPDs between dirty_tx and
that PktID. Only one PCI read.
2. Use the dnComplete bit in the DPD FSH. This is set when the NIC
finishes processing that DPD (warning: the B and C revisions have
different download sequences!). You keep the existing loop, but you make
	tx_ring[entry].status & dnComplete
the condition in the "if" (sketched below). This does not impose the
tx_ring size limit, and needs no PCI read at all.
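For method 2, the cleanup loop would become roughly the following (a
sketch only; dnComplete stands for the mask of the FSH downloadComplete
bit, whose value I haven't copied from the databook here):

    /* Tx cleanup, method 2: test dnComplete in the DPD status word
       instead of reading DownListPtr over the PCI bus */
    unsigned int dirty_tx = vp->dirty_tx;

    while (vp->cur_tx - dirty_tx > 0) {
        int entry = dirty_tx % TX_RING_SIZE;
        if (!(le32_to_cpu(vp->tx_ring[entry].status) & dnComplete))
            break;              /* NIC not done with this DPD yet */
        if (vp->tx_skbuff[entry]) {
            dev_kfree_skb(vp->tx_skbuff[entry]);
            vp->tx_skbuff[entry] = NULL;
        }
        dirty_tx++;
    }
    vp->dirty_tx = dirty_tx;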
Sincerely,
Bogdan Costescu
IWR - Interdisziplinaeres Zentrum fuer Wissenschaftliches Rechnen
Universitaet Heidelberg, INF 368, D-69120 Heidelberg, GERMANY
Telephone: +49 6221 54 8869, Telefax: +49 6221 54 8868
E-mail: Bogdan.Costescu@IWR.Uni-Heidelberg.De