8139 driver problems in SMP systems???
sharkey@ale.physics.sunysb.edu
Mon May 24 06:42:45 1999
Hi,
I'm not sure if this list is active or not...
Anyway, if anyone's out there, I've got four machines with Realtek RTL8139
ethernet cards. Two of them are dual processor SMP systems, and two are
uniprocessor systems. The uniprocessor machines seem to work well, but the
SMP machines are having problems.
At first I was using the stock driver that shipped with the latest stable
kernels, and the result was that after several hours of successful usage,
the net would just disappear. It was as if the cable was pulled out.
One of the two SMP machines is under reasonably heavy load, and would lose its
connection with a mean time to failure of about 4-6 hours. The other system,
less heavily utilized, would stay up for a few days at a time.
Unfortunately, I had to switch the heavy-traffic machine back to its old 10Mbps
card, since a stable 10Mbit system is more important than an unstable
100Mbit one. The other machine I left running with the 8139 for debugging.
This seems to be a rather common problem, since I was able to find many
references to this sort of thing on deja. (Although none of the posts I
read explicitly correlated this with SMP.)
I have since switched to version 1.06c of the RTL8139 driver, and performance
has improved and worsened simultaneously. The machine that was running fine
for a few days at a time now has problems about once an hour, but
when it has problems now, they aren't so severe. Ping times increase
dramatically, but they are less than infinity.
lbpaper# ping lbscissors
PING lbscissors.kek.jp (130.87.218.173): 56 data bytes
64 bytes from 130.87.218.173: icmp_seq=0 ttl=255 time=2063.9 ms
64 bytes from 130.87.218.173: icmp_seq=1 ttl=255 time=1770.1 ms
64 bytes from 130.87.218.173: icmp_seq=2 ttl=255 time=770.4 ms
64 bytes from 130.87.218.173: icmp_seq=3 ttl=255 time=700.9 ms
64 bytes from 130.87.218.173: icmp_seq=4 ttl=255 time=2000.0 ms
64 bytes from 130.87.218.173: icmp_seq=5 ttl=255 time=1800.1 ms
64 bytes from 130.87.218.173: icmp_seq=6 ttl=255 time=1000.1 ms
64 bytes from 130.87.218.173: icmp_seq=7 ttl=255 time=1901.7 ms
64 bytes from 130.87.218.173: icmp_seq=8 ttl=255 time=1605.4 ms
64 bytes from 130.87.218.173: icmp_seq=9 ttl=255 time=1301.8 ms
64 bytes from 130.87.218.173: icmp_seq=10 ttl=255 time=1701.5 ms
64 bytes from 130.87.218.173: icmp_seq=11 ttl=255 time=1899.8 ms
64 bytes from 130.87.218.173: icmp_seq=12 ttl=255 time=1101.2 ms
64 bytes from 130.87.218.173: icmp_seq=13 ttl=255 time=1000.1 ms
64 bytes from 130.87.218.173: icmp_seq=14 ttl=255 time=2066.2 ms
64 bytes from 130.87.218.173: icmp_seq=15 ttl=255 time=1492.9 ms
64 bytes from 130.87.218.173: icmp_seq=16 ttl=255 time=1000.1 ms
64 bytes from 130.87.218.173: icmp_seq=17 ttl=255 time=1402.4 ms
64 bytes from 130.87.218.173: icmp_seq=18 ttl=255 time=1587.4 ms
64 bytes from 130.87.218.173: icmp_seq=19 ttl=255 time=1506.0 ms
64 bytes from 130.87.218.173: icmp_seq=20 ttl=255 time=1205.4 ms
--- lbscissors.kek.jp ping statistics ---
22 packets transmitted, 21 packets received, 4% packet loss
round-trip min/avg/max = 700.9/1470.3/2066.2 ms
Then I run a shell script which does "ifconfig down; ifconfig up" and
restores the routing table (a sketch of it follows the ping output below),
and all is well again:
lbpaper# /root/bin/fix_net
lbpaper# ping lbscissors
PING lbscissors.kek.jp (130.87.218.173): 56 data bytes
64 bytes from 130.87.218.173: icmp_seq=0 ttl=255 time=0.3 ms
64 bytes from 130.87.218.173: icmp_seq=1 ttl=255 time=0.4 ms
64 bytes from 130.87.218.173: icmp_seq=2 ttl=255 time=0.4 ms
64 bytes from 130.87.218.173: icmp_seq=3 ttl=255 time=0.3 ms
64 bytes from 130.87.218.173: icmp_seq=4 ttl=255 time=0.9 ms
64 bytes from 130.87.218.173: icmp_seq=5 ttl=255 time=0.3 ms
64 bytes from 130.87.218.173: icmp_seq=6 ttl=255 time=0.2 ms
--- lbscissors.kek.jp ping statistics ---
7 packets transmitted, 7 packets received, 0% packet loss
round-trip min/avg/max = 0.2/0.4/0.9 ms
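For reference, fix_net is nothing fancy. It's roughly the following (a sketch;
the interface name, address, and gateway here are placeholders, not my real
config):

#!/bin/sh
# fix_net (sketch): bounce the interface and restore routing.
# "ifconfig down" flushes the routes for the interface, so the
# default route has to be re-added after it comes back up.
ifconfig eth0 down
ifconfig eth0 192.168.0.2 netmask 255.255.255.0 up   # placeholder address
route add default gw 192.168.0.1                     # placeholder gateway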
Now, what I find most unusual about the long ping times above is that
they don't appear to be random. They are too close to integer multiples of
100ms. This looks like some sort of Nagle algorithm correction gone wild.
(Yeah, I know Nagle is implemented at the TCP level and won't affect ping;
what I was really thinking of is its link level equivalent, Ethernet's
exponential backoff, which slows down transmission after collision detection.)
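In case anyone wants to see the quantization for themselves, this is the sort
of quick check I mean (a sketch; it just pipes ping through awk and prints
each round-trip time's offset from the nearest multiple of 100ms):

ping -c 20 lbscissors | awk -F'time=' '/time=/ {
    split($2, a, " "); t = a[1] + 0
    n = int(t / 100 + 0.5) * 100    # nearest multiple of 100ms
    printf "%8.1f ms  nearest multiple %4d, offset %+6.1f ms\n", t, n, t - n
}'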
I haven't yet done any deep digging to find the root of this problem.
Any suggestions?
Eric Sharkey