RE: memory (mbuf) leak in fxp driver.
It takes about a week... but the 2 factors that really magnify the problem are:
1. it's running tcpdump all the time
2. it's running at 10 MBit
there may be some additional factors here as well...
* tcpdump is being killed & restarted every hour
* the leak also seems to happen faster on networks with LOW network
* It may be related to the revision level of the card (Intel 82557 rev 0x08, per the dmesg below)
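Since the leak takes about a week to show and tcpdump is restarted hourly, one way to correlate the two is to log the mbuf count from netstat -m on the same hourly schedule. This is a sketch only; the function names are my own, and the awk pattern assumes the netstat -m output format shown later in this thread:

```shell
#!/bin/sh
# Sketch: sample "mbufs in use" periodically so growth can be lined up
# against the hourly tcpdump restarts. Not tested against the affected
# box; the awk pattern assumes the "NNN mbufs in use:" line format
# shown in the netstat -m output below.

mbufs_in_use() {
    # Print the leading number from the "NNN mbufs in use:" line.
    awk '/mbufs in use:/ { print $1; exit }'
}

log_sample() {
    # One timestamped sample; run this from cron alongside the
    # tcpdump restart, e.g.:
    #   0 * * * *  /usr/local/sbin/log-mbufs.sh >> /var/log/mbufs.log
    printf '%s %s\n' "$(date)" "$(netstat -m | mbufs_in_use)"
}
```

If the logged count climbs in hourly steps that match the tcpdump restarts, that would point at mbufs pinned by the old bpf listener rather than a steady per-packet leak.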
From: Rob Paisley [mailto:firstname.lastname@example.org]
Sent: Tuesday, May 22, 2001 11:09 PM
To: Benninghoff, John
Subject: Re: memory (mbuf) leak in fxp driver.
How long does the box have to be up to fill the buffer??
I've got a card that uses the fxp driver, and it's gotten to uptimes of
100+ days without problems. Now it's a firewall, with TWO cards that use
the fxp driver, and I still haven't seen any problem.
Not sure what to tell you. netstat -m for me gives the following output:
123 mbufs in use:
100 mbufs allocated to data
14 mbufs allocated to packet headers
9 mbufs allocated to socket names and addresses
102/342 mapped pages in use
699 Kbytes allocated to network (31% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
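The "mapped pages in use" line is the one to watch, since John's box dies when it hits the NMBCLUSTERS ceiling. A hedged sketch of a check that warns before that point (the helper name and the 90% threshold are my own assumptions, and the parsing assumes the netstat -m format shown above):

```shell
#!/bin/sh
# Sketch: warn when mb_map is close to full, before the "mb_map full"
# kernel error described in this thread. Parses the
# "NNN/MMM mapped pages in use" line of netstat -m.

check_mb_map() {
    awk '/mapped pages in use/ {
        split($1, p, "/")                 # p[1] = in use, p[2] = total
        pct = 100 * p[1] / p[2]
        printf "mapped pages: %d/%d (%.0f%%)\n", p[1], p[2], pct
        if (pct >= 90) exit 1             # nonzero exit = nearly full
        exit 0
    }'
}

# Usage: netstat -m | check_mb_map || echo "mb_map nearly full" | mail root
```

Against the output above (102/342) this would report roughly 30% and exit cleanly; against 8190/8192 it would flag the map as nearly full.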
Tell me what ya think
On Tue, 22 May 2001, Benninghoff, John wrote:
> Date: Tue, 22 May 2001 17:20:05 -0500
> From: "Benninghoff, John" <JABenninghoff@dainrauscher.com>
> To: "'email@example.com'" <firstname.lastname@example.org>
> Subject: memory (mbuf) leak in fxp driver.
> Hello all,
> I'm fairly certain that I've uncovered a memory leak in the fxp driver, at
> least for 2.8-stable. What I'm seeing is a steadily increasing number of
> mbufs in use, until I get a "mb_map full" kernel error and the network
> stops working.
> At this point, netstat -m shows something like this:
> 18390 mbufs in use:
> 2265 mbufs allocated to data
> 16124 mbufs allocated to packet headers
> 1 mbuf allocated to socket names and addresses
> 8190/8192 mapped pages in use
> 18682 Kbytes allocated to network (99% in use)
> 0 requests for memory denied
> 0 requests for memory delayed
> 1941 calls to protocol drain routines
> As you can see, I've already set NMBCLUSTERS="8192" in my kernel
> configuration. This really only delays the inevitable.
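[For context: on OpenBSD of that era, NMBCLUSTERS is raised in the kernel configuration file and the kernel is rebuilt; a sketch, with the value taken from the message above:]

```
# In the kernel config file, then config(8) and rebuild:
option  NMBCLUSTERS=8192
```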
> I searched for similar problems in the mailing list archives, and I found
> that someone else was experiencing a similar problem when running a bridge
> using fxp (Intel) NICs...
> (note that this appears to be 10 meg, not 100)
> After further testing / experimenting I noticed the following:
> * running tcpdump (as I do) makes it worse; the mbufs fill up much faster.
> I'm doing sniffing on heavily-utilized networks.
> * cards running at 10 meg fill up much faster than cards running at 100
> (not what I would expect)
> * the problem seems to exist in 2.7, 2.8, and 2.9 (beta). I haven't
> * unplugging the network connection doesn't reduce the mbufs in use.
> * I've seen similar problems reported on NetBSD and FreeBSD, perhaps
> they all share parent code (?)
> * hard to say for sure, but other drivers, like xl, don't seem to behave the
> same way.
> It really looks like a leak in fxp, but I lack the expertise to find it in
> the source code ...
> Any suggestions? Should I submit this as a bug report?
> here are the relevant lines from dmesg:
> fxp0 at pci0 dev 1 function 0 "Intel 82557" rev 0x08: irq 11, address
> inphy0 at fxp0 phy 1: i82555 10/100 media interface, rev. 4
> fxp1 at pci0 dev 2 function 0 "Intel 82557" rev 0x08: irq 10, address
> inphy1 at fxp1 phy 1: i82555 10/100 media interface, rev. 4
> John A Benninghoff