Re: High Network Utilization?
- To: Philipp Buehler <OpenBSD@fips.de>
- Subject: Re: High Network Utilization?
- From: "Park, Young K" <ypark@ola.state.md.us>
- Date: Tue, 15 Oct 2002 09:30:59 -0400
- Cc: misc@openbsd.org
Thank you for responding to my question.
So, based on what you said, I don't need to do anything?
If there is a way to reduce the utilization percentage, I would like to do so.
This machine will be used as a bridging firewall with 4 network segments
(1 external, 1 internal, 2 DMZs).
I don't know whether you read my previous messages.
I had been running this box with 3.1 for two months, and since then it has
been crashing with a pool_get error.
From: Philipp Buehler [mailto:OpenBSD@fips.de]
Sent: Sunday, October 13, 2002 8:01 AM
Subject: Re: High Network Utilization?
On 12/10/2002, Park, Young K <ypark@ola.state.md.us> wrote:
> Even though I set NMBCLUSTERS = 32768, it still indicates that high
> network utilization(94%).
> olafw# netstat -m
> 324 mbufs in use:
> 321 mbufs allocated to data
> 1 mbuf allocated to packet headers
> 2 mbufs allocated to socket names and addresses
> 320/332 mapped pages in use
> 760 Kbytes allocated to network (94% in use) <----- ????
Well, it uses 94%, but that's somewhat misleading. With NMBCLUSTERS you set
'Maxpg' of 'mclpl' -- in your case to 16384 pages (two 2048-byte clusters per
4096-byte page, so 32768 cluster items). The kernel currently uses 320 items
in the mcl pool, and this pool currently has a population of 332 items
(166 pages).
> olafw# vmstat -m
> Memory resource pool statistics
> Name Size Requests Fail Releases Pgreq Pgrel Npage Hiwat Minpg
> mbpl 256 3918 0 3573 24 0 24 24 1
> mclpl 2048 2867 0 2546 166 0 166 166 4
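The 94% figure above can be reconstructed from these pool statistics (a
back-of-the-envelope sketch, assuming 4096-byte pages, 256-byte mbufs, and
2048-byte clusters):

```shell
# Sketch: reproduce the "760 Kbytes allocated to network (94% in use)" line.
# Assumes 4096-byte pages, 256-byte mbufs, 2048-byte clusters.
page=4096
alloc_kb=$(( (24 * page + 166 * page) / 1024 ))   # mbpl pages + mclpl pages
used_kb=$(( (324 * 256 + 320 * 2048) / 1024 ))    # mbufs + clusters in use
echo "allocated: ${alloc_kb} KB"                  # 760 KB
echo "in use: $(( used_kb * 100 / alloc_kb ))%"   # ~94%
```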
Judging by the number of Requests and Releases, I'd say the machine had a
rather short uptime before these commands were issued. Furthermore, the
in-use counts of mbpl versus mclpl, and the mbuf type usage
(321 mbufs allocated to data), lead me to the conclusion that a certain
application is sending a lot of large network packets (or should be receiving
them). Something is wrong, since the clusters don't get freed either by being
sent out to the network or by being passed from the kernel up to the
application.
You can find out which by running 'netstat -n -f inet' and looking for
connections with big numbers in the Send-Q and/or Recv-Q columns.
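A quick way to spot such sockets (a sketch; the here-doc data below stands in
for real 'netstat -n -f inet' output, where Recv-Q and Send-Q are usually
fields 2 and 3):

```shell
# Flag sockets whose receive or send queues are non-empty. On a live
# system, replace the sample data with: netstat -n -f inet
netstat_output='tcp        0  65535  10.0.0.1.22      10.0.0.2.1025    ESTABLISHED
tcp        0      0  10.0.0.1.80      10.0.0.3.4711    ESTABLISHED'
echo "$netstat_output" | awk '$2 > 0 || $3 > 0 { print $1, $4, "Recv-Q=" $2, "Send-Q=" $3 }'
# Only the first connection (with 65535 bytes queued) is printed.
```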
After all, your current pool usage would fit within the default NMBCLUSTERS
of 2048, so don't worry, and don't touch settings you don't yet understand.
Philipp Buehler, aka fips | sysfive.com GmbH | BOfH | NUCH | <double-p>
#1: Break the clue barrier!
#2: Already had buzzword confuseritis ?