[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: ZFS leaking vnodes (sort of)
- To: "Pawel Jakub Dawidek" <pjd_(_at_)_freebsd_(_dot_)_org>
- Subject: Re: ZFS leaking vnodes (sort of)
- From: "Joao Barros" <joao_(_dot_)_barros_(_at_)_gmail_(_dot_)_com>
- Date: Fri, 13 Jul 2007 00:10:48 +0100
- Cc: current_(_at_)_freebsd_(_dot_)_org
On 7/9/07, Pawel Jakub Dawidek <pjd_(_at_)_freebsd_(_dot_)_org> wrote:
On Sat, Jul 07, 2007 at 02:26:17PM +0100, Doug Rabson wrote:
> I've been testing ZFS recently and I noticed some performance issues
> while doing large-scale port builds on a ZFS mounted /usr/ports tree.
> Eventually I realised that virtually nothing ever ended up on the vnode
> free list. This meant that when the system reached its maximum vnode
> limit, it had to resort to reclaiming vnodes from the various
> filesystem's active vnode lists (via vlrureclaim). Since those lists
> are not sorted in LRU order, this led to pessimal cache performance
> after the system got into that state.
> I looked a bit closer at the ZFS code and poked around with DDB and I
> think the problem was caused by a couple of extraneous calls to vhold
> when creating a new ZFS vnode. On FreeBSD, getnewvnode returns a vnode
> which is already held (not on the free list) so there is no need to
> call vhold again.
Whoa! Nice catch... The patch works here - I did some pretty heavy
tests, so please commit it ASAP.
I also wonder if this can help with some of those 'kmem_map too small'
panics. I observed that the ARC cannot reclaim memory, and this may be
because all vnodes, and thus their associated data, are being held.
To ZFS users having problems with performance and/or stability of ZFS:
Can you test the patch and see if it helps?
I recompiled my system after Doug committed this patch 3 days ago, and
my machine still panics unless I set kern.maxvnodes to 50000, when
doing an ls -R after a recursive chown on some thousands of files and
dirs:
panic: kmem_malloc(16384): kmem_map too small: 326066176 total allocated
Before this patch, the system panicked very early in the chown
process. Now it completes the chown on the thousands of files and dirs
and only panics later, during the ls -R. It's an improvement, but
something else is still wrong...
freebsd-current_(_at_)_freebsd_(_dot_)_org mailing list
To unsubscribe, send any mail to "freebsd-current-unsubscribe_(_at_)_freebsd_(_dot_)_org"