Date: Fri, 6 Dec 2019 13:09:53 +1100
From: Dave Chinner <david@fromorbit.com>
To: Andrey Ryabinin
Cc: Pavel Tikhomirov, Andrew Morton, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, linux-mm@kvack.org, Johannes Weiner,
	Michal Hocko, Vladimir Davydov, Roman Gushchin, Shakeel Butt,
	Chris Down, Yang Shi, Tejun Heo, Thomas Gleixner,
	"Kirill A. Shutemov", Konstantin Khorenko, Kirill Tkhai,
	Trond Myklebust, Anna Schumaker, "J. Bruce Fields",
Bruce Fields" , Chuck Lever , linux-nfs@vger.kernel.org, Alexander Viro , linux-fsdevel@vger.kernel.org Subject: Re: [PATCH] mm: fix hanging shrinker management on long do_shrink_slab Message-ID: <20191206020953.GS2695@dread.disaster.area> References: <20191129214541.3110-1-ptikhomirov@virtuozzo.com> <4e2d959a-0b0e-30aa-59b4-8e37728e9793@virtuozzo.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <4e2d959a-0b0e-30aa-59b4-8e37728e9793@virtuozzo.com> User-Agent: Mutt/1.10.1 (2018-07-13) X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.3 cv=W5xGqiek c=1 sm=1 tr=0 a=ZXpxJgW8/q3NVgupyyvOCQ==:117 a=ZXpxJgW8/q3NVgupyyvOCQ==:17 a=jpOVt7BSZ2e4Z31A5e1TngXxSK0=:19 a=kj9zAlcOel0A:10 a=pxVhFHJ0LMsA:10 a=7-415B0cAAAA:8 a=qrpnvERzZt7yDo6Pn0wA:9 a=CjuIK1q_8ugA:10 a=biEYGPWJfzWAr4FL6Ov7:22 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org [please cc me on future shrinker infrastructure modifications] On Mon, Dec 02, 2019 at 07:36:03PM +0300, Andrey Ryabinin wrote: > > On 11/30/19 12:45 AM, Pavel Tikhomirov wrote: > > We have a problem that shrinker_rwsem can be held for a long time for > > read in shrink_slab, at the same time any process which is trying to > > manage shrinkers hangs. > > > > The shrinker_rwsem is taken in shrink_slab while traversing shrinker_list. > > It tries to shrink something on nfs (hard) but nfs server is dead at > > these moment already and rpc will never succeed. Generally any shrinker > > can take significant time to do_shrink_slab, so it's a bad idea to hold > > the list lock here. registering/unregistering a shrinker is not a performance critical task. If a shrinker is blocking for a long time, then we need to work to fix the shrinker implementation because blocking is a much bigger problem than just register/unregister. > > The idea of the patch is to inc a refcount to the chosen shrinker so it > > won't disappear and release shrinker_rwsem while we are in > > do_shrink_slab, after that we will reacquire shrinker_rwsem, dec > > the refcount and continue the traversal. This is going to cause a *lot* of traffic on the shrinker rwsem. It's already a pretty hot lock on large machines under memory pressure (think thousands of tasks all doing direct reclaim across hundreds of CPUs), and so changing them to cycle the rwsem on every shrinker that will only make this worse. Esepcially when we consider that there may be hundreds to thousands of registered shrinker instances on large machines. As an example of how frequent cycling of a global lock in shrinker instances causes issues, we used to take references to superblock shrinker count invocations to guarantee existence. This was found to be a scalability limitation when lots of near-empty superblocks were present in a system (see commit d23da150a37c ("fs/superblock: avoid locking counting inodes and dentries before reclaiming them")). This alleviated the problem for a while, but soon we had problems with just taking a reference to the superblock in the callbacks that did actual work. Hence we changed it to just take a per-superblock rwsem to get rid of the global sb_lock spinlock in this path. See commit eb6ef3df4faa ("trylock_super(): replacement for grab_super_passive()". Now we don't have a scalability problem. IOWs, we already know that cycling a global rwsem on every individual shrinker invocation is going to cause noticable scalability problems. 
Hence I don't think that this sort of "cycle the global rwsem faster
to reduce [un]register latency" solution is going to fly because of
the runtime performance regressions it will introduce....

> I don't think this patch solves the problem, it only fixes one
> minor symptom of it. The actual problem here is the reclaim hang in
> the nfs.

The nfs client is waiting on the NFS server to respond. It may
actually be that the server has hung, not the client...

> It means that any process, including kswapd, may go into nfs inode
> reclaim and get stuck there.

*nod*

> I think this should be handled on the nfs/vfs level by making inode
> eviction during reclaim more asynchronous.

That's what we are trying to do with similar blocking-based issues in
XFS inode reclaim. It's not simple, though, because these days memory
reclaim is like a bowl full of spaghetti covered with a delicious
sauce of non-obvious heuristics and broken functionality....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com