Received: by 2002:ac0:a5b6:0:0:0:0:0 with SMTP id m51-v6csp1800849imm; Sun, 27 May 2018 16:49:32 -0700 (PDT) X-Google-Smtp-Source: AB8JxZqlaC9YhhM+vsOBlM1fYd0IQXklTl6+hCG69/OthvDewgBcD0d8CQ09MLGM40kR+/Xk4JXe X-Received: by 2002:a65:57ca:: with SMTP id q10-v6mr8512386pgr.279.1527464972744; Sun, 27 May 2018 16:49:32 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1527464972; cv=none; d=google.com; s=arc-20160816; b=ZT3sh+5Av1pWawPbXEnWZ4JjZ2If2nSIrhz242t2eWVfVHnNbU0kxzvV3VDNjn1cG0 h1+EnpHmN8MTUfaPePtVywCST3ON+eemXaYaqG0CEmHSP3YAqq1PvCow/39LXPhNYX7Z DuoulFBBwFRETgC4uGGT6x03WZHVMoMJM6p3fidXCFK1P75CW+L/KsruZaedlD/3RyOd 2ikLOOyFfdrCTiAWgiX6Cb03qSujwsTqvad0NlTMrpp1GfGrl1h3KD+wnHvu0Q5B66RN yOzrp5XaACYeC1VvH9nsuSOCKU/eehu5093iD2NIXa91arQ56n/1NTkw8uEHyG1Lp4Mo FzMw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:user-agent:in-reply-to :content-disposition:mime-version:references:message-id:subject:cc :to:from:date:arc-authentication-results; bh=iOkdmWieeKrR+FU/QQzP252lKyS/4SVchcRZXOzjkmg=; b=N3weF2Bz48OYGZqrBSANY9M7Va9vPEeqXRkQnPEF3suTb46BCIRdO6QE2Dq+ku+zar o656e1ZHfdyjslAx/B9C77Dm7+zf8g9eM/IoPOaJ9BgmX10vFq7mioK1a1dPFDpTJp3E BjQHDxe8eGZaoQ1GaUFMnwv4TYWI31i9p6YfCFHJKwfQ+fcnik6dn/mEHm+nMhCliEBm MCdNL4lYWX14hQ+K1b5PlEViRrmyobb/t3FzP/UYRGLC1xQ+N3kgThZTG7993suqQipo xYoMPc6FsJ9/2Qt+olpxK/16ofnjUuxZjv7D761xJj55DD8iglQzLgXxQvWvQFg7fyja qdNA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id d90-v6si29232655pld.92.2018.05.27.16.49.06; Sun, 27 May 2018 16:49:32 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1752718AbeE0Xs7 (ORCPT + 99 others); Sun, 27 May 2018 19:48:59 -0400 Received: from ipmail03.adl6.internode.on.net ([150.101.137.143]:15684 "EHLO ipmail03.adl6.internode.on.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752406AbeE0Xs5 (ORCPT ); Sun, 27 May 2018 19:48:57 -0400 Received: from ppp59-167-129-252.static.internode.on.net (HELO dastard) ([59.167.129.252]) by ipmail03.adl6.internode.on.net with ESMTP; 28 May 2018 09:18:55 +0930 Received: from dave by dastard with local (Exim 4.80) (envelope-from ) id 1fN5PC-0006Nw-2g; Mon, 28 May 2018 09:48:54 +1000 Date: Mon, 28 May 2018 09:48:54 +1000 From: Dave Chinner To: Michal Hocko Cc: Jonathan Corbet , LKML , linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, "Darrick J. Wong" , David Sterba Subject: Re: [PATCH] doc: document scope NOFS, NOIO APIs Message-ID: <20180527234854.GF23861@dastard> References: <20180424183536.GF30619@thunk.org> <20180524114341.1101-1-mhocko@kernel.org> <20180524221715.GY10363@dastard> <20180525081624.GH11881@dhcp22.suse.cz> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20180525081624.GH11881@dhcp22.suse.cz> User-Agent: Mutt/1.5.21 (2010-09-15) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Fri, May 25, 2018 at 10:16:24AM +0200, Michal Hocko wrote: > On Fri 25-05-18 08:17:15, Dave Chinner wrote: > > On Thu, May 24, 2018 at 01:43:41PM +0200, Michal Hocko wrote: > [...] > > > +FS/IO code then simply calls the appropriate save function right at the > > > +layer where a lock taken from the reclaim context (e.g. shrinker) and > > > +the corresponding restore function when the lock is released. All that > > > +ideally along with an explanation what is the reclaim context for easier > > > +maintenance. > > > > This paragraph doesn't make much sense to me. I think you're trying > > to say that we should call the appropriate save function "before > > locks are taken that a reclaim context (e.g a shrinker) might > > require access to." > > > > I think it's also worth making a note about recursive/nested > > save/restore stacking, because it's not clear from this description > > that this is allowed and will work as long as inner save/restore > > calls are fully nested inside outer save/restore contexts. > > Any better? > > -FS/IO code then simply calls the appropriate save function right at the > -layer where a lock taken from the reclaim context (e.g. shrinker) and > -the corresponding restore function when the lock is released. All that > -ideally along with an explanation what is the reclaim context for easier > -maintenance. > +FS/IO code then simply calls the appropriate save function before any > +lock shared with the reclaim context is taken. The corresponding > +restore function when the lock is released. All that ideally along with > +an explanation what is the reclaim context for easier maintenance. > + > +Please note that the proper pairing of save/restore function allows nesting > +so memalloc_noio_save is safe to be called from an existing NOIO or NOFS scope. It's better, but the talk of this being necessary for locking makes me cringe. XFS doesn't do it for locking reasons - it does it largely for preventing transaction context nesting, which has all sorts of problems that cause hangs (e.g. log space reservations can't be filled) that aren't directly locking related. i.e we should be talking about using these functions around contexts where recursion back into the filesystem through reclaim is problematic, not that "holding locks" is problematic. Locks can be used as an example of a problematic context, but locks are not the only recursion issue that require GFP_NOFS allocation contexts to avoid. Cheers, Dave. -- Dave Chinner david@fromorbit.com