Date: Tue, 29 May 2018 10:18:23 +0200
From: Michal Hocko
To: Dave Chinner
Cc: Jonathan Corbet, LKML, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    "Darrick J. Wong", David Sterba
Subject: Re: [PATCH] doc: document scope NOFS, NOIO APIs
Message-ID: <20180529081823.GN27180@dhcp22.suse.cz>
References: <20180424183536.GF30619@thunk.org>
 <20180524114341.1101-1-mhocko@kernel.org>
 <20180524221715.GY10363@dastard>
 <20180525081624.GH11881@dhcp22.suse.cz>
 <20180527234854.GF23861@dastard>
 <20180528091923.GH1517@dhcp22.suse.cz>
 <20180528223205.GG23861@dastard>
In-Reply-To: <20180528223205.GG23861@dastard>

On Tue 29-05-18 08:32:05, Dave Chinner wrote:
> On Mon, May 28, 2018 at 11:19:23AM +0200, Michal Hocko wrote:
> > On Mon 28-05-18 09:48:54, Dave Chinner wrote:
> > > On Fri, May 25, 2018 at 10:16:24AM +0200, Michal Hocko wrote:
> > > > On Fri 25-05-18 08:17:15, Dave Chinner wrote:
> > > > > On Thu, May 24, 2018 at 01:43:41PM +0200, Michal Hocko wrote:
> > > > [...]
> > > > > > +FS/IO code then simply calls the appropriate save function right at the
> > > > > > +layer where a lock taken from the reclaim context (e.g. shrinker) and
> > > > > > +the corresponding restore function when the lock is released. All that
> > > > > > +ideally along with an explanation what is the reclaim context for easier
> > > > > > +maintenance.
> > > > >
> > > > > This paragraph doesn't make much sense to me. I think you're trying
> > > > > to say that we should call the appropriate save function "before
> > > > > locks are taken that a reclaim context (e.g a shrinker) might
> > > > > require access to."
> > > > >
> > > > > I think it's also worth making a note about recursive/nested
> > > > > save/restore stacking, because it's not clear from this description
> > > > > that this is allowed and will work as long as inner save/restore
> > > > > calls are fully nested inside outer save/restore contexts.
> > > >
> > > > Any better?
> > > >
> > > > -FS/IO code then simply calls the appropriate save function right at the
> > > > -layer where a lock taken from the reclaim context (e.g. shrinker) and
> > > > -the corresponding restore function when the lock is released. All that
> > > > -ideally along with an explanation what is the reclaim context for easier
> > > > -maintenance.
> > > > +FS/IO code then simply calls the appropriate save function before any
> > > > +lock shared with the reclaim context is taken. The corresponding
> > > > +restore function when the lock is released. All that ideally along with
> > > > +an explanation what is the reclaim context for easier maintenance.
> > > > +
> > > > +Please note that the proper pairing of save/restore function allows nesting
> > > > +so memalloc_noio_save is safe to be called from an existing NOIO or NOFS scope.
> > >
> > > It's better, but the talk of this being necessary for locking makes
> > > me cringe. XFS doesn't do it for locking reasons - it does it
> > > largely for preventing transaction context nesting, which has all
> > > sorts of problems that cause hangs (e.g. log space reservations
> > > can't be filled) that aren't directly locking related.
> >
> > Yeah, I wanted to not mention locks as much as possible.
> >
> > > i.e we should be talking about using these functions around contexts
> > > where recursion back into the filesystem through reclaim is
> > > problematic, not that "holding locks" is problematic. Locks can be
> > > used as an example of a problematic context, but locks are not the
> > > only recursion issue that require GFP_NOFS allocation contexts to
> > > avoid.
> >
> > agreed. Do you have any suggestion how to add a more abstract wording
> > that would not make head spinning?
> >
> > I've tried the following. Any better?
> >
> > diff --git a/Documentation/core-api/gfp_mask-from-fs-io.rst b/Documentation/core-api/gfp_mask-from-fs-io.rst
> > index c0ec212d6773..adac362b2875 100644
> > --- a/Documentation/core-api/gfp_mask-from-fs-io.rst
> > +++ b/Documentation/core-api/gfp_mask-from-fs-io.rst
> > @@ -34,9 +34,11 @@ scope will inherently drop __GFP_FS respectively __GFP_IO from the given
> >  mask so no memory allocation can recurse back in the FS/IO.
> >
> >  FS/IO code then simply calls the appropriate save function before any
> > -lock shared with the reclaim context is taken. The corresponding
> > -restore function when the lock is released. All that ideally along with
> > -an explanation what is the reclaim context for easier maintenance.
> > +critical section wrt. the reclaim is started - e.g. lock shared with the
> > +reclaim context or when a transaction context nesting would be possible
> > +via reclaim. The corresponding restore function when the critical
>
> .... restore function should be called when ...

fixed

> But otherwise I think this is much better.

Thanks!
--
Michal Hocko
SUSE Labs