Date: Thu, 25 Jun 2020 14:31:43 +0200
From: Michal Hocko
To: "Matthew Wilcox (Oracle)"
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-xfs@vger.kernel.org, dm-devel@redhat.com,
    Mikulas Patocka, Jens Axboe, NeilBrown
Subject: Re: [PATCH 2/6] mm: Add become_kswapd and restore_kswapd
Message-ID: <20200625123143.GK1320@dhcp22.suse.cz>
References: <20200625113122.7540-1-willy@infradead.org> <20200625113122.7540-3-willy@infradead.org>
In-Reply-To: <20200625113122.7540-3-willy@infradead.org>

On Thu 25-06-20 12:31:18, Matthew Wilcox wrote:
> Since XFS needs to pretend to be kswapd in some of its worker threads,
> create methods to save & restore kswapd state. Don't bother restoring
> kswapd state in kswapd -- the only time we reach this code is when we're
> exiting and the task_struct is about to be destroyed anyway.
>
> Signed-off-by: Matthew Wilcox (Oracle)

Certainly better than an open-coded PF_$FOO manipulation.

Acked-by: Michal Hocko

I would just ask for a clarification, because following this relies on
the reader having good MM knowledge.

> +/*
> + * Tell the memory management that we're a "memory allocator",

I would go with:

	Tell the memory management that the caller is working on behalf
	of the background memory reclaim (aka kswapd) and help it to
	make forward progress. That means it will get access to memory
	reserves should there be a need to allocate memory in order to
	make forward progress. Note that the caller has to be extremely
	careful when doing that.

Or something like that.

> + * and that if we need more memory we should get access to it
> + * regardless (see "__alloc_pages()"). "kswapd" should
> + * never get caught in the normal page freeing logic.
> + *
> + * (Kswapd normally doesn't need memory anyway, but sometimes
> + * you need a small amount of memory in order to be able to
> + * page out something else, and this flag essentially protects
> + * us from recursively trying to free more memory as we're
> + * trying to free the first piece of memory in the first place).
> + */
> +#define KSWAPD_PF_FLAGS	(PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD)
> +
> +static inline unsigned long become_kswapd(void)
> +{
> +	unsigned long flags = current->flags & KSWAPD_PF_FLAGS;
> +	current->flags |= KSWAPD_PF_FLAGS;
> +	return flags;
> +}
> +
> +static inline void restore_kswapd(unsigned long flags)
> +{
> +	current->flags &= ~(flags ^ KSWAPD_PF_FLAGS);
> +}
> +
>  static inline void set_current_io_flusher(void)
>  {
>  	current->flags |= PF_LOCAL_THROTTLE;
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b6d84326bdf2..27ae76699899 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3870,19 +3870,7 @@ static int kswapd(void *p)
>  	if (!cpumask_empty(cpumask))
>  		set_cpus_allowed_ptr(tsk, cpumask);
>
> -	/*
> -	 * Tell the memory management that we're a "memory allocator",
> -	 * and that if we need more memory we should get access to it
> -	 * regardless (see "__alloc_pages()"). "kswapd" should
> -	 * never get caught in the normal page freeing logic.
> -	 *
> -	 * (Kswapd normally doesn't need memory anyway, but sometimes
> -	 * you need a small amount of memory in order to be able to
> -	 * page out something else, and this flag essentially protects
> -	 * us from recursively trying to free more memory as we're
> -	 * trying to free the first piece of memory in the first place).
> -	 */
> -	tsk->flags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD;
> +	become_kswapd();
>  	set_freezable();
>
>  	WRITE_ONCE(pgdat->kswapd_order, 0);
> @@ -3932,8 +3920,6 @@ static int kswapd(void *p)
>  		goto kswapd_try_sleep;
>  	}
>
> -	tsk->flags &= ~(PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD);
> -
>  	return 0;
>  }
>
> --
> 2.27.0
>

-- 
Michal Hocko
SUSE Labs
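
For readers following along, here is a minimal userspace sketch of the
save/restore logic in the helpers quoted above. It is not kernel code:
the PF_* values below are illustrative stand-ins (not the real kernel
constants), and task_flags stands in for current->flags. The point of
the XOR in restore_kswapd() is that only the bits become_kswapd()
actually added get cleared, so a flag the caller already had (and any
unrelated flag) survives the round trip.

	/* Build and run with: cc demo.c && ./a.out */
	#include <assert.h>
	#include <stdio.h>

	/* Illustrative stand-in values, not the kernel's PF_* constants. */
	#define PF_MEMALLOC   0x0004UL
	#define PF_SWAPWRITE  0x0008UL
	#define PF_KSWAPD     0x0010UL
	#define PF_OTHER      0x1000UL  /* unrelated flag that must survive */

	#define KSWAPD_PF_FLAGS (PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD)

	static unsigned long task_flags;  /* stands in for current->flags */

	static unsigned long become_kswapd(void)
	{
		/* Remember which kswapd flags were already set... */
		unsigned long flags = task_flags & KSWAPD_PF_FLAGS;
		/* ...then set all of them. */
		task_flags |= KSWAPD_PF_FLAGS;
		return flags;
	}

	static void restore_kswapd(unsigned long flags)
	{
		/* flags ^ KSWAPD_PF_FLAGS == the bits we newly set; clear only those. */
		task_flags &= ~(flags ^ KSWAPD_PF_FLAGS);
	}

	int main(void)
	{
		/* Caller already had PF_MEMALLOC set, plus an unrelated flag. */
		task_flags = PF_MEMALLOC | PF_OTHER;

		unsigned long saved = become_kswapd();
		assert(task_flags == (KSWAPD_PF_FLAGS | PF_OTHER));

		/* ... work done "on behalf of kswapd" would happen here ... */

		restore_kswapd(saved);
		/* Original state is back: PF_MEMALLOC kept, PF_OTHER untouched. */
		assert(task_flags == (PF_MEMALLOC | PF_OTHER));

		printf("flags restored to 0x%lx\n", task_flags);
		return 0;
	}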