Date: Fri, 4 Dec 2020 09:43:12 +0100
From: Michal Hocko
To: Pavel Tatashin
Cc: LKML, linux-mm, Andrew Morton, Vlastimil Babka, David Hildenbrand,
    Oscar Salvador, Dan Williams, Sasha Levin, Tyler Hicks, Joonsoo Kim,
    mike.kravetz@oracle.com, Steven Rostedt, Ingo Molnar, Jason Gunthorpe,
    Peter Zijlstra, Mel Gorman, Matthew Wilcox, David Rientjes, John Hubbard
Subject: Re: [PATCH 5/6] mm: honor PF_MEMALLOC_NOMOVABLE for all allocations
Message-ID: <20201204084312.GA25569@dhcp22.suse.cz>
References: <20201202052330.474592-1-pasha.tatashin@soleen.com>
 <20201202052330.474592-6-pasha.tatashin@soleen.com>
 <20201203091703.GA17338@dhcp22.suse.cz>
In-Reply-To:

On Thu 03-12-20 10:15:41, Pavel Tatashin wrote:
> On Thu, Dec 3, 2020 at 4:17 AM Michal Hocko wrote:
> >
> > On Wed 02-12-20 00:23:29, Pavel Tatashin wrote:
> > [...]
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 611799c72da5..7a6d86d0bc5f 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -3766,20 +3766,25 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
> > >  	return alloc_flags;
> > >  }
> > >
> > > -static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
> > > -					unsigned int alloc_flags)
> > > +static inline unsigned int cma_alloc_flags(gfp_t gfp_mask,
> > > +					unsigned int alloc_flags)
> > >  {
> > >  #ifdef CONFIG_CMA
> > > -	unsigned int pflags = current->flags;
> > > -
> > > -	if (!(pflags & PF_MEMALLOC_NOMOVABLE) &&
> > > -			gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
> > > +	if (gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE)
> > >  		alloc_flags |= ALLOC_CMA;
> > > -
> > >  #endif
> > >  	return alloc_flags;
> > >  }
> > >
> > > +static inline gfp_t current_gfp_checkmovable(gfp_t gfp_mask)
> > > +{
> > > +	unsigned int pflags = current->flags;
> > > +
> > > +	if ((pflags & PF_MEMALLOC_NOMOVABLE))
> > > +		return gfp_mask & ~__GFP_MOVABLE;
> > > +	return gfp_mask;
> > > +}
> > > +
> >
> > It sucks that we have to control both ALLOC and gfp flags. But wouldn't
> > it be simpler and more straightforward to keep current_alloc_flags as is
> > (modulo the PF rename) and hook the gfp mask evaluation into
> > current_gfp_context and move it up before the first allocation attempt?
>
> We could do that, but perhaps as a separate patch? I am worried about
> hidden implications of adding the extra scope handling (GFP_NOIO|GFP_NOFS)
> to the fast path.

Why?

> Also, current_gfp_context() is used elsewhere, and in some places
> removing __GFP_MOVABLE from gfp_mask means that we will need to also
> change other things. For example [1], in try_to_free_pages() we call
> current_gfp_context(gfp_mask), which can reduce the maximum zone idx,
> yet we simply set reclaim_idx = gfp_zone(gfp_mask), not a value derived
> from the newly determined gfp_mask.

Yes, and direct reclaim should honor the movable zone restriction. Why
should we reclaim ZONE_MOVABLE when the allocation cannot really
allocate from it? Or have I misunderstood your concern?

> [1] https://soleen.com/source/xref/linux/mm/vmscan.c?r=2da9f630#3239
>
> > All scope flags should be applicable to the hot path as well. It
> > would add a few cycles there, but the question is whether that would
> > be noticeable over just handling PF_MEMALLOC_NOMOVABLE on its own.
> > The cache line would be pulled in anyway.
>
> Let's try it in a separate patch? I will add it in the next version of
> this series.

Separate patch or not is up to you. But I do not see a strong reason why
this cannot be addressed in a single one.
--
Michal Hocko
SUSE Labs
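
[Sketch, not part of the thread] For readers following the suggestion above,
this is roughly what hooking the movable restriction into current_gfp_context()
could look like: the one helper clears __GFP_MOVABLE alongside the existing
NOIO/NOFS scope handling, so the allocator fast path no longer has to consult
current->flags for it. The body below only follows the general shape of
current_gfp_context() in kernels of that era and reuses the PF_MEMALLOC_NOMOVABLE
name from this series; it is illustrative and not taken from the posted patches
or from mainline.

/*
 * Illustrative sketch (not from this series): fold the "no movable
 * allocations" scope into current_gfp_context() next to the existing
 * PF_MEMALLOC_NOIO/PF_MEMALLOC_NOFS handling.  PF_MEMALLOC_NOMOVABLE
 * is the flag name used by this patch set.
 */
static inline gfp_t current_gfp_context(gfp_t flags)
{
	unsigned int pflags = current->flags;

	if (unlikely(pflags & (PF_MEMALLOC_NOIO | PF_MEMALLOC_NOFS |
			       PF_MEMALLOC_NOMOVABLE))) {
		/*
		 * NOIO implies both NOIO and NOFS and it is a weaker
		 * context, so it takes precedence over NOFS.
		 */
		if (pflags & PF_MEMALLOC_NOIO)
			flags &= ~(__GFP_IO | __GFP_FS);
		else if (pflags & PF_MEMALLOC_NOFS)
			flags &= ~__GFP_FS;

		/* Scoped "no movable zone / CMA" allocations. */
		if (pflags & PF_MEMALLOC_NOMOVABLE)
			flags &= ~__GFP_MOVABLE;
	}
	return flags;
}

The caller side would then reduce to something like
gfp_mask = current_gfp_context(gfp_mask) early in the allocator entry point,
before alloc_flags are computed (the exact call site is an assumption here).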