Date: Fri, 28 May 2021 11:09:18 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: David Hildenbrand
Cc: Dave Hansen, Andrew Morton, Hillf Danton, Dave Hansen, Vlastimil Babka,
	Michal Hocko, LKML, Linux-MM, "Tang, Feng"
Subject: Re: [PATCH 0/6 v2] Calculate pcp->high based on zone sizes and active CPUs
Message-ID: <20210528100918.GM30378@techsingularity.net>
References: <20210525080119.5455-1-mgorman@techsingularity.net>
	<7177f59b-dc05-daff-7dc6-5815b539a790@intel.com>
	<20210528085545.GJ30378@techsingularity.net>
	<54ff0363-2f39-71d1-e26c-962c3fddedae@redhat.com>
	<20210528094949.GL30378@techsingularity.net>
	<6c189def-11cc-80db-0fde-56aa506cfdea@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain;
charset=iso-8859-15
Content-Disposition: inline
In-Reply-To: <6c189def-11cc-80db-0fde-56aa506cfdea@redhat.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, May 28, 2021 at 11:52:53AM +0200, David Hildenbrand wrote:
> > > "Disable pcplists so that page isolation cannot race with freeing
> > > in a way that pages from isolated pageblock are left on pcplists."
> > >
> > > Guess we'd then want to move the draining before start_isolate_page_range()
> > > in alloc_contig_range().
> > >
> >
> > Or instead of draining, validate that the PFN range in alloc_contig_range
> > is within the same zone and, if so, call zone_pcp_disable() before
> > start_isolate_page_range() and enable it after __alloc_contig_migrate_range().
> >
>
> We require the caller to only pass a range within a single zone, so that
> should be fine.
>
> The only ugly thing about zone_pcp_disable() is
> mutex_lock(&pcp_batch_high_lock), which would serialize all
> alloc_contig_range() calls and even offline_pages().
>

True, so it would have to be assessed whether that is bad or not. If racing
against offline_pages(), memory is potentially being offlined in the target
zone, which may cause allocation failure anyway. If racing with other
alloc_contig_range() calls, the two callers are potentially racing to
isolate and allocate the same range. The argument could be made that
alloc_contig_range() calls should be serialised within one zone to improve
the allocation success rate, at the potential cost of allocation latency.

-- 
Mel Gorman
SUSE Labs
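For illustration, the reordering being discussed would look roughly like the
sketch below. This is written against my reading of the alloc_contig_range()
flow in mm/page_alloc.c around that time and is not a tested patch: error
handling is abbreviated, and the alignment helpers and the compact_control
"cc" follow the existing code in that function.

```c
/* Illustrative sketch only -- not a tested patch. Assumes the caller has
 * already been validated to pass a range within a single zone. */
struct zone *zone = page_zone(pfn_to_page(start));
int ret;

/* Proposed: disable pcplists up front instead of draining later. This
 * takes pcp_batch_high_lock, so it serialises against offline_pages()
 * and against other alloc_contig_range() callers. */
zone_pcp_disable(zone);

ret = start_isolate_page_range(pfn_max_align_down(start),
			       pfn_max_align_up(end), migratetype, 0);
if (ret)
	goto out;

ret = __alloc_contig_migrate_range(&cc, start, end);

out:
/* Once migration is complete, isolation alone keeps freed pages in the
 * range off the pcplists, so they can be re-enabled here. */
zone_pcp_enable(zone);
```

Whether the serialisation implied by pcp_batch_high_lock is acceptable is
exactly the open question above.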