Date: Fri, 2 Feb 2018 13:21:20 +0800
From: Aaron Lu
To: daniel.m.jordan@oracle.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, ak@linux.intel.com,
    akpm@linux-foundation.org, Dave.Dice@oracle.com, dave@stgolabs.net,
    khandual@linux.vnet.ibm.com, ldufour@linux.vnet.ibm.com, mgorman@suse.de,
    mhocko@kernel.org, pasha.tatashin@oracle.com, steven.sistare@oracle.com,
    yossi.lev@oracle.com, Dave Hansen, Tim Chen
Subject: Re: [RFC PATCH v1 13/13] mm: splice local lists onto the front of the LRU
Message-ID: <20180202052120.GA16272@intel.com>
References: <20180131230413.27653-1-daniel.m.jordan@oracle.com>
 <20180131230413.27653-14-daniel.m.jordan@oracle.com>
In-Reply-To: <20180131230413.27653-14-daniel.m.jordan@oracle.com>

On Wed, Jan 31, 2018 at 06:04:13PM -0500, daniel.m.jordan@oracle.com wrote:
> Now that release_pages is scaling better with concurrent removals from
> the LRU, the performance results (included below) showed increased
> contention on lru_lock in the add-to-LRU path.
>
> To alleviate some of this contention, do more work outside the LRU lock.
> Prepare a local list of pages to be spliced onto the front of the LRU,
> including setting PageLRU in each page, before taking lru_lock. Since
> other threads use this page flag in certain checks outside lru_lock,
> ensure each page's LRU links have been properly initialized before
> setting the flag, and use memory barriers accordingly.
>
> Performance Results
>
> This is a will-it-scale run of page_fault1 using 4 different kernels.
>
>   kernel            kern #
>
>   4.15-rc2          1
>   large-zone-batch  2
>   lru-lock-base     3
>   lru-lock-splice   4
>
> Each kernel builds on the last. The first is a baseline, the second
> makes zone->lock more scalable by increasing an order-0 per-cpu
> pagelist's 'batch' and 'high' values to 310 and 1860 respectively

Since the purpose of the patchset is to optimize lru_lock, you may
consider adjusting pcp->high to be >= 32768 (page_fault1's test size is
128M = 32768 pages). That should eliminate zone->lock contention
entirely.

> (courtesy of Aaron Lu's patch), the third scales lru_lock without
> splicing pages (the previous patch in this series), and the fourth adds
> page splicing (this patch).
>
> N tasks mmap, fault, and munmap anonymous pages in a loop until the test
> time has elapsed.