Subject: Re: [RFC PATCH v1 13/13] mm: splice local lists onto the front of the LRU
To: daniel.m.jordan@oracle.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: aaron.lu@intel.com, ak@linux.intel.com, akpm@linux-foundation.org,
    Dave.Dice@oracle.com, dave@stgolabs.net, khandual@linux.vnet.ibm.com,
    ldufour@linux.vnet.ibm.com, mgorman@suse.de, mhocko@kernel.org,
    pasha.tatashin@oracle.com, steven.sistare@oracle.com, yossi.lev@oracle.com
References: <20180131230413.27653-1-daniel.m.jordan@oracle.com>
            <20180131230413.27653-14-daniel.m.jordan@oracle.com>
From: Tim Chen
Date: Thu, 1 Feb 2018 15:30:44 -0800
In-Reply-To: <20180131230413.27653-14-daniel.m.jordan@oracle.com>

On 01/31/2018 03:04 PM, daniel.m.jordan@oracle.com wrote:
> Now that release_pages is scaling better with concurrent removals from
> the LRU, the performance results (included below) showed increased
> contention on lru_lock in the add-to-LRU path.
>
> To alleviate some of this contention, do more work outside the LRU lock.
> Prepare a local list of pages to be spliced onto the front of the LRU,
> including setting PageLRU in each page, before taking lru_lock.  Since
> other threads use this page flag in certain checks outside lru_lock,
> ensure each page's LRU links have been properly initialized before
> setting the flag, and use memory barriers accordingly.
>
> Performance Results
>
> This is a will-it-scale run of page_fault1 using 4 different kernels.
>
>     kernel                kern #
>     4.15-rc2                   1
>     large-zone-batch           2
>     lru-lock-base              3
>     lru-lock-splice            4
>
> Each kernel builds on the last.  The first is a baseline, the second
> makes zone->lock more scalable by increasing an order-0 per-cpu
> pagelist's 'batch' and 'high' values to 310 and 1860 respectively
> (courtesy of Aaron Lu's patch), the third scales lru_lock without
> splicing pages (the previous patch in this series), and the fourth adds
> page splicing (this patch).
>
> N tasks mmap, fault, and munmap anonymous pages in a loop until the
> test time has elapsed.
>
> The process case generally does better than the thread case, most
> likely because mmap_sem acts as a bottleneck.  There's ongoing work
> upstream[*] to scale this lock, however, and once it goes in, my
> hypothesis is that the thread numbers here will improve.
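
Just to check that I'm reading the add-to-LRU batching right, the sketch
below is roughly what I picture the new path doing: prepare everything on
a private list, publish PageLRU only after the LRU links are valid, then
take lru_lock once to splice the whole batch onto the front of the list.
This is only my sketch of the idea described above; the function name,
the caller-supplied page array, and the exact barrier are stand-ins of
mine, not code from this patch.

#include <linux/list.h>
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/page-flags.h>
#include <linux/spinlock.h>

/* Sketch only: batched add of already-prepared pages to the LRU front. */
static void splice_pages_to_lru_front(struct lruvec *lruvec, enum lru_list lru,
                                      struct page **pages, int nr)
{
        struct pglist_data *pgdat = lruvec_pgdat(lruvec);
        LIST_HEAD(local);
        int i;

        /* All of this happens without holding lru_lock. */
        for (i = 0; i < nr; i++) {
                struct page *page = pages[i];

                list_add(&page->lru, &local);
                /*
                 * Publish the LRU links before PageLRU becomes visible to
                 * lockless observers (the exact barrier is whatever the
                 * patch actually uses).
                 */
                smp_wmb();
                SetPageLRU(page);
        }

        /* One lock acquisition covers the whole batch. */
        spin_lock_irq(&pgdat->lru_lock);
        list_splice(&local, &lruvec->lists[lru]);
        spin_unlock_irq(&pgdat->lru_lock);
}

If that is the shape of it, the only work left under lru_lock is the
list_splice itself.
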
>
> kern #  ntask     proc      thr        proc      proc         thr       thr
>                speedup  speedup       pgf/s     stdev       pgf/s     stdev
>      1      1                       705,533     1,644     705,227     1,122
>      2      1     2.5%     2.8%     722,912       453     724,807       728
>      3      1     2.6%     2.6%     724,215       653     723,213       941
>      4      1     2.3%     2.8%     721,746       272     724,944       728
>
> kern #  ntask     proc      thr        proc      proc         thr       thr
>                speedup  speedup       pgf/s     stdev       pgf/s     stdev
>      1      4                     2,525,487     7,428   1,973,616    12,568
>      2      4     2.6%     7.6%   2,590,699     6,968   2,123,570    10,350
>      3      4     2.3%     4.4%   2,584,668    12,833   2,059,822    10,748
>      4      4     4.7%     5.2%   2,643,251    13,297   2,076,808     9,506
>
> kern #  ntask     proc      thr        proc      proc         thr       thr
>                speedup  speedup       pgf/s     stdev       pgf/s     stdev
>      1     16                     6,444,656    20,528   3,226,356    32,874
>      2     16     1.9%    10.4%   6,566,846    20,803   3,560,437    64,019
>      3     16    18.3%     6.8%   7,624,749    58,497   3,447,109    67,734
>      4     16    28.2%     2.5%   8,264,125    31,677   3,306,679    69,443
>
> kern #  ntask     proc      thr        proc      proc         thr       thr
>                speedup  speedup       pgf/s     stdev       pgf/s     stdev
>      1     32                    11,564,988    32,211   2,456,507    38,898
>      2     32     1.8%     1.5%  11,777,119    45,418   2,494,064    27,964
>      3     32    16.1%    -2.7%  13,426,746    94,057   2,389,934    40,186
>      4     32    26.2%     1.2%  14,593,745    28,121   2,486,059    42,004
>
> kern #  ntask     proc      thr        proc      proc         thr       thr
>                speedup  speedup       pgf/s     stdev       pgf/s     stdev
>      1     64                    12,080,629    33,676   2,443,043    61,973
>      2     64     3.9%     9.9%  12,551,136   206,202   2,684,632    69,483
>      3     64    15.0%    -3.8%  13,892,933   351,657   2,351,232    67,875
>      4     64    21.9%     1.8%  14,728,765    64,945   2,485,940    66,839
>
> [*] https://lwn.net/Articles/724502/ Range reader/writer locks
>     https://lwn.net/Articles/744188/ Speculative page faults
>

The speedup looks pretty nice and seems to peak at 16 tasks.  Do you have
an explanation for the drop from 28.2% to 21.9% when going from 16 to 64
tasks?  Was the loss in performance due to increased contention on the
LRU lock, because with more tasks running it becomes more likely that a
task hits a sentinel page?  If I understand your patchset correctly, you
still need to acquire the LRU lock for a sentinel page.  Perhaps an
increase in batch size could help?

Thanks.

Tim
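
P.S.  For anyone wanting to reproduce the numbers, my reading of the test
description above is that each page_fault1 task runs a loop roughly like
the sketch below.  It is not the actual will-it-scale source: the mapping
size and the fixed iteration count are placeholders I made up, since the
real test loops against a wall-clock limit and reports page faults per
second.

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE (128UL << 20)          /* assumed per-iteration region size */

int main(void)
{
        long pagesize = sysconf(_SC_PAGESIZE);
        unsigned long iters, faults = 0;

        for (iters = 0; iters < 100; iters++) {  /* stands in for the timed loop */
                char *c = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                unsigned long off;

                if (c == MAP_FAILED)
                        return 1;

                /* Write-fault every page: allocate it and add it to the LRU. */
                for (off = 0; off < MAP_SIZE; off += pagesize)
                        c[off] = 1;
                faults += MAP_SIZE / pagesize;

                munmap(c, MAP_SIZE);
        }

        printf("%lu page faults\n", faults);
        return 0;
}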