Subject: Re: [RFC PATCH v1 00/13] lru_lock scalability
To: daniel.m.jordan@oracle.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: aaron.lu@intel.com, ak@linux.intel.com, akpm@linux-foundation.org,
    Dave.Dice@oracle.com, dave@stgolabs.net, khandual@linux.vnet.ibm.com,
    ldufour@linux.vnet.ibm.com, mgorman@suse.de, mhocko@kernel.org,
    pasha.tatashin@oracle.com, steven.sistare@oracle.com, yossi.lev@oracle.com
References: <20180131230413.27653-1-daniel.m.jordan@oracle.com>
In-Reply-To: <20180131230413.27653-1-daniel.m.jordan@oracle.com>
From: Steven Whitehouse
Message-ID: <6bd1c8a5-c682-a3ce-1f9f-f1f53b4117a9@redhat.com>
Date: Thu, 1 Feb 2018 15:54:38 +0000

Hi,

On 31/01/18 23:04, daniel.m.jordan@oracle.com wrote:
> lru_lock, a per-node* spinlock that protects an LRU list, is one of the
> hottest locks in the kernel.  On some workloads on large machines, it
> shows up at the top of lock_stat.
>
> One way to improve lru_lock scalability is to introduce an array of locks,
> with each lock protecting certain batches of LRU pages.
>
>     *ooooooooooo**ooooooooooo**ooooooooooo**oooo ...
>     |           ||           ||           ||
>      \ batch 1 /  \ batch 2 /  \ batch 3 /
>
> In this ASCII depiction of an LRU, a page is represented with either '*'
> or 'o'.  An asterisk indicates a sentinel page, which is a page at the
> edge of a batch.  An 'o' indicates a non-sentinel page.
>
> To remove a non-sentinel LRU page, only one lock from the array is
> required.  This allows multiple threads to remove pages from different
> batches simultaneously.  A sentinel page requires lru_lock in addition
> to a lock from the array.
>
> Full performance numbers appear in the last patch in this series, but
> this prototype allows a microbenchmark to do up to 28% more page faults
> per second with 16 or more concurrent processes.
>
> This work was developed in collaboration with Steve Sistare.
>
> Note: This is an early prototype.  I'm submitting it now to support my
> request to attend LSF/MM, as well as get early feedback on the idea.
> Any comments appreciated.
>
> * lru_lock is actually per-memcg, but without memcgs in the picture it
> becomes per-node.

GFS2 has an lru list for glocks, which can be contended under certain
workloads. Work is still ongoing to figure out exactly why, but this looks
like it might be a good approach to that issue too. The main purpose of
GFS2's lru list is to allow shrinking of the glocks under memory pressure
via the gfs2_scan_glock_lru() function, and it looks like this type of
approach could be used there to improve scalability too,

Steve.

>
> Aaron Lu (1):
>   mm: add a percpu_pagelist_batch sysctl interface
>
> Daniel Jordan (12):
>   mm: allow compaction to be disabled
>   mm: add lock array to pgdat and batch fields to struct page
>   mm: introduce struct lru_list_head in lruvec to hold per-LRU batch
>     info
>   mm: add batching logic to add/delete/move API's
>   mm: add lru_[un]lock_all APIs
>   mm: convert to-be-refactored lru_lock callsites to lock-all API
>   mm: temporarily convert lru_lock callsites to lock-all API
>   mm: introduce add-only version of pagevec_lru_move_fn
>   mm: add LRU batch lock API's
>   mm: use lru_batch locking in release_pages
>   mm: split up release_pages into non-sentinel and sentinel passes
>   mm: splice local lists onto the front of the LRU
>
>  include/linux/mm_inline.h | 209 +++++++++++++++++++++++++++++++++++++++++++++-
>  include/linux/mm_types.h  |   5 ++
>  include/linux/mmzone.h    |  25 +++++-
>  kernel/sysctl.c           |   9 ++
>  mm/Kconfig                |   1 -
>  mm/huge_memory.c          |   6 +-
>  mm/memcontrol.c           |   5 +-
>  mm/mlock.c                |  11 +--
>  mm/mmzone.c               |   7 +-
>  mm/page_alloc.c           |  43 +++++++++-
>  mm/page_idle.c            |   4 +-
>  mm/swap.c                 | 208 ++++++++++++++++++++++++++++++++++++---------
>  mm/vmscan.c               |  49 +++++------
>  13 files changed, 500 insertions(+), 82 deletions(-)
>
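[For readers following along, here is a minimal userspace sketch of the locking rule the cover letter describes: an array of batch locks, where unlinking a non-sentinel page takes only its batch lock, and a sentinel page additionally takes lru_lock. Pthread mutexes stand in for the kernel's spinlocks, and the names (lru_del_page, NUM_BATCH_LOCKS) are illustrative, not taken from the series itself.]

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define NUM_BATCH_LOCKS 16  /* hypothetical size; the series sizes this per node */

/* Coarse per-node lock, as in the unpatched kernel. */
static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;

/* One lock per batch of LRU pages (userspace stand-in for the
 * spinlock array added to pgdat in the series). */
static pthread_mutex_t batch_locks[NUM_BATCH_LOCKS] = {
	[0 ... NUM_BATCH_LOCKS - 1] = PTHREAD_MUTEX_INITIALIZER
};

struct page {
	struct page *prev, *next;
	int batch;      /* which batch, and therefore which array lock, covers this page */
	bool sentinel;  /* true for the '*' pages at the edge of a batch */
};

/* Unlink a page from a (circular) LRU list.  A non-sentinel page needs
 * only its batch lock, so removals in different batches can proceed
 * concurrently; a sentinel page also needs lru_lock, because it marks
 * a batch boundary shared with its neighbour. */
static void lru_del_page(struct page *p)
{
	if (p->sentinel)
		pthread_mutex_lock(&lru_lock);
	pthread_mutex_lock(&batch_locks[p->batch]);

	p->prev->next = p->next;
	p->next->prev = p->prev;

	pthread_mutex_unlock(&batch_locks[p->batch]);
	if (p->sentinel)
		pthread_mutex_unlock(&lru_lock);
}
```

The sketch ignores batch repair (resizing batches after a sentinel is removed), which the actual patches have to handle; it only shows why contention drops for the common non-sentinel case.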