Date: Sun, 7 Jun 2020 21:15:21 -0700 (PDT)
From: Hugh Dickins
To: Alex Shi
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org,
    lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
    richard.weiyang@gmail.com
Subject: Re: [PATCH v11 00/16] per memcg lru lock
In-Reply-To: <1590663658-184131-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1590663658-184131-1-git-send-email-alex.shi@linux.alibaba.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, 28 May 2020, Alex Shi wrote:

> This is a new version which bases on
> linux-next.
>
> Johannes Weiner has suggested:
> "So here is a crazy idea that may be worth exploring:
>
> Right now, pgdat->lru_lock protects both PageLRU *and* the lruvec's
> linked list.
>
> Can we make PageLRU atomic and use it to stabilize the lru_lock
> instead, and then use the lru_lock to only serialize list operations?
> ..."
>
> With the new memcg charge path and this solution, we can isolate LRU
> pages and visit them exclusively in the compaction, page migration,
> reclaim, memcg move_account, huge page split etc. scenarios, while
> keeping the pages' memcg stable. It then becomes possible to change
> the per-node lru locking to per-memcg lru locking. As for the
> pagevec_lru_move_fn funcs, it is safe to let pages remain on the lru
> list; the lru lock guards them for list integrity.
>
> The patchset includes 3 parts:
> 1, some code cleanup and minimum optimization as a preparation.
> 2, use TestClearPageLRU as page isolation's precondition.
> 3, replace the per-node lru_lock with a per-memcg per-node lru_lock.
>
> The 3rd part moves the per-node lru_lock into the lruvec, thus
> bringing an lru_lock for each memcg on each node. So on a large
> machine, each memcg doesn't have to suffer from per-node
> pgdat->lru_lock contention; each can go fast with its own lru_lock.
>
> Following Daniel Jordan's suggestion, I have run 208 'dd' tasks in
> 104 containers on a 2s * 26cores * HT box with a modified case:
> https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice
>
> With this patchset, the readtwice performance increased about 80%
> in concurrent containers.
>
> Thanks to Hugh Dickins and Konstantin Khlebnikov, who both brought
> up this idea 8 years ago, and to the others who gave comments as
> well: Daniel Jordan, Mel Gorman, Shakeel Butt, Matthew Wilcox etc.
>
> Thanks for the testing support from Intel 0day and Rong Chen,
> Fengguang Wu, and Yun Wang. Hugh Dickins also shared his kbuild-swap
> case. Thanks!
>
> Alex Shi (14):
>   mm/vmscan: remove unnecessary lruvec adding
>   mm/page_idle: no unlikely double check for idle page counting
>   mm/compaction: correct the comments of compact_defer_shift
>   mm/compaction: rename compact_deferred as compact_should_defer
>   mm/thp: move lru_add_page_tail func to huge_memory.c
>   mm/thp: clean up lru_add_page_tail
>   mm/thp: narrow lru locking
>   mm/memcg: add debug checking in lock_page_memcg
>   mm/lru: introduce TestClearPageLRU
>   mm/compaction: do page isolation first in compaction
>   mm/mlock: reorder isolation sequence during munlock
>   mm/lru: replace pgdat lru_lock with lruvec lock
>   mm/lru: introduce the relock_page_lruvec function
>   mm/pgdat: remove pgdat lru_lock
>
> Hugh Dickins (2):
>   mm/vmscan: use relock for move_pages_to_lru
>   mm/lru: revise the comments of lru_lock
>
>  Documentation/admin-guide/cgroup-v1/memcg_test.rst |  15 +-
>  Documentation/admin-guide/cgroup-v1/memory.rst     |   8 +-
>  Documentation/trace/events-kmem.rst                |   2 +-
>  Documentation/vm/unevictable-lru.rst               |  22 +--
>  include/linux/compaction.h                         |   4 +-
>  include/linux/memcontrol.h                         |  92 +++++++++++
>  include/linux/mm_types.h                           |   2 +-
>  include/linux/mmzone.h                             |   6 +-
>  include/linux/page-flags.h                         |   1 +
>  include/linux/swap.h                               |   4 +-
>  include/trace/events/compaction.h                  |   2 +-
>  mm/compaction.c                                    | 104 ++++++++-----
>  mm/filemap.c                                       |   4 +-
>  mm/huge_memory.c                                   |  51 +++++--
>  mm/memcontrol.c                                    |  87 ++++++++++-
>  mm/mlock.c                                         |  93 ++++++------
>  mm/mmzone.c                                        |   1 +
>  mm/page_alloc.c                                    |   1 -
>  mm/page_idle.c                                     |   8 -
>  mm/rmap.c                                          |   2 +-
>  mm/swap.c                                          | 112 ++++---------
>  mm/swap_state.c                                    |   6 +-
>  mm/vmscan.c                                        | 168 +++++++++++----------
>  mm/workingset.c                                    |   4 +-
>  24 files changed, 487 insertions(+), 312 deletions(-)

Hi Alex,

I didn't get to try v10 at all, waited until Johannes's preparatory
memcg swap cleanup was in mmotm; but I have spent a while thrashing
this v11, and can happily report that it is much better than v9 etc:
I believe this memcg lru_lock work will soon be ready for v5.9.
I've not yet found any flaw at the swapping end, but fixes are needed
for isolate_migratepages_block() and mem_cgroup_move_account(): I've
got a series of 4 fix patches to send you (I guess two to fold into
existing patches of yours, and two to keep as separate fixes from me).
I haven't yet written the patch descriptions; I will return to that
tomorrow. I expect you will be preparing a v12 rebased on v5.8-rc1 or
v5.8-rc2, and will be able to include these fixes in that.

Tomorrow...
Hugh