From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org,
    lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
    richard.weiyang@gmail.com
Cc: Alex Shi <alex.shi@linux.alibaba.com>
Subject: [PATCH v10 00/15] per memcg lru lock
Date: Mon, 27 Apr 2020 15:02:49 +0800
Message-Id: <1587970985-21629-1-git-send-email-alex.shi@linux.alibaba.com>
This is a new version, based on Johannes Weiner's new patchset
"mm: memcontrol: charge swapin pages on instantiation":
https://lkml.org/lkml/2020/4/21/266

Johannes Weiner suggested:
"So here is a crazy idea that may be worth exploring: Right now,
pgdat->lru_lock protects both PageLRU *and* the lruvec's linked list.
Can we make PageLRU atomic and use it to stabilize the lru_lock
instead, and then use the lru_lock only serialize list operations? ..."

With the cleaned-up memcg charge path and this suggestion, we can
isolate LRU pages and visit them exclusively in compaction, page
migration, reclaim, memcg move_account, huge page split and similar
scenarios, while keeping each page's memcg stable. It then becomes
possible to turn the per-node lru locking into per-memcg lru locking.
As for the pagevec_lru_move_fn functions, it is safe to let pages
remain on the lru list there; the lru lock guards them for list
integrity.

This version safely passes Hugh Dickins's swapping kernel-build test
case. Thanks for the great test case! I am sending it out a bit early
for more testing and review while Johannes's new memcg charge patches
are still fresh in everyone's mind. :) I will do more testing in the
meantime.

The patchset consists of 3 parts:
1) some code cleanup and minimal optimization, as preparation
2) use TestClearPageLRU as the precondition for page isolation
3) replace the per-node lru_lock with a per-memcg, per-node lru_lock

The 3rd part moves the per-node lru_lock into the lruvec, giving each
memcg its own lru_lock on each node. On a large machine, memcgs no
longer have to compete for the shared per-node pgdat->lru_lock; each
can go fast with its own lru_lock. Rough sketches of parts 2 and 3
follow at the end of this letter.

Following Daniel Jordan's suggestion, I ran 208 'dd' tasks in 104
containers on a 2-socket * 26-core * HT box with a modified case:
https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice

With this patchset, readtwice performance increased by about 80% with
concurrent containers.

Thanks to Hugh Dickins and Konstantin Khlebnikov, who both proposed
this idea 8 years ago, and to everyone who commented: Daniel Jordan,
Mel Gorman, Shakeel Butt, Matthew Wilcox, etc. Thanks also for testing
support from Intel 0day, Rong Chen, Fengguang Wu, and Yun Wang.
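To make part 2 concrete, here is a minimal sketch of page isolation
with TestClearPageLRU as the precondition. This is an illustration
only, not a patch from the series: the function name is made up, the
body is heavily simplified, and lock_page_lruvec_irq /
unlock_page_lruvec_irq follow the naming this series introduces.

    static bool isolate_lru_page_sketch(struct page *page,
                                        struct list_head *dst)
    {
            struct lruvec *lruvec;

            /*
             * Atomically claim the page: a racing isolator now loses
             * on the PageLRU bit instead of on pgdat->lru_lock.
             */
            if (!TestClearPageLRU(page))
                    return false;

            /*
             * The PageLRU "claim" keeps the page's memcg stable, so
             * looking up and locking its lruvec is safe here.
             */
            lruvec = lock_page_lruvec_irq(page);
            list_move(&page->lru, dst);
            unlock_page_lruvec_irq(lruvec);
            return true;
    }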
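The structural change in part 3, in rough outline. Both structs are
heavily abbreviated; only the placement of the lock matters here.

    /* before: one lock per node, shared by every memcg */
    struct pglist_data {
            /* ... */
            spinlock_t lru_lock;    /* serializes all LRU lists on the node */
    };

    /* after: the lock lives in the lruvec, i.e. per memcg per node */
    struct lruvec {
            struct list_head lists[NR_LRU_LISTS];
            spinlock_t lru_lock;    /* guards only this memcg/node's lists */
            /* ... */
    };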
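For pagevec-style walks that may cross memcg boundaries, the relock
helper only drops and retakes the lock when the page's lruvec actually
changes. A simplified sketch of the idea (the real helper in the
series also has to re-check after locking, which is omitted here):

    static struct lruvec *relock_page_lruvec_irq(struct page *page,
                                                 struct lruvec *locked)
    {
            struct lruvec *lruvec;

            lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
            if (lruvec == locked)
                    return locked;  /* same memcg and node: keep the lock */

            if (locked)
                    unlock_page_lruvec_irq(locked);
            return lock_page_lruvec_irq(page);
    }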
Alex Shi (13):
  mm/swap: use vmf clean up swapin funcs parameters
  mm/vmscan: remove unnecessary lruvec adding
  mm/page_idle: no unlikely double check for idle page counting
  mm/thp: move lru_add_page_tail func to huge_memory.c
  mm/thp: clean up lru_add_page_tail
  mm/thp: narrow lru locking
  mm/memcg: add debug checking in lock_page_memcg
  mm/lru: introduce TestClearPageLRU
  mm/compaction: do page isolation first in compaction
  mm/mlock: ClearPageLRU before get lru lock in munlock page isolation
  mm/lru: replace pgdat lru_lock with lruvec lock
  mm/lru: introduce the relock_page_lruvec function
  mm/pgdat: remove pgdat lru_lock

Hugh Dickins (2):
  mm/vmscan: use relock for move_pages_to_lru
  mm/lru: revise the comments of lru_lock

 Documentation/admin-guide/cgroup-v1/memcg_test.rst |  15 +-
 Documentation/admin-guide/cgroup-v1/memory.rst     |   8 +-
 Documentation/trace/events-kmem.rst                |   2 +-
 Documentation/vm/unevictable-lru.rst               |  22 +--
 include/linux/memcontrol.h                         |  92 +++++++++++
 include/linux/mm_types.h                           |   2 +-
 include/linux/mmzone.h                             |   5 +-
 include/linux/page-flags.h                         |   1 +
 include/linux/swap.h                               |  12 +-
 mm/compaction.c                                    |  85 +++++++----
 mm/filemap.c                                       |   4 +-
 mm/huge_memory.c                                   |  55 +++++--
 mm/madvise.c                                       |  11 +-
 mm/memcontrol.c                                    |  87 ++++++++++-
 mm/mlock.c                                         |  93 ++++++------
 mm/mmzone.c                                        |   1 +
 mm/page_alloc.c                                    |   1 -
 mm/page_idle.c                                     |   8 -
 mm/rmap.c                                          |   2 +-
 mm/swap.c                                          | 119 ++++-------
 mm/swap_state.c                                    |  23 ++-
 mm/swapfile.c                                      |   8 +-
 mm/vmscan.c                                        | 168 +++++++++++----------
 mm/zswap.c                                         |   3 +-
 24 files changed, 497 insertions(+), 330 deletions(-)

-- 
1.8.3.1