Subject: Re: [PATCH v17 00/21] per memcg lru lock
From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
 hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
 yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org,
 lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
 richard.weiyang@gmail.com, kirill@shutemov.name, alexander.duyck@gmail.com,
 rong.a.chen@intel.com
References: <1595681998-19193-1-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1595681998-19193-1-git-send-email-alex.shi@linux.alibaba.com>
Message-ID: <49d4f3bf-ccce-3c97-3a4c-f5cefe2d623a@linux.alibaba.com>
Date: Mon, 27 Jul 2020 13:40:31 +0800

The standard sequence for the new page isolation is:

1. get_page();            #pin the page so it cannot be freed
2. TestClearPageLRU();    #serialize against other isolations and memcg change
3. spin_lock on lru_lock; #serialize lru list access

Step 2 could be optimized out or replaced in scenarios where the page is
unlikely to be accessed by others.

On 2020/7/25 8:59 PM, Alex Shi wrote:
> The new version, which is based on v5.8-rc6, includes Hugh Dickins' fix in
> mm/swap.c and the mm/mlock.c fix which Alexander Duyck pointed out, and
> drops 'mm/mlock: reorder isolation sequence during munlock'.
>
> Hi Johannes & Hugh & Alexander & Willy,
>
> Could you give a Reviewed-by, since you addressed many of the issues and
> gave lots of suggestions? Many thanks!
>
> The current lru_lock is one per node, pgdat->lru_lock, guarding the lru
> lists, but the lru lists moved into memcg a long time ago. Still using a
> per-node lru_lock is clearly unscalable: pages of all the memcgs on a node
> have to compete with each other for one lru_lock. This patchset uses a per
> lruvec/memcg lru_lock in place of the per-node lru lock to guard the lru
> lists, making them scale with memcgs and gaining performance.
>
> Currently lru_lock guards both the lru list and the page's lru bit, which
> is fine. But if we want to use a page-specific lruvec lock, we need to pin
> down the page's lruvec/memcg while locking; just taking the lruvec lock
> first can be undermined by a memcg charge/migration of the page. To fix
> this, we clear the page's lru bit first and use that as the pin that blocks
> memcg changes. That is the reason for the new atomic function
> TestClearPageLRU. So isolating a page now needs both actions:
> TestClearPageLRU and holding the lru_lock.
>
> The typical user of this is isolate_migratepages_block() in compaction.c:
> we have to take the lru bit before the lru lock, which serializes page
> isolation against memcg page charge/migration, since the latter changes
> the page's lruvec and hence the lru_lock in it.
>
> The above solution was suggested by Johannes Weiner and builds on his new
> memcg charge path. (Hugh Dickins tested and contributed much code, from
> compaction fixes to general code polish. Thanks a lot!)
>
> The patchset consists of 3 parts:
> 1. some code cleanup and minimal optimization as preparation;
> 2. use TestClearPageLRU as the precondition for page isolation;
> 3. replace the per-node lru_lock with a per-memcg, per-node lru_lock.
>
> Following Daniel Jordan's suggestion, I ran 208 'dd' tasks in 104
> containers on a 2-socket * 26-core * HT box with a modified case:
> https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice
> With this patchset, readtwice performance increased by about 80% with
> concurrent containers.
>
> Thanks to Hugh Dickins and Konstantin Khlebnikov, who both brought up this
> idea 8 years ago, and to the others who gave comments as well: Daniel
> Jordan, Mel Gorman, Shakeel Butt, Matthew Wilcox, etc.
>
> Thanks for the testing support from Intel 0day, Rong Chen, Fengguang Wu,
> and Yun Wang. Hugh Dickins also shared his kbuild-swap case. Thanks!
>
> Alex Shi (19):
>   mm/vmscan: remove unnecessary lruvec adding
>   mm/page_idle: no unlikely double check for idle page counting
>   mm/compaction: correct the comments of compact_defer_shift
>   mm/compaction: rename compact_deferred as compact_should_defer
>   mm/thp: move lru_add_page_tail func to huge_memory.c
>   mm/thp: clean up lru_add_page_tail
>   mm/thp: remove code path which never got into
>   mm/thp: narrow lru locking
>   mm/memcg: add debug checking in lock_page_memcg
>   mm/swap: fold vm event PGROTATED into pagevec_move_tail_fn
>   mm/lru: move lru_lock holding in func lru_note_cost_page
>   mm/lru: move lock into lru_note_cost
>   mm/lru: introduce TestClearPageLRU
>   mm/compaction: do page isolation first in compaction
>   mm/thp: add tail pages into lru anyway in split_huge_page()
>   mm/swap: serialize memcg changes in pagevec_lru_move_fn
>   mm/lru: replace pgdat lru_lock with lruvec lock
>   mm/lru: introduce the relock_page_lruvec function
>   mm/pgdat: remove pgdat lru_lock
>
> Hugh Dickins (2):
>   mm/vmscan: use relock for move_pages_to_lru
>   mm/lru: revise the comments of lru_lock
>
>  Documentation/admin-guide/cgroup-v1/memcg_test.rst |  15 +-
>  Documentation/admin-guide/cgroup-v1/memory.rst     |  21 +--
>  Documentation/trace/events-kmem.rst                |   2 +-
>  Documentation/vm/unevictable-lru.rst               |  22 +--
>  include/linux/compaction.h                         |   4 +-
>  include/linux/memcontrol.h                         |  98 ++++++++++
>  include/linux/mm_types.h                           |   2 +-
>  include/linux/mmzone.h                             |   6 +-
>  include/linux/page-flags.h                         |   1 +
>  include/linux/swap.h                               |   4 +-
>  include/trace/events/compaction.h                  |   2 +-
>  mm/compaction.c                                    | 113 ++++++++----
>  mm/filemap.c                                       |   4 +-
>  mm/huge_memory.c                                   |  48 +++--
>  mm/memcontrol.c                                    |  71 ++++++-
>  mm/memory.c                                        |   3 -
>  mm/mlock.c                                         |  43 +++--
>  mm/mmzone.c                                        |   1 +
>  mm/page_alloc.c                                    |   1 -
>  mm/page_idle.c                                     |   8 -
>  mm/rmap.c                                          |   4 +-
>  mm/swap.c                                          | 203 ++++++++-------------
>  mm/swap_state.c                                    |   2 -
>  mm/vmscan.c                                        | 174 ++++++++++--------
>  mm/workingset.c                                    |   2 -
>  25 files changed, 510 insertions(+), 344 deletions(-)
>
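[Editor's illustration] The three-step isolation sequence at the top of this
mail can be modeled in plain userspace C11. Everything below (the page and
lruvec structs, the function bodies, the nr_pages stand-in for the lru list)
is a simplified sketch of the protocol, not the kernel's actual mm/
implementation; an atomic flag plays the role of the PG_lru page flag and a
pthread mutex plays the role of the per-lruvec lru_lock:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel structures. */
struct lruvec {
    pthread_mutex_t lru_lock;  /* per-lruvec lock guarding list access */
    int nr_pages;              /* stand-in for the lru list itself */
};

struct page {
    atomic_int refcount;       /* pin: page cannot be freed while > 0 */
    atomic_bool lru;           /* models the PG_lru page flag */
    struct lruvec *lruvec;     /* lruvec the page currently belongs to */
};

static void get_page(struct page *p) {
    atomic_fetch_add(&p->refcount, 1);       /* step 1: pin the page */
}

static void put_page(struct page *p) {
    atomic_fetch_sub(&p->refcount, 1);
}

/* Atomically test-and-clear the lru flag: of any number of racing
 * isolators (or a memcg move), exactly one observes 'true'. */
static bool TestClearPageLRU(struct page *p) {
    return atomic_exchange(&p->lru, false);  /* step 2: serialize */
}

static bool isolate_page(struct page *p) {
    get_page(p);
    if (!TestClearPageLRU(p)) {              /* lost the race, or not on lru */
        put_page(p);
        return false;
    }
    pthread_mutex_lock(&p->lruvec->lru_lock); /* step 3: lock the list */
    p->lruvec->nr_pages--;                    /* delete from the lru list */
    pthread_mutex_unlock(&p->lruvec->lru_lock);
    return true;
}
```

The point of the model: atomic_exchange guarantees that only one caller sees
the lru bit set, so a competing isolation (or, in the kernel, a memcg
charge/migration) is fenced off before the lru_lock is ever taken, which is
what lets step 2 pin the page's lruvec.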