Subject: Re: [PATCH 00/11] mm: lru related cleanups
From: Alex Shi <alex.shi@linux.alibaba.com>
To: Yu Zhao, Andrew Morton, Hugh Dickins
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin, Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Thu, 10 Dec 2020 17:28:08 +0800
Message-ID: <54bdbe42-023a-4e32-9b94-173d0ad2dc16@linux.alibaba.com>
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
Hi Yu,

Btw, after this patchset, cacheline alignment on each of the lru lists becomes possible. Did you try that to see whether performance changes?

Thanks
Alex

On 2020/12/8 6:09 AM, Yu Zhao wrote:
> The cleanups are intended to reduce the verbosity of lru list
> operations and make them less error-prone. A typical example
> is how the patches change __activate_page():
>
>  static void __activate_page(struct page *page, struct lruvec *lruvec)
>  {
>  	if (!PageActive(page) && !PageUnevictable(page)) {
> -		int lru = page_lru_base_type(page);
>  		int nr_pages = thp_nr_pages(page);
>
> -		del_page_from_lru_list(page, lruvec, lru);
> +		del_page_from_lru_list(page, lruvec);
>  		SetPageActive(page);
> -		lru += LRU_ACTIVE;
> -		add_page_to_lru_list(page, lruvec, lru);
> +		add_page_to_lru_list(page, lruvec);
>  		trace_mm_lru_activate(page);
>
> There are a few more places like __activate_page(), and they are
> unnecessarily repetitive in terms of figuring out which list a page
> should be added to or deleted from. With the duplicated code
> removed, they are easier to read, IMO.
>
> Patches 1 to 5 cover the above. Patches 6 and 7 make the code more
> robust by improving bug reporting. Patches 8, 9 and 10 take care of
> some dangling helpers left in header files. Patch 11 isn't strictly a
> clean-up, but it seems relevant enough to include here.
>
> Yu Zhao (11):
>   mm: use add_page_to_lru_list()
>   mm: shuffle lru list addition and deletion functions
>   mm: don't pass "enum lru_list" to lru list addition functions
>   mm: don't pass "enum lru_list" to trace_mm_lru_insertion()
>   mm: don't pass "enum lru_list" to del_page_from_lru_list()
>   mm: add __clear_page_lru_flags() to replace page_off_lru()
>   mm: VM_BUG_ON lru page flags
>   mm: fold page_lru_base_type() into its sole caller
>   mm: fold __update_lru_size() into its sole caller
>   mm: make lruvec_lru_size() static
>   mm: enlarge the "int nr_pages" parameter of update_lru_size()
>
>  include/linux/memcontrol.h     |  10 +--
>  include/linux/mm_inline.h      | 115 ++++++++++++++-------------------
>  include/linux/mmzone.h         |   2 -
>  include/linux/vmstat.h         |   6 +-
>  include/trace/events/pagemap.h |  11 ++--
>  mm/compaction.c                |   2 +-
>  mm/memcontrol.c                |  10 +--
>  mm/mlock.c                     |   3 +-
>  mm/swap.c                      |  50 ++++++--------
>  mm/vmscan.c                    |  21 ++----
>  10 files changed, 91 insertions(+), 139 deletions(-)