Subject: Re: [PATCH next] mm/swap.c: reduce lock contention in lru_cache_add
From: Alex Shi
To: Yu Zhao
Cc: Konstantin Khlebnikov, Andrew Morton, Hugh Dickins, Michal Hocko,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Thu, 26 Nov 2020 16:09:29 +0800
Message-ID: <0e14f1dc-31bb-5965-4711-9e59c51ee36d@linux.alibaba.com>
In-Reply-To: <20201126072402.GA1047005@google.com>
References: <1605860847-47445-1-git-send-email-alex.shi@linux.alibaba.com>
 <20201126045234.GA1014081@google.com>
 <20201126072402.GA1047005@google.com>
On 2020/11/26 3:24 PM, Yu Zhao wrote:
> Oh, no, I'm not against your idea. I was saying it doesn't seem
> necessary to sort -- a nested loop would just do the job given
> pagevec is small.
>
> diff --git a/mm/swap.c b/mm/swap.c
> index cb3794e13b48..1d238edc2907 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -996,15 +996,26 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
>   */
>  void __pagevec_lru_add(struct pagevec *pvec)
>  {
> -	int i;
> +	int i, j;
>  	struct lruvec *lruvec = NULL;
>  	unsigned long flags = 0;
>
>  	for (i = 0; i < pagevec_count(pvec); i++) {
>  		struct page *page = pvec->pages[i];
>
> +		if (!page)
> +			continue;
> +
>  		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
> -		__pagevec_lru_add_fn(page, lruvec);
> +
> +		for (j = i; j < pagevec_count(pvec); j++) {
> +			if (page_to_nid(pvec->pages[j]) != page_to_nid(page) ||
> +			    page_memcg(pvec->pages[j]) != page_memcg(page))
> +				continue;
> +
> +			__pagevec_lru_add_fn(pvec->pages[j], lruvec);
> +			pvec->pages[j] = NULL;
> +		}

Uh, I have to say your method is much better than mine. And this pattern
could be reused for all relock_page_lruvec() callers.

I expect this could speed up lru performance a lot!

>  	}
>  	if (lruvec)
>  		unlock_page_lruvec_irqrestore(lruvec, flags);