From: Alex Shi <alex.shi@linux.alibaba.com>
Subject: Re: [patch 113/131] mm: balance LRU lists based on relative thrashing
To: Joonsoo Kim, Johannes Weiner
Cc: LKML, Andrew Morton, Joonsoo Kim, Linux Memory Management List,
 Michal Hocko, Minchan Kim, mm-commits@vger.kernel.org, Rik van Riel,
 Linus Torvalds
References: <20200603230303.kSkT62Lb5%akpm@linux-foundation.org>
 <20200609144551.GA452252@cmpxchg.org>
Date: Thu, 11 Jun 2020 11:28:47 +0800
On 2020/6/10 1:23 PM, Joonsoo Kim wrote:
> On Tue, Jun 9, 2020 at 11:46 PM, Johannes Weiner wrote:
>>
>> On Tue, Jun 09, 2020 at 05:15:33PM +0800, Alex Shi wrote:
>>>
>>> On 2020/6/4 7:03 AM, Andrew Morton wrote:
>>>>
>>>> +	/* XXX: Move to lru_cache_add() when it supports new vs putback */
>>>
>>> Hi Hannes,
>>>
>>> Sorry, I'm a bit lost here. Would you like to explain your idea a bit more?
>>>
>>>> +	spin_lock_irq(&page_pgdat(page)->lru_lock);
>>>> +	lru_note_cost(page);
>>>> +	spin_unlock_irq(&page_pgdat(page)->lru_lock);
>>>> +
>>>
>>> What could we see here without the lru_lock?

The reason I'm asking about the lru_lock protection here is that we currently
have 5 LRU lists guarded by only one lock, which causes a lot of contention
when different applications are active on a server.

I guess we originally had only one lru_lock because 5 locks would cause
cacheline bouncing if we packed them together, or waste a bit of cacheline
space if we separated them. But now that we have qspinlock, each CPU just
spins on its own cacheline without interfering with the others. That would
greatly relieve the performance drop caused by cacheline bouncing. And we
could use spare bits of page->mapping to store which LRU list the page is on.

As a quick thought, I guess that besides the 5 locks for the 5 lists, we
would still need 1 more lock for the common lruvec data, or for other things
that rely on lru_lock now, like mlock, hpage_nr_pages, etc.

That's the reason I want to know everything that runs under lru_lock. :)

Any comments on this idea? :)

Thanks
Alex

>>
>> It'll just be part of the existing LRU locking in
>> pagevec_lru_move_fn(), when the new pages are added to the LRU in
>> batch. See this older patch for example:
>>
>> https://lore.kernel.org/linux-mm/20160606194836.3624-6-hannes@cmpxchg.org/
>>
>> I didn't include it in this series to reduce conflicts with Joonsoo's
>> WIP series, which also operates in this area and does something similar:
>
> Thanks!
>
>> https://lkml.org/lkml/2020/4/3/63
>
> I haven't completed the rebase of my series, but I guess the referenced
> patch "https://lkml.org/lkml/2020/4/3/63" will be removed in the next
> version.

Thanks a lot for the info, Johannes & Joonsoo!
A long history for an interesting idea. :)

>
> Before the I/O cost model, a new anonymous page contributed to the LRU
> reclaim balance. But now a new anonymous page doesn't contribute to the
> I/O cost, so this adjusting patch would not be needed anymore.
>
> If anyone wants to change this part,
> "/* XXX: Move to lru_cache_add() when it supports new vs putback */",
> feel free to do it.
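
To make the per-list locking idea above a bit more concrete, here is a rough
C sketch, for illustration only. The struct lruvec_locks, the LRU_IDX_MASK
bits borrowed from page->mapping, and the page_lru_idx() and
lru_note_cost_split() helpers are all hypothetical names invented for this
sketch, not existing kernel APIs; and whether lru_note_cost() could really
run under a single per-list lock is exactly the open question in this thread:

#include <linux/mm_types.h>
#include <linux/mmzone.h>
#include <linux/spinlock.h>
#include <linux/swap.h>

/*
 * Hypothetical: one spinlock per LRU list instead of one lruvec-wide
 * lru_lock, plus a separate lock for the shared lruvec fields that
 * several lists touch (counters, costs, ...).
 */
struct lruvec_locks {
	spinlock_t lru_locks[NR_LRU_LISTS];	/* one lock per LRU list */
	spinlock_t meta_lock;			/* common lruvec data */
};

#define LRU_IDX_MASK	0x7UL	/* assumed spare low bits of ->mapping */

/* Recover which LRU list a page sits on from bits of page->mapping. */
static inline enum lru_list page_lru_idx(struct page *page)
{
	return (enum lru_list)((unsigned long)page->mapping & LRU_IDX_MASK);
}

/* Contend only on the lock of the list this page belongs to. */
static void lru_note_cost_split(struct page *page, struct lruvec_locks *l)
{
	enum lru_list idx = page_lru_idx(page);

	spin_lock_irq(&l->lru_locks[idx]);
	lru_note_cost(page);	/* assumes per-list protection suffices */
	spin_unlock_irq(&l->lru_locks[idx]);
}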