Subject: Re: [PATCH next] mm/swap.c: reduce lock contention in lru_cache_add
To: Vlastimil Babka
Cc: Konstantin Khlebnikov, Andrew Morton, Hugh Dickins, Yu Zhao,
    Michal Hocko, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <1605860847-47445-1-git-send-email-alex.shi@linux.alibaba.com>
From: Alex Shi
Date: Thu, 26 Nov 2020 11:12:30 +0800
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On 2020/11/25 11:38 PM, Vlastimil Babka wrote:
> On 11/20/20 9:27 AM, Alex Shi wrote:
>> The current relock logic changes the lru_lock whenever a new lruvec
>> is found, so if two memcgs are reading files or allocating pages at
>> the same time, they can end up holding the lru_lock alternately, each
>> waiting for the other because of the fairness property of the ticket
>> spinlock.
>>
>> This patch sorts all the lru_locks and holds each of them only once
>> in the above scenario, which reduces the fairness waiting caused by
>> re-acquiring the lock. With this change,
>> vm-scalability/case-lru-file-readtwice gains ~5% performance on my
>> 2-socket * 20-core * HT machine.
>
> Hm, once you sort the pages like this, it's a shame not to splice them
> instead of more list_del() + list_add() iterations. update_lru_size()
> could be also called once?

Yes, it looks like a good idea to use splice instead of
list_del()/list_add(), but pages in the same lruvec may sit on different
LRU lists, and may also come from different zones. That could mean 5
separate splice passes for the different lists, and more for the
zones... so I gave up on that attempt.