Subject: Re: [PATCH next] mm/swap.c: reduce lock contention in lru_cache_add
To: Alex Shi
Cc: Konstantin Khlebnikov, Andrew Morton, Hugh Dickins, Yu Zhao,
    Michal Hocko, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <1605860847-47445-1-git-send-email-alex.shi@linux.alibaba.com>
From: Vlastimil Babka
Date: Thu, 26 Nov 2020 12:05:28 +0100

On 11/26/20 4:12 AM, Alex Shi wrote:
>
>
> On 11/25/20 11:38 PM, Vlastimil Babka wrote:
>> On 11/20/20 9:27 AM, Alex Shi wrote:
>>> The current relock logic changes lru_lock whenever a new lruvec is
>>> found, so if 2 memcgs are reading files or allocating pages at the
>>> same time, they can end up holding the lru_lock alternately, waiting
>>> for each other due to the fairness attribute of the ticket spinlock.
>>>
>>> This patch sorts all the lru_locks and holds each of them only once
>>> in the above scenario. That reduces the fairness waiting caused by
>>> re-taking the lock. With it, vm-scalability/case-lru-file-readtwice
>>> gets a ~5% performance gain on my 2P*20core*HT machine.
>>
>> Hm, once you sort the pages like this, it's a shame not to splice them
>> instead of more list_del() + list_add() iterations. update_lru_size()
>> could also be called once?
>
> Yes, it looks like a good idea to use splice instead of list_del/add,
> but pages may be on different lru lists within the same lruvec, and may
> also come from different zones. That could involve 5 cycles for the
> different lists, and more for the zones...

Hmm, zones wouldn't affect splicing (there's a per-node lru these days),
but they would affect accounting. And yeah, there are 5 lru lists, and we
probably need to be under the lru lock to stabilize where a page belongs,
so pre-sorting without the lock wouldn't be safe? Bummer.

> I give up the try.
>