From: "Aneesh Kumar K.V"
To: Yu Zhao, Andrew Morton, Linus Torvalds
Cc: Andi Kleen, Catalin Marinas, Dave Hansen, Hillf Danton, Jens Axboe,
    Jesse Barnes, Johannes Weiner, Jonathan Corbet, Matthew Wilcox,
    Mel Gorman, Michael Larabel, Michal Hocko, Mike Rapoport,
    Rik van Riel, Vlastimil Babka, Will Deacon, Ying Huang,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    page-reclaim@google.com, x86@kernel.org, Yu Zhao, Brian Geffon,
    Jan Alexander Steffens, Oleksandr Natalenko, Steven Barrett,
    Suleiman Souhlal, Daniel Byrne, Donald Carr, Holger Hoffstätte,
    Konstantin Kharlamov, Shuang Zhai, Sofia Trinh, Vaibhav Jain
Subject: Re: [PATCH v9 06/14] mm: multi-gen LRU: minimal implementation
In-Reply-To: <20220309021230.721028-7-yuzhao@google.com>
References: <20220309021230.721028-1-yuzhao@google.com>
 <20220309021230.721028-7-yuzhao@google.com>
Date: Mon, 21 Mar 2022 18:31:36 +0530
Message-ID: <87a6dj793j.fsf@linux.ibm.com>

Yu Zhao writes:

> To avoid confusion, the terms "promotion" and "demotion" will be
> applied to the multi-gen LRU, as a new convention; the terms
> "activation" and "deactivation" will be applied to the active/inactive
> LRU, as usual.
>
> The aging produces young generations. Given an lruvec, it increments
> max_seq when max_seq-min_seq+1 approaches MIN_NR_GENS. The aging
> promotes hot pages to the youngest generation when it finds them
> accessed through page tables; the demotion of cold pages happens
> consequently when it increments max_seq. The aging has the complexity
> O(nr_hot_pages), since it is only interested in hot pages. Promotion
> in the aging path does not require any LRU list operations, only the
> updates of the gen counter and lrugen->nr_pages[]; demotion, unless as
> the result of the increment of max_seq, requires LRU list operations,
> e.g., lru_deactivate_fn().
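My (over)simplified reading of the aging described above, as a sketch.
This is illustrative only, not code from this patch: "struct gen_sketch"
and try_to_age() are made-up names, and the real trigger condition is
more involved than the bare "approaches MIN_NR_GENS" gate shown here.

    #include <stdio.h>

    #define MIN_NR_GENS 2   /* fewest generations kept around (toy value) */

    struct gen_sketch {
            unsigned long max_seq;  /* youngest generation number */
            unsigned long min_seq;  /* oldest generation number */
    };

    /* Create a new youngest generation only when the lruvec is about to
     * run out of generations; pages left in the older generations thereby
     * become relatively colder without any LRU list operation. */
    static void try_to_age(struct gen_sketch *g)
    {
            if (g->max_seq - g->min_seq + 1 <= MIN_NR_GENS)
                    g->max_seq++;
    }

    int main(void)
    {
            struct gen_sketch g = { .max_seq = 3, .min_seq = 2 };

            try_to_age(&g); /* only 2 generations left, so a new one is created */
            printf("max_seq=%lu min_seq=%lu\n", g.max_seq, g.min_seq);
            return 0;
    }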
> The eviction consumes old generations. Given an lruvec, it increments
> min_seq when the lists indexed by min_seq%MAX_NR_GENS become empty. A
> feedback loop modeled after the PID controller monitors refaults over
> anon and file types and decides which type to evict when both types
> are available from the same generation.
>
> Each generation is divided into multiple tiers. Tiers represent
> different ranges of numbers of accesses through file descriptors. A
> page accessed N times through file descriptors is in tier
> order_base_2(N). Tiers do not have dedicated lrugen->lists[], only
> bits in folio->flags. In contrast to moving across generations, which
> requires the LRU lock, moving across tiers only involves operations on
> folio->flags. The feedback loop also monitors refaults over all tiers
> and decides when to protect pages in which tiers (N>1), using the
> first tier (N=0,1) as a baseline. The first tier contains single-use
> unmapped clean pages, which are most likely the best choices. The
> eviction moves a page to the next generation, i.e., min_seq+1, if the
> feedback loop decides so. This approach has the following advantages:
> 1. It removes the cost of activation in the buffered access path by
>    inferring whether pages accessed multiple times through file
>    descriptors are statistically hot and thus worth protecting in the
>    eviction path.
> 2. It takes pages accessed through page tables into account and avoids
>    overprotecting pages accessed multiple times through file
>    descriptors. (Pages accessed through page tables are in the first
>    tier, since N=0.)
> 3. More tiers provide better protection for pages accessed more than
>    twice through file descriptors, when under heavy buffered I/O
>    workloads.
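To make the tier arithmetic above concrete: order_base_2(N) is
ceil(log2(N)) with N=0,1 both landing in tier 0, so tiers grow
exponentially with the number of accesses through file descriptors.
Below is a user-space toy, not the patch code; tier_of() is a made-up
helper, and in the patch the per-folio count lives in folio->flags bits
rather than a plain integer.

    #include <stdio.h>

    /* Made-up helper mirroring order_base_2(N): the smallest tier such
     * that 2^tier >= N (N=0 and N=1 both map to tier 0). */
    static unsigned int tier_of(unsigned int refs)
    {
            unsigned int tier = 0;

            while ((1u << tier) < refs)
                    tier++;

            return tier;
    }

    int main(void)
    {
            /* N=0,1 -> tier 0 (the baseline tier, which also holds mapped
             * pages, since accesses through page tables are not counted);
             * N=2 -> tier 1; N=3,4 -> tier 2; N=5..8 -> tier 3; and so on. */
            for (unsigned int n = 0; n <= 8; n++)
                    printf("N=%u -> tier %u\n", n, tier_of(n));

            return 0;
    }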
> Server benchmark results:
>   Single workload:
>     fio (buffered I/O): +[47, 49]%
>               IOPS         BW
>     5.17-rc2: 2242k        8759MiB/s
>     patch1-5: 3321k        12.7GiB/s
>
>   Single workload:
>     memcached (anon): +[101, 105]%
>               Ops/sec      KB/sec
>     5.17-rc2: 476771.79    18544.31
>     patch1-5: 972526.07    37826.95
>
>   Configurations:
>     CPU: two Xeon 6154
>     Mem: total 256G
>
>     Node 1 was only used as a ram disk to reduce the variance in the
>     results.
>
>     patch drivers/block/brd.c <<EOF
>     99,100c99,100
>     <   gfp_flags = GFP_NOIO | __GFP_ZERO | __GFP_HIGHMEM;
>     <   page = alloc_page(gfp_flags);
>     ---
>     >   gfp_flags = GFP_NOIO | __GFP_ZERO | __GFP_HIGHMEM | __GFP_THISNODE;
>     >   page = alloc_pages_node(1, gfp_flags, 0);
>     EOF
>
>     cat >>/etc/systemd/system.conf <<EOF
>     CPUAffinity=numa
>     NUMAPolicy=bind
>     NUMAMask=0
>     EOF
>
>     cat >>/etc/memcached.conf <<EOF
>     -m 184320
>     -s /var/run/memcached/memcached.sock
>     -a 0766
>     -t 36
>     -B binary
>     EOF
>
>     cat fio.sh
>     modprobe brd rd_nr=1 rd_size=113246208
>     mkfs.ext4 /dev/ram0
>     mount -t ext4 /dev/ram0 /mnt
>
>     mkdir /sys/fs/cgroup/user.slice/test
>     echo 38654705664 >/sys/fs/cgroup/user.slice/test/memory.max
>     echo $$ >/sys/fs/cgroup/user.slice/test/cgroup.procs
>     fio -name=mglru --numjobs=72 --directory=/mnt --size=1408m \
>       --buffered=1 --ioengine=io_uring --iodepth=128 \
>       --iodepth_batch_submit=32 --iodepth_batch_complete=32 \
>       --rw=randread --random_distribution=random --norandommap \
>       --time_based --ramp_time=10m --runtime=5m --group_reporting
>
>     cat memcached.sh
>     modprobe brd rd_nr=1 rd_size=113246208
>     swapoff -a
>     mkswap /dev/ram0
>     swapon /dev/ram0
>
>     memtier_benchmark -S /var/run/memcached/memcached.sock \
>       -P memcache_binary -n allkeys --key-minimum=1 \
>       --key-maximum=65000000 --key-pattern=P:P -c 1 -t 36 \
>       --ratio 1:0 --pipeline 8 -d 2000
>
>     memtier_benchmark -S /var/run/memcached/memcached.sock \
>       -P memcache_binary -n allkeys --key-minimum=1 \
>       --key-maximum=65000000 --key-pattern=R:R -c 1 -t 36 \
>       --ratio 0:1 --pipeline 8 --randomize --distinct-client-seed
>
> Client benchmark results:
>   kswapd profiles:
>     5.17-rc2
>       38.05%  page_vma_mapped_walk
>       20.86%  lzo1x_1_do_compress (real work)
>        6.16%  do_raw_spin_lock
>        4.61%  _raw_spin_unlock_irq
>        2.20%  vma_interval_tree_iter_next
>        2.19%  vma_interval_tree_subtree_search
>        2.15%  page_referenced_one
>        1.93%  anon_vma_interval_tree_iter_first
>        1.65%  ptep_clear_flush
>        1.00%  __zram_bvec_write
>
>     patch1-5
>       39.73%  lzo1x_1_do_compress (real work)
>       14.96%  page_vma_mapped_walk
>        6.97%  _raw_spin_unlock_irq
>        3.07%  do_raw_spin_lock
>        2.53%  anon_vma_interval_tree_iter_first
>        2.04%  ptep_clear_flush
>        1.82%  __zram_bvec_write
>        1.76%  __anon_vma_interval_tree_subtree_search
>        1.57%  memmove
>        1.45%  free_unref_page_list
>
>   Configurations:
>     CPU: single Snapdragon 7c
>     Mem: total 4G
>
>     Chrome OS MemoryPressure [1]
>
> [1] https://chromium.googlesource.com/chromiumos/platform/tast-tests/

In shrink_active_list() we give preferential treatment to VM_EXEC
file-backed pages. Do we do something similar with MGLRU? If not, why is
that not needed?

    if (page_referenced(page, 0, sc->target_mem_cgroup,
                        &vm_flags)) {
            /*
             * Identify referenced, file-backed active pages and
             * give them one more trip around the active list. So
             * that executable code get better chances to stay in
             * memory under moderate memory pressure.  Anon pages
             * are not likely to be evicted by use-once streaming
             * IO, plus JVM can create lots of anon VM_EXEC pages,
             * so we ignore them here.
             */
            if ((vm_flags & VM_EXEC) && page_is_file_lru(page)) {
                    nr_rotated += thp_nr_pages(page);
                    list_add(&page->lru, &l_active);
                    continue;
            }
    }