From: Kairui Song
Date: Fri, 17 May 2024 11:29:57 +0800
Subject: Re: [RFC PATCH v2 2/2] mm: convert mm's rss stats to use atomic mode
To: Mateusz Guzik
Cc: "zhangpeng (AS)", Rongwei Wang, linux-mm@kvack.org, LKML,
 Andrew Morton, dennisszhou@gmail.com, shakeelb@google.com,
 jack@suse.cz, Suren Baghdasaryan, kent.overstreet@linux.dev,
 mhocko@suse.cz, vbabka@suse.cz, Yu Zhao, yu.ma@intel.com,
 wangkefeng.wang@huawei.com, sunnanyong@huawei.com
References: <20240418142008.2775308-1-zhangpeng362@huawei.com>
 <20240418142008.2775308-3-zhangpeng362@huawei.com>
 <5nv3r7qilsye5jgqcjrrbiry6on7wtjmce6twqbxg6nmvczue3@ikc22ggphg3h>
In-Reply-To: <5nv3r7qilsye5jgqcjrrbiry6on7wtjmce6twqbxg6nmvczue3@ikc22ggphg3h>

Mateusz Guzik wrote on Thu, 16 May 2024 at 23:14:
>
> On Thu, May 16, 2024 at 07:50:52PM +0800, Kairui Song wrote:
> > > > On 2024/4/18 22:20, Peng Zhang wrote:
> > > >> From: ZhangPeng
> > > >>
> > > >> Since commit f1a7941243c1 ("mm: convert mm's rss stats into
> > > >> percpu_counter"), the rss_stats have been converted into
> > > >> percpu_counter, which changed the error margin from
> > > >> (nr_threads * 64) to approximately (nr_cpus ^ 2). However, the
> > > >> new percpu allocation in mm_init() causes a performance
> > > >> regression on fork/exec/shell. Even after commit 14ef95be6f55
> > > >> ("kernel/fork: group allocation/free of per-cpu counters for mm
> > > >> struct"), the performance of fork/exec/shell is still poor
> > > >> compared to previous kernel versions.
> > > >>
> > > >> To mitigate the performance regression, we delay the allocation
> > > >> of percpu memory for rss_stats. Therefore, we convert mm's rss
> > > >> stats to use atomic mode. For single-thread processes, rss_stat
> > > >> is in atomic mode, which reduces the memory consumption and
> > > >> performance regression caused by using percpu. For
> > > >> multi-thread processes, rss_stat is switched to percpu mode to
> > > >> reduce the error margin. We convert rss_stats from atomic mode
> > > >> to percpu mode only when the second thread is created.
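To make the quoted description concrete, the mode switch could look
roughly like the sketch below. All names here are invented for
illustration; the actual patch builds this on top of percpu_counter
rather than a new structure:

#include <linux/atomic.h>
#include <linux/percpu.h>

/* Hypothetical counter that starts atomic and can go percpu later. */
struct rss_counter {
	atomic_long_t	count;	/* atomic mode, then base for percpu mode */
	long __percpu	*pcpu;	/* NULL while single-threaded */
};

static inline void rss_counter_add(struct rss_counter *c, long v)
{
	long __percpu *pcpu = READ_ONCE(c->pcpu);

	if (likely(!pcpu))
		atomic_long_add(v, &c->count);	/* single-thread fast path */
	else
		this_cpu_add(*pcpu, v);		/* percpu mode */
}

/*
 * Called when the second thread is created. The atomic part is kept
 * as a base value, so an update racing with the switch is not lost
 * either way: the total is always count plus the percpu deltas.
 */
static int rss_counter_upgrade(struct rss_counter *c)
{
	long __percpu *pcpu = alloc_percpu(long);

	if (!pcpu)
		return -ENOMEM;
	smp_store_release(&c->pcpu, pcpu);
	return 0;
}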
> > I have a patch series that predates commit f1a7941243c1 ("mm:
> > convert mm's rss stats into percpu_counter"):
> >
> > https://lwn.net/ml/linux-kernel/20220728204511.56348-1-ryncsn@gmail.com/
> >
> > Instead of a per-mm-per-cpu cache, it used only one global per-cpu
> > cache, flushed on schedule. Or, if the arch supports it, it flushed
> > and fetched using an mm bitmap as an optimization (like TLB
> > shootdown).
>
> I just spotted this thread.
>
> I have a rather long rant to write about the entire ordeal, but don't
> have the time at the moment. I do have time to make some remarks
> though.
>
> Rolling with a centralized counter and only distributing per-cpu upon
> creation of a thread is something which was discussed last time and
> which I was considering doing. Then life got in the way, and in the
> meantime I managed to conclude it's a questionable idea anyway.
>
> The state prior to the counters moving to per-cpu was not that great
> to begin with, with quite a few serialization points. As far as
> allocating stuff goes, one example is mm_alloc_cid, with the
> following:
> mm->pcpu_cid = alloc_percpu(struct mm_cid);
>
> Converting the code to avoid per-cpu rss counters in the common case,
> or the above patchset, only damage-controls the state back to what it
> was; it does nothing to push things further.
>
> Another note is that unfortunately userspace is increasingly
> multithreaded for no good reason, see the Rust ecosystem as an
> example.
>
> All that to say: the multithreaded case is what has to get faster,
> possibly obsoleting both approaches proposed above as a side effect.
> I concede that if there is nobody willing to commit to doing the work
> in the foreseeable future, then indeed a damage-controlling solution
> should land.

Hi, Mateusz,

Which patch are you referencing? My series didn't need any allocations
on thread creation or destruction. Also, an RSS update is extremely
lightweight (pretty much just a GS read and a few ADD/INC
instructions, that's all), and performance is better than all the
alternatives even in micro benchmarks. An RSS read only collects info
from the CPUs that may contain real updates.

I understand you may not have time to go through my series... but I
think I should add some details here.

> On that note, in check_mm there is this loop:
> for (i = 0; i < NR_MM_COUNTERS; i++) {
>         long x = percpu_counter_sum(&mm->rss_stat[i]);
>
> This avoidably walks all cpus 4 times, with a preemption and lock
> trip for each round. Instead one can observe that all modifications
> are supposed to have already stopped and that this is allocated in a
> batch. A routine, say percpu_counter_sum_many_unsafe, could do one
> iteration without any locks or interrupt play and return an array.
> This should be markedly faster, and I perhaps will hack it up.

That is similar to the RSS read in my earlier series... It is based on
the assumption that updates have most likely stopped, so it just reads
the counter "unsafely", with a double (and fast) check to ensure there
is no race. And even more: when coupled with mm shootdown
(CONFIG_ARCH_PCP_RSS_USE_CPUMASK), it doesn't need to collect RSS info
on thread exit at all.
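To sketch the fast path of my series from memory (the names are
invented, not the code from the linked series; it also assumes
mm->rss_stat[] is a plain atomic_long_t array as before commit
f1a7941243c1, and that callers run with preemption disabled):

#include <linux/atomic.h>
#include <linux/mm_types.h>
#include <linux/percpu.h>

struct rss_cache {
	struct mm_struct *mm;		/* whose deltas are cached here */
	long delta[NR_MM_COUNTERS];
};
static DEFINE_PER_CPU(struct rss_cache, rss_cache);

/*
 * Flush cached deltas back to their owner, e.g. on context switch.
 * mm teardown would also have to flush or invalidate these slots.
 */
static void rss_cache_flush(struct rss_cache *rc)
{
	int i;

	if (!rc->mm)
		return;
	for (i = 0; i < NR_MM_COUNTERS; i++) {
		if (rc->delta[i]) {
			atomic_long_add(rc->delta[i],
					&rc->mm->rss_stat[i]);
			rc->delta[i] = 0;
		}
	}
}

/* Fast path: a handful of per-cpu ops, no allocation anywhere. */
static void add_mm_counter_fast(struct mm_struct *mm, int member,
				long value)
{
	struct rss_cache *rc = this_cpu_ptr(&rss_cache);

	if (rc->mm != mm) {		/* cache owned by another mm */
		rss_cache_flush(rc);
		rc->mm = mm;
	}
	rc->delta[member] += value;
}

A reader then sums mm->rss_stat[] plus the deltas of the CPUs that may
still hold a cache for this mm, which is the "only collects info from
the CPUs that may contain real updates" part above.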
>
> A part of The Real Solution(tm) would make counter allocations scale
> (including mcid, not just rss) or dodge them (while maintaining the
> per-cpu distribution, see below for one idea), but that boils down to
> balancing scalability versus total memory usage. It is trivial to
> just slap together a per-cpu cache of these allocations and have the
> problem go away for benchmarking purposes, while probably being too
> memory hungry for actual usage.
>
> I was pondering an allocator with caches per some number of cores
> (say 4 or 8). Microbenchmarks aside, I suspect real workloads would
> not suffer from contention at this kind of granularity. This would
> trivially reduce memory usage compared to per-cpu caching. I suspect
> things like mm_struct, task_struct, task stacks and similar would be
> fine with it.
>
> Suppose mm_struct is allocated from a more coarse-grained allocator
> than per-cpu. The total number of cached objects would be lower than
> it is now. That would also mean these allocated but not currently
> used mms could hold on to other stuff, for example per-cpu rss and
> mcid counters. Then, should someone fork or exit, alloc/free_percpu
> would be avoided in most cases. This would scale better and be faster
> single-threaded than the current state.

And what is the issue with using only one per-CPU cache, flushed on mm
switch? There are no more allocations after boot, and the total (and
fixed) memory usage is just a few unsigned longs per CPU, which should
be even lower than the old RSS cache solution (4 unsigned longs per
task). And it scaled very well with every kind of microbenchmark and
workload I've tested.

Unless the workload keeps doing something like "alloc one page then
switch to another mm", I think performance will already be horrible
there due to cache invalidations and the many switch_*() calls; RSS
isn't really a concern in that case.

> (believe it or not this is not the actual long rant I have in mind)
>
> I can't commit to work on the Real Solution though.
>
> In the meantime I can submit percpu_counter_sum_many_unsafe as
> described above if Dennis likes the idea.
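For reference, my understanding of the proposed routine is something
like the following untested sketch (SMP variant of struct
percpu_counter assumed): one walk over the CPUs, no fbc->lock, the
sums for the whole batch returned at once, valid only because all
updaters are known to be gone by the time check_mm() runs:

#include <linux/percpu_counter.h>

static void percpu_counter_sum_many_unsafe(struct percpu_counter *fbc,
					   s64 *sums, u32 nr_counters)
{
	u32 i;
	int cpu;

	/* Start from the already-folded central counts. */
	for (i = 0; i < nr_counters; i++)
		sums[i] = fbc[i].count;

	/* One pass over the CPUs for the whole batch, no locking. */
	for_each_possible_cpu(cpu) {
		for (i = 0; i < nr_counters; i++)
			sums[i] += *per_cpu_ptr(fbc[i].counters, cpu);
	}
}

check_mm() could then fetch all NR_MM_COUNTERS sums with a single walk
over the CPUs instead of four.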