Date: Wed, 11 Aug 2021 16:12:01 -0400
From: Johannes Weiner
To: Linus Torvalds
Cc: kernel test robot, Roman Gushchin, Michal Hocko, Shakeel Butt,
    Michal Koutný, Balbir Singh, Tejun Heo, Andrew Morton, LKML,
    lkp@lists.01.org, kernel test robot, "Huang, Ying", Feng Tang,
    Zhengjun Xing
Subject: Re: [mm] 2d146aa3aa: vm-scalability.throughput -36.4% regression
References: <20210811031734.GA5193@xsang-OptiPlex-9020>
On Tue, Aug 10, 2021 at 07:59:53PM -1000, Linus Torvalds wrote:
> On Tue, Aug 10, 2021 at 4:59 PM kernel test robot wrote:
> >
> > FYI, we noticed a -36.4% regression of vm-scalability.throughput due to commit:
> > 2d146aa3aa84 ("mm: memcontrol: switch to rstat")
>
> Hmm. I guess some cost is to be expected, but that's a big regression.
>
> I'm not sure what the code ends up doing, and how relevant this test
> is, but Johannes - could you please take a look?
>
> I can't make heads nor tails of the profile. The profile kind of
> points at this:
>
> >  2.77 ± 12%  +27.4  30.19 ± 8%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
> >  2.86 ± 12%  +27.4  30.29 ± 8%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> >  2.77 ± 12%  +27.4  30.21 ± 8%  perf-profile.children.cycles-pp.lock_page_lruvec_irqsave
> >  4.26 ± 10%  +28.1  32.32 ± 7%  perf-profile.children.cycles-pp.lru_cache_add
> >  4.15 ± 10%  +28.2  32.33 ± 7%  perf-profile.children.cycles-pp.__pagevec_lru_add
>
> and that seems to be from the chain __do_fault -> shmem_fault ->
> shmem_getpage_gfp -> lru_cache_add -> __pagevec_lru_add ->
> lock_page_lruvec_irqsave -> _raw_spin_lock_irqsave ->
> native_queued_spin_lock_slowpath.
>
> That shmem_fault codepath being hot may make sense for some VM
> scalability test. But it seems to make little sense when I look at
> the commit that it bisected to.
>
> We had another report of this commit causing a much more reasonable
> small slowdown (-2.8%) back in May.
>
> I'm not sure what's up with this new report. Johannes, does this make
> sense to you?
>
> Is it perhaps another "unlucky cache line placement" thing? Or have
> the statistics changes just changed the behavior of the test?

I'm at a loss as well. The patch only changes how we aggregate the
cgroup's memory.stat file; it doesn't influence reclaim/LRU
operations. The test itself isn't interacting with memory.stat either
- IIRC it doesn't even run inside a dedicated cgroup in this test
environment.

The patch should actually reduce accounting overhead here, because we
switched from batched percpu flushing during updates to flushing only
when the stats are *read* - which doesn't happen in this test.

That would leave cachelines. But the cachelines the patch touched are
in struct mem_cgroup, whereas the lock this profile points to lives in
a separately allocated per-node structure. The cache footprint of the
percpu data this test is hammering didn't grow either - it actually
shrank a bit - and I don't see where that could cause conflicts.

I'll try to reproduce this on a smaller setup. But I have to say, I've
seen a few of these bisection reports now that didn't seem to make any
sense, which is why I've started to take them with a grain of salt.
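
To make the flushing difference concrete, here is a minimal userspace
sketch of the two accounting models. This is illustrative only: the
struct and function names are invented for the example, and it is not
the actual kernel implementation.

	/* flushmodels.c - build with: cc -std=c11 flushmodels.c */
	#include <stdatomic.h>
	#include <stdio.h>

	#define NR_CPUS	4
	#define BATCH	64

	struct stats {
		long pending[NR_CPUS];	/* CPU-local deltas, no shared traffic */
		atomic_long shared;	/* the cacheline every CPU contends on */
	};

	/*
	 * Old model: updates accumulate locally, but every BATCH events
	 * the delta is flushed into the shared counter - so a hot update
	 * path still generates periodic cross-CPU cacheline traffic.
	 */
	static void update_flush_on_write(struct stats *s, int cpu)
	{
		if (++s->pending[cpu] >= BATCH) {
			atomic_fetch_add(&s->shared, s->pending[cpu]);
			s->pending[cpu] = 0;
		}
	}

	/*
	 * rstat-like model: updates stay strictly CPU-local; shared
	 * state is only touched when somebody reads the aggregate.
	 */
	static void update_flush_on_read(struct stats *s, int cpu)
	{
		s->pending[cpu]++;
	}

	/*
	 * Aggregation at read time (think: userspace opening
	 * memory.stat). Reading other CPUs' pending deltas is racy as
	 * written; the kernel serializes this properly, the sketch
	 * ignores that.
	 */
	static long read_stats(struct stats *s)
	{
		long sum = atomic_load(&s->shared);

		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			sum += s->pending[cpu];
		return sum;
	}

	int main(void)
	{
		struct stats a = { .shared = 0 };
		struct stats b = { .shared = 0 };

		/* Simulate a hot update path on each "CPU". */
		for (int i = 0; i < 1000; i++) {
			for (int cpu = 0; cpu < NR_CPUS; cpu++) {
				update_flush_on_write(&a, cpu);
				update_flush_on_read(&b, cpu);
			}
		}

		/* Both models agree on the total; they differ only in
		 * *when* the shared cacheline gets dirtied. */
		printf("flush-on-write: %ld\n", read_stats(&a));
		printf("flush-on-read:  %ld\n", read_stats(&b));
		return 0;
	}

The point of the sketch: with no readers of memory.stat, the
flush-on-read update path never dirties a shared cacheline, which is
why the commit should make this workload cheaper on the update side,
not slower.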