Message-ID: <1594298618.15464.147.camel@suse.cz>
Subject: Re: [LKP] [x86, sched] 1567c3e346: vm-scalability.median -15.8% regression
From: Giovanni Gherdovich
To: Xing Zhengjun, kernel test robot
Cc: Ingo Molnar, Peter Zijlstra, Doug Smythies, "Rafael J. Wysocki", LKML,
    Andrew Morton, Stephen Rothwell, lkp@lists.01.org
Date: Thu, 09 Jul 2020 14:43:38 +0200
References: <20200306051916.GA23395@xsang-OptiPlex-9020>
    <1587018059.32139.22.camel@suse.cz>

On Tue, 2020-07-07 at 10:58 +0800, Xing Zhengjun wrote:
> 
> On 6/12/2020 4:11 PM, Xing Zhengjun wrote:
> > Hi Giovanni,
> > 
> > I test the regression, it still existed in v5.7.
> > Do you have time to take a look at this? Thanks.
> > 
> 
> Ping...
> 

Hello,

I haven't sat down to reproduce this yet, but I've read the benchmark code
and configuration, and this regression seems likely to be more of a
benchmarking artifact than an actual performance bug.

Likely a benchmarking artifact:

First off, the test uses the "performance" governor from the "intel_pstate"
cpufreq driver, yet the bisection points at the patch introducing the
"frequency invariance on x86" feature as the culprit. This is suspicious,
because "frequency invariance on x86" influences frequency selection when
the "schedutil" governor is in use (not your case). It may also affect
scheduler load balancing, but here you have $NUM_CPUS processes, so there
isn't a lot of room for creativity there: each CPU gets one process.

Some notes on this benchmark, for my future reference:

The test in question is "anon-cow-seq" from "vm-scalability", which is based
on the "usemem" program originally written by Andrew Morton and exercises
the memory management subsystem. The invocation is:

    usemem --nproc $NUM_CPUS \
           --prealloc \
           --prefault \
           $SIZE

What this does is create an anonymous mmap()-ing of $SIZE bytes in the main
process, fork $NUM_CPUS distinct child processes, and have all of them scan
the mapping sequentially from byte 0 to byte N, writing 0, 1, 2, ..., N over
the region as they scan it, all at the same time. So we have the "anon" part
(the mapping isn't file-backed), the "cow" part (the parent process
allocates the region, then each child copy-on-writes to it) and the "seq"
part (memory accesses happen sequentially from low to high addresses). The
test measures how quickly this happens; I believe the regression is in the
median time it takes a process to finish (or the median throughput, but
$SIZE is fixed so it's equivalent). A rough sketch of the access pattern is
appended in the P.S. below.

The $SIZE parameter is selected so that there is enough space for everybody:
each child plus the parent needs a copy of the mapped region, which makes
$NUM_CPUS+1 instances. The formula for $SIZE adds a factor of 2 for good
measure:

    SIZE = $MEM_SIZE / ($NUM_CPUS + 1) / 2

So we have a benchmark dominated by page allocation and copying, run with
the "performance" cpufreq governor, and your bisection points to a commit
such as 1567c3e3467cddeb019a7b53ec632f834b6a9239 ("x86, sched: Add support
for frequency invariance") which:

* changes how frequency is selected by a governor you're not using
* doesn't touch the memory management subsystem or related functions

I'm not entirely dismissing your finding, just explaining why this analysis
hasn't been at the top of my priorities lately (plus, I've just returned
from a 3-week vacation :). I'm curious too about what causes the test to go
red, but I'm not overly worried given the above context.


Thanks,
Giovanni Gherdovich
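
P.S.: for reference, here is a rough, hypothetical sketch of the
anon-cow-seq access pattern described above -- illustration only, not the
actual usemem source; the $SIZE / $NUM_CPUS handling is simplified and the
size is hardcoded to 1 GiB here:

    /* anon_cow_seq_sketch.c -- simplified illustration of the workload */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
            long nproc = sysconf(_SC_NPROCESSORS_ONLN); /* $NUM_CPUS */
            size_t size = 1UL << 30;                    /* stand-in for $SIZE */
            size_t nwords = size / sizeof(unsigned long);
            unsigned long *map;

            /* --prealloc / --prefault: the parent maps the region and
             * touches every page up front ("anon": not file-backed). */
            map = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (map == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }
            memset(map, 0, size);

            for (long i = 0; i < nproc; i++) {
                    if (fork() == 0) {
                            /* "seq": scan from low to high addresses;
                             * "cow": each write faults in a private copy
                             * of the parent's page for this child. */
                            for (size_t w = 0; w < nwords; w++)
                                    map[w] = w;
                            _exit(0);
                    }
            }
            while (wait(NULL) > 0)
                    ;   /* the harness would time the children here */
            return 0;
    }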