From: Valentin Schneider
To: Oliver Sang
Cc: 0day robot, Vincent Guittot, Dietmar Eggemann, LKML, lkp@lists.01.org, ying.huang@intel.com, feng.tang@intel.com, zhengjun.xing@intel.com, Lingutla Chandrasekhar, Peter Zijlstra, Ingo Molnar, Morten Rasmussen, Qais Yousef, Quentin Perret, Pavan Kondeti, Rik van Riel, aubrey.li@linux.intel.com, yu.c.chen@intel.com
Subject: Re: [sched/fair] 38ac256d1c: stress-ng.vm-segv.ops_per_sec -13.8% regression
In-Reply-To: <87wnsutzi9.mognet@arm.com>
References: <20210414052151.GB21236@xsang-OptiPlex-9020> <87im4on5u5.mognet@arm.com> <20210421032022.GA13430@xsang-OptiPlex-9020> <87bla8ue3e.mognet@arm.com> <20210422074742.GE31382@xsang-OptiPlex-9020> <87wnsutzi9.mognet@arm.com>
Date: Thu, 22 Apr 2021 21:42:31 +0100
Message-ID: <87mttqt5jc.mognet@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 22/04/21 10:55, Valentin Schneider wrote:
> I'll go find myself some other x86 box and dig into it;
> I'd rather not leave this hanging for too long.

So I found myself a dual-socket Xeon Gold 5120 @ 2.20GHz (64 CPUs), and *there* I get a somewhat consistent ~-6% regression.

As I'm suspecting cacheline shenanigans, I also ran that with Peter's recent kthread_is_per_cpu() change, and that brings it down to ~-3%.

I'll leave it there for today, but at least that's something I can work with.
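For context, the regression figures quoted in this thread are simply the relative change in the stress-ng vm-segv ops_per_sec metric between base and patched kernels. A minimal sketch of that calculation (the ops/sec numbers below are made up for illustration, not taken from the 0day report):

```shell
# Hypothetical ops/sec readings (NOT the actual report numbers)
base=100000
patched=94000

# Relative change in percent, same convention as the 0day/lkp reports
awk -v b="$base" -v p="$patched" \
    'BEGIN { printf "%.1f%%\n", (p - b) / b * 100 }'
# prints -6.0%
```

With the report's base figure and a patched figure ~13.8% lower, the same formula reproduces the -13.8% in the subject line.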