Subject: Re: [PATCH v8 -tip 00/26] Core scheduling
To: Joel Fernandes
Cc: "Ning, Hongyu", Nishanth Aravamudan,
    Julien Desfossez, Peter Zijlstra, Tim Chen, Vineeth Pillai,
    Aaron Lu, Aubrey Li, tglx@linutronix.de, linux-kernel@vger.kernel.org,
    mingo@kernel.org, torvalds@linux-foundation.org, fweisbec@gmail.com,
    keescook@chromium.org, kerrnel@google.com, Phil Auld,
    Valentin Schneider, Mel Gorman, Pawan Gupta, Paolo Bonzini,
    vineeth@bitbyteword.org, Chen Yu, Christian Brauner, Agata Gruza,
    Antonio Gomez Iglesias, graf@amazon.com, konrad.wilk@oracle.com,
    dfaggioli@suse.com, pjt@google.com, rostedt@goodmis.org,
    derkling@google.com, benbjiang@tencent.com, Alexandre Chartre,
    James.Bottomley@hansenpartnership.com, OWeisse@umich.edu,
    Dhaval Giani, Junaid Shahid, jsbarnes@google.com,
    chris.hyser@oracle.com, Tim Chen
From: "Li, Aubrey"
Message-ID: <389de3ef-2e1f-c569-d3c8-eebb4e6b6bd1@linux.intel.com>
Date: Mon, 9 Nov 2020 14:04:13 +0800
In-Reply-To: <20201106175427.GB2845264@google.com>
References: <20201020014336.2076526-1-joel@joelfernandes.org>
 <20201106175427.GB2845264@google.com>

On 2020/11/7 1:54, Joel Fernandes wrote:
> On Fri, Nov 06, 2020 at 10:58:58AM +0800, Li, Aubrey wrote:
>>>
>>> -- workload D, newly added syscall workload, performance drop in cs_on:
>>> +----------------------+------+-------------------------------+
>>> |                      |  **  | will-it-scale * 192           |
>>> |                      |      | (pipe based context_switch)   |
>>> +======================+======+===============================+
>>> | cgroup               |  **  | cg_will-it-scale              |
>>> +----------------------+------+-------------------------------+
>>> | record_item          |  **  | threads_avg                   |
>>> +----------------------+------+-------------------------------+
>>> | coresched_normalized |  **  | 0.2                           |
>>> +----------------------+------+-------------------------------+
>>> | default_normalized   |  **  | 1                             |
>>> +----------------------+------+-------------------------------+
>>> | smtoff_normalized    |  **  | 0.89                          |
>>> +----------------------+------+-------------------------------+
>>
>> will-it-scale may be a very extreme case. The story here is:
>> - On one sibling, a reader/writer gets blocked and tries to schedule
>>   another reader/writer in.
>> - The other sibling tries to wake up a reader/writer.
>>
>> Both CPUs are acquiring rq->__lock.
>>
>> So when coresched is off, they are two different locks; lock stat
>> (1-second delta) below:
>>
>> class name   con-bounces  contentions  waittime-min  waittime-max  waittime-total  waittime-avg  acq-bounces  acquisitions  holdtime-min  holdtime-max  holdtime-total  holdtime-avg
>> &rq->__lock:         210          210          0.10          3.04          180.87          0.86          797      79165021          0.03         20.69     60650198.34          0.77
>>
>> But when coresched is on, they are actually one and the same lock;
>> lock stat (1-second delta) below:
>>
>> class name   con-bounces  contentions  waittime-min  waittime-max  waittime-total  waittime-avg  acq-bounces  acquisitions  holdtime-min  holdtime-max  holdtime-total  holdtime-avg
>> &rq->__lock:     6479459      6484857          0.05        216.46     60829776.85          9.38      8346319      15399739          0.03         95.56     81119515.38          5.27
>>
>> This nature of core scheduling can degrade the performance of similar
>> workloads with frequent context switching.
>
> When core sched is off, is SMT off as well? From the above table, it
> seems to be. So even with core sched off, there will be a single lock
> per physical CPU core (assuming SMT is also off), right? Or did I miss
> something?
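Before answering, some background on why coresched-on funnels both
siblings onto one lock: the series routes every SMT sibling of a core
to a shared per-core lock instead of its own per-CPU lock. Below is a
minimal user-space sketch of that idea; the struct layout and the
toy_* names are hypothetical, for illustration only, not the actual
patch code.

#include <stdbool.h>
#include <pthread.h>

/* Toy model of a per-CPU runqueue; NOT the kernel's struct rq. */
struct toy_rq {
	pthread_spinlock_t __lock;   /* per-logical-CPU runqueue lock */
	struct toy_rq *core;         /* "leader" rq of this SMT core  */
	bool core_sched_enabled;     /* core scheduling on for this core? */
};

/*
 * With core scheduling on, every sibling of a core returns the core
 * leader's lock, so the two hyperthreads serialize on ONE lock; with
 * it off, each logical CPU locks its own runqueue independently.
 */
static pthread_spinlock_t *toy_rq_lockp(struct toy_rq *rq)
{
	if (rq->core_sched_enabled)
		return &rq->core->__lock;   /* one lock per physical core */
	return &rq->__lock;                 /* one lock per logical CPU  */
}

The pipe-based context_switch case hits exactly these two paths at
once: the block/schedule path on one sibling and the wakeup path on
the other, and with coresched on both resolve to the same lock.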
To your SMT question: the table includes 3 cases:

- default:   SMT on,  coresched off
- coresched: SMT on,  coresched on
- smtoff:    SMT off, coresched off

I was comparing the default case (SMT on, coresched off) against the
coresched case (SMT on, coresched on). If SMT is off, then a reader and
a writer on different cores take different rq->__lock instances, so the
lock contention is not that serious:

class name   con-bounces  contentions  waittime-min  waittime-max  waittime-total  waittime-avg  acq-bounces  acquisitions  holdtime-min  holdtime-max  holdtime-total  holdtime-avg
&rq->__lock:          60           60          0.11          1.92           41.33          0.69          127      67184172          0.03         22.95     33160428.37          0.49

Does this address your concern?

Thanks,
-Aubrey
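P.S. A quick sanity check on the coresched-on lock stat quoted above,
using only arithmetic on the numbers already shown (the microsecond
unit is my assumption about lock_stat's output format):

    waittime-avg = waittime-total / contentions
                 = 60829776.85 / 6484857
                ~= 9.38 us per contention

So within the 1-second sample window the workload accumulated roughly
60.8 CPU-seconds of waiting on the shared rq->__lock across its 192
threads, which is consistent with the large drop reflected in the 0.2
coresched_normalized score.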