From: Aubrey Li
Date: Tue, 26 Feb 2019 16:26:53 +0800
Subject: Re: [RFC][PATCH 00/16] sched: Core scheduling
To: Tim Chen
Cc: Peter Zijlstra, Paolo Bonzini, Linus Torvalds, Ingo Molnar,
    Thomas Gleixner, Paul Turner, Linux List Kernel Mailing,
    subhra.mazumdar@oracle.com, Frédéric Weisbecker, Kees Cook,
    kerrnel@google.com

On Sat, Feb 23, 2019 at 3:27 AM Tim Chen wrote:
>
> On 2/22/19 6:20 AM, Peter Zijlstra wrote:
> > On Fri, Feb 22, 2019 at 01:17:01PM +0100, Paolo Bonzini wrote:
> >> On 18/02/19 21:40, Peter Zijlstra wrote:
> >>> On Mon, Feb 18, 2019 at 09:49:10AM -0800, Linus Torvalds wrote:
> >>>> On Mon, Feb 18, 2019 at 9:40 AM Peter Zijlstra wrote:
> >>>>>
> >>>>> However; whichever way around you turn this cookie; it is
> >>>>> expensive and nasty.
> >>>>
> >>>> Do you (or anybody else) have numbers for real loads?
> >>>>
> >>>> Because performance is all that matters. If performance is bad, then
> >>>> it's pointless, since just turning off SMT is the answer.
> >>>
> >>> Not for these patches; they stopped crashing only yesterday and I
> >>> cleaned them up and sent them out.
> >>>
> >>> The previous version, which was more horrible but L1TF-complete, was
> >>> between OK-ish and horrible depending on the number of VMEXITs a
> >>> workload had.
> >>>
> >>> If there were close to no VMEXITs, it beat smt=off; if there were lots
> >>> of VMEXITs it was far, far worse. Supposedly hosting people try their
> >>> very bestest to have no VMEXITs, so it mostly works for them (with the
> >>> obvious exception of single-VCPU guests).
> >>
> >> If you are giving access to dedicated cores to guests, you also let them
> >> do PAUSE/HLT/MWAIT without vmexits and the host just thinks it's a
> >> CPU-bound workload.
> >>
> >> In any case, IIUC what you are looking for is:
> >>
> >> 1) take a benchmark that *is* helped by SMT; this will be something
> >> CPU-bound.
> >>
> >> 2) compare two runs, one without SMT and without the core scheduler,
> >> and one with SMT + the core scheduler.
> >>
> >> 3) find out whether performance is helped by SMT despite the increased
> >> overhead of the core scheduler.
> >>
> >> Do you want some other load in the host, so that the scheduler actually
> >> does do something? Or is the point just to show that performance
> >> isn't affected when the scheduler does not have anything to do (which
> >> should be obvious, but having numbers is always better)?
> >
> > Well, what _I_ want is for all this to just go away :-)
> >
> > Tim did much of the testing last time around, and I don't think he did
> > much core-pinning of VMs (although I'm sure he did some of that). I'm
>
> Yes. The last time around I tested basic scenarios like:
> 1. a single VM pinned on a core
> 2. two VMs pinned on a core
> 3. system oversubscription (no pinning)
>
> In general, CPU-bound benchmarks, and even things without too much I/O
> causing lots of VMexits, performed better with HT than without on Peter's
> last patchset.
>
> > still a complete virt noob; I can barely boot a VM to save my life.
> >
> > (you should be glad not to have heard my cursing at the qemu cmdline
> > when trying to reproduce some of Tim's results -- let's just say that
> > I can deal with gpg)
> >
> > I'm sure he tried some oversubscribed scenarios without pinning.
>
> We did try some oversubscribed scenarios like SPECvirt, which tried to
> squeeze tons of VMs onto a single system in oversubscription mode.
>
> There were two main problems in the last go-around:
>
> 1. Workloads with a high rate of VMexits (SPECvirt is one) were a major
> source of pain when we tried Peter's previous patchset. The switch from
> vcpus to qemu and back in the previous version of Peter's patch required
> some coordination between the hyperthread siblings via IPI, and for
> workloads that do this a lot, the overhead quickly added up.
>
> With Peter's new patch, this overhead is hopefully reduced, giving
> better performance.
>
> 2. Load balancing is quite tricky. Peter's last patchset did not have
> load balancing for consolidating compatible running threads. I did some
> unsophisticated load balancing to pair vcpus up, but the constant
> vcpu-migration overhead probably ate up any improvements from better
> load pairing. So I didn't get much improvement in the oversubscription
> case when turning on load balancing to consolidate the vcpus of the same
> VM. We'll probably have to try out this incarnation of Peter's patch and
> see how well the load balancing works.
>
> I'll try to line up some benchmarking folks to do some tests.

I can help do some basic tests.
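For the comparison Paolo outlined above, I had something like the
following in mind. This is only an untested sketch: it assumes the
per-cgroup cpu.tag file from this series (under the cgroup-v1 cpu
controller), and $BENCH is a placeholder for whatever CPU-bound
benchmark we pick:

  #!/bin/sh
  # Run 1: SMT off, no core scheduling.
  echo off > /sys/devices/system/cpu/smt/control
  perf stat -o smt-off.txt -- $BENCH

  # Run 2: SMT on, benchmark tagged into a core-scheduling cgroup.
  echo on > /sys/devices/system/cpu/smt/control
  mkdir -p /sys/fs/cgroup/cpu/bench
  echo 1 > /sys/fs/cgroup/cpu/bench/cpu.tag   # per-cgroup knob from this series
  echo $$ > /sys/fs/cgroup/cpu/bench/tasks    # move this shell (and children) in
  perf stat -o smt-coresched.txt -- $BENCH

Comparing the two perf logs should then show whether SMT still pays off
once the core-scheduler overhead is included.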
The cgroup bias looks weird to me. If I have hundreds of cgroups, should
I turn core scheduling (cpu.tag) on one by one? Or is there a global knob
I missed?

Thanks,
-Aubrey
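P.S. To make the concern concrete: with a per-cgroup knob, I believe
enabling core scheduling everywhere means walking the cgroups by hand,
something like (untested, same cpu.tag assumption as above):

  # Tag every top-level cgroup under the v1 cpu controller, one at a time.
  for cg in /sys/fs/cgroup/cpu/*/; do
          echo 1 > "$cg/cpu.tag"
  done

and repeating that for every cgroup created afterwards, which is why a
single global switch would be nicer.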