Date: Fri, 22 Feb 2019 15:20:30 +0100
From: Peter Zijlstra
To: Paolo Bonzini
Cc: Linus Torvalds, Ingo Molnar, Thomas Gleixner, Paul Turner, Tim Chen,
	Linux List Kernel Mailing, subhra.mazumdar@oracle.com,
	Frédéric Weisbecker, Kees Cook, kerrnel@google.com
Subject: Re: [RFC][PATCH 00/16] sched: Core scheduling
Message-ID: <20190222142030.GA32494@hirez.programming.kicks-ass.net>
References: <20190218165620.383905466@infradead.org>
	<20190218204020.GV32494@hirez.programming.kicks-ass.net>
	<407b6589-1801-20b5-e3b7-d7458370cfc0@redhat.com>
In-Reply-To: <407b6589-1801-20b5-e3b7-d7458370cfc0@redhat.com>

On Fri, Feb 22, 2019 at 01:17:01PM +0100, Paolo Bonzini wrote:
> On 18/02/19 21:40, Peter Zijlstra wrote:
> > On Mon, Feb 18, 2019 at 09:49:10AM -0800, Linus Torvalds wrote:
> >> On Mon, Feb 18, 2019 at 9:40 AM Peter Zijlstra wrote:
> >>>
> >>> However; whichever way around you turn this cookie; it is expensive and nasty.
> >>
> >> Do you (or anybody else) have numbers for real loads?
> >>
> >> Because performance is all that matters. If performance is bad, then
> >> it's pointless, since just turning off SMT is the answer.
> >
> > Not for these patches; they stopped crashing only yesterday, so I
> > cleaned them up and sent them out.
> >
> > The previous version, which was more horrible but L1TF-complete, was
> > between OK-ish and horrible depending on the number of VMEXITs a
> > workload had.
> >
> > If there were close to no VMEXITs, it beat smt=off; if there were
> > lots of VMEXITs it was far, far worse. Supposedly hosting people try
> > their very bestest to have no VMEXITs, so it mostly works for them
> > (with the obvious exception of single-VCPU guests).
>
> If you are giving access to dedicated cores to guests, you also let
> them do PAUSE/HLT/MWAIT without vmexits, and the host just thinks it's
> a CPU-bound workload.
>
> In any case, IIUC what you are looking for is:
>
> 1) take a benchmark that *is* helped by SMT; this will be something
> CPU bound.
>
> 2) compare two runs, one without SMT and without the core scheduler,
> and one with SMT plus the core scheduler.
>
> 3) find out whether performance is helped by SMT despite the increased
> overhead of the core scheduler.
>
> Do you want some other load in the host, so that the scheduler
> actually does do something? Or is the point just to show that
> performance isn't affected when the scheduler does not have anything
> to do (which should be obvious, but having numbers is always better)?

Well, what _I_ want is for all this to just go away :-)

Tim did much of the testing last time around, and I don't think he did
core-pinning of VMs much (although I'm sure he did some of that). I'm
still a complete virt noob; I can barely boot a VM to save my life.

(You should be glad not to have heard my cursing at the qemu cmdline
when trying to reproduce some of Tim's results -- let's just say that I
can deal with gpg.)

I'm sure he tried some oversubscribed scenarios without pinning. But
even there, when all the vCPU threads are runnable, they don't schedule
that much. Sure, we take the preemption tick and thus schedule 100-1000
times a second, but that's manageable.

We spend quite some time tracing workloads and fixing funny behaviour
-- none of that has been done for these patches yet. The moment KVM
needed user-space assist for things (and thus VMEXITs happened), things
came apart real quick.

Anyway, Tim, can you tell these fine folks what you did and for what
scenarios the last incarnation did show promise?
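[Editor's note: Paolo's three-step comparison above could be driven by a small script along these lines. This is an illustrative sketch, not something from the thread: the throughput figures are placeholders, a real run would take them from an actual CPU-bound benchmark executed once with SMT off and once with SMT plus core scheduling. The sysfs SMT control file is the real runtime toggle introduced in v4.19.]

```shell
#!/bin/sh
# Sketch of the two-configuration comparison from the thread.
# In a real run, SMT would be toggled between benchmark runs with:
#   echo off > /sys/devices/system/cpu/smt/control    (run 1: smt=off)
#   echo on  > /sys/devices/system/cpu/smt/control    (run 2: SMT + core sched)
# and the throughput numbers below would come from the benchmark's output.

# Percentage change of throughput $2 relative to baseline $1.
pct_delta() {
    awk -v a="$1" -v b="$2" 'BEGIN { printf "%+.1f", (b - a) * 100.0 / a }'
}

smt_off_ops=100000        # placeholder: ops/sec with smt=off, no core sched
smt_coresched_ops=112000  # placeholder: ops/sec with SMT + core sched

# Positive delta => SMT still helps despite core-scheduling overhead.
echo "SMT+coresched vs smt=off: $(pct_delta "$smt_off_ops" "$smt_coresched_ops")%"
# prints: SMT+coresched vs smt=off: +12.0%
```

The interesting number is the sign of the delta: if SMT plus the core scheduler cannot beat smt=off on a workload that SMT is known to help, the machinery is pointless for that workload, which is exactly Linus's criterion quoted above.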