Date: Fri, 22 Sep 2017 09:16:40 -0300
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Peter Zijlstra
Cc: Konrad Rzeszutek Wilk, mingo@redhat.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner
Subject: Re: [patch 3/3] x86: kvm guest side support for KVM_HC_RT_PRIO hypercall
Message-ID: <20170922121640.GA29589@amt.cnet>
In-Reply-To: <20170922100004.ydmaxvgpc2zx7j25@hirez.programming.kicks-ass.net>

On Fri, Sep 22, 2017 at 12:00:05PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 21, 2017 at 10:10:41PM -0300, Marcelo Tosatti wrote:
> > When executing guest vcpu-0 with FIFO:1 priority, which is necessary
> > to deal with the following situation:
> >
> >   VCPU-0
> >   (housekeeping VCPU)             VCPU-1 (realtime VCPU)
> >
> >   raw_spin_lock(A)
> >   interrupted, schedule task T-1  raw_spin_lock(A) (spin)
> >
> >   raw_spin_unlock(A)
> >
> > Certain operations must interrupt guest vcpu-0 (see trace below).
>
> Those traces don't make any sense. All they include is kvm_exit and you
> can't tell anything from that.

Hi Peter,

OK, let's describe what's happening:

With the QEMU emulator thread and vcpu-0 sharing a physical CPU (which
is a request from several NFV customers, to improve guest packing), a
hang occurs when the guest generates the following pattern:

	1. submit IO.
	2. busy spin.

Hang trace
==========

Without FIFO priority:

 qemu-kvm-6705  [002] ....1.. 767785.648964: kvm_exit: reason IO_INSTRUCTION rip 0xe8fe info 1f00039 0
 qemu-kvm-6705  [002] ....1.. 767785.648965: kvm_exit: reason IO_INSTRUCTION rip 0xe911 info 3f60008 0
 qemu-kvm-6705  [002] ....1.. 767785.648968: kvm_exit: reason IO_INSTRUCTION rip 0x8984 info 608000b 0
 qemu-kvm-6705  [002] ....1.. 767785.648971: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
 qemu-kvm-6705  [002] ....1.. 767785.648974: kvm_exit: reason IO_INSTRUCTION rip 0xb514 info 3f60000 0
 qemu-kvm-6705  [002] ....1.. 767785.648977: kvm_exit: reason PENDING_INTERRUPT rip 0x8052 info 0 0
 qemu-kvm-6705  [002] ....1.. 767785.648980: kvm_exit: reason IO_INSTRUCTION rip 0xeee6 info 200040 0
 qemu-kvm-6705  [002] ....1.. 767785.648999: kvm_exit: reason EPT_MISCONFIG rip 0x2120 info 0 0

The emulator thread is able to interrupt qemu vcpu0 at SCHED_NORMAL
priority.

With FIFO priority:

Now, with qemu vcpu0 at SCHED_FIFO priority, which is necessary to
avoid the following scenario:

(*)	VCPU-0 (housekeeping VCPU)	VCPU-1 (realtime VCPU)

	raw_spin_lock(A)
	interrupted, schedule task T-1	raw_spin_lock(A) (spin)

	raw_spin_unlock(A)

And the following code pattern by vcpu0:

	1. submit IO.
	2. busy spin.
The emulator thread is unable to interrupt the vcpu0 thread (vcpu0 busy
spinning at SCHED_FIFO, emulator thread at SCHED_NORMAL), and you get a
hang at boot as follows:

 qemu-kvm-7636  [002] ....1.. 768218.205065: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
 qemu-kvm-7636  [002] ....1.. 768218.205068: kvm_exit: reason IO_INSTRUCTION rip 0x8984 info 608000b 0
 qemu-kvm-7636  [002] ....1.. 768218.205071: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
 qemu-kvm-7636  [002] ....1.. 768218.205074: kvm_exit: reason IO_INSTRUCTION rip 0x8984 info 608000b 0
 qemu-kvm-7636  [002] ....1.. 768218.205077: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0

So to fix this problem, the patchset changes the priority of the VCPU
thread (to fix (*)), only when taking spinlocks.

Does that make sense now?

> > To fix this issue, only change guest vcpu-0 to FIFO priority
> > on spinlock critical sections (see patch).
>
> This doesn't make sense. So you're saying that if you run all VCPUs as
> FIFO things come apart? Why?

Please see above.

> And why can't they still come apart when the guest holds a spinlock?

Hopefully the above makes sense.