From: Rik van Riel
Date: Fri, 01 May 2015 14:05:18 -0400
To: Ingo Molnar
CC: Andy Lutomirski, linux-kernel@vger.kernel.org, X86 ML,
    williams@redhat.com, Andrew Lutomirski, fweisbec@redhat.com,
    Peter Zijlstra, Heiko Carstens, Thomas Gleixner, Ingo Molnar,
    Paolo Bonzini
Subject: Re: [PATCH 3/3] context_tracking,x86: remove extraneous irq
 disable & enable from context tracking on syscall entry

On 05/01/2015 12:34 PM, Ingo Molnar wrote:
>
> * Rik van Riel wrote:
>
>>> I can understand people running hard-RT workloads not wanting to
>>> see the overhead of a timer tick or a scheduler tick with variable
>>> (and occasionally heavy) work done in IRQ context, but the jitter
>>> caused by a single trivial IPI with constant work should be very,
>>> very low and constant.
>>
>> Not if the realtime workload is running inside a KVM guest.
>
> I don't buy this:
>
>> At that point an IPI, either on the host or in the guest, involves a
>> full VMEXIT & VMENTER cycle.
>
> So a full VMEXIT/VMENTER costs how much, 2000 cycles? That's around 1
> usec on recent hardware, and I bet it will get better with time.
>
> I'm not aware of any hard-RT workload that cannot take 1 usec
> latencies.

Now think about doing this kind of IPI from inside a guest, to another
VCPU on the same guest.

Now you are looking at a VMEXIT/VMENTER on the first VCPU, plus the cost
of the IPI on the host, plus the cost of the emulation layer, plus a
VMEXIT/VMENTER on the second VCPU to trigger the IPI work, and possibly
a second VMEXIT/VMENTER for IPI completion.

I suspect it would be better to do RCU callback offload in some other
way.

--
All rights reversed
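A rough back-of-envelope tally of the guest-to-guest IPI path described
above, taking Ingo's ~2000 cycle figure for one VMEXIT/VMENTER pair at
face value; the host-side IPI delivery and APIC emulation cost is an
assumed placeholder, not a measurement:

  VMEXIT/VMENTER on the sending VCPU             ~2000 cycles
  IPI delivery + APIC emulation on the host      ~1000 cycles  (assumed)
  VMEXIT/VMENTER on the receiving VCPU           ~2000 cycles
  possible second VMEXIT/VMENTER on completion   ~2000 cycles
  -----------------------------------------------------------
  total                                          ~7000 cycles  (~3.5 usec at 2 GHz)

That is several times the ~1 usec estimate for a single VMEXIT/VMENTER,
before counting any cache or TLB effects.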