Date: Thu, 12 May 2011 16:24:49 +0200
From: Joerg Roedel
To: Avi Kivity
Cc: Jan Kiszka, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Ingo Molnar, Peter Zijlstra, Arnaldo Carvalho de Melo
Subject: Re: [PATCH v1 0/5] KVM in-guest performance monitoring
Message-ID: <20110512142449.GI8707@8bytes.org>
In-Reply-To: <4DCBE13A.2050204@redhat.com>

On Thu, May 12, 2011 at 04:31:38PM +0300, Avi Kivity wrote:
> - when the cpu gains support for virtualizing the architectural feature,
>   we transparently speed the guest up, including support for live
>   migrating from a deployment that emulates the feature to a deployment
>   that properly virtualizes the feature, and back.  Usually the
>   virtualized support will beat the pants off any paravirtualization we
>   can do
> - following an existing spec is a lot easier to get right than doing
>   something from scratch
> - no need to meticulously document the feature

This needs to be done, but I don't think it is problematic.

> - easier testing

Testing shouldn't differ between the two variants, I think.

> - existing guest support - only need to write the host side (sometimes
>   the only one available to us)

Otherwise I agree.

> Paravirtualizing does have its advantages.
> For the PMU, for example, we can have a single hypercall read and
> reprogram all counters, saving *many* exits.  But I think we need to
> start from the architectural PMU and see exactly what the problems
> are, before we optimize it to death.

The problem certainly is that with the arch PMU we add a lot of MSR
exits to the guest context-switch path if the guest uses per-task
profiling. Depending on the workload, this can significantly distort
the results.

	Joerg