Subject: Re: [Xen-devel] [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
From: Roger Pau Monné
To: Konrad Rzeszutek Wilk, Boris Ostrovsky
Date: Mon, 14 Dec 2015 16:35:32 +0100
Message-ID: <566EE1C4.4080204@citrix.com>
In-Reply-To: <20151214152713.GC23203@char.us.oracle.com>
References: <1449966355-10611-1-git-send-email-boris.ostrovsky@oracle.com> <20151214152713.GC23203@char.us.oracle.com>

On 14/12/15 at 16:27, Konrad Rzeszutek Wilk wrote:
> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much, since the hypervisor
>> will likely perform the same IPIs as the guest would have.
>
> But if the vCPU is asleep, doing it via the hypervisor saves us waking
> up the guest vCPU and sending it an IPI just to do a TLB flush on that
> CPU, which is pointless since that CPU wasn't running the guest in the
> first place.
>
>> More importantly, MMUEXT_INVLPG_MULTI may not invalidate the guest's
>> address on a remote CPU (when, for example, a vCPU from another guest
>> is running there).
>
> Right, so the hypervisor won't even send an IPI there.
>
> But if you do it via the normal guest IPI mechanism (which is opaque
> to the hypervisor), you end up scheduling the guest vCPU just to
> deliver a hypervisor callback, and that callback then goes to the IPI
> routine which does the TLB flush. None of that is necessary.
>
> This is all in the oversubscribed case, of course. When we are fine on
> vCPU resources it does not matter.
>
> Perhaps a PV-aware TLB flush could do this differently?

Why doesn't HVM/PVH just use the HVMOP_flush_tlbs hypercall?

Roger.
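
For context, the two flush mechanisms being compared look roughly as follows. This is a minimal sketch, not the kernel's actual xen_flush_tlb_others() implementation: it assumes the standard Xen public interfaces (mmuext_op with MMUEXT_TLB_FLUSH_MULTI, and the parameter-less HVMOP_flush_tlbs op Roger refers to), and the wrapper names pv_flush_tlb_multi() and hvm_flush_all_tlbs() are illustrative only.

/*
 * Sketch of the two TLB-flush paths discussed above, assuming the Xen
 * public headers shipped with Linux. Wrapper names are hypothetical.
 */
#include <linux/cpumask.h>
#include <xen/interface/xen.h>     /* struct mmuext_op, MMUEXT_TLB_FLUSH_MULTI, DOMID_SELF */
#include <asm/xen/hypercall.h>     /* HYPERVISOR_mmuext_op(), HYPERVISOR_hvm_op() */

#ifndef HVMOP_flush_tlbs
#define HVMOP_flush_tlbs 5         /* value from Xen's public hvm/hvm_op.h */
#endif

/* PV-style flush: ask Xen to flush the TLBs of the vCPUs named in @mask. */
static int pv_flush_tlb_multi(const struct cpumask *mask)
{
	struct mmuext_op op = {
		.cmd = MMUEXT_TLB_FLUSH_MULTI,
	};

	/* Xen walks the bitmap; vCPUs that are not currently running need no IPI. */
	op.arg2.vcpumask = (void *)cpumask_bits(mask);
	return HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF);
}

/* HVM/PVH-style flush: flush the TLBs of every vCPU of the calling domain. */
static int hvm_flush_all_tlbs(void)
{
	return HYPERVISOR_hvm_op(HVMOP_flush_tlbs, NULL);
}

The trade-off in the thread is visible here: the mmuext_op path lets the caller target a specific vCPU mask but leaves the scheduling/IPI decision to the hypervisor, while HVMOP_flush_tlbs is coarser (whole domain) but avoids the guest-side IPI machinery entirely.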