Subject: Re: [PATCH] xen/x86/pvh: Use HVM's flush_tlb_others op
From: Boris Ostrovsky
To: Konrad Rzeszutek Wilk, jbeulich@suse.com
Cc: david.vrabel@citrix.com, xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, stable@vger.kernel.org
Date: Tue, 15 Dec 2015 09:36:04 -0500
Message-ID: <56702554.6000504@oracle.com>
In-Reply-To: <20151214152713.GC23203@char.us.oracle.com>
References: <1449966355-10611-1-git-send-email-boris.ostrovsky@oracle.com> <20151214152713.GC23203@char.us.oracle.com>

On 12/14/2015 10:27 AM, Konrad Rzeszutek Wilk wrote:
> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>> will likely perform the same IPIs as the guest would have.
>>
> But if the VCPU is asleep, doing it via the hypervisor will save us
> waking up the guest VCPU and sending an IPI - just to do a TLB flush
> on that CPU. Which is pointless, as that CPU hadn't been running the
> guest in the first place.
>
>> More importantly, using MMUEXT_INVLPG_MULTI may fail to invalidate the
>> guest's address on a remote CPU (when, for example, a VCPU from another
>> guest is running there).
> Right, so the hypervisor won't even send an IPI there.
>
> But if you do it via the normal guest IPI mechanism (which is opaque
> to the hypervisor) you end up scheduling the guest VCPU just to
> deliver a hypervisor callback. And the callback will go to the IPI
> routine, which will do a TLB flush. That is unnecessary.
>
> This is all in the case of oversubscription, of course. In the case
> where we are fine on vCPU resources it does not matter.

So then should we keep these two operations (MMUEXT_INVLPG_MULTI and
MMUEXT_TLB_FLUSH_MULTI) available to HVM/PVH guests? If the guest's
VCPU is not running then its TLBs must have been flushed anyway.

Jan?

-boris

>
> Perhaps if we had a PV-aware TLB flush it could do this differently?
>
>> Signed-off-by: Boris Ostrovsky
>> Suggested-by: Jan Beulich
>> Cc: stable@vger.kernel.org # 3.14+
>> ---
>>  arch/x86/xen/mmu.c | 9 ++-------
>>  1 files changed, 2 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index 9c479fe..9ed7eed 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -2495,14 +2495,9 @@ void __init xen_init_mmu_ops(void)
>>  {
>>  	x86_init.paging.pagetable_init = xen_pagetable_init;
>>
>> -	/* Optimization - we can use the HVM one but it has no idea which
>> -	 * VCPUs are descheduled - which means that it will needlessly IPI
>> -	 * them. Xen knows so let it do the job.
>> -	 */
>> -	if (xen_feature(XENFEAT_auto_translated_physmap)) {
>> -		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
>> +	if (xen_feature(XENFEAT_auto_translated_physmap))
>>  		return;
>> -	}
>> +
>>  	pv_mmu_ops = xen_mmu_ops;
>>
>>  	memset(dummy_mapping, 0xff, PAGE_SIZE);
>> --
>> 1.7.1