From: Anup Patel
Date: Thu, 22 Aug 2019 19:28:43 +0530
Subject: Re: [PATCH v5 13/20] RISC-V: KVM: Implement stage2 page table programming
To: Alexander Graf
Cc: Anup Patel, Palmer Dabbelt, Paul Walmsley, Paolo Bonzini,
    Radim K, Daniel Lezcano, Thomas Gleixner, Atish Patra,
    Alistair Francis, Damien Le Moal, Christoph Hellwig,
    "kvm@vger.kernel.org", "linux-riscv@lists.infradead.org",
    "linux-kernel@vger.kernel.org"
References: <20190822084131.114764-1-anup.patel@wdc.com>
    <20190822084131.114764-14-anup.patel@wdc.com>
    <77b9ff3c-292f-ee17-ddbb-134c0666fde7@amazon.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Aug 22, 2019 at 6:57 PM Alexander Graf wrote:
>
>
>
> On 22.08.19 14:38, Anup Patel wrote:
> > On Thu, Aug 22, 2019 at 5:58 PM Alexander Graf wrote:
> >>
> >> On 22.08.19 10:45, Anup Patel wrote:
> >>> This patch implements all required functions for programming
> >>> the stage2 page table for each Guest/VM.
> >>>
> >>> At a high level, the flow of the stage2 related functions is
> >>> similar to the KVM ARM/ARM64 implementation, but the stage2
> >>> page table format is quite different for KVM RISC-V.
> >>>
> >>> Signed-off-by: Anup Patel
> >>> Acked-by: Paolo Bonzini
> >>> Reviewed-by: Paolo Bonzini
> >>> ---
> >>>  arch/riscv/include/asm/kvm_host.h     |  10 +
> >>>  arch/riscv/include/asm/pgtable-bits.h |   1 +
> >>>  arch/riscv/kvm/mmu.c                  | 637 +++++++++++++++++++++++++-
> >>>  3 files changed, 638 insertions(+), 10 deletions(-)
> >>>
> >>> diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
> >>> index 3b09158f80f2..a37775c92586 100644
> >>> --- a/arch/riscv/include/asm/kvm_host.h
> >>> +++ b/arch/riscv/include/asm/kvm_host.h
> >>> @@ -72,6 +72,13 @@ struct kvm_mmio_decode {
> >>>  	int shift;
> >>>  };
> >>>
> >>> +#define KVM_MMU_PAGE_CACHE_NR_OBJS	32
> >>> +
> >>> +struct kvm_mmu_page_cache {
> >>> +	int nobjs;
> >>> +	void *objects[KVM_MMU_PAGE_CACHE_NR_OBJS];
> >>> +};
> >>> +
> >>>  struct kvm_cpu_context {
> >>>  	unsigned long zero;
> >>>  	unsigned long ra;
> >>> @@ -163,6 +170,9 @@ struct kvm_vcpu_arch {
> >>>  	/* MMIO instruction details */
> >>>  	struct kvm_mmio_decode mmio_decode;
> >>>
> >>> +	/* Cache pages needed to program page tables with spinlock held */
> >>> +	struct kvm_mmu_page_cache mmu_page_cache;
> >>> +
> >>>  	/* VCPU power-off state */
> >>>  	bool power_off;
> >>>
> >>> diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h
> >>> index bbaeb5d35842..be49d62fcc2b 100644
> >>> --- a/arch/riscv/include/asm/pgtable-bits.h
> >>> +++ b/arch/riscv/include/asm/pgtable-bits.h
> >>> @@ -26,6 +26,7 @@
> >>>
> >>>  #define _PAGE_SPECIAL	_PAGE_SOFT
> >>>  #define _PAGE_TABLE	_PAGE_PRESENT
> >>> +#define _PAGE_LEAF	(_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
> >>>
> >>>  /*
> >>>   * _PAGE_PROT_NONE is set on not-present pages (and ignored by the hardware) to
> >>> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> >>> index 2b965f9aac07..9e95ab6769f6 100644
> >>> --- a/arch/riscv/kvm/mmu.c
> >>> +++ b/arch/riscv/kvm/mmu.c
> >>> @@ -18,6 +18,432 @@
> >>>  #include
> >>>  #include
> >>>
> >>> +#ifdef CONFIG_64BIT
> >>> +#define stage2_have_pmd		true
> >>> +#define stage2_gpa_size		((phys_addr_t)(1ULL << 39))
> >>> +#define stage2_cache_min_pages	2
> >>> +#else
> >>> +#define pmd_index(x)		0
> >>> +#define pfn_pmd(x, y)		({ pmd_t __x = { 0 }; __x; })
> >>> +#define stage2_have_pmd		false
> >>> +#define stage2_gpa_size		((phys_addr_t)(1ULL << 32))
> >>> +#define stage2_cache_min_pages	1
> >>> +#endif
> >>> +
> >>> +static int stage2_cache_topup(struct kvm_mmu_page_cache *pcache,
> >>> +			      int min, int max)
> >>> +{
> >>> +	void *page;
> >>> +
> >>> +	BUG_ON(max > KVM_MMU_PAGE_CACHE_NR_OBJS);
> >>> +	if (pcache->nobjs >= min)
> >>> +		return 0;
> >>> +	while (pcache->nobjs < max) {
> >>> +		page = (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
> >>> +		if (!page)
> >>> +			return -ENOMEM;
> >>> +		pcache->objects[pcache->nobjs++] = page;
> >>> +	}
> >>> +
> >>> +	return 0;
> >>> +}
> >>> +
> >>> +static void stage2_cache_flush(struct kvm_mmu_page_cache *pcache)
> >>> +{
> >>> +	while (pcache && pcache->nobjs)
> >>> +		free_page((unsigned long)pcache->objects[--pcache->nobjs]);
> >>> +}
> >>> +
> >>> +static void *stage2_cache_alloc(struct kvm_mmu_page_cache *pcache)
> >>> +{
> >>> +	void *p;
> >>> +
> >>> +	if (!pcache)
> >>> +		return NULL;
> >>> +
> >>> +	BUG_ON(!pcache->nobjs);
> >>> +	p = pcache->objects[--pcache->nobjs];
> >>> +
> >>> +	return p;
> >>> +}
> >>> +
> >>> +struct local_guest_tlb_info {
> >>> +	struct kvm_vmid *vmid;
> >>> +	gpa_t addr;
> >>> +};
> >>> +
> >>> +static void local_guest_tlb_flush_vmid_gpa(void *info)
> >>> +{
> >>> +	struct local_guest_tlb_info *infop = info;
> >>> +
> >>> +	__kvm_riscv_hfence_gvma_vmid_gpa(READ_ONCE(infop->vmid->vmid_version),
> >>> +					 infop->addr);
> >>> +}
> >>> +
> >>> +static void stage2_remote_tlb_flush(struct kvm *kvm, gpa_t addr)
> >>> +{
> >>> +	struct local_guest_tlb_info info;
> >>> +	struct kvm_vmid *vmid = &kvm->arch.vmid;
> >>> +
> >>> +	/* TODO: This should be an SBI call */
> >>> +	info.vmid = vmid;
> >>> +	info.addr = addr;
> >>> +	preempt_disable();
> >>> +	smp_call_function_many(cpu_all_mask, local_guest_tlb_flush_vmid_gpa,
> >>> +			       &info, true);
> >>
> >> This is all nice and dandy on the toy 4-core systems we have today, but
> >> it will become a bottleneck further down the road.
> >>
> >> How many VMIDs do you have? Could you just allocate a new one every time
> >> you switch host CPUs? Then you know exactly which CPUs to flush by
> >> looking at all your vcpu structs and a local field that tells you which
> >> pCPU they're on at this moment.
> >>
> >> Either way, it's nothing that should block inclusion. For today, we're fine.
> >
> > We are not happy about this either.
> >
> > The other two options we have are:
> > 1. Have SBI calls for remote HFENCEs
> > 2. Propose a RISC-V ISA extension for remote FENCEs
> >
> > Option 1 mostly means extending the SBI spec and implementing it in
> > the runtime firmware.
> >
> > Option 2 is the ideal solution but requires consensus among a wider
> > audience in the RISC-V Foundation.
> >
> > At this point, we are fine with a simple solution.
>
> It's fine to explicitly IPI other CPUs to flush their TLBs. What is not
> fine is to IPI *all* CPUs to flush their TLBs.

Ahh, this should have been cpu_online_mask instead of cpu_all_mask.

I will update this in the next revision.

Regards,
Anup

>
>
> Alex
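To make the agreed fix concrete, here is a minimal sketch of
stage2_remote_tlb_flush() with cpu_online_mask substituted for
cpu_all_mask, as Anup describes above. The trailing preempt_enable()
and the direct local flush are assumptions added for completeness (the
quoted excerpt is trimmed before the function ends), not necessarily
what the next revision of the patch looked like:

/*
 * Sketch, not the committed code: cpu_online_mask instead of
 * cpu_all_mask, so offline CPUs are never IPI'd.
 */
static void stage2_remote_tlb_flush(struct kvm *kvm, gpa_t addr)
{
	struct local_guest_tlb_info info;
	struct kvm_vmid *vmid = &kvm->arch.vmid;

	/* TODO: This should be an SBI call */
	info.vmid = vmid;
	info.addr = addr;

	preempt_disable();
	/* IPI only online CPUs; smp_call_function_many() skips us ... */
	smp_call_function_many(cpu_online_mask, local_guest_tlb_flush_vmid_gpa,
			       &info, true);
	/* ... so flush the local hart directly as well (assumed here) */
	local_guest_tlb_flush_vmid_gpa(&info);
	preempt_enable();
}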
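Alex's longer-term suggestion, IPI only the physical CPUs that this
VM's vCPUs have actually run on, could look roughly like the sketch
below. Nothing in it appears in the patch: the helper name is made up,
and it assumes the arch vcpu_load path records the current pCPU in the
generic vcpu->cpu field (with -1 while the vCPU is not loaded):

/*
 * Hypothetical targeted flush: build a cpumask from the pCPUs the
 * vCPUs last ran on, instead of IPI'ing every online CPU.
 */
static void stage2_remote_tlb_flush_targeted(struct kvm *kvm, gpa_t addr)
{
	struct local_guest_tlb_info info;
	struct kvm_vcpu *vcpu;
	cpumask_t mask;	/* on-stack mask; fine for a sketch, not for huge NR_CPUS */
	int i;

	info.vmid = &kvm->arch.vmid;
	info.addr = addr;

	/* Collect only the pCPUs this VM's vCPUs are (or were last) running on */
	cpumask_clear(&mask);
	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (vcpu->cpu >= 0)
			cpumask_set_cpu(vcpu->cpu, &mask);
	}

	preempt_disable();
	smp_call_function_many(&mask, local_guest_tlb_flush_vmid_gpa,
			       &info, true);
	/* smp_call_function_many() never runs the function locally */
	if (cpumask_test_cpu(smp_processor_id(), &mask))
		local_guest_tlb_flush_vmid_gpa(&info);
	preempt_enable();
}

This trades a vCPU walk per flush for far fewer IPIs, which is the
scalability win Alex is after; stale entries in the mask (a vCPU that
has since migrated) only cause a harmless extra flush.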