Date: Wed, 20 Dec 2023 18:12:55 -0800
Subject: Re: [RFC PATCH] KVM: Introduce KVM VIRTIO device
From: Sean Christopherson
To: Yan Zhao
Cc: Kevin Tian, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, pbonzini@redhat.com, olvaffe@gmail.com,
	Zhiyuan Lv, Zhenyu Z Wang, Yongwei Ma, vkuznets@redhat.com,
	wanpengli@tencent.com, jmattson@google.com, joro@8bytes.org,
	gurchetansingh@chromium.org, kraxel@redhat.com, Yiwei Zhang
References: <20231214103520.7198-1-yan.y.zhao@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Dec 20, 2023, Yan Zhao wrote:
> On Tue, Dec 19, 2023 at 12:26:45PM +0800, Yan Zhao wrote:
> > On Mon, Dec 18, 2023 at 07:08:51AM -0800, Sean Christopherson wrote:
> > > > > Implementation Consideration
> > > > > ===
> > > > > There is a previous series [1] from google to serve the same purpose to
> > > > > let KVM be aware of virtio GPU's noncoherent DMA status. That series
> > > > > requires a new memslot flag, and special memslots in user space.
> > > > >
> > > > > We don't choose to use memslot flag to request honoring guest memory
> > > > > type.
> > > > memslot flag has the potential to restrict the impact e.g. when using
> > > > clflush-before-read in migration?
> > > Yep, exactly. E.g. if KVM needs to ensure coherency when freeing memory back to
> > > the host kernel, then the memslot flag will allow for a much more targeted
> > > operation.
> > > > Of course the implication is to honor guest type only for the selected slot
> > > > in KVM instead of applying to the entire guest memory as in previous series
> > > > (which selects this way because vmx_get_mt_mask() is in perf-critical path
> > > > hence not good to check memslot flag?)
> > > Checking a memslot flag won't impact performance. KVM already has the memslot
> > > when creating SPTEs, e.g. the sole caller of vmx_get_mt_mask(), make_spte(), has
> > > access to the memslot.
> > >
> > > That isn't coincidental, KVM _must_ have the memslot to construct the SPTE, e.g.
> > > to retrieve the associated PFN, update write-tracking for shadow pages, etc.
> > >
> > Hi Sean,
> > Do you prefer to introduce a memslot flag KVM_MEM_DMA or KVM_MEM_WC?
> > For KVM_MEM_DMA, KVM needs to
> > (a) search VMA for vma->vm_page_prot and convert it to page cache mode (with
> >     pgprot2cachemode()?), or
> > (b) look up memtype of the PFN, by calling lookup_memtype(), similar to that in
> >     pat_pfn_immune_to_uc_mtrr().
> >
> > But pgprot2cachemode() and lookup_memtype() are not exported by x86 code now.
> >
> > For KVM_MEM_WC, it requires user to ensure the memory is actually mapped
> > to WC, right?
> >
> > Then, vmx_get_mt_mask() just ignores guest PAT and programs host PAT as EPT type
> > for the special memslot only, as below.
> > Is this understanding correct?
> >
> > static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
> > {
> > 	if (is_mmio)
> > 		return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
> >
> > 	if (gfn_in_dma_slot(vcpu->kvm, gfn)) {
> > 		u8 type = MTRR_TYPE_WRCOMB;
> > 		//u8 type = pat_pfn_memtype(pfn);
> > 		return (type << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
> > 	}
> >
> > 	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
> > 		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
> >
> > 	if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {
> > 		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
> > 			return MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT;
> > 		else
> > 			return (MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT) |
> > 			       VMX_EPT_IPAT_BIT;
> > 	}
> >
> > 	return kvm_mtrr_get_guest_memory_type(vcpu, gfn) << VMX_EPT_MT_EPTE_SHIFT;
> > }
> >
> > BTW, since the special memslot must be exposed to guest as virtio GPU BAR in
> > order to prevent other guest drivers from access, I wonder if it's better to
> > include some keyword like VIRTIO_GPU_BAR in memslot flag name.
> Another choice is to add a memslot flag KVM_MEM_HONOR_GUEST_PAT, then user
> (e.g. QEMU) does special treatment to this kind of memslots (e.g. skipping
> reading/writing to them in general paths).
>
> @@ -7589,26 +7589,29 @@ static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
>  	if (is_mmio)
>  		return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
>
> +	if (in_slot_honor_guest_pat(vcpu->kvm, gfn))
> +		return kvm_mtrr_get_guest_memory_type(vcpu, gfn) << VMX_EPT_MT_EPTE_SHIFT;

This is more along the lines of what I was thinking, though the name should be
something like KVM_MEM_NON_COHERENT_DMA, i.e. not x86 specific and not
contradictory for AMD (which already honors guest PAT).

I also vote to deliberately ignore MTRRs, i.e. start us on the path of ripping
those out. This is a new feature, so we have the luxury of defining KVM's ABI
for that feature, i.e. can state that on x86 it honors guest PAT, but not MTRRs.

Like so?

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d21f55f323ea..ed527acb2bd3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7575,7 +7575,8 @@ static int vmx_vm_init(struct kvm *kvm)
 	return 0;
 }
 
-static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio,
+			  struct kvm_memory_slot *slot)
 {
 	/* We wanted to honor guest CD/MTRR/PAT, but doing so could result in
 	 * memory aliases with conflicting memory types and sometimes MCEs.
@@ -7598,6 +7599,9 @@ static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 	if (is_mmio)
 		return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
 
+	if (kvm_memslot_has_non_coherent_dma(slot))
+		return MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT;
+
 	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
 		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;

I like the idea of pulling the memtype from the host, but if we can make that
work then I don't see the need for a special memslot flag, i.e. just do it for
*all* SPTEs on VMX. I don't think we need a VMA for that, e.g. we should be
able to get the memtype from the host PTEs, just like we do the page size.

KVM_MEM_WC is a hard "no" for me. It's far too x86 centric, and as you alluded
to, it requires coordination from the guest, i.e. is effectively limited to
paravirt scenarios.
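For completeness, a bare-bones sketch of what that flag and helper could look
like; the flag name follows the suggestion above, while the bit value and the
helper body are purely illustrative and not something settled in this thread:

/* Hypothetical uAPI flag; the bit value is chosen only for illustration. */
#define KVM_MEM_NON_COHERENT_DMA	(1UL << 3)

/* Helper assumed by the vmx_get_mt_mask() sketch above; illustrative only. */
static inline bool kvm_memslot_has_non_coherent_dma(const struct kvm_memory_slot *slot)
{
	/* A NULL slot, e.g. for emulated MMIO, gets no special treatment. */
	return slot && (slot->flags & KVM_MEM_NON_COHERENT_DMA);
}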
> +
>  	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
>  		return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
>
>  	if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {
>  		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
>  			return MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT;
>  		else
>  			return (MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT) |
>  				VMX_EPT_IPAT_BIT;
>  	}
>
>  	return kvm_mtrr_get_guest_memory_type(vcpu, gfn) << VMX_EPT_MT_EPTE_SHIFT;
>  }
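Likewise, a minimal sketch of the "pull the memtype from the host" direction
discussed above, assuming lookup_memtype() were made callable from KVM (as
noted earlier in the thread, it is not exported today) and ignoring hugepages
and non-PAT hosts; host_pfn_ept_memtype() is a made-up name for illustration:

/* Illustrative only: map the host PAT memtype of @pfn to an EPT memory type. */
static u8 host_pfn_ept_memtype(kvm_pfn_t pfn)
{
	/*
	 * lookup_memtype() lives in arch/x86/mm/pat/memtype.c and is currently
	 * static there; KVM would need it exported or wrapped to call it.
	 */
	switch (lookup_memtype(PFN_PHYS(pfn))) {
	case _PAGE_CACHE_MODE_WC:
		return MTRR_TYPE_WRCOMB;
	case _PAGE_CACHE_MODE_UC:
	case _PAGE_CACHE_MODE_UC_MINUS:
		return MTRR_TYPE_UNCACHABLE;
	case _PAGE_CACHE_MODE_WT:
		return MTRR_TYPE_WRTHROUGH;
	case _PAGE_CACHE_MODE_WP:
		return MTRR_TYPE_WRPROT;
	case _PAGE_CACHE_MODE_WB:
	default:
		return MTRR_TYPE_WRBACK;
	}
}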