Date: Wed, 6 Dec 2023 18:58:44 +0000
From: Catalin Marinas
To: Jason Gunthorpe
Cc: Marc Zyngier, ankita@nvidia.com, Shameerali Kolothum Thodi,
	oliver.upton@linux.dev, suzuki.poulose@arm.com, yuzenghui@huawei.com,
	will@kernel.org, ardb@kernel.org, akpm@linux-foundation.org,
	gshan@redhat.com, aniketa@nvidia.com, cjia@nvidia.com,
	kwankhede@nvidia.com, targupta@nvidia.com, vsethi@nvidia.com,
	acurrid@nvidia.com, apopple@nvidia.com, jhubbard@nvidia.com,
	danw@nvidia.com, mochs@nvidia.com, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, lpieralisi@kernel.org,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v2 1/1] KVM: arm64: allow the VM to select DEVICE_* and
	NORMAL_NC for IO memory
References: <20231205164318.GG2692119@nvidia.com>
	<86bkb4bn2v.wl-maz@kernel.org> <86a5qobkt8.wl-maz@kernel.org>
	<868r67blwo.wl-maz@kernel.org> <20231206151603.GR2692119@nvidia.com>
	<20231206172035.GU2692119@nvidia.com>
In-Reply-To: <20231206172035.GU2692119@nvidia.com>

On Wed, Dec 06, 2023 at 01:20:35PM -0400, Jason Gunthorpe wrote:
> On Wed, Dec 06, 2023 at 04:31:48PM +0000, Catalin Marinas wrote:
> > > This would be fine, as would a VMA flag. Please pick one :)
> > >
> > > I think a VMA flag is simpler than messing with pgprot.
> >
> > I guess one could write a patch and see how it goes ;).
> A lot of patches have been sent on this already :(

But not one with a VM_* flag.
I guess we could also add a VM_VFIO flag which implies KVM has fewer
restrictions on the memory type. I think that's just more bike-shedding.
The key point is that we don't want to relax this for whatever KVM may map
in the guest but only for certain devices. Just having a vma may not be
sufficient, since we can't tell where that vma came from. So for the vfio
bits, completely untested:

-------------8<----------------------------
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 1929103ee59a..b89d2dfcd534 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1863,7 +1863,7 @@ int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma
 	 * See remap_pfn_range(), called from vfio_pci_fault() but we can't
 	 * change vm_flags within the fault handler. Set them now.
 	 */
-	vm_flags_set(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
+	vm_flags_set(vma, VM_VFIO | VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &vfio_pci_mmap_ops;
 
 	return 0;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 418d26608ece..6df46fd7836a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -391,6 +391,13 @@ extern unsigned int kobjsize(const void *objp);
 # define VM_UFFD_MINOR		VM_NONE
 #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
 
+#ifdef CONFIG_64BIT
+#define VM_VFIO_BIT	39
+#define VM_VFIO		BIT(VM_VFIO_BIT)
+#else
+#define VM_VFIO		VM_NONE
+#endif
+
 /* Bits set in the VMA until the stack is in its final location */
 #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_EARLY)
-------------8<----------------------------

In KVM, Ankita's patch would take this into account, not just rely on
"device==true".

> > > > If we want the VMM to drive this entirely, we could add a new mmap()
> > > > flag like MAP_WRITECOMBINE or PROT_WRITECOMBINE. They do feel a bit
> > >
> > > As in the other thread, we cannot unconditionally map NORMAL_NC into
> > > the VMM.
> >
> > I'm not suggesting this but rather the VMM map portions of the BAR with
> > either Device or Normal-NC, concatenate them (MAP_FIXED) and pass this
> > range as a memory slot (or multiple if a slot doesn't allow multiple
> > vmas).
>
> The VMM can't know what to do. We already talked about this. The VMM
> cannot be involved in the decision to make pages NORMAL_NC or
> not. That idea ignores how actual devices work.
[...]
> > Are the Device/Normal offsets within a BAR fixed, documented in e.g. the
> > spec or this is something configurable via some MMIO that the guest
> > does.
>
> No, it is fully dynamic on demand with firmware RPCs.

I think that's a key argument. The VMM cannot, on its own, configure the
BAR and figure out a way to communicate this to the guest. We could invent
some para-virtualisation/trapping mechanism but that's unnecessarily
complicated. In the DPDK case, DPDK both configures and interacts with the
device. In the VMM/VM case, we need the VM to do this; we can't split the
configuration (in the VMM) from the interaction with the device (in the VM).

-- 
Catalin
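
For illustration, a minimal, untested sketch (not part of the series) of how
the KVM side could consume such a VM_VFIO flag rather than relying on
"device == true" alone. The helper name is made up here, and
KVM_PGTABLE_PROT_NORMAL_NC stands in for whatever Normal-NC stage-2
protection the series ends up defining; user_mem_abort() in
arch/arm64/kvm/mmu.c is where the device case is currently decided:

static bool kvm_vma_is_vfio(struct vm_area_struct *vma)
{
	/* Only relax the memory type for VMAs explicitly marked by VFIO. */
	return !!(vma->vm_flags & VM_VFIO);
}

	/* ... then in the user_mem_abort() fault path: */
	if (device) {
		if (kvm_vma_is_vfio(vma))
			/* Stage-2 Normal-NC; the guest's stage-1 picks Device or NC */
			prot |= KVM_PGTABLE_PROT_NORMAL_NC;
		else
			prot |= KVM_PGTABLE_PROT_DEVICE;
	}

The point being that the relaxation keys off where the vma came from (VFIO),
not off anything the VMM chooses.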