From: Eric Auger <eric.auger@redhat.com>
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, will@kernel.org, joro@8bytes.org, maz@kernel.org, robin.murphy@arm.com, alex.williamson@redhat.com
Cc: jean-philippe@linaro.org, zhangfei.gao@linaro.org, zhangfei.gao@gmail.com, vivek.gautam@arm.com, shameerali.kolothum.thodi@huawei.com, jacob.jun.pan@linux.intel.com, yi.l.liu@intel.com, tn@semihalf.com, nicoleotsuka@gmail.com, yuzenghui@huawei.com
Subject: [PATCH v11 06/13] vfio/pci: Allow to mmap the fault queue
Date: Mon, 16 Nov 2020 12:00:23 +0100
Message-Id: <20201116110030.32335-7-eric.auger@redhat.com>
In-Reply-To: <20201116110030.32335-1-eric.auger@redhat.com>
References: <20201116110030.32335-1-eric.auger@redhat.com>

The DMA FAULT region contains the fault ring buffer. It is useful to let
userspace mmap this area. Expose the mmappable area through a sparse
mmap entry and implement the mmap operation.
Signed-off-by: Eric Auger <eric.auger@redhat.com>

---

v8 -> v9:
- remove unused index local variable in vfio_pci_fault_mmap
---
 drivers/vfio/pci/vfio_pci.c | 61 +++++++++++++++++++++++++++++++++++--
 1 file changed, 58 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index b39d6ed66c71..2a6cc1a87323 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -318,21 +318,75 @@ static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
 	kfree(vdev->fault_pages);
 }
 
+static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
+				   struct vfio_pci_region *region,
+				   struct vm_area_struct *vma)
+{
+	u64 phys_len, req_len, pgoff, req_start;
+	unsigned long long addr;
+	unsigned int ret;
+
+	phys_len = region->size;
+
+	req_len = vma->vm_end - vma->vm_start;
+	pgoff = vma->vm_pgoff &
+		((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
+	req_start = pgoff << PAGE_SHIFT;
+
+	/* only the second page of the producer fault region is mmappable */
+	if (req_start < PAGE_SIZE)
+		return -EINVAL;
+
+	if (req_start + req_len > phys_len)
+		return -EINVAL;
+
+	addr = virt_to_phys(vdev->fault_pages);
+	vma->vm_private_data = vdev;
+	vma->vm_pgoff = (addr >> PAGE_SHIFT) + pgoff;
+
+	ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+			      req_len, vma->vm_page_prot);
+	return ret;
+}
+
 static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
 					     struct vfio_pci_region *region,
 					     struct vfio_info_cap *caps)
 {
+	struct vfio_region_info_cap_sparse_mmap *sparse = NULL;
 	struct vfio_region_info_cap_fault cap = {
 		.header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
 		.header.version = 1,
 		.version = 1,
 	};
-	return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+	size_t size = sizeof(*sparse) + sizeof(*sparse->areas);
+	int ret;
+
+	ret = vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+	if (ret)
+		return ret;
+
+	sparse = kzalloc(size, GFP_KERNEL);
+	if (!sparse)
+		return -ENOMEM;
+
+	sparse->header.id = VFIO_REGION_INFO_CAP_SPARSE_MMAP;
+	sparse->header.version = 1;
+	sparse->nr_areas = 1;
+	sparse->areas[0].offset = PAGE_SIZE;
+	sparse->areas[0].size = region->size - PAGE_SIZE;
+
+	ret = vfio_info_add_capability(caps, &sparse->header, size);
+	if (ret)
+		kfree(sparse);
+
+	return ret;
 }
 
 static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
 	.rw = vfio_pci_dma_fault_rw,
 	.release = vfio_pci_dma_fault_release,
+	.mmap = vfio_pci_dma_fault_mmap,
 	.add_capability = vfio_pci_dma_fault_add_capability,
 };
 
@@ -403,7 +457,8 @@ static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
 				       VFIO_REGION_TYPE_NESTED,
 				       VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT,
 				       &vfio_pci_dma_fault_regops, size,
-				       VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE,
+				       VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE |
+				       VFIO_REGION_INFO_FLAG_MMAP,
 				       vdev->fault_pages);
 	if (ret)
 		goto out;
@@ -411,7 +466,7 @@ static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
 	header = (struct vfio_region_dma_fault *)vdev->fault_pages;
 	header->entry_size = sizeof(struct iommu_fault);
 	header->nb_entries = DMA_FAULT_RING_LENGTH;
-	header->offset = sizeof(struct vfio_region_dma_fault);
+	header->offset = PAGE_SIZE;
 
 	ret = iommu_register_device_fault_handler(&vdev->pdev->dev,
 						  vfio_pci_iommu_dev_fault_handler,
-- 
2.21.3