Subject: [PATCH v2 2/3] vfio-pci: Fault mmaps to enable vma tracking
From: Alex Williamson
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, cohuck@redhat.com, jgg@ziepe.ca
Date: Tue, 05 May 2020 15:54:53 -0600
Message-ID: <158871569380.15589.16950418949340311053.stgit@gimli.home>
In-Reply-To: <158871401328.15589.17598154478222071285.stgit@gimli.home>
References: <158871401328.15589.17598154478222071285.stgit@gimli.home>
User-Agent: StGit/0.19-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Rather than calling remap_pfn_range() when a region is mmap'd, set up a
vm_ops handler to support dynamic faulting of the range on access.  This
allows us to manage a list of vmas actively mapping the area, which we can
later use to invalidate those mappings.  The open callback zaps the vma
range so that all tracking is inserted in the fault handler and removed in
the close handler.

Signed-off-by: Alex Williamson
---

 drivers/vfio/pci/vfio_pci.c         |   76 ++++++++++++++++++++++++++++++++++-
 drivers/vfio/pci/vfio_pci_private.h |    7 +++
 2 files changed, 81 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 6c6b37b5c04e..66a545a01f8f 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -1299,6 +1299,70 @@ static ssize_t vfio_pci_write(void *device_data, const char __user *buf,
 	return vfio_pci_rw(device_data, (char __user *)buf, count, ppos, true);
 }
 
+static int vfio_pci_add_vma(struct vfio_pci_device *vdev,
+			    struct vm_area_struct *vma)
+{
+	struct vfio_pci_mmap_vma *mmap_vma;
+
+	mmap_vma = kmalloc(sizeof(*mmap_vma), GFP_KERNEL);
+	if (!mmap_vma)
+		return -ENOMEM;
+
+	mmap_vma->vma = vma;
+
+	mutex_lock(&vdev->vma_lock);
+	list_add(&mmap_vma->vma_next, &vdev->vma_list);
+	mutex_unlock(&vdev->vma_lock);
+
+	return 0;
+}
+
+/*
+ * Zap mmaps on open so that we can fault them in on access and therefore
+ * our vma_list only tracks mappings accessed since last zap.
+ */
+static void vfio_pci_mmap_open(struct vm_area_struct *vma)
+{
+	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
+}
+
+static void vfio_pci_mmap_close(struct vm_area_struct *vma)
+{
+	struct vfio_pci_device *vdev = vma->vm_private_data;
+	struct vfio_pci_mmap_vma *mmap_vma;
+
+	mutex_lock(&vdev->vma_lock);
+	list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next) {
+		if (mmap_vma->vma == vma) {
+			list_del(&mmap_vma->vma_next);
+			kfree(mmap_vma);
+			break;
+		}
+	}
+	mutex_unlock(&vdev->vma_lock);
+}
+
+static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct vfio_pci_device *vdev = vma->vm_private_data;
+
+	if (vfio_pci_add_vma(vdev, vma))
+		return VM_FAULT_OOM;
+
+	if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+			    vma->vm_end - vma->vm_start, vma->vm_page_prot))
+		return VM_FAULT_SIGBUS;
+
+	return VM_FAULT_NOPAGE;
+}
+
+static const struct vm_operations_struct vfio_pci_mmap_ops = {
+	.open = vfio_pci_mmap_open,
+	.close = vfio_pci_mmap_close,
+	.fault = vfio_pci_mmap_fault,
+};
+
 static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
 {
 	struct vfio_pci_device *vdev = device_data;
@@ -1357,8 +1421,14 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 	vma->vm_pgoff = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff;
 
-	return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
-			       req_len, vma->vm_page_prot);
+	/*
+	 * See remap_pfn_range(), called from vfio_pci_mmap_fault(), but we
+	 * can't change vm_flags within the fault handler.  Set them now.
+	 */
+	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
+	vma->vm_ops = &vfio_pci_mmap_ops;
+
+	return 0;
 }
 
 static void vfio_pci_request(void *device_data, unsigned int count)
@@ -1608,6 +1678,8 @@ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	spin_lock_init(&vdev->irqlock);
 	mutex_init(&vdev->ioeventfds_lock);
 	INIT_LIST_HEAD(&vdev->ioeventfds_list);
+	mutex_init(&vdev->vma_lock);
+	INIT_LIST_HEAD(&vdev->vma_list);
 
 	ret = vfio_add_group_dev(&pdev->dev, &vfio_pci_ops, vdev);
 	if (ret)
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index 36ec69081ecd..9b25f9f6ce1d 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -92,6 +92,11 @@ struct vfio_pci_vf_token {
 	int			users;
 };
 
+struct vfio_pci_mmap_vma {
+	struct vm_area_struct	*vma;
+	struct list_head	vma_next;
+};
+
 struct vfio_pci_device {
 	struct pci_dev		*pdev;
 	void __iomem		*barmap[PCI_STD_NUM_BARS];
@@ -132,6 +137,8 @@ struct vfio_pci_device {
 	struct list_head	ioeventfds_list;
 	struct vfio_pci_vf_token	*vf_token;
 	struct notifier_block	nb;
+	struct mutex		vma_lock;
+	struct list_head	vma_list;
 };
 
 #define is_intx(vdev) (vdev->irq_type == VFIO_PCI_INTX_IRQ_INDEX)
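
The commit message says the tracked vma_list exists so the mappings can
later be invalidated.  For illustration only, a consumer of that list
might look like the sketch below; it is not part of this patch, the
helper name (vfio_pci_zap_mmap_vmas) is made up, and a real caller would
also need to hold the owning mm's mmap lock around zap_vma_ptes(), which
is omitted here for brevity.

static void vfio_pci_zap_mmap_vmas(struct vfio_pci_device *vdev)
{
	struct vfio_pci_mmap_vma *mmap_vma, *tmp;

	mutex_lock(&vdev->vma_lock);
	list_for_each_entry_safe(mmap_vma, tmp, &vdev->vma_list, vma_next) {
		struct vm_area_struct *vma = mmap_vma->vma;

		/* Drop the PTEs; the next access re-faults and re-registers the vma */
		zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
		list_del(&mmap_vma->vma_next);
		kfree(mmap_vma);
	}
	mutex_unlock(&vdev->vma_lock);
}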