Date: Tue, 29 Jan 2019 15:57:50 -0500
From: Jerome Glisse
To: Logan Gunthorpe
Cc: Jason Gunthorpe, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Greg Kroah-Hartman, Rafael J. Wysocki, Bjorn Helgaas,
    Christian Koenig, Felix Kuehling, linux-pci@vger.kernel.org,
    dri-devel@lists.freedesktop.org, Christoph Hellwig,
    Marek Szyprowski, Robin Murphy, Joerg Roedel,
    iommu@lists.linux-foundation.org
Subject: Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma
Message-ID: <20190129205749.GN3176@redhat.com>
References: <20190129174728.6430-1-jglisse@redhat.com> <20190129174728.6430-4-jglisse@redhat.com> <20190129191120.GE3176@redhat.com> <20190129193250.GK10108@mellanox.com> <99c228c6-ef96-7594-cb43-78931966c75d@deltatee.com>
In-Reply-To: <99c228c6-ef96-7594-cb43-78931966c75d@deltatee.com>

On Tue, Jan 29, 2019 at 01:39:49PM -0700, Logan Gunthorpe wrote:
> 
> On 2019-01-29 12:32 p.m., Jason Gunthorpe wrote:
> > Jerome, I think it would be nice to have a helper scheme - I think the
> > simple case would be simple remapping of PCI BAR memory, so if we
> > could have, say something like:
> > 
> > static const struct vm_operations_struct my_ops {
> >    .p2p_map = p2p_ioremap_map_op,
> >    .p2p_unmap = p2p_ioremap_unmap_op,
> > }
> > 
> > struct ioremap_data {
> >    [..]
> > }
> > 
> > fops_mmap() {
> >    vma->private_data = &driver_priv->ioremap_data;
> >    return p2p_ioremap_device_memory(vma, exporting_device, [..]);
> > }
> 
> This is roughly what I was expecting, except I don't see exactly what
> the p2p_map and p2p_unmap callbacks are for. The importing driver should
> see p2pdma/hmm struct pages and use the appropriate function to map
> them.
> It shouldn't be the responsibility of the exporting driver to
> implement the mapping. And I don't think we should have 'special' vma's
> for this (though we may need something to ensure we don't get mapping
> requests mixed with different types of pages...).

The GPU driver must be in control and must be called into. There are two
cases in this patchset, and I should have posted them as two separate
patchsets, as mixing them seems to be confusing things.

For HMM pages, the physical address of the page (ie the pfn) does not
correspond to anything; there is nothing behind it. So the importing
device has no idea how to get a valid physical address from an HMM page;
only the device driver exporting its memory through HMM device memory
knows that.

For the special vma (ie mmap of a device file): GPU drivers do manage
their BAR, ie the GPU has a page table that maps BAR pages to GPU
memory, and the driver _constantly_ updates this page table; the updates
are reflected by invalidating the CPU mapping. In fact, most of the time
the CPU mappings of GPU objects are invalid; they are valid only for a
small fraction of their lifetime. So you _must_ have some call to inform
the exporting device driver that another device would like to map one of
its vmas. The exporting device can then try to minimize churn for the
importing device. But this has consequences, and the exporting device
driver must be allowed to apply policy and decide whether or not it
authorizes the other device to peer-map its memory.

For GPUs, the userspace application has to call a specific API that
translates into a specific ioctl, which itself sets flags on the object
(in the kernel struct tracking the userspace object). The only way to
allow program predictability is if the application can ask and know
whether it can peer-export an object (ie whether there is enough BAR
space left).
Moreover, I would like to be able to use this API between GPUs that are
inter-connected with each other; for those, the CPU page tables are just
invalid, and the physical addresses to use are only meaningful to the
exporting and importing devices. So again here the core kernel has no
idea what the physical address would be.

So in both cases, at least for GPUs, we do want total control to be
given to the exporter.

> I also figured there'd be a fault version of p2p_ioremap_device_memory()
> for when you are mapping P2P memory and you want to assign the pages
> lazily. Though, this can come later when someone wants to implement that.

For GPUs the BAR address space is managed page by page, and thus you do
not want to map a range of BAR addresses; you want to allow mapping of
multiple BAR pages that are neither adjacent to each other nor ordered
in any way. But providing a helper for simpler devices does make sense.

Cheers,
Jérôme