Subject: Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma
From: Logan Gunthorpe <logang@deltatee.com>
To: Jerome Glisse
Cc: Jason Gunthorpe, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Greg Kroah-Hartman, "Rafael J. Wysocki", Bjorn Helgaas,
    Christian Koenig, Felix Kuehling, linux-pci@vger.kernel.org,
    dri-devel@lists.freedesktop.org, Christoph Hellwig,
    Marek Szyprowski, Robin Murphy, Joerg Roedel,
    iommu@lists.linux-foundation.org
Date: Tue, 29 Jan 2019 18:17:43 -0700
Message-ID: <655a335c-ab91-d1fc-1ed3-b5f0d37c6226@deltatee.com>
In-Reply-To: <20190129234752.GR3176@redhat.com>
References: <20190129174728.6430-1-jglisse@redhat.com>
 <20190129174728.6430-4-jglisse@redhat.com>
 <20190129191120.GE3176@redhat.com>
 <20190129193250.GK10108@mellanox.com>
 <99c228c6-ef96-7594-cb43-78931966c75d@deltatee.com>
 <20190129205749.GN3176@redhat.com>
 <2b704e96-9c7c-3024-b87f-364b9ba22208@deltatee.com>
 <20190129215028.GQ3176@redhat.com>
 <20190129234752.GR3176@redhat.com>

On 2019-01-29
4:47 p.m., Jerome Glisse wrote:
> The whole point is to allow device memory to be used for a range of
> virtual addresses of a process when it makes sense to do so. There are
> multiple cases where it does make sense:
> [1] - Only the device is accessing the range and there is no CPU access.
>       For instance, the program is executing/running a big function on
>       the GPU and there are no concurrent CPU accesses; this is very
>       common in all the existing GPGPU code. In fact, AFAICT it is the
>       most common pattern. So here you can use HMM private or public
>       memory.
> [2] - Both device and CPU access a common range of virtual addresses
>       concurrently. In that case, if you are on a platform with a
>       cache-coherent interconnect like OpenCAPI or CCIX, then you can
>       use HMM public device memory and have both access the same
>       memory. You cannot use HMM private memory.
>
> So far on x86 we only have PCIe, and thus so far on x86 we only have
> private HMM device memory that is not accessible by the CPU in any
> way.

I feel like you're just pulling the rug out from under us... Before, you
said to ignore HMM, and I was asking about the use case that wasn't
using HMM and how it works without HMM. In response, you just gave me
*way* too much information describing HMM.

And still, as best as I can see, managing DMA mappings (which is
different from the userspace mappings) for GPU P2P should be handled by
HMM, and the userspace mappings should *just* link VMAs to HMM pages
using the standard infrastructure we already have.

>> And what struct pages are actually going to be backing these VMAs if
>> it's not using HMM?
>
> When you have some range of virtual addresses migrated to HMM private
> memory, the CPU PTEs are special swap entries, and they behave just as
> if the memory had been swapped to disk. So CPU accesses to those will
> fault and trigger a migration back to main memory.

This isn't answering my question at all...
I specifically asked what is backing the VMA when we are *not* using HMM.

Logan