From: Shunsuke Mie
Date: Tue, 14 Sep 2021 16:11:34 +0900
Subject: Re: [RFC PATCH 1/3] RDMA/umem: Change for rdma devices has not dma device
To: Daniel Vetter
Cc: Jason Gunthorpe, Christian König, Christoph Hellwig, Zhu Yanjun, Alex Deucher,
 Doug Ledford, Jianxin Xiong, Leon Romanovsky, Linux Kernel Mailing List,
 linux-rdma, Damian Hobson-Garcia, Takanari Hayama, Tomohito Esaki

On Tue, Sep 14, 2021 at 4:23 Daniel Vetter wrote:
>
> On Fri, Sep 10, 2021 at 3:46 AM Shunsuke Mie wrote:
> >
> > On Thu, Sep 9, 2021 at 18:26 Daniel Vetter wrote:
> > >
> > > On Thu, Sep 9, 2021 at 1:33 AM Jason Gunthorpe wrote:
> > > > On Wed, Sep 08, 2021 at 09:22:37PM +0200, Daniel Vetter wrote:
> > > > > On Wed, Sep 8, 2021 at 3:33 PM Christian König wrote:
> > > > > > On 08.09.21 at 13:18, Jason Gunthorpe wrote:
> > > > > > > On Wed, Sep 08, 2021 at 05:41:39PM +0900, Shunsuke Mie wrote:
> > > > > > >> On Wed, Sep 8, 2021 at 16:20 Christoph Hellwig wrote:
> > > > > > >>> On Wed, Sep 08, 2021 at 04:01:14PM +0900, Shunsuke Mie wrote:
> > > > > > >>>> Thank you for your comment.
> > > > > > >>>>> On Wed, Sep 08, 2021 at 03:16:09PM +0900, Shunsuke Mie wrote:
> > > > > > >>>>>> To share memory space using dma-buf, an API of the dma-buf requires a dma
> > > > > > >>>>>> device, but devices such as rxe do not have a dma device. For those cases,
> > > > > > >>>>>> change to specify a device of struct ib instead of the dma device.
> > > > > > >>>>> So if dma-buf doesn't actually need a device to dma map why do we ever
> > > > > > >>>>> pass the dma_device here? Something does not add up.
> > > > > > >>>> As described in the dma-buf api guide [1], the dma_device is used by the
> > > > > > >>>> dma-buf exporter to know the device buffer constraints of the importer.
> > > > > > >>>> [1] https://lwn.net/Articles/489703/
> > > > > > >>> Which means for rxe you'd also have to pass the one for the underlying
> > > > > > >>> net device.
> > > > > > >> I thought of that way too. In that case, the memory region is constrained by
> > > > > > >> the net device, but the rxe driver copies data using the CPU. To avoid the
> > > > > > >> constraints, I decided to use the ib device.
> > > > > > > Well, that is the whole problem.
> > > > > > >
> > > > > > > We can't mix the dmabuf stuff people are doing that doesn't fill in
> > > > > > > the CPU pages in the SGL with RXE - it is simply impossible as things
> > > > > > > currently are for RXE to access this non-struct page memory.
> > > > > >
> > > > > > Yeah, agree that doesn't make much sense.
> > > > > >
> > > > > > When you want to access the data with the CPU then why do you want to
> > > > > > use DMA-buf in the first place?
> > > > > >
> > > > > > Please keep in mind that there is work ongoing to replace the sg table
> > > > > > with a DMA address array and so make the underlying struct page
> > > > > > inaccessible for importers.
> > > > >
> > > > > Also if you do have a dma-buf, you can just dma_buf_vmap() the buffer
> > > > > for cpu access, which intentionally does not require any device. No
> > > > > idea why there's a dma_buf_attach involved. Now not all exporters
> > > > > support this, but that's fixable, and you must call
> > > > > dma_buf_begin/end_cpu_access for cache management if the allocation
> > > > > isn't cpu coherent. But it's all there, no need to apply hacks of
> > > > > allowing a wrong device or other fun things.
> > > >
> > > > Can rxe leave the vmap in place potentially forever?
> > >
> > > Yeah, it's like perma-pinning the buffer into system memory for
> > > non-p2p dma-buf sharing. We just squint and pretend that can't be
> > > abused too badly :-) On 32bit you'll run out of vmap space rather
> > > quickly, but that's not something anyone cares about here either. We
> > > have a bunch more sw modesetting drivers in drm which use
> > > dma_buf_vmap() like this, so it's all fine.
> > > -Daniel
> > > --
> > > Daniel Vetter
> > > Software Engineer, Intel Corporation
> > > http://blog.ffwll.ch
> >
> > Thanks for your comments.
> >
> > In the first place, the CMA region cannot be used for RDMA because the
> > region has no struct page. In addition, some GPU drivers use CMA and share
> > the region as a dma-buf. As a result, RDMA cannot transfer data in such a
> > region. To solve this problem, I thought rxe dma-buf support would be better.
> >
> > I'll consider and redesign the rxe dma-buf support using dma_buf_vmap()
> > instead of dma_buf_dynamic_attach().
>
> btw for the next version please cc dri-devel. get_maintainers.pl should
> pick it up for these patches.

The CC list for these patches was generated by get_maintainers.pl, but it
didn't pick up dri-devel. Should I add dri-devel to the Cc manually?

Regards,
Shunsuke
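
For reference, below is a minimal sketch of the CPU-access pattern Daniel describes
above: dma_buf_vmap() plus dma_buf_begin/end_cpu_access(), with no device-backed
dma_buf_attach(). It assumes the struct dma_buf_map based vmap interface from around
v5.14; the helper name rxe_copy_from_dmabuf and the transient map/unmap per copy are
illustrative assumptions, not part of the posted patches.

#include <linux/dma-buf.h>
#include <linux/dma-buf-map.h>
#include <linux/dma-direction.h>
#include <linux/io.h>
#include <linux/string.h>

/*
 * Illustrative only: copy the start of a dma-buf into a kernel buffer
 * via a CPU mapping, with no dma_buf_attach() and no struct device.
 */
static int rxe_copy_from_dmabuf(struct dma_buf *dmabuf, void *dst, size_t len)
{
	struct dma_buf_map map;
	int ret;

	if (len > dmabuf->size)
		len = dmabuf->size;

	/* Handle cache maintenance for non-coherent exporters. */
	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (ret)
		return ret;

	/* Kernel virtual mapping of the whole buffer; no device needed. */
	ret = dma_buf_vmap(dmabuf, &map);
	if (ret)
		goto out_end;

	if (map.is_iomem)
		memcpy_fromio(dst, map.vaddr_iomem, len);
	else
		memcpy(dst, map.vaddr, len);

	dma_buf_vunmap(dmabuf, &map);
out_end:
	dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
	return ret;
}

As discussed in the thread, rxe could instead keep the vmap in place for the
lifetime of the memory region (perma-pinning the buffer into system memory)
rather than mapping and unmapping on every access.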