From: Shunsuke Mie
Date: Tue, 14 Sep 2021 19:13:29 +0900
Subject: Re: [RFC PATCH 1/3] RDMA/umem: Change for rdma devices has not dma device
To: Daniel Vetter
Cc: Jason Gunthorpe, Christian König, Christoph Hellwig, Zhu Yanjun, Alex Deucher, Doug Ledford, Jianxin Xiong, Leon Romanovsky, Linux Kernel Mailing List, linux-rdma, Damian Hobson-Garcia, Takanari Hayama, Tomohito Esaki
References: <20210908061611.69823-1-mie@igel.co.jp> <20210908061611.69823-2-mie@igel.co.jp> <20210908111804.GX1200268@ziepe.ca> <1c0356f5-19cf-e883-3d96-82a87d0cffcb@amd.com> <20210908233354.GB3544071@ziepe.ca>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Sep 14, 2021 at 18:38, Daniel Vetter wrote:
>
> On Tue, Sep 14, 2021 at 9:11 AM Shunsuke Mie wrote:
> >
> > On Tue, Sep 14, 2021 at 4:23, Daniel Vetter wrote:
> > >
> > > On Fri, Sep 10, 2021 at 3:46 AM Shunsuke Mie wrote:
> > > >
> > > > On Thu, Sep 9, 2021 at 18:26, Daniel Vetter wrote:
> > > > >
> > > > > On Thu, Sep 9, 2021 at 1:33 AM Jason Gunthorpe wrote:
> > > > > > On Wed, Sep 08, 2021 at 09:22:37PM +0200, Daniel Vetter wrote:
> > > > > > > On Wed, Sep 8, 2021 at 3:33 PM Christian König wrote:
> > > > > > > > On 08.09.21 at 13:18, Jason Gunthorpe wrote:
> > > > > > > > > On Wed, Sep 08, 2021 at 05:41:39PM +0900, Shunsuke Mie wrote:
> > > > > > > > >> On Wed, Sep 8, 2021 at 16:20, Christoph Hellwig wrote:
> > > > > > > > >>> On Wed, Sep 08, 2021 at 04:01:14PM +0900, Shunsuke Mie wrote:
> > > > > > > > >>>> Thank you for your comment.
> > > > > > > > >>>>> On Wed, Sep 08, 2021 at 03:16:09PM +0900, Shunsuke Mie wrote:
> > > > > > > > >>>>>> To share memory space using dma-buf, an API of the dma-buf requires a dma
> > > > > > > > >>>>>> device, but devices such as rxe do not have a dma device. For those cases,
> > > > > > > > >>>>>> change to specify a device of struct ib instead of the dma device.
> > > > > > > > >>>>> So if dma-buf doesn't actually need a device to dma map, why do we ever
> > > > > > > > >>>>> pass the dma_device here? Something does not add up.
> > > > > > > > >>>> As described in the dma-buf api guide [1], the dma_device is used by the dma-buf
> > > > > > > > >>>> exporter to know the device buffer constraints of the importer.
> > > > > > > > >>>> [1] https://lwn.net/Articles/489703/
> > > > > > > > >>> Which means for rxe you'd also have to pass the one for the underlying
> > > > > > > > >>> net device.
> > > > > > > > >> I thought of that way too. In that case, the memory region is constrained by the
> > > > > > > > >> net device, but the rxe driver copies data using the CPU.
> > > > > > > > >> To avoid the constraints, I decided to use the ib device.
> > > > > > > > > Well, that is the whole problem.
> > > > > > > > >
> > > > > > > > > We can't mix the dmabuf stuff people are doing that doesn't fill in
> > > > > > > > > the CPU pages in the SGL with RXE - it is simply impossible as things
> > > > > > > > > currently are for RXE to access this non-struct page memory.
> > > > > > > >
> > > > > > > > Yeah, agree that doesn't make much sense.
> > > > > > > >
> > > > > > > > When you want to access the data with the CPU, then why do you want to
> > > > > > > > use DMA-buf in the first place?
> > > > > > > >
> > > > > > > > Please keep in mind that there is work ongoing to replace the sg table
> > > > > > > > with a DMA address array and so make the underlying struct page
> > > > > > > > inaccessible for importers.
> > > > > > >
> > > > > > > Also if you do have a dma-buf, you can just dma_buf_vmap() the buffer
> > > > > > > for cpu access, which intentionally does not require any device. No
> > > > > > > idea why there's a dma_buf_attach involved. Now not all exporters
> > > > > > > support this, but that's fixable, and you must call
> > > > > > > dma_buf_begin/end_cpu_access for cache management if the allocation
> > > > > > > isn't cpu coherent. But it's all there, no need to apply hacks of
> > > > > > > allowing a wrong device or other fun things.
> > > > > >
> > > > > > Can rxe leave the vmap in place potentially forever?
> > > > >
> > > > > Yeah, it's like perma-pinning the buffer into system memory for
> > > > > non-p2p dma-buf sharing. We just squint and pretend that can't be
> > > > > abused too badly :-) On 32bit you'll run out of vmap space rather
> > > > > quickly, but that's not something anyone cares about here either. We
> > > > > have a bunch more sw modesetting drivers in drm which use
> > > > > dma_buf_vmap() like this, so it's all fine.
> > > > > -Daniel
> > > > > --
> > > > > Daniel Vetter
> > > > > Software Engineer, Intel Corporation
> > > > > http://blog.ffwll.ch
> > > >
> > > > Thanks for your comments.
> > > >
> > > > In the first place, the CMA region cannot be used for RDMA because the
> > > > region has no struct page. In addition, some GPU drivers use CMA and share
> > > > the region as dma-buf. As a result, RDMA cannot transfer to or from that
> > > > region. To solve this problem, I thought rxe dma-buf support was the better
> > > > approach.
> > > >
> > > > I'll reconsider and redesign the rxe dma-buf support using dma_buf_vmap()
> > > > instead of dma_buf_dynamic_attach().
> > >
> > > btw for the next version please cc dri-devel. get_maintainers.pl should
> > > pick it up for these patches.
> >
> > A CC list for these patches was generated by get_maintainers.pl, but it
> > didn't pick up dri-devel. Should I add dri-devel to the cc manually?
>
> Hm yes, on rechecking, the regex doesn't match since you're not
> touching any dma-buf code directly. Or not directly enough for
> get_maintainers.pl to pick it up.
>
> DMA BUFFER SHARING FRAMEWORK
> M: Sumit Semwal
> M: Christian König
> L: linux-media@vger.kernel.org
> L: dri-devel@lists.freedesktop.org
> L: linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
> S: Maintained
> T: git git://anongit.freedesktop.org/drm/drm-misc
> F: Documentation/driver-api/dma-buf.rst
> F: drivers/dma-buf/
> F: include/linux/*fence.h
> F: include/linux/dma-buf*
> F: include/linux/dma-resv.h
> K: \bdma_(?:buf|fence|resv)\b
>
> Above is the MAINTAINERS entry that's always good to cc for anything
> related to dma_buf/fence/resv and any of these related things.
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

Yes, the dma-buf code was not directly touched by my changes; however, the changes are related to dma-buf.
So I'll add the dma-buf related mailing lists and maintainers to cc using
`./scripts/get_maintainer.pl -f drivers/infiniband/core/umem_dmabuf.c`.
I think it is enough to list the email addresses. Thank you for letting me know.

Regards,
Shunsuke
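[Editor's note: the dma_buf_vmap() pattern Daniel describes above can be sketched roughly as follows. This is a hypothetical in-kernel fragment, not a drop-in patch for rxe: the function name `example_copy_from_dmabuf` is invented for illustration, and the `struct dma_buf_map` type matches the kernel headers of the v5.14 era in which this thread took place (later kernels renamed it to `struct iosys_map`).]

```
/* Sketch: CPU access to an imported dma-buf without needing a DMA device.
 * Map the buffer once with dma_buf_vmap() and bracket CPU access with
 * dma_buf_begin/end_cpu_access() in case the allocation is not CPU coherent.
 */
#include <linux/dma-buf.h>
#include <linux/dma-buf-map.h>
#include <linux/string.h>

static int example_copy_from_dmabuf(struct dma_buf *dmabuf,
				    void *dst, size_t off, size_t len)
{
	struct dma_buf_map map;
	int ret;

	ret = dma_buf_vmap(dmabuf, &map);	/* exporter must support vmap */
	if (ret)
		return ret;

	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (ret)
		goto out_vunmap;

	/* is_iomem distinguishes I/O memory from normal vmalloc space */
	if (map.is_iomem)
		memcpy_fromio(dst, map.vaddr_iomem + off, len);
	else
		memcpy(dst, map.vaddr + off, len);

	ret = dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);

out_vunmap:
	dma_buf_vunmap(dmabuf, &map);
	return ret;
}
```

As Daniel notes, keeping such a mapping around indefinitely effectively perma-pins the buffer in system memory, which is accepted for non-p2p dma-buf sharing.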