Date: Wed, 8 Sep 2021 20:33:54 -0300
From: Jason Gunthorpe
To: Daniel Vetter
Cc: Christian König, Shunsuke Mie, Christoph Hellwig, Zhu Yanjun,
    Alex Deucher, Doug Ledford, Jianxin Xiong, Leon Romanovsky,
    Linux Kernel Mailing List, linux-rdma, Damian Hobson-Garcia,
    Takanari Hayama, Tomohito Esaki
Subject: Re: [RFC PATCH 1/3] RDMA/umem: Change for rdma devices has not dma device
Message-ID: <20210908233354.GB3544071@ziepe.ca>
References: <20210908061611.69823-1-mie@igel.co.jp>
 <20210908061611.69823-2-mie@igel.co.jp>
 <20210908111804.GX1200268@ziepe.ca>
 <1c0356f5-19cf-e883-3d96-82a87d0cffcb@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Sep 08, 2021 at 09:22:37PM +0200, Daniel Vetter wrote:
> On Wed, Sep 8, 2021 at 3:33 PM Christian König wrote:
> >
> > On 08.09.21 at 13:18, Jason Gunthorpe wrote:
> > > On Wed, Sep 08, 2021 at 05:41:39PM +0900, Shunsuke Mie wrote:
> > >> On Wed, Sep 8, 2021 at 16:20, Christoph Hellwig wrote:
> > >>> On Wed, Sep 08, 2021 at 04:01:14PM +0900, Shunsuke Mie wrote:
> > >>>> Thank you for your comment.
> > >>>>> On Wed, Sep 08, 2021 at 03:16:09PM +0900, Shunsuke Mie wrote:
> > >>>>>> To share memory space using dma-buf, an API of the dma-buf requires a dma
> > >>>>>> device, but devices such as rxe do not have a dma device. For those cases,
> > >>>>>> change to specify a device of struct ib instead of the dma device.
> > >>>>> So if dma-buf doesn't actually need a device to dma map, why do we ever
> > >>>>> pass the dma_device here? Something does not add up.
> > >>>> As described in the dma-buf api guide [1], the dma_device is used by the dma-buf
> > >>>> exporter to know the device buffer constraints of the importer.
> > >>>> [1] https://lwn.net/Articles/489703/
> > >>> Which means for rxe you'd also have to pass the one for the underlying
> > >>> net device.
> > >> I thought of that way too. In that case, the memory region is constrained by the
> > >> net device, but the rxe driver copies data using the CPU. To avoid the constraints, I
> > >> decided to use the ib device.
> > >
> > > Well, that is the whole problem.
> > >
> > > We can't mix the dmabuf stuff people are doing that doesn't fill in
> > > the CPU pages in the SGL with RXE - it is simply impossible as things
> > > currently are for RXE to access this non-struct page memory.
> >
> > Yeah, agree that doesn't make much sense.
> >
> > When you want to access the data with the CPU, then why do you want to
> > use DMA-buf in the first place?
> >
> > Please keep in mind that there is work ongoing to replace the sg table
> > with a DMA address array and so make the underlying struct page
> > inaccessible for importers.
>
> Also if you do have a dma-buf, you can just dma_buf_vmap() the buffer
> for cpu access. Which intentionally does not require any device. No
> idea why there's a dma_buf_attach involved. Now not all exporters
> support this, but that's fixable, and you must call
> dma_buf_begin/end_cpu_access for cache management if the allocation
> isn't cpu coherent. But it's all there, no need to apply hacks of
> allowing a wrong device or other fun things.

Can rxe leave the vmap in place potentially forever?

Jason
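For context, the attach step under discussion — where the dma_device matters — is the normal importer path. A rough sketch (the dma-buf calls are the real kernel API; the wrapper function and its error handling are illustrative, not code from this patch set):

```c
/*
 * Illustrative importer-side sketch: the exporter sees the struct
 * device passed to dma_buf_attach() and applies that device's DMA
 * constraints when building the sg_table. This is exactly the device
 * that software-only drivers like rxe do not have.
 */
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>

static struct sg_table *import_for_dma(struct dma_buf *dmabuf,
				       struct device *dma_dev)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	/* The exporter learns dma_dev's buffer constraints here. */
	attach = dma_buf_attach(dmabuf, dma_dev);
	if (IS_ERR(attach))
		return ERR_CAST(attach);

	/* Map for DMA; the resulting SGL need not have CPU pages. */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt))
		dma_buf_detach(dmabuf, attach);
	return sgt;
}
```

The thread's problem is that this SGL may describe memory with no struct pages at all, which a CPU-copying driver like rxe cannot consume.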
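The dma_buf_vmap() CPU-access path Daniel describes can be sketched roughly as follows (illustrative wrapper, not from the patch set; struct dma_buf_map was the mapping type in kernels of this era and was later renamed to struct iosys_map):

```c
/*
 * Illustrative sketch: CPU access to a dma-buf via dma_buf_vmap(),
 * which deliberately takes no struct device, bracketed by
 * dma_buf_begin/end_cpu_access() for cache management on
 * non-coherent allocations.
 */
#include <linux/dma-buf.h>
#include <linux/dma-buf-map.h>
#include <linux/dma-direction.h>
#include <linux/string.h>

static int cpu_copy_from_dmabuf(struct dma_buf *dmabuf, void *dst, size_t len)
{
	struct dma_buf_map map;
	int ret;

	ret = dma_buf_vmap(dmabuf, &map);	/* no device argument */
	if (ret)
		return ret;

	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (!ret) {
		memcpy(dst, map.vaddr, len);	/* assumes a non-iomem mapping */
		dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
	}

	dma_buf_vunmap(dmabuf, &map);
	return ret;
}
```

Jason's closing question is about lifetime: whether rxe could hold such a vmap for the whole registration rather than per access, as in this sketch.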