From: John Stultz
Date: Thu, 10 Dec 2020 00:15:15 -0800
Subject: Re: [PATCH 4/4] dma-heap: Devicetree binding for chunk heap
To: Minchan Kim
Cc: Hyesoo Yu, Andrew Morton, LKML, linux-mm, Matthew Wilcox, david@redhat.com, iamjoonsoo.kim@lge.com, vbabka@suse.cz, Suren Baghdasaryan, KyongHo Cho, John Dias, Hridya Valsaraju, Sumit Semwal, Brian Starkey, linux-media, open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS, Rob Herring, Christian Koenig, moderated list:DMA BUFFER SHARING FRAMEWORK, Kunihiko Hayashi
References: <20201117181935.3613581-1-minchan@kernel.org> <20201117181935.3613581-5-minchan@kernel.org> <20201119011431.GA136599@KEI>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Dec 9, 2020 at 3:53 PM Minchan Kim wrote:
> On Wed, Nov 18, 2020 at 07:19:07PM -0800, John Stultz wrote:
> > The CMA heap currently only registers the default CMA heap, as we
> > didn't want to expose all CMA regions and there's otherwise no way to
> > pick and choose.
>
> Yub.
>
> dma-buf really need a way to make exclusive CMA area. Otherwise, default
> CMA would be shared among drivers and introduce fragmentation easily
> since we couldn't control other drivers.
> In such aspect, I don't think current cma-heap works if userspace
> needs big memory chunk.

Yes, the default CMA region is not always optimal. That's why I was
hopeful for Kunihiko Hayashi's patch to allow for exposing specific cma
regions:
https://lore.kernel.org/lkml/1594948208-4739-1-git-send-email-hayashi.kunihiko@socionext.com/

I think it would be a good solution, but all we need is *some* driver
which can be considered the primary user/owner of the cma region, which
would then explicitly export it via the dmabuf heaps.

> Here, the problem is there is no in-kernel user to bind the specific
> CMA area because the owner will be random in userspace via dma-buf
> interface.

Well, while I agree that conceptually the dmabuf heaps allow for
allocations for multi-device pipelines, and thus are not tied to
specific devices, I do think the memory types exposed are likely to
have specific devices/drivers in the pipeline that they matter most to.
So I don't see a big issue with the in-kernel driver registering a
specific CMA region as a dmabuf heap.

> > > Is there a reason to use dma-heap framework to add cma-area for
> > > specific device ?
> > >
> > > Even if some in-tree users register dma-heap with cma-area, the
> > > buffers could be allocated in user-land and these could be shared
> > > among other devices.
> > > For exclusive access, I guess, the device don't need to register
> > > dma-heap for cma area.
> >
> > It's not really about exclusive access. More just that if you want to
> > bind a memory reservation/region (cma or otherwise), at least for DTS,
> > it needs to bind with some device in DT.
> >
> > Then the device driver can register that region with a heap driver.
> > This avoids adding new Linux-specific software bindings to DT. It
> > becomes a driver implementation detail instead. The primary user of
> > the heap type would probably be a practical pick (ie the display or
> > isp driver).
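For concreteness, the binding style being discussed — a plain
reserved-memory region claimed by an existing device node — might look
something like the sketch below. Only `shared-dma-pool`, `reusable`,
and `memory-region` are the standard reserved-memory binding; the node
names, addresses, and sizes are made up for illustration:

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* A dedicated CMA pool, not the default region. */
	display_cma: display-region@80000000 {
		compatible = "shared-dma-pool";
		reusable;
		reg = <0x0 0x80000000 0x0 0x10000000>;	/* 256MiB */
	};
};

/* Hypothetical display controller that "owns" the region. */
display@fd000000 {
	/* ...normal display controller properties... */
	memory-region = <&display_cma>;
};
```

Nothing here is Linux-specific; which heap (if any) the region gets
exported to stays a driver implementation detail.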
> > If it's the only solution, we could create some dummy driver which
> > has only module_init and bind it from there but I don't think it's a
> > good idea.

Yea, an un-upstreamable dummy driver is maybe what it devolves to in
the worst case. But I suspect it would be cleaner for a display or ISP
driver that benefits most from the heap type to add the reserved memory
reference to their DT node, and on init for them to register the region
with the dmabuf heap code.

> > The other potential solution Rob has suggested is that we create some
> > tag for the memory reservation (ie: like we do with cma: "reusable"),
> > which can be used to register the region to a heap. But this has the
> > problem that each tag has to be well defined and map to a known heap.
>
> Do you think that's the only solution to make progress for this
> feature? Then, could you elaborate it a bit more or any other ideas
> from dma-buf folks?

I'm skeptical of that DT tag approach working out, as we'd need a new
DT binding for each new tag name, and we'd have to do so for each new
heap type that needs this (so non-default cma, your chunk heap,
whatever other similar heap types that use reserved regions folks come
up with). Having *some* driver take ownership of the reserved region
and register it with the appropriate heap type seems much cleaner and
more flexible, and avoids mucking with the DT ABI.

thanks
-john
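The driver-side registration described above might look roughly like
the following. This is an untested sketch against the v5.10-era
in-kernel APIs (of_reserved_mem_device_init(), dev_get_cma_area(),
dma_heap_add()); the function name is hypothetical, and the heap ops
are elided — they would have the same shape as the existing CMA heap's
cma_heap_ops:

```c
#include <linux/dma-contiguous.h>
#include <linux/dma-heap.h>
#include <linux/of_reserved_mem.h>

/* Hypothetical ops table; same shape as the in-tree CMA heap's. */
static const struct dma_heap_ops display_cma_heap_ops;

static int display_export_cma_heap(struct device *dev)
{
	struct dma_heap_export_info exp_info = { 0 };
	struct cma *cma;
	int ret;

	/* Bind the memory-region = <...> phandle from our DT node. */
	ret = of_reserved_mem_device_init(dev);
	if (ret)
		return ret;

	/* The device's CMA area is now the dedicated pool, not the default. */
	cma = dev_get_cma_area(dev);
	if (!cma)
		return -ENODEV;

	exp_info.name = cma_get_name(cma);
	exp_info.ops = &display_cma_heap_ops;
	exp_info.priv = cma;

	/* Expose the region to userspace as /dev/dma_heap/<name>. */
	return PTR_ERR_OR_ZERO(dma_heap_add(&exp_info));
}
```

Called from the owning driver's probe, this keeps the DT binding
generic while still giving the region a well-defined in-kernel owner.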