From: Rob Herring
Date: Thu, 14 May 2020 09:52:35 -0500
Subject: Re: [RFC][PATCH 3/4] dma-buf: cma_heap: Extend logic to export CMA regions tagged with "linux,cma-heap"
To: Brian Starkey
Cc: John Stultz, Robin Murphy, lkml, Sumit Semwal, "Andrew F. Davis",
 Benjamin Gaignard, Liam Mark, Pratik Patel, Laura Abbott, Chenbo Feng,
 Alistair Strachan, Sandeep Patil, Hridya Valsaraju, Christoph Hellwig,
 Marek Szyprowski, Andrew Morton,
 "open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS",
 dri-devel, linux-mm, nd
In-Reply-To: <20200513104401.lhh2m5avlasev6mj@DESKTOP-E1NTVVP.localdomain>
References: <20200501073949.120396-1-john.stultz@linaro.org>
 <20200501073949.120396-4-john.stultz@linaro.org>
 <20200501102143.xcckvsfecumbei3c@DESKTOP-E1NTVVP.localdomain>
 <47e7eded-7240-887a-39e1-97c55bf752e7@arm.com>
 <20200504090628.d2q32dwyg6em5pp7@DESKTOP-E1NTVVP.localdomain>
 <20200512163714.GA22577@bogus>
 <20200513104401.lhh2m5avlasev6mj@DESKTOP-E1NTVVP.localdomain>

On Wed, May 13, 2020 at 5:44 AM Brian Starkey wrote:
>
> Hi Rob,
>
> On Tue, May 12, 2020 at 11:37:14AM -0500, Rob Herring wrote:
> > On Mon, May 04, 2020 at 10:06:28AM +0100, Brian Starkey wrote:
> > > On Fri, May 01, 2020 at 12:01:40PM -0700, John Stultz wrote:
> > > > On Fri, May 1, 2020 at 4:08 AM Robin Murphy wrote:
> > > > >
> > > > > On 2020-05-01 11:21 am, Brian Starkey wrote:
> > > > > > Hi John,
> > > > > >
> > > > > > On Fri, May 01, 2020 at 07:39:48AM +0000, John Stultz wrote:
> > > > > >> This patch reworks the cma_heap initialization so that
> > > > > >> we expose both the default CMA region and any CMA regions
> > > > > >> tagged with "linux,cma-heap" in the device-tree.
> > > > > >>
> > > > > >> Cc: Rob Herring
> > > > > >> Cc: Sumit Semwal
> > > > > >> Cc: "Andrew F. Davis"
> > > > > >> Cc: Benjamin Gaignard
> > > > > >> Cc: Liam Mark
> > > > > >> Cc: Pratik Patel
> > > > > >> Cc: Laura Abbott
> > > > > >> Cc: Brian Starkey
> > > > > >> Cc: Chenbo Feng
> > > > > >> Cc: Alistair Strachan
> > > > > >> Cc: Sandeep Patil
> > > > > >> Cc: Hridya Valsaraju
> > > > > >> Cc: Christoph Hellwig
> > > > > >> Cc: Marek Szyprowski
> > > > > >> Cc: Robin Murphy
> > > > > >> Cc: Andrew Morton
> > > > > >> Cc: devicetree@vger.kernel.org
> > > > > >> Cc: dri-devel@lists.freedesktop.org
> > > > > >> Cc: linux-mm@kvack.org
> > > > > >> Signed-off-by: John Stultz
> > > > > >> ---
> > > > > >>  drivers/dma-buf/heaps/cma_heap.c | 18 +++++++++---------
> > > > > >>  1 file changed, 9 insertions(+), 9 deletions(-)
> > > > > >>
> > > > > >> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> > > > > >> index 626cf7fd033a..dd154e2db101 100644
> > > > > >> --- a/drivers/dma-buf/heaps/cma_heap.c
> > > > > >> +++ b/drivers/dma-buf/heaps/cma_heap.c
> > > > > >> @@ -141,6 +141,11 @@ static int __add_cma_heap(struct cma *cma, void *data)
> > > > > >>  {
> > > > > >>  	struct cma_heap *cma_heap;
> > > > > >>  	struct dma_heap_export_info exp_info;
> > > > > >> +	struct cma *default_cma = dev_get_cma_area(NULL);
> > > > > >> +
> > > > > >> +	/* We only add the default heap and explicitly tagged heaps */
> > > > > >> +	if (cma != default_cma && !cma_dma_heap_enabled(cma))
> > > > > >> +		return 0;
> > > > > >
> > > > > > Thinking about the pl111 thread[1], I'm wondering if we should also
> > > > > > let drivers call this directly to expose their CMA pools, even if they
> > > > > > aren't tagged for dma-heaps in DT. But perhaps that's too close to
> > > > > > policy.
> > > > > That sounds much like what my first thoughts were - apologies if I'm
> > > > > wildly off-base here, but as far as I understand:
> > > > >
> > > > > - Device drivers know whether they have their own "memory-region" or not.
> > > > > - Device drivers already have to do *something* to participate in dma-buf.
> > > > > - Device drivers know best how they make use of both the above.
> > > > > - Therefore couldn't it be left to drivers to choose whether to register
> > > > > their CMA regions as heaps, without having to mess with DT at all?
> >
> > +1, but I'm biased toward any solution not using DT. :)
> >
> > > > I guess I'm not opposed to this. But I guess I'd like to see some more
> > > > details? You're thinking the pl111 driver would add the
> > > > "memory-region" node itself?
> > > >
> > > > Assuming that's the case, my only worry is what if that memory-region
> > > > node isn't a CMA area, but instead something like a carveout? Does the
> > > > driver need to parse enough of the dt to figure out where to register
> > > > the region as a heap?
> > >
> > > My thinking was more like there would already be a reserved-memory
> > > node in DT for the chunk of memory, appropriately tagged so that it
> > > gets added as a CMA region.
> > >
> > > The device's node would have "memory-region=<&blah>;" and would use
> > > of_reserved_mem_device_init() to link up dev->cma_area to the
> > > corresponding cma region.
> > >
> > > So far, that's all in-place already. The bit that's missing is
> > > exposing that dev->cma_area to userspace as a dma_heap - so we could
> > > just have "int cma_heap_add(struct cma *cma)" or "int
> > > cma_heap_dev_add(struct device *dev)" or something exported for
> > > drivers to expose their device-assigned cma region if they wanted to.
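
For concreteness, a rough sketch of what such exported helpers could look
like, assuming they live in drivers/dma-buf/heaps/cma_heap.c next to the
__add_cma_heap() shown in the quoted patch (so the file's existing includes,
struct cma_heap and cma_heap_ops are in scope). The names cma_heap_add() and
cma_heap_dev_add() are only the ones floated above, not an existing kernel
API, and the bodies are illustrative:

/* Hypothetical export: let a driver expose a CMA region as a dma-buf heap. */
int cma_heap_add(struct cma *cma)
{
	struct cma_heap *cma_heap;
	struct dma_heap_export_info exp_info;

	cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
	if (!cma_heap)
		return -ENOMEM;
	cma_heap->cma = cma;

	exp_info.name = cma_get_name(cma);	/* heap name seen by userspace */
	exp_info.ops = &cma_heap_ops;		/* existing CMA heap ops in this file */
	exp_info.priv = cma_heap;

	cma_heap->heap = dma_heap_add(&exp_info);
	if (IS_ERR(cma_heap->heap)) {
		int ret = PTR_ERR(cma_heap->heap);

		kfree(cma_heap);
		return ret;
	}

	return 0;
}
EXPORT_SYMBOL_GPL(cma_heap_add);

/* Hypothetical convenience wrapper for the device-assigned CMA area. */
int cma_heap_dev_add(struct device *dev)
{
	struct cma *cma = dev_get_cma_area(dev);

	return cma ? cma_heap_add(cma) : -ENODEV;
}
EXPORT_SYMBOL_GPL(cma_heap_dev_add);

How the patch's default/tagged filter would interact with such an export (or
whether it applies at all to driver-registered regions) would still need
deciding.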
> > >
> > > I don't think this runs into the lifetime problems of generalised
> > > heaps-as-modules either, because the CMA region will never go away
> > > even if the driver does.
> > >
> > > Alongside that, I do think the completely DT-driven approach can be
> > > useful too - because there may be regions which aren't associated with
> > > any specific device driver, that we want exported as heaps.
> >
> > And they are associated with the hardware description rather than the
> > userspace environment?
>
> I'm not sure how to answer that. We already have CMA regions being
> created from device-tree, so we're only talking about explicitly
> exposing those to userspace.

It's easier to argue that how much CMA memory a system/device needs is
h/w description as that's more a function of devices and frame sizes.
But exposing to userspace or not is an OS architecture decision. It's
not really any different than a kernel vs. userspace driver question.
What's exposed by UIO or spi-dev is purely a kernel thing.

> Are you thinking that userspace should be deciding whether they get
> exposed or not? I don't know how userspace would discover them in
> order to make that decision. Or perhaps the kernel should be deciding.

Expose to userspace what the kernel doesn't need or drivers decide?

It's hard to argue against 1 new property. It's death by a 1000 cuts
though.

Rob
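
As a closing illustration of the driver-side flow Brian describes above (a
shared-dma-pool reserved-memory node referenced by the device's
"memory-region" property, hooked up via of_reserved_mem_device_init(), then
an explicit opt-in): foo_probe() is a made-up driver probe and
cma_heap_dev_add() is the hypothetical export sketched earlier, while
of_reserved_mem_device_init()/of_reserved_mem_device_release() are existing
interfaces.

#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>

int cma_heap_dev_add(struct device *dev);	/* hypothetical, see sketch above */

static int foo_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	/* Wire up dev->cma_area from the node's "memory-region" phandle. */
	ret = of_reserved_mem_device_init(dev);
	if (ret)
		return ret;

	/* Driver opts in: expose its CMA pool to userspace as a dma-heap. */
	ret = cma_heap_dev_add(dev);
	if (ret) {
		of_reserved_mem_device_release(dev);
		return ret;
	}

	/* ... the rest of the device setup would continue here ... */
	return 0;
}

No new DT property is needed in this variant; the decision to expose the
region lives entirely in the driver, which is the trade-off being debated
above.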