Subject: Re: [PATCH v2 1/2] remoteproc: fall back to using parent memory pool if no dedicated available
To: Mathieu Poirier
CC: Bjorn Andersson, Loic Pallardy, Arnaud Pouliquen, Tero Kristo
References: <20200319162321.20632-1-s-anna@ti.com> <20200319162321.20632-2-s-anna@ti.com> <20200325203812.GA9384@xps15>
From: Suman Anna
Message-ID: <207036a8-b34e-6311-5ad6-3289eb9f7a06@ti.com>
Date: Wed, 25 Mar 2020 18:39:01 -0500
In-Reply-To: <20200325203812.GA9384@xps15>

Hi Mathieu,

On 3/25/20 3:38 PM, Mathieu Poirier wrote:
> On Thu, Mar 19, 2020 at 11:23:20AM -0500, Suman Anna wrote:
>> From: Tero Kristo
>>
>> In some cases, like with OMAP remoteproc, we are not creating a
>> dedicated memory pool for the virtio device. Instead, we use the same
>> memory pool for all shared memories.
>> The current virtio memory pool handling forces a split between these
>> two, as a separate device is created for it, causing memory to be
>> allocated from a bad location if the dedicated pool is not available.
>> Fix this by falling back to using the parent device memory pool if a
>> dedicated one is not available.
>>
>> Fixes: 086d08725d34 ("remoteproc: create vdev subdevice with specific dma memory pool")
>> Signed-off-by: Tero Kristo
>> Signed-off-by: Suman Anna
>> ---
>> v2:
>>  - Address Arnaud's concerns about hard-coded memory-region index 0
>>  - Update the comment around the new code addition
>> v1: https://patchwork.kernel.org/patch/11422721/
>>
>>  drivers/remoteproc/remoteproc_virtio.c | 15 +++++++++++++++
>>  include/linux/remoteproc.h             |  2 ++
>>  2 files changed, 17 insertions(+)
>>
>> diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c
>> index eb817132bc5f..b687715cdf4b 100644
>> --- a/drivers/remoteproc/remoteproc_virtio.c
>> +++ b/drivers/remoteproc/remoteproc_virtio.c
>> @@ -369,6 +369,21 @@ int rproc_add_virtio_dev(struct rproc_vdev *rvdev, int id)
>>  			goto out;
>>  		}
>>  	}
>> +	} else {
>> +		struct device_node *np = rproc->dev.parent->of_node;
>> +
>> +		/*
>> +		 * If we don't have dedicated buffer, just attempt to re-assign
>> +		 * the reserved memory from our parent. A default memory-region
>> +		 * at index 0 from the parent's memory-regions is assigned for
>> +		 * the rvdev dev to allocate from, and this can be customized
>> +		 * by updating the vdevbuf_mem_id in platform drivers if
>> +		 * desired. Failure is non-critical and the allocations will
>> +		 * fall back to global pools, so don't check return value
>> +		 * either.
>
> I'm perplexed... In the changelog it is indicated that if a memory pool
> is not dedicated, allocation happens from a bad location, but here
> failure to get hold of a dedicated memory pool is not critical.

So, the comment here is a generic one, while the bad location part in
the commit description is actually from the OMAP remoteproc usage
perspective (if you remember the dev_warn messages we added to the
memory-region parse logic in the driver). Before the fixed-memory
carveout support, all the DMA allocations in the remoteproc core were
made from the rproc platform device's DMA pool (which can be NULL).
That is lost after the fixed-memory support, and they were always
allocated from global DMA pools if no dedicated pools were used. After
this patch, that continues to be the case for drivers that still do not
use any dedicated pools, while it does restore the usage of the platform
device's DMA pool if a driver uses one (OMAP remoteproc falls into the
latter).

>
>> +		 */
>> +		of_reserved_mem_device_init_by_idx(dev, np,
>> +						   rproc->vdevbuf_mem_id);
>
> I wonder if using an index set up by platform code is really the best
> way forward when we already have the carveout mechanism available to
> us. I see the platform code adding a carveout that would have the same
> name as rproc->name. From there in rproc_add_virtio_dev() we could have
> something like:
>
> mem = rproc_find_carveout_by_name(rproc, "%s", rproc->name);
>
> That would be very flexible, the location of the reserved memory within
> the memory-region could change without fear of breaking things, and no
> need to add to struct rproc.
>
> Let me know what you think.

I think that can work as well, but I feel it is a lot more cumbersome.
It does require every platform driver to add code adding/registering
that carveout and parse the reserved memory region etc., something along
the lines of the sketch below.
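A rough sketch of what I think each driver would end up carrying, if I
understand your suggestion right (error handling trimmed, the function
name and the alloc/release callbacks are made up for illustration, and
the device address translation is platform specific):

/* hypothetical per-driver helper, called from probe/parse_fw */
static int my_rproc_register_vdev_carveout(struct platform_device *pdev,
					   struct rproc *rproc)
{
	struct device_node *np = pdev->dev.of_node;
	struct device_node *mem_np;
	struct rproc_mem_entry *mem;
	struct resource res;
	int ret;

	/* parse the reserved-memory region meant for the vdev buffers */
	mem_np = of_parse_phandle(np, "memory-region", 0);
	if (!mem_np)
		return -EINVAL;

	ret = of_address_to_resource(mem_np, 0, &res);
	of_node_put(mem_np);
	if (ret)
		return ret;

	/* register it as a carveout named after the rproc, per your proposal */
	mem = rproc_mem_entry_init(&pdev->dev, NULL, (dma_addr_t)res.start,
				   resource_size(&res), res.start,
				   my_rproc_mem_alloc, my_rproc_mem_release,
				   "%s", rproc->name);
	if (!mem)
		return -ENOMEM;

	rproc_add_carveout(rproc, mem);

	return 0;
}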
At the end of the day, we rely on the DMA API and we just have to assign
the region to the newly created device. The DMA pool assignment for
devices using reserved-memory nodes has simply been the
of_reserved_mem_device_init() function.

regards
Suman

>
> Thanks,
> Mathieu
>
>>  	}
>>
>>  	/* Allocate virtio device */
>> diff --git a/include/linux/remoteproc.h b/include/linux/remoteproc.h
>> index ed127b2d35ca..07bd73a6d72a 100644
>> --- a/include/linux/remoteproc.h
>> +++ b/include/linux/remoteproc.h
>> @@ -481,6 +481,7 @@ struct rproc_dump_segment {
>>   * @auto_boot: flag to indicate if remote processor should be auto-started
>>   * @dump_segments: list of segments in the firmware
>>   * @nb_vdev: number of vdev currently handled by rproc
>> + * @vdevbuf_mem_id: default memory-region index for allocating vdev buffers
>>   */
>>  struct rproc {
>>  	struct list_head node;
>> @@ -514,6 +515,7 @@ struct rproc {
>>  	bool auto_boot;
>>  	struct list_head dump_segments;
>>  	int nb_vdev;
>> +	u8 vdevbuf_mem_id;
>>  	u8 elf_class;
>>  };
>>
>> --
>> 2.23.0
>>
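FWIW, the per-driver opt-in with the above field stays minimal; a driver
that wants the vdev buffers carved out of a region other than index 0
would only need something like this in its probe (illustrative only, the
actual index depends on how that driver orders its memory-region
entries):

	/* allocate vdev buffers from the second memory-region entry */
	rproc->vdevbuf_mem_id = 1;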