From: Suman Anna
To: Bjorn Andersson
CC: Mathieu Poirier, Arnaud Pouliquen, Loic Pallardy, Tero Kristo, Suman Anna
Subject: [PATCH v3 1/2] remoteproc: Fall back to using parent memory pool if no dedicated available
Date: Mon, 20 Apr 2020 11:05:59 -0500
Message-ID: <20200420160600.10467-2-s-anna@ti.com>
In-Reply-To: <20200420160600.10467-1-s-anna@ti.com>
References: <20200420160600.10467-1-s-anna@ti.com>
X-Mailer: git-send-email 2.26.0
X-Mailing-List: linux-kernel@vger.kernel.org

From: Tero Kristo

In some cases, such as with OMAP remoteproc, we do not create a dedicated
memory pool for the virtio device; instead, the same memory pool is used
for all shared memories. The current virtio memory pool handling forces a
split between these two, as a separate device is created for the virtio
device, causing memory to be allocated from the wrong location if the
dedicated pool is not available. Fix this by falling back to the parent
device's memory pool if no dedicated pool is available.
Fixes: 086d08725d34 ("remoteproc: create vdev subdevice with specific dma memory pool")
Signed-off-by: Tero Kristo
Signed-off-by: Suman Anna
---
v3:
 - Go back to v1 logic (removed the vdevbuf_mem_id variable added in v2)
 - Revised the comment to remove references to vdevbuf_mem_id
 - Capitalize the patch header
v2: https://patchwork.kernel.org/patch/11447651/

 drivers/remoteproc/remoteproc_virtio.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c
index e61d738d9b47..44187fe43677 100644
--- a/drivers/remoteproc/remoteproc_virtio.c
+++ b/drivers/remoteproc/remoteproc_virtio.c
@@ -376,6 +376,18 @@ int rproc_add_virtio_dev(struct rproc_vdev *rvdev, int id)
 				goto out;
 			}
 		}
+	} else {
+		struct device_node *np = rproc->dev.parent->of_node;
+
+		/*
+		 * If we don't have dedicated buffer, just attempt to re-assign
+		 * the reserved memory from our parent. A default memory-region
+		 * at index 0 from the parent's memory-regions is assigned for
+		 * the rvdev dev to allocate from. Failure is non-critical and
+		 * the allocations will fall back to global pools, so don't
+		 * check return value either.
+		 */
+		of_reserved_mem_device_init_by_idx(dev, np, 0);
 	}
 
 	/* Allocate virtio device */
-- 
2.26.0