From: Suman Anna
To: Bjorn Andersson, Loic Pallardy
CC: Mathieu Poirier, Arnaud Pouliquen, Tero Kristo, Suman Anna
Subject: [PATCH 1/2] remoteproc: fall back to using parent memory pool if no dedicated available
Date: Thu, 5 Mar 2020 16:41:07 -0600
Message-ID: <20200305224108.21351-2-s-anna@ti.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20200305224108.21351-1-s-anna@ti.com>
References: <20200305224108.21351-1-s-anna@ti.com>

From: Tero Kristo

In some cases, like with OMAP remoteproc, we are not creating a dedicated
memory pool for the virtio device. Instead, we use the same memory pool
for all shared memories.
The current virtio memory pool handling forces a split between these
two, as a separate device is created for it, causing memory to be
allocated from a bad location if the dedicated pool is not available.
Fix this by falling back to using the parent device memory pool if a
dedicated one is not available.

Fixes: 086d08725d34 ("remoteproc: create vdev subdevice with specific dma memory pool")
Signed-off-by: Tero Kristo
Signed-off-by: Suman Anna
---
 drivers/remoteproc/remoteproc_virtio.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c
index 8c07cb2ca8ba..4723ebe574b8 100644
--- a/drivers/remoteproc/remoteproc_virtio.c
+++ b/drivers/remoteproc/remoteproc_virtio.c
@@ -368,6 +368,16 @@ int rproc_add_virtio_dev(struct rproc_vdev *rvdev, int id)
 				goto out;
 			}
 		}
+	} else {
+		struct device_node *np = rproc->dev.parent->of_node;
+
+		/*
+		 * If we don't have dedicated buffer, just attempt to
+		 * re-assign the reserved memory from our parent.
+		 * Failure is non-critical so don't check return value
+		 * either.
+		 */
+		of_reserved_mem_device_init_by_idx(dev, np, 0);
 	}
 
 	/* Allocate virtio device */
--
2.23.0
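
Note for reviewers: below is a minimal, illustrative sketch of the pool
selection that rproc_add_virtio_dev() ends up with once this patch is
applied. It is not the upstream function: the existing dedicated-carveout
handling is collapsed into a comment, and sketch_assign_vdev_pool() is a
made-up name. Only rproc_find_carveout_by_name() and
of_reserved_mem_device_init_by_idx() are the real APIs involved.

/*
 * Illustrative sketch, not the upstream code: how the vdev device picks
 * its DMA pool after this patch. The dedicated-carveout path is elided;
 * only the new fallback branch is spelled out.
 */
#include <linux/of_reserved_mem.h>
#include <linux/remoteproc.h>

#include "remoteproc_internal.h"

static void sketch_assign_vdev_pool(struct rproc *rproc,
				    struct rproc_vdev *rvdev)
{
	struct device *dev = &rvdev->dev;
	struct rproc_mem_entry *mem;

	/* Dedicated per-vdev carveout, if the platform driver registered one */
	mem = rproc_find_carveout_by_name(rproc, "vdev%dbuffer", rvdev->index);
	if (mem) {
		/*
		 * Existing behaviour, unchanged by this patch: associate the
		 * dedicated carveout with the vdev device (via its
		 * reserved-memory index or dma_declare_coherent_memory()).
		 */
		return;
	}

	/*
	 * New fallback (the OMAP case): inherit reserved-memory region 0
	 * from the parent remoteproc node. Failure is non-critical, so
	 * allocations simply fall back to the default DMA pools.
	 */
	of_reserved_mem_device_init_by_idx(dev, rproc->dev.parent->of_node, 0);
}

With this in place, the expectation is that a platform such as OMAP
remoteproc points the first entry of its memory-region property at the
shared pool, so vrings and virtio buffers are carved from the same region
as the other shared memories rather than from a default DMA location.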