Date: Tue, 8 Dec 2020 17:53:11 -0700
From: Mathieu Poirier
To: Arnaud POULIQUEN
Cc: "ohad@wizery.com", "bjorn.andersson@linaro.org", "robh+dt@kernel.org",
    "linux-remoteproc@vger.kernel.org", "devicetree@vger.kernel.org",
    "linux-kernel@vger.kernel.org"
Subject: Re: [PATCH v3 09/15] remoteproc: Introduce function rproc_detach()
Message-ID: <20201209005311.GB1601690@xps15>
References: <20201126210642.897302-1-mathieu.poirier@linaro.org>
    <20201126210642.897302-10-mathieu.poirier@linaro.org>
    <0e705760-b69a-d872-9770-c03dde85ab1c@st.com>
In-Reply-To: <0e705760-b69a-d872-9770-c03dde85ab1c@st.com>

On Tue, Dec 08, 2020 at 07:35:18PM +0100, Arnaud POULIQUEN wrote:
> Hi Mathieu,
>
>
> On 11/26/20 10:06 PM, Mathieu Poirier wrote:
> > Introduce function rproc_detach() to enable the remoteproc
> > core to release the resources associated with a remote processor
> > without stopping its operation.
> >
> > Signed-off-by: Mathieu Poirier
> > Reviewed-by: Peng Fan
> > ---
> >  drivers/remoteproc/remoteproc_core.c | 65 +++++++++++++++++++++++++++-
> >  include/linux/remoteproc.h           |  1 +
> >  2 files changed, 65 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
> > index 928b3f975798..f5adf05762e9 100644
> > --- a/drivers/remoteproc/remoteproc_core.c
> > +++ b/drivers/remoteproc/remoteproc_core.c
> > @@ -1667,7 +1667,7 @@ static int rproc_stop(struct rproc *rproc, bool crashed)
> >  /*
> >   * __rproc_detach(): Does the opposite of rproc_attach()
> >   */
> > -static int __maybe_unused __rproc_detach(struct rproc *rproc)
> > +static int __rproc_detach(struct rproc *rproc)
> >  {
> >  	struct device *dev = &rproc->dev;
> >  	int ret;
> > @@ -1910,6 +1910,69 @@ void rproc_shutdown(struct rproc *rproc)
> >  }
> >  EXPORT_SYMBOL(rproc_shutdown);
> >
> > +/**
> > + * rproc_detach() - Detach the remote processor from the
> > + * remoteproc core
> > + *
> > + * @rproc: the remote processor
> > + *
> > + * Detach a remote processor (previously attached to with rproc_actuate()).
> > + *
> > + * In case @rproc is still being used by an additional user(s), then
> > + * this function will just decrement the power refcount and exit,
> > + * without disconnecting the device.
> > + *
> > + * Function rproc_detach() calls __rproc_detach() in order to let a remote
> > + * processor know that services provided by the application processor are
> > + * no longer available. From there it should be possible to remove the
> > + * platform driver and even power cycle the application processor (if the HW
> > + * supports it) without needing to switch off the remote processor.
> > + */
> > +int rproc_detach(struct rproc *rproc)
> > +{
> > +	struct device *dev = &rproc->dev;
> > +	int ret;
> > +
> > +	ret = mutex_lock_interruptible(&rproc->lock);
> > +	if (ret) {
> > +		dev_err(dev, "can't lock rproc %s: %d\n", rproc->name, ret);
> > +		return ret;
> > +	}
> > +
> > +	if (rproc->state != RPROC_RUNNING && rproc->state != RPROC_ATTACHED) {
> > +		ret = -EPERM;
> > +		goto out;
> > +	}
> > +
> > +	/* if the remote proc is still needed, bail out */
> > +	if (!atomic_dec_and_test(&rproc->power)) {
> > +		ret = -EBUSY;
> > +		goto out;
> > +	}
> > +
> > +	ret = __rproc_detach(rproc);
> > +	if (ret) {
> > +		atomic_inc(&rproc->power);
> > +		goto out;
> > +	}
> > +
> > +	/* clean up all acquired resources */
> > +	rproc_resource_cleanup(rproc);
>
> I started to test the series and found two problems when testing on an
> STM32MP1 board.
>
> 1) The resource_table pointer is unmapped if the firmware has been booted by
> Linux, generating a crash in rproc_free_vring.
> I attached a fix at the end of this mail.
>

I have reproduced the condition on my side and confirm that your solution is
correct.  See below for a minor comment.

> 2) After the detach, the rproc state is "detached", but it is no longer
> possible to re-attach to it correctly, neither when the firmware is
> standalone nor when it has been booted by Linux.
>

Did you update your FW image?  If so, I need to run the same one.

> I did not investigate, but the issue is probably linked to the resource
> table address which is set to NULL.
>
> So we either have to fix the problem so that re-attaching works, or forbid
> the transition.
>
>
> Regards,
> Arnaud
>
> > +
> > +	rproc_disable_iommu(rproc);
> > +
> > +	/*
> > +	 * Set the remote processor's table pointer to NULL. Since mapping
> > +	 * of the resource table to a virtual address is done in the platform
> > +	 * driver, unmapping should also be done there.
> > +	 */
> > +	rproc->table_ptr = NULL;
> > +out:
> > +	mutex_unlock(&rproc->lock);
> > +	return ret;
> > +}
> > +EXPORT_SYMBOL(rproc_detach);
> > +
> >  /**
> >   * rproc_get_by_phandle() - find a remote processor by phandle
> >   * @phandle: phandle to the rproc
> > diff --git a/include/linux/remoteproc.h b/include/linux/remoteproc.h
> > index da15b77583d3..329c1c071dcf 100644
> > --- a/include/linux/remoteproc.h
> > +++ b/include/linux/remoteproc.h
> > @@ -656,6 +656,7 @@ rproc_of_resm_mem_entry_init(struct device *dev, u32 of_resm_idx, size_t len,
> >
> >  int rproc_boot(struct rproc *rproc);
> >  void rproc_shutdown(struct rproc *rproc);
> > +int rproc_detach(struct rproc *rproc);
> >  int rproc_set_firmware(struct rproc *rproc, const char *fw_name);
> >  void rproc_report_crash(struct rproc *rproc, enum rproc_crash_type type);
> >  int rproc_coredump_add_segment(struct rproc *rproc, dma_addr_t da, size_t size);
> >
>
> From: Arnaud Pouliquen
> Date: Tue, 8 Dec 2020 18:54:51 +0100
> Subject: [PATCH] remoteproc: core: fix detach for unmapped table_ptr
>
> If the firmware has been loaded and started by the kernel, the
> resource table has probably been mapped by the carveout allocation
> (see rproc_elf_find_loaded_rsc_table).
> In this case the memory may have been unmapped before the vrings are
> freed.  The result is a crash in rproc_free_vring while trying to use
> the unmapped pointer.
>
> Signed-off-by: Arnaud Pouliquen
> ---
>  drivers/remoteproc/remoteproc_core.c | 17 ++++++++++++++---
>  1 file changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
> index 2b0a52fb3398..3508ffba4a2a 100644
> --- a/drivers/remoteproc/remoteproc_core.c
> +++ b/drivers/remoteproc/remoteproc_core.c
> @@ -1964,6 +1964,13 @@ int rproc_detach(struct rproc *rproc)
>  		goto out;
>  	}
>
> +	/*
> +	 * Prevent the case where the installed resource table is no longer
> +	 * accessible (e.g. memory unmapped); use the cache if available.
> +	 */
> +	if (rproc->cached_table)
> +		rproc->table_ptr = rproc->cached_table;

I don't think there is an explicit need to check ->cached_table.  If the remote
processor has been started by the remoteproc core it is valid anyway.  And
below, kfree() is called unconditionally.

So that problem is fixed.  Let me know about your FW image and we'll pick it up
from there.

Mathieu

> +
>  	ret = __rproc_detach(rproc);
>  	if (ret) {
>  		atomic_inc(&rproc->power);
> @@ -1975,10 +1982,14 @@ int rproc_detach(struct rproc *rproc)
>
>  	rproc_disable_iommu(rproc);
>
> +	/* Free the cached table memory that may have been allocated */
> +	kfree(rproc->cached_table);
> +	rproc->cached_table = NULL;
>  	/*
> -	 * Set the remote processor's table pointer to NULL. Since mapping
> -	 * of the resource table to a virtual address is done in the platform
> -	 * driver, unmapping should also be done there.
> +	 * Set the remote processor's table pointer to NULL. If mapping
> +	 * of the resource table to a virtual address has been done in the
> +	 * platform driver (attachment to an existing firmware),
> +	 * unmapping should also be done there.
>  	 */
>  	rproc->table_ptr = NULL;
>  out:
> --
> 2.17.1
>
>
>
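
[Editor's note] For readers following the thread, below is a minimal
caller-side sketch of how the rproc_detach() API discussed above is meant to
behave.  It is not part of either patch: the function name
example_release_rproc() and the log strings are invented for illustration;
only the return-value semantics (-EBUSY when other users still hold a power
reference, -EPERM when the state is neither RPROC_RUNNING nor RPROC_ATTACHED)
come from the quoted code.

#include <linux/remoteproc.h>
#include <linux/printk.h>

/*
 * Illustrative only: release a remote processor without shutting it down,
 * mirroring the semantics of rproc_detach() shown in the quoted patch.
 * The rproc handle is assumed to have been obtained elsewhere, e.g. with
 * rproc_get_by_phandle().
 */
static int example_release_rproc(struct rproc *rproc)
{
	int ret;

	ret = rproc_detach(rproc);
	if (ret == -EBUSY) {
		/* Another user still holds a power reference; nothing was torn down. */
		pr_info("remote processor still in use, not detaching\n");
	} else if (ret) {
		/* -EPERM (wrong state) or an error reported by __rproc_detach(). */
		pr_err("failed to detach remote processor: %d\n", ret);
	}

	return ret;
}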
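
[Editor's note] On the comment that unmapping of the resource table belongs in
the platform driver: a hypothetical sketch of what that driver-side cleanup
could look like once the core has detached and cleared table_ptr.  The
my_rproc_priv structure, its rsc_va field and my_platform_detach_cleanup() are
invented for the example and are not part of the series.

#include <linux/io.h>
#include <linux/remoteproc.h>

/* Hypothetical per-driver state holding the ioremap'ed resource table. */
struct my_rproc_priv {
	void __iomem *rsc_va;
};

/*
 * Called by the (hypothetical) platform driver after rproc_detach() has
 * returned: the core no longer references the table, so the mapping the
 * driver created when attaching can be torn down here.
 */
static void my_platform_detach_cleanup(struct rproc *rproc)
{
	struct my_rproc_priv *priv = rproc->priv;

	if (priv->rsc_va) {
		iounmap(priv->rsc_va);
		priv->rsc_va = NULL;
	}
}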