Date: Tue, 17 Apr 2018 14:53:34 -0700
From: Bjorn Andersson
To: Baolin Wang
Cc: ohad@wizery.com, arnd@arndb.de, broonie@kernel.org,
    linux-remoteproc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] hwspinlock: Introduce one new mode for hwspinlock
Message-ID: <20180417215334.GF17344@builder>
User-Agent: Mutt/1.9.1 (2017-09-22)
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat 07 Apr 20:06 PDT 2018, Baolin Wang wrote:

> In some scenarios, user need do some time-consuming or sleepable
> operations under the hardware spinlock protection for synchronization
> between the multiple subsystems.
>
> For example, there is one PMIC efuse on Spreadtrum platform, which
> need to be accessed under one hardware lock. But during the hardware
> lock protection, the efuse operation is time-consuming to almost 5 ms,
> so we can not disable the interrupts or preemption so long in this case.
>
> Thus we can introduce one new mode to indicate that we just acquire the
> hardware lock and do not disable interrupts or preemption, meanwhile we
> should force user to protect the hardware lock with mutex or spinlock to
> avoid dead-lock.
>
> Signed-off-by: Baolin Wang

Looks good, both patches applied.

Thanks,
Bjorn

> ---
>  drivers/hwspinlock/hwspinlock_core.c |   34 ++++++++++++++++----
>  include/linux/hwspinlock.h           |   58 ++++++++++++++++++++++++++++++++++
>  2 files changed, 85 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
> index f4a59f5..5278d05 100644
> --- a/drivers/hwspinlock/hwspinlock_core.c
> +++ b/drivers/hwspinlock/hwspinlock_core.c
> @@ -71,10 +71,16 @@
>   * This function attempts to lock an hwspinlock, and will immediately
>   * fail if the hwspinlock is already taken.
>   *
> - * Upon a successful return from this function, preemption (and possibly
> - * interrupts) is disabled, so the caller must not sleep, and is advised to
> - * release the hwspinlock as soon as possible. This is required in order to
> - * minimize remote cores polling on the hardware interconnect.
> + * Caution: If the mode is HWLOCK_RAW, that means user must protect the routine
> + * of getting hardware lock with mutex or spinlock. Since in some scenarios,
> + * user need some time-consuming or sleepable operations under the hardware
> + * lock, they need one sleepable lock (like mutex) to protect the operations.
> + *
> + * If the mode is not HWLOCK_RAW, upon a successful return from this function,
> + * preemption (and possibly interrupts) is disabled, so the caller must not
> + * sleep, and is advised to release the hwspinlock as soon as possible. This is
> + * required in order to minimize remote cores polling on the hardware
> + * interconnect.
>   *
>   * The user decides whether local interrupts are disabled or not, and if yes,
>   * whether he wants their previous state to be saved. It is up to the user
> @@ -113,6 +119,9 @@ int __hwspin_trylock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
>  	case HWLOCK_IRQ:
>  		ret = spin_trylock_irq(&hwlock->lock);
>  		break;
> +	case HWLOCK_RAW:
> +		ret = 1;
> +		break;
>  	default:
>  		ret = spin_trylock(&hwlock->lock);
>  		break;
> @@ -134,6 +143,9 @@ int __hwspin_trylock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
>  	case HWLOCK_IRQ:
>  		spin_unlock_irq(&hwlock->lock);
>  		break;
> +	case HWLOCK_RAW:
> +		/* Nothing to do */
> +		break;
>  	default:
>  		spin_unlock(&hwlock->lock);
>  		break;
> @@ -170,9 +182,14 @@ int __hwspin_trylock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
>   * is already taken, the function will busy loop waiting for it to
>   * be released, but give up after @timeout msecs have elapsed.
>   *
> - * Upon a successful return from this function, preemption is disabled
> - * (and possibly local interrupts, too), so the caller must not sleep,
> - * and is advised to release the hwspinlock as soon as possible.
> + * Caution: If the mode is HWLOCK_RAW, that means user must protect the routine
> + * of getting hardware lock with mutex or spinlock. Since in some scenarios,
> + * user need some time-consuming or sleepable operations under the hardware
> + * lock, they need one sleepable lock (like mutex) to protect the operations.
> + *
> + * If the mode is not HWLOCK_RAW, upon a successful return from this function,
> + * preemption is disabled (and possibly local interrupts, too), so the caller
> + * must not sleep, and is advised to release the hwspinlock as soon as possible.
>   * This is required in order to minimize remote cores polling on the
>   * hardware interconnect.
>   *
> @@ -266,6 +283,9 @@ void __hwspin_unlock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
>  	case HWLOCK_IRQ:
>  		spin_unlock_irq(&hwlock->lock);
>  		break;
> +	case HWLOCK_RAW:
> +		/* Nothing to do */
> +		break;
>  	default:
>  		spin_unlock(&hwlock->lock);
>  		break;
> diff --git a/include/linux/hwspinlock.h b/include/linux/hwspinlock.h
> index 859d673..fe450ee 100644
> --- a/include/linux/hwspinlock.h
> +++ b/include/linux/hwspinlock.h
> @@ -24,6 +24,7 @@
>  /* hwspinlock mode argument */
>  #define HWLOCK_IRQSTATE	0x01	/* Disable interrupts, save state */
>  #define HWLOCK_IRQ	0x02	/* Disable interrupts, don't save state */
> +#define HWLOCK_RAW	0x03
>
>  struct device;
>  struct device_node;
> @@ -176,6 +177,25 @@ static inline int hwspin_trylock_irq(struct hwspinlock *hwlock)
>  }
>
>  /**
> + * hwspin_trylock_raw() - attempt to lock a specific hwspinlock
> + * @hwlock: an hwspinlock which we want to trylock
> + *
> + * This function attempts to lock an hwspinlock, and will immediately fail
> + * if the hwspinlock is already taken.
> + *
> + * Caution: User must protect the routine of getting hardware lock with mutex
> + * or spinlock to avoid dead-lock, that will let user can do some time-consuming
> + * or sleepable operations under the hardware lock.
> + *
> + * Returns 0 if we successfully locked the hwspinlock, -EBUSY if
> + * the hwspinlock was already taken, and -EINVAL if @hwlock is invalid.
> + */
> +static inline int hwspin_trylock_raw(struct hwspinlock *hwlock)
> +{
> +	return __hwspin_trylock(hwlock, HWLOCK_RAW, NULL);
> +}
> +
> +/**
>   * hwspin_trylock() - attempt to lock a specific hwspinlock
>   * @hwlock: an hwspinlock which we want to trylock
>   *
> @@ -243,6 +263,29 @@ int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int to)
>  }
>
>  /**
> + * hwspin_lock_timeout_raw() - lock an hwspinlock with timeout limit
> + * @hwlock: the hwspinlock to be locked
> + * @to: timeout value in msecs
> + *
> + * This function locks the underlying @hwlock. If the @hwlock
> + * is already taken, the function will busy loop waiting for it to
> + * be released, but give up when @timeout msecs have elapsed.
> + *
> + * Caution: User must protect the routine of getting hardware lock with mutex
> + * or spinlock to avoid dead-lock, that will let user can do some time-consuming
> + * or sleepable operations under the hardware lock.
> + *
> + * Returns 0 when the @hwlock was successfully taken, and an appropriate
> + * error code otherwise (most notably an -ETIMEDOUT if the @hwlock is still
> + * busy after @timeout msecs). The function will never sleep.
> + */
> +static inline
> +int hwspin_lock_timeout_raw(struct hwspinlock *hwlock, unsigned int to)
> +{
> +	return __hwspin_lock_timeout(hwlock, to, HWLOCK_RAW, NULL);
> +}
> +
> +/**
>   * hwspin_lock_timeout() - lock an hwspinlock with timeout limit
>   * @hwlock: the hwspinlock to be locked
>   * @to: timeout value in msecs
> @@ -302,6 +345,21 @@ static inline void hwspin_unlock_irq(struct hwspinlock *hwlock)
>  }
>
>  /**
> + * hwspin_unlock_raw() - unlock hwspinlock
> + * @hwlock: a previously-acquired hwspinlock which we want to unlock
> + *
> + * This function will unlock a specific hwspinlock.
> + *
> + * @hwlock must be already locked (e.g. by hwspin_trylock()) before calling
> + * this function: it is a bug to call unlock on a @hwlock that is already
> + * unlocked.
> + */
> +static inline void hwspin_unlock_raw(struct hwspinlock *hwlock)
> +{
> +	__hwspin_unlock(hwlock, HWLOCK_RAW, NULL);
> +}
> +
> +/**
>   * hwspin_unlock() - unlock hwspinlock
>   * @hwlock: a previously-acquired hwspinlock which we want to unlock
>   *
> --
> 1.7.9.5
>