From: Lina Iyer <lina.iyer@linaro.org>
To: ohad@wizery.com
Cc: linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Lina Iyer <lina.iyer@linaro.org>, Jeffrey Hugo, Bjorn Andersson,
    Andy Gross
Subject: [PATCH RFC v2 2/2] hwspinlock: qcom: Lock #7 is special lock, uses dynamic proc_id
Date: Tue, 9 Jun 2015 10:23:40 -0600
Message-Id: <1433867020-7746-3-git-send-email-lina.iyer@linaro.org>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1433867020-7746-1-git-send-email-lina.iyer@linaro.org>
References: <1433867020-7746-1-git-send-email-lina.iyer@linaro.org>

Hwspinlocks are widely used between processors in an SoC, and also
between elevation levels within the same processor. QCOM SoCs use a
hwspinlock to serialize entry into a low power mode when the context
switches from Linux to the secure monitor. Lock #7 has been assigned
for this purpose.

To distinguish the cpu core holding the lock from other cores
contending for it, the proc id written into the lock is (128 + cpu id);
for example, core 0 writes 128 and core 1 writes 129. The value is
therefore unique among the cpu cores, so when one core holds the
hwspinlock, the other cores see an owner id different from their own
and wait for the lock to be released. This scheme applies to lock #7
only.

Declare lock #7 as raw capable, so the hwspinlock framework does not
enforce acquiring a s/w spinlock before acquiring the hwspinlock.

Cc: Jeffrey Hugo
Cc: Bjorn Andersson
Cc: Andy Gross
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/hwspinlock/qcom_hwspinlock.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/drivers/hwspinlock/qcom_hwspinlock.c b/drivers/hwspinlock/qcom_hwspinlock.c
index 93b62e0..59278b0 100644
--- a/drivers/hwspinlock/qcom_hwspinlock.c
+++ b/drivers/hwspinlock/qcom_hwspinlock.c
@@ -25,16 +25,26 @@
 
 #include "hwspinlock_internal.h"
 
-#define QCOM_MUTEX_APPS_PROC_ID	1
-#define QCOM_MUTEX_NUM_LOCKS	32
+#define QCOM_MUTEX_APPS_PROC_ID	1
+#define QCOM_MUTEX_CPUIDLE_OFFSET	128
+#define QCOM_CPUIDLE_LOCK	7
+#define QCOM_MUTEX_NUM_LOCKS	32
+
+static inline u32 __qcom_get_proc_id(struct hwspinlock *lock)
+{
+	return hwspin_lock_get_id(lock) == QCOM_CPUIDLE_LOCK ?
+			(QCOM_MUTEX_CPUIDLE_OFFSET + smp_processor_id()) :
+			QCOM_MUTEX_APPS_PROC_ID;
+}
 
 static int qcom_hwspinlock_trylock(struct hwspinlock *lock)
 {
 	struct regmap_field *field = lock->priv;
 	u32 lock_owner;
 	int ret;
+	u32 proc_id = __qcom_get_proc_id(lock);
 
-	ret = regmap_field_write(field, QCOM_MUTEX_APPS_PROC_ID);
+	ret = regmap_field_write(field, proc_id);
 	if (ret)
 		return ret;
 
@@ -42,7 +52,7 @@ static int qcom_hwspinlock_trylock(struct hwspinlock *lock)
 	if (ret)
 		return ret;
 
-	return lock_owner == QCOM_MUTEX_APPS_PROC_ID;
+	return lock_owner == proc_id;
 }
 
 static void qcom_hwspinlock_unlock(struct hwspinlock *lock)
@@ -57,7 +67,7 @@ static void qcom_hwspinlock_unlock(struct hwspinlock *lock)
 		return;
 	}
 
-	if (lock_owner != QCOM_MUTEX_APPS_PROC_ID) {
+	if (lock_owner != __qcom_get_proc_id(lock)) {
 		pr_err("%s: spinlock not owned by us (actual owner is %d)\n",
 		       __func__, lock_owner);
 	}
@@ -129,6 +139,8 @@ static int qcom_hwspinlock_probe(struct platform_device *pdev)
 					     regmap, field);
 	}
 
+	bank->lock[QCOM_CPUIDLE_LOCK].hwcaps = HWL_CAP_ALLOW_RAW;
+
 	pm_runtime_enable(&pdev->dev);
 
 	ret = hwspin_lock_register(bank, &pdev->dev, &qcom_hwspinlock_ops,
-- 
2.1.4
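
For illustration only (not part of this patch), here is a minimal
sketch of how a cpuidle path might use the raw-capable lock #7. The
hwspin_trylock_raw()/hwspin_unlock_raw() helpers and the
HWL_CAP_ALLOW_RAW semantics are assumed from patch 1/2 of this series;
hwspin_lock_request_specific() is the existing framework API, and the
qcom_cpuidle_* names are hypothetical.

#include <asm/processor.h>	/* cpu_relax() */
#include <linux/errno.h>
#include <linux/hwspinlock.h>
#include <linux/init.h>

#define QCOM_CPUIDLE_LOCK	7

static struct hwspinlock *cpuidle_lock;

static int __init qcom_cpuidle_lock_init(void)
{
	/* Claim the dedicated cpuidle lock by its fixed id. */
	cpuidle_lock = hwspin_lock_request_specific(QCOM_CPUIDLE_LOCK);
	return cpuidle_lock ? 0 : -ENODEV;
}

static void qcom_cpuidle_enter_lpm(void)
{
	/*
	 * Raw mode: the framework takes no s/w spinlock, so the trylock
	 * path writes (128 + smp_processor_id()) as the owner and each
	 * contending core simply spins until the holder releases the
	 * lock. Assumes the trylock helper follows the framework
	 * convention of returning 0 on success.
	 */
	while (hwspin_trylock_raw(cpuidle_lock))
		cpu_relax();

	/* ... hand off to the secure monitor to enter the low power mode ... */

	hwspin_unlock_raw(cpuidle_lock);
}

Since the owner id is derived from smp_processor_id(), the lock must be
acquired and released on the same core; that holds naturally on the
cpuidle entry path, where preemption is disabled.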