2017-12-01 21:52:10

by Jerome Brunet

Subject: [PATCH v5 00/10] clk: implement clock rate protection mechanism

This patchset is related to the RFC [0] and the discussion around
CLK_SET_RATE_GATE available here [1].

This patchset introduces clock protection to the CCF core. This can then
be used to:

* Provide a way for a consumer to claim exclusivity over the rate control
of a provider. Some clock consumers require that a clock rate must not
deviate from its selected frequency. There can be several reasons for
this, not least of which is that some hardware may not be able to
handle or recover from a glitch caused by changing the clock rate while
the hardware is in operation. For such HW, the ability to get exclusive
control of a clock's rate, and release that exclusivity, could be seen
as a fundamental clock rate control primitive. The exclusivity is not
preemptible, so when claimed more than once, the rate is effectively
locked (a minimal usage sketch follows this list).

* Provide similar functionality to providers themselves, fixing the
CLK_SET_RATE_GATE flag (enforcing clock gating along the tree). While
there might still be a few platforms relying on the broken implementation,
the tests done so far have shown this change to be pretty safe.
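
As an illustration only, here is a minimal consumer-side sketch of the
exclusive API introduced by this series (the device structure, handle
name and rate are made up for the example):

        /* hypothetical audio driver claiming exclusivity over its MCLK rate */
        static int foo_audio_start(struct foo_audio *priv)
        {
                int ret;

                ret = clk_rate_exclusive_get(priv->mclk);
                if (ret)
                        return ret;

                /*
                 * rate requests from other consumers which would actually
                 * change the rate now fail with -EBUSY
                 */
                ret = clk_set_rate(priv->mclk, 12288000);
                if (ret) {
                        clk_rate_exclusive_put(priv->mclk);
                        return ret;
                }

                return clk_prepare_enable(priv->mclk);
        }

clk_set_rate_exclusive(), added later in the series, combines clk_set_rate()
and clk_rate_exclusive_get() in a single call for the case where several
consumers compete for the same provider.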

Changes since v4: [4]
- Fixup documentation comments
- Fix error on exclusive API when CCF is disabled

Changes since v3: [3]
- Reorder patches following Stephen's comments
- Add before/after examples to the cosmetic change
- Remove loops around protection where possible
- Rename the API from "protect" to "exclusive", which describes what the
code does better

Changes since v2: [2]
- Fix issues reported by Adriana Reus (Thanks!)
- Dropped patch "clk: move CLK_SET_RATE_GATE protection from prepare
to enable". This was broken, as the protect count, like the prepare_count,
should only be accessed under the prepare_lock.

Changes since v1: [1]
- Check if the rate would actually change before continuing, and bail out
early if not.

Changes since RFC: [0]
- s/clk_protect/clk_rate_protect
- Request rework around core_nolock function
- Add clk_set_rate_protect
- Reword clk_rate_protect and clk_unprotect documentation
- Add a few comments to explain the code
- Add fixes for CLK_SET_RATE_GATE

This was tested with the audio use case mentioned in [1]

[0]: https://lkml.kernel.org/r/[email protected]
[1]: https://lkml.kernel.org/r/148942423440.82235.17188153691656009029@resonance
[2]: https://lkml.kernel.org/r/[email protected]
[3]: https://lkml.kernel.org/r/[email protected]
[4]: https://lkml.kernel.org/r/[email protected]

Jerome Brunet (10):
clk: fix incorrect usage of ENOSYS
clk: take the prepare lock out of clk_core_set_parent
clk: add clk_core_set_phase_nolock function
clk: rework calls to round and determine rate callbacks
clk: use round rate to bail out early in set_rate
clk: add clock protection mechanism to clk core
clk: cosmetic changes to clk_summary debugfs entry
clk: fix CLK_SET_RATE_GATE with clock rate protection
clk: add clk_rate_exclusive api
clk: fix set_rate_range when current rate is out of range

drivers/clk/clk.c | 509 +++++++++++++++++++++++++++++++++++++------
include/linux/clk-provider.h | 1 +
include/linux/clk.h | 62 ++++++
3 files changed, 502 insertions(+), 70 deletions(-)

--
2.14.3


2017-12-01 21:52:15

by Jerome Brunet

Subject: [PATCH v5 02/10] clk: take the prepare lock out of clk_core_set_parent

Rework the set_parent core function so that it can be called when the
prepare lock is already held by the caller.

This rework is done to ease the integration of the "protected" clock
functionality.

Acked-by: Linus Walleij <[email protected]>
Tested-by: Quentin Schulz <[email protected]>
Tested-by: Maxime Ripard <[email protected]>
Acked-by: Michael Turquette <[email protected]>
Signed-off-by: Jerome Brunet <[email protected]>
---
drivers/clk/clk.c | 40 ++++++++++++++++++++--------------------
1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 5fe9e63b15c6..e60b2a26b10b 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -1871,32 +1871,28 @@ bool clk_has_parent(struct clk *clk, struct clk *parent)
}
EXPORT_SYMBOL_GPL(clk_has_parent);

-static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent)
+static int clk_core_set_parent_nolock(struct clk_core *core,
+ struct clk_core *parent)
{
int ret = 0;
int p_index = 0;
unsigned long p_rate = 0;

+ lockdep_assert_held(&prepare_lock);
+
if (!core)
return 0;

- /* prevent racing with updates to the clock topology */
- clk_prepare_lock();
-
if (core->parent == parent)
- goto out;
+ return 0;

/* verify ops for for multi-parent clks */
- if ((core->num_parents > 1) && (!core->ops->set_parent)) {
- ret = -EPERM;
- goto out;
- }
+ if (core->num_parents > 1 && !core->ops->set_parent)
+ return -EPERM;

/* check that we are allowed to re-parent if the clock is in use */
- if ((core->flags & CLK_SET_PARENT_GATE) && core->prepare_count) {
- ret = -EBUSY;
- goto out;
- }
+ if ((core->flags & CLK_SET_PARENT_GATE) && core->prepare_count)
+ return -EBUSY;

/* try finding the new parent index */
if (parent) {
@@ -1904,15 +1900,14 @@ static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent)
if (p_index < 0) {
pr_debug("%s: clk %s can not be parent of clk %s\n",
__func__, parent->name, core->name);
- ret = p_index;
- goto out;
+ return p_index;
}
p_rate = parent->rate;
}

ret = clk_pm_runtime_get(core);
if (ret)
- goto out;
+ return ret;

/* propagate PRE_RATE_CHANGE notifications */
ret = __clk_speculate_rates(core, p_rate);
@@ -1934,8 +1929,6 @@ static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent)

runtime_put:
clk_pm_runtime_put(core);
-out:
- clk_prepare_unlock();

return ret;
}
@@ -1959,10 +1952,17 @@ static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent)
*/
int clk_set_parent(struct clk *clk, struct clk *parent)
{
+ int ret;
+
if (!clk)
return 0;

- return clk_core_set_parent(clk->core, parent ? parent->core : NULL);
+ clk_prepare_lock();
+ ret = clk_core_set_parent_nolock(clk->core,
+ parent ? parent->core : NULL);
+ clk_prepare_unlock();
+
+ return ret;
}
EXPORT_SYMBOL_GPL(clk_set_parent);

@@ -2851,7 +2851,7 @@ void clk_unregister(struct clk *clk)
/* Reparent all children to the orphan list. */
hlist_for_each_entry_safe(child, t, &clk->core->children,
child_node)
- clk_core_set_parent(child, NULL);
+ clk_core_set_parent_nolock(child, NULL);
}

hlist_del_init(&clk->core->child_node);
--
2.14.3

2017-12-01 21:52:22

by Jerome Brunet

Subject: [PATCH v5 05/10] clk: use round rate to bail out early in set_rate

The current implementation of clk_core_set_rate_nolock() bails out early
if the requested rate is exactly the same as the one already set. It should
bail out if the request would not result in a rate change. This is important
when the rate is not exactly what is requested, which is fairly common
with PLLs.

Ex: a provider able to give any rate in steps of 100Hz
- 1st consumer requests 48000Hz and gets it.
- 2nd consumer requests 48010Hz. If we were to go through the usual
mechanism, we would get 48000Hz again. The clock would not change, so
there is no point performing any checks to make sure the clock can
change; we know it won't.

This is important to prepare for the addition of the clock protection
mechanism.
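
For illustration only, with the 100Hz example above and assuming the clock
already runs at 48000Hz (handle and rates made up):

        clk_set_rate(clk, 48000);  /* no-op, already at that rate */
        clk_set_rate(clk, 48010);  /* rounds to 48000Hz -> early bail-out,
                                    * no propagation, no further checks */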

Acked-by: Linus Walleij <[email protected]>
Tested-by: Quentin Schulz <[email protected]>
Tested-by: Maxime Ripard <[email protected]>
Acked-by: Michael Turquette <[email protected]>
Signed-off-by: Jerome Brunet <[email protected]>
---
drivers/clk/clk.c | 25 +++++++++++++++++++++++--
1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 322d9ba7e5cd..bbe90babdae4 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -1658,16 +1658,37 @@ static void clk_change_rate(struct clk_core *core)
clk_change_rate(core->new_child);
}

+static unsigned long clk_core_req_round_rate_nolock(struct clk_core *core,
+ unsigned long req_rate)
+{
+ int ret;
+ struct clk_rate_request req;
+
+ lockdep_assert_held(&prepare_lock);
+
+ if (!core)
+ return 0;
+
+ clk_core_get_boundaries(core, &req.min_rate, &req.max_rate);
+ req.rate = req_rate;
+
+ ret = clk_core_round_rate_nolock(core, &req);
+
+ return ret ? 0 : req.rate;
+}
+
static int clk_core_set_rate_nolock(struct clk_core *core,
unsigned long req_rate)
{
struct clk_core *top, *fail_clk;
- unsigned long rate = req_rate;
+ unsigned long rate;
int ret = 0;

if (!core)
return 0;

+ rate = clk_core_req_round_rate_nolock(core, req_rate);
+
/* bail early if nothing to do */
if (rate == clk_core_get_rate_nolock(core))
return 0;
@@ -1676,7 +1697,7 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
return -EBUSY;

/* calculate new rates and get the topmost changed clock */
- top = clk_calc_new_rates(core, rate);
+ top = clk_calc_new_rates(core, req_rate);
if (!top)
return -EINVAL;

--
2.14.3

2017-12-01 21:52:45

by Jerome Brunet

Subject: [PATCH v5 10/10] clk: fix set_rate_range when current rate is out of range

Calling clk_core_set_rate() with core->req_rate is basically a no-op
because of the early bail-out mechanism.

This may leave the clock in an inconsistent state if its rate is outside the
requested range. Calling clk_core_set_rate() with the closest rate
limit could solve the problem but:
- the underlying determine_rate() callback needs to account for this
corner case (rounding within the range, if possible)
- if only round_rate() is available, we unfortunately rely on luck.
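
For illustration only (handle and values made up), assuming the clock
currently runs at 48000Hz:

        clk_set_rate_range(clk, 100000, 200000);
        /*
         * before: req_rate equals the current rate, the early bail-out
         *         triggers and the clock is left running at 48000Hz,
         *         outside the new range
         * after:  the core requests the closest boundary, here 100000Hz,
         *         and rolls the range back if that request fails
         */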

Fixes: 1c8e600440c7 ("clk: Add rate constraints to clocks")
Tested-by: Maxime Ripard <[email protected]>
Acked-by: Michael Turquette <[email protected]>
Signed-off-by: Jerome Brunet <[email protected]>
---
drivers/clk/clk.c | 37 +++++++++++++++++++++++++++++++++----
1 file changed, 33 insertions(+), 4 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index edd965d8f41d..369933831705 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -2010,6 +2010,7 @@ EXPORT_SYMBOL_GPL(clk_set_rate_exclusive);
int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
{
int ret = 0;
+ unsigned long old_min, old_max, rate;

if (!clk)
return 0;
@@ -2026,10 +2027,38 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)
if (clk->exclusive_count)
clk_core_rate_unprotect(clk->core);

- if (min != clk->min_rate || max != clk->max_rate) {
- clk->min_rate = min;
- clk->max_rate = max;
- ret = clk_core_set_rate_nolock(clk->core, clk->core->req_rate);
+ /* Save the current values in case we need to rollback the change */
+ old_min = clk->min_rate;
+ old_max = clk->max_rate;
+ clk->min_rate = min;
+ clk->max_rate = max;
+
+ rate = clk_core_get_rate_nolock(clk->core);
+ if (rate < min || rate > max) {
+ /*
+ * FIXME:
+ * We are in bit of trouble here, current rate is outside the
+ * the requested range. We are going try to request appropriate
+ * range boundary but there is a catch. It may fail for the
+ * usual reason (clock broken, clock protected, etc) but also
+ * because:
+ * - round_rate() was not favorable and fell on the wrong
+ * side of the boundary
+ * - the determine_rate() callback does not really check for
+ * this corner case when determining the rate
+ */
+
+ if (rate < min)
+ rate = min;
+ else
+ rate = max;
+
+ ret = clk_core_set_rate_nolock(clk->core, rate);
+ if (ret) {
+ /* rollback the changes */
+ clk->min_rate = old_min;
+ clk->max_rate = old_max;
+ }
}

if (clk->exclusive_count)
--
2.14.3

2017-12-01 21:53:09

by Jerome Brunet

Subject: [PATCH v5 06/10] clk: add clock protection mechanism to clk core

This patch adds clk_core_rate_protect and clk_core_rate_unprotect to the
internal CCF API. These functions make it possible to set a new constraint
along the clock tree to prevent any change, even an indirect one, which may
result in a rate change or glitch.
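
As a rough illustration of the counting scheme (clock names made up),
consider a chain pll -> div -> i2s:

        clk_core_rate_protect(i2s);    /* i2s: 1, div: 1, pll: 1 */
        clk_core_rate_protect(i2s);    /* i2s: 2, div: 1, pll: 1 - parents are
                                        * only bumped on the 0 -> 1 transition */
        clk_core_rate_unprotect(i2s);  /* i2s: 1, div: 1, pll: 1 */
        clk_core_rate_unprotect(i2s);  /* i2s: 0, div: 0, pll: 0 - the release
                                        * propagates on the 1 -> 0 transition */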

Tested-by: Maxime Ripard <[email protected]>
Acked-by: Michael Turquette <[email protected]>
Signed-off-by: Jerome Brunet <[email protected]>
---
drivers/clk/clk.c | 119 ++++++++++++++++++++++++++++++++++++++++---
include/linux/clk-provider.h | 1 +
2 files changed, 113 insertions(+), 7 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index bbe90babdae4..f69a2176cde1 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -62,6 +62,7 @@ struct clk_core {
bool orphan;
unsigned int enable_count;
unsigned int prepare_count;
+ unsigned int protect_count;
unsigned long min_rate;
unsigned long max_rate;
unsigned long accuracy;
@@ -170,6 +171,11 @@ static void clk_enable_unlock(unsigned long flags)
spin_unlock_irqrestore(&enable_lock, flags);
}

+static bool clk_core_rate_is_protected(struct clk_core *core)
+{
+ return core->protect_count;
+}
+
static bool clk_core_is_prepared(struct clk_core *core)
{
bool ret = false;
@@ -381,6 +387,11 @@ bool clk_hw_is_prepared(const struct clk_hw *hw)
return clk_core_is_prepared(hw->core);
}

+bool clk_hw_rate_is_protected(const struct clk_hw *hw)
+{
+ return clk_core_rate_is_protected(hw->core);
+}
+
bool clk_hw_is_enabled(const struct clk_hw *hw)
{
return clk_core_is_enabled(hw->core);
@@ -519,6 +530,68 @@ EXPORT_SYMBOL_GPL(__clk_mux_determine_rate_closest);

/*** clk api ***/

+static void clk_core_rate_unprotect(struct clk_core *core)
+{
+ lockdep_assert_held(&prepare_lock);
+
+ if (!core)
+ return;
+
+ if (WARN_ON(core->protect_count == 0))
+ return;
+
+ if (--core->protect_count > 0)
+ return;
+
+ clk_core_rate_unprotect(core->parent);
+}
+
+static int clk_core_rate_nuke_protect(struct clk_core *core)
+{
+ int ret;
+
+ lockdep_assert_held(&prepare_lock);
+
+ if (!core)
+ return -EINVAL;
+
+ if (core->protect_count == 0)
+ return 0;
+
+ ret = core->protect_count;
+ core->protect_count = 1;
+ clk_core_rate_unprotect(core);
+
+ return ret;
+}
+
+static void clk_core_rate_protect(struct clk_core *core)
+{
+ lockdep_assert_held(&prepare_lock);
+
+ if (!core)
+ return;
+
+ if (core->protect_count == 0)
+ clk_core_rate_protect(core->parent);
+
+ core->protect_count++;
+}
+
+static void clk_core_rate_restore_protect(struct clk_core *core, int count)
+{
+ lockdep_assert_held(&prepare_lock);
+
+ if (!core)
+ return;
+
+ if (count == 0)
+ return;
+
+ clk_core_rate_protect(core);
+ core->protect_count = count;
+}
+
static void clk_core_unprepare(struct clk_core *core)
{
lockdep_assert_held(&prepare_lock);
@@ -915,7 +988,9 @@ static int clk_core_determine_round_nolock(struct clk_core *core,
if (!core)
return 0;

- if (core->ops->determine_rate) {
+ if (clk_core_rate_is_protected(core)) {
+ req->rate = core->rate;
+ } else if (core->ops->determine_rate) {
return core->ops->determine_rate(core->hw, req);
} else if (core->ops->round_rate) {
rate = core->ops->round_rate(core->hw, req->rate,
@@ -1661,7 +1736,7 @@ static void clk_change_rate(struct clk_core *core)
static unsigned long clk_core_req_round_rate_nolock(struct clk_core *core,
unsigned long req_rate)
{
- int ret;
+ int ret, cnt;
struct clk_rate_request req;

lockdep_assert_held(&prepare_lock);
@@ -1669,11 +1744,19 @@ static unsigned long clk_core_req_round_rate_nolock(struct clk_core *core,
if (!core)
return 0;

+ /* simulate what the rate would be if it could be freely set */
+ cnt = clk_core_rate_nuke_protect(core);
+ if (cnt < 0)
+ return cnt;
+
clk_core_get_boundaries(core, &req.min_rate, &req.max_rate);
req.rate = req_rate;

ret = clk_core_round_rate_nolock(core, &req);

+ /* restore the protection */
+ clk_core_rate_restore_protect(core, cnt);
+
return ret ? 0 : req.rate;
}

@@ -1693,6 +1776,10 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
if (rate == clk_core_get_rate_nolock(core))
return 0;

+ /* fail on a direct rate set of a protected provider */
+ if (clk_core_rate_is_protected(core))
+ return -EBUSY;
+
if ((core->flags & CLK_SET_RATE_GATE) && core->prepare_count)
return -EBUSY;

@@ -1937,6 +2024,9 @@ static int clk_core_set_parent_nolock(struct clk_core *core,
if ((core->flags & CLK_SET_PARENT_GATE) && core->prepare_count)
return -EBUSY;

+ if (clk_core_rate_is_protected(core))
+ return -EBUSY;
+
/* try finding the new parent index */
if (parent) {
p_index = clk_fetch_parent_index(core, parent);
@@ -2018,6 +2108,9 @@ static int clk_core_set_phase_nolock(struct clk_core *core, int degrees)
if (!core)
return 0;

+ if (clk_core_rate_is_protected(core))
+ return -EBUSY;
+
trace_clk_set_phase(core, degrees);

if (core->ops->set_phase)
@@ -2148,11 +2241,12 @@ static void clk_summary_show_one(struct seq_file *s, struct clk_core *c,
if (!c)
return;

- seq_printf(s, "%*s%-*s %11d %12d %11lu %10lu %-3d\n",
+ seq_printf(s, "%*s%-*s %11d %12d %12d %11lu %10lu %-3d\n",
level * 3 + 1, "",
30 - level * 3, c->name,
- c->enable_count, c->prepare_count, clk_core_get_rate(c),
- clk_core_get_accuracy(c), clk_core_get_phase(c));
+ c->enable_count, c->prepare_count, c->protect_count,
+ clk_core_get_rate(c), clk_core_get_accuracy(c),
+ clk_core_get_phase(c));
}

static void clk_summary_show_subtree(struct seq_file *s, struct clk_core *c,
@@ -2174,8 +2268,8 @@ static int clk_summary_show(struct seq_file *s, void *data)
struct clk_core *c;
struct hlist_head **lists = (struct hlist_head **)s->private;

- seq_puts(s, " clock enable_cnt prepare_cnt rate accuracy phase\n");
- seq_puts(s, "----------------------------------------------------------------------------------------\n");
+ seq_puts(s, " clock enable_cnt prepare_cnt protect_cnt rate accuracy phase\n");
+ seq_puts(s, "----------------------------------------------------------------------------------------------------\n");

clk_prepare_lock();

@@ -2210,6 +2304,7 @@ static void clk_dump_one(struct seq_file *s, struct clk_core *c, int level)
seq_printf(s, "\"%s\": { ", c->name);
seq_printf(s, "\"enable_count\": %d,", c->enable_count);
seq_printf(s, "\"prepare_count\": %d,", c->prepare_count);
+ seq_printf(s, "\"protect_count\": %d,", c->protect_count);
seq_printf(s, "\"rate\": %lu,", clk_core_get_rate(c));
seq_printf(s, "\"accuracy\": %lu,", clk_core_get_accuracy(c));
seq_printf(s, "\"phase\": %d", clk_core_get_phase(c));
@@ -2340,6 +2435,11 @@ static int clk_debug_create_one(struct clk_core *core, struct dentry *pdentry)
if (!d)
goto err_out;

+ d = debugfs_create_u32("clk_protect_count", S_IRUGO, core->dentry,
+ (u32 *)&core->protect_count);
+ if (!d)
+ goto err_out;
+
d = debugfs_create_u32("clk_notifier_count", S_IRUGO, core->dentry,
(u32 *)&core->notifier_count);
if (!d)
@@ -2911,6 +3011,11 @@ void clk_unregister(struct clk *clk)
if (clk->core->prepare_count)
pr_warn("%s: unregistering prepared clock: %s\n",
__func__, clk->core->name);
+
+ if (clk->core->protect_count)
+ pr_warn("%s: unregistering protected clock: %s\n",
+ __func__, clk->core->name);
+
kref_put(&clk->core->ref, __clk_release);
unlock:
clk_prepare_unlock();
diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
index 7c925e6211f1..73ac87f34df9 100644
--- a/include/linux/clk-provider.h
+++ b/include/linux/clk-provider.h
@@ -744,6 +744,7 @@ unsigned long clk_hw_get_rate(const struct clk_hw *hw);
unsigned long __clk_get_flags(struct clk *clk);
unsigned long clk_hw_get_flags(const struct clk_hw *hw);
bool clk_hw_is_prepared(const struct clk_hw *hw);
+bool clk_hw_rate_is_protected(const struct clk_hw *hw);
bool clk_hw_is_enabled(const struct clk_hw *hw);
bool __clk_is_enabled(struct clk *clk);
struct clk *__clk_lookup(const char *name);
--
2.14.3

2017-12-01 21:53:08

by Jerome Brunet

Subject: [PATCH v5 07/10] clk: cosmetic changes to clk_summary debugfs entry

The clk_summary debugfs entry was already well over the traditional 80
characters per line limit, and it grew even larger with the addition of
clock protection:

   clock                  enable_cnt  prepare_cnt  protect_cnt        rate   accuracy   phase
----------------------------------------------------------------------------------------------------
 wifi32k                           1            1            0       32768          0       0
 vcpu                              0            0            0  2016000000          0       0
 xtal                              5            5            0    24000000          0       0

This patch reduces the width a bit:

                              enable  prepare  protect
   clock                       count    count    count        rate   accuracy   phase
----------------------------------------------------------------------------------------
 wifi32k                           1        1        0       32768          0       0
 vcpu                              0        0        0  2016000000          0       0
 xtal                              5        5        0    24000000          0       0

Acked-by: Linus Walleij <[email protected]>
Tested-by: Quentin Schulz <[email protected]>
Tested-by: Maxime Ripard <[email protected]>
Acked-by: Michael Turquette <[email protected]>
Signed-off-by: Jerome Brunet <[email protected]>
---
drivers/clk/clk.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index f69a2176cde1..f6fe5e5595ca 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -2241,7 +2241,7 @@ static void clk_summary_show_one(struct seq_file *s, struct clk_core *c,
if (!c)
return;

- seq_printf(s, "%*s%-*s %11d %12d %12d %11lu %10lu %-3d\n",
+ seq_printf(s, "%*s%-*s %7d %8d %8d %11lu %10lu %-3d\n",
level * 3 + 1, "",
30 - level * 3, c->name,
c->enable_count, c->prepare_count, c->protect_count,
@@ -2268,8 +2268,9 @@ static int clk_summary_show(struct seq_file *s, void *data)
struct clk_core *c;
struct hlist_head **lists = (struct hlist_head **)s->private;

- seq_puts(s, " clock enable_cnt prepare_cnt protect_cnt rate accuracy phase\n");
- seq_puts(s, "----------------------------------------------------------------------------------------------------\n");
+ seq_puts(s, " enable prepare protect \n");
+ seq_puts(s, " clock count count count rate accuracy phase\n");
+ seq_puts(s, "----------------------------------------------------------------------------------------\n");

clk_prepare_lock();

--
2.14.3

2017-12-01 21:53:06

by Jerome Brunet

Subject: [PATCH v5 08/10] clk: fix CLK_SET_RATE_GATE with clock rate protection

Using clock rate protection, we can now enforce CLK_SET_RATE_GATE along the
clock tree.
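
For illustration only (clock name and rates made up), with a provider
setting CLK_SET_RATE_GATE on "sdc_src", the gating is now enforced through
rate protection and therefore holds along the tree, not only for direct
requests:

        clk_set_rate(sdc_src, 104000000);  /* ok, not prepared yet */
        clk_prepare_enable(sdc_src);       /* sdc_src and its parents are now
                                            * rate-protected */
        clk_set_rate(sdc_src, 52000000);   /* -EBUSY while prepared */
        /*
         * indirect requests coming from children with CLK_SET_RATE_PARENT
         * can no longer alter sdc_src's rate behind its back either
         */
        clk_disable_unprepare(sdc_src);
        clk_set_rate(sdc_src, 52000000);   /* ok again */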

Acked-by: Linus Walleij <[email protected]>
Tested-by: Quentin Schulz <[email protected]>
Tested-by: Maxime Ripard <[email protected]>
Acked-by: Michael Turquette <[email protected]>
Signed-off-by: Jerome Brunet <[email protected]>
---
drivers/clk/clk.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index f6fe5e5595ca..1af843ae20ff 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -605,6 +605,9 @@ static void clk_core_unprepare(struct clk_core *core)
if (WARN_ON(core->prepare_count == 1 && core->flags & CLK_IS_CRITICAL))
return;

+ if (core->flags & CLK_SET_RATE_GATE)
+ clk_core_rate_unprotect(core);
+
if (--core->prepare_count > 0)
return;

@@ -679,6 +682,16 @@ static int clk_core_prepare(struct clk_core *core)

core->prepare_count++;

+ /*
+ * CLK_SET_RATE_GATE is a special case of clock protection
+ * Instead of a consumer claiming exclusive rate control, it is
+ * actually the provider which prevents any consumer from making any
+ * operation which could result in a rate change or rate glitch while
+ * the clock is prepared.
+ */
+ if (core->flags & CLK_SET_RATE_GATE)
+ clk_core_rate_protect(core);
+
return 0;
unprepare:
clk_core_unprepare(core->parent);
@@ -1780,9 +1793,6 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
if (clk_core_rate_is_protected(core))
return -EBUSY;

- if ((core->flags & CLK_SET_RATE_GATE) && core->prepare_count)
- return -EBUSY;
-
/* calculate new rates and get the topmost changed clock */
top = clk_calc_new_rates(core, req_rate);
if (!top)
--
2.14.3

2017-12-01 21:53:03

by Jerome Brunet

Subject: [PATCH v5 09/10] clk: add clk_rate_exclusive api

Using clock rate protection, we can now provide a way for a clock consumer
to claim exclusive control over the rate of a producer.

So far, rate change operations have been a "last write wins" affair. This
change allows drivers to explicitly protect against this behavior, if
required.

Of course, if exclusivity over a producer is claimed more than once, the
rate is effectively locked as exclusivity cannot be preempted.
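
A minimal sketch of the combined helper, for the case where several
consumers may compete for the same producer (handle name and rate made up):

        /* set the rate and claim exclusivity over it in one critical section */
        ret = clk_set_rate_exclusive(priv->mclk, 12288000);
        if (ret)
                return ret;  /* exclusivity is not taken if set_rate failed */

        /* ... the rate cannot be changed by any other consumer here ... */

        clk_rate_exclusive_put(priv->mclk);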

Tested-by: Maxime Ripard <[email protected]>
Acked-by: Michael Turquette <[email protected]>
Signed-off-by: Jerome Brunet <[email protected]>
---
drivers/clk/clk.c | 172 ++++++++++++++++++++++++++++++++++++++++++++++++++++
include/linux/clk.h | 62 +++++++++++++++++++
2 files changed, 234 insertions(+)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 1af843ae20ff..edd965d8f41d 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -87,6 +87,7 @@ struct clk {
const char *con_id;
unsigned long min_rate;
unsigned long max_rate;
+ unsigned int exclusive_count;
struct hlist_node clks_node;
};

@@ -565,6 +566,45 @@ static int clk_core_rate_nuke_protect(struct clk_core *core)
return ret;
}

+/**
+ * clk_rate_exclusive_put - release exclusivity over clock rate control
+ * @clk: the clk over which the exclusivity is released
+ *
+ * clk_rate_exclusive_put() completes a critical section during which a clock
+ * consumer cannot tolerate any other consumer making any operation on the
+ * clock which could result in a rate change or rate glitch. Exclusive clocks
+ * cannot have their rate changed, either directly or indirectly due to changes
+ * further up the parent chain of clocks. As a result, clocks up parent chain
+ * also get under exclusive control of the calling consumer.
+ *
+ * If exlusivity is claimed more than once on clock, even by the same consumer,
+ * the rate effectively gets locked as exclusivity can't be preempted.
+ *
+ * Calls to clk_rate_exclusive_put() must be balanced with calls to
+ * clk_rate_exclusive_get(). Calls to this function may sleep, and do not return
+ * error status.
+ */
+void clk_rate_exclusive_put(struct clk *clk)
+{
+ if (!clk)
+ return;
+
+ clk_prepare_lock();
+
+ /*
+ * if there is something wrong with this consumer protect count, stop
+ * here before messing with the provider
+ */
+ if (WARN_ON(clk->exclusive_count <= 0))
+ goto out;
+
+ clk_core_rate_unprotect(clk->core);
+ clk->exclusive_count--;
+out:
+ clk_prepare_unlock();
+}
+EXPORT_SYMBOL_GPL(clk_rate_exclusive_put);
+
static void clk_core_rate_protect(struct clk_core *core)
{
lockdep_assert_held(&prepare_lock);
@@ -592,6 +632,38 @@ static void clk_core_rate_restore_protect(struct clk_core *core, int count)
core->protect_count = count;
}

+/**
+ * clk_rate_exclusive_get - get exclusivity over the clk rate control
+ * @clk: the clk over which the exclusity of rate control is requested
+ *
+ * clk_rate_exlusive_get() begins a critical section during which a clock
+ * consumer cannot tolerate any other consumer making any operation on the
+ * clock which could result in a rate change or rate glitch. Exclusive clocks
+ * cannot have their rate changed, either directly or indirectly due to changes
+ * further up the parent chain of clocks. As a result, clocks up parent chain
+ * also get under exclusive control of the calling consumer.
+ *
+ * If exlusivity is claimed more than once on clock, even by the same consumer,
+ * the rate effectively gets locked as exclusivity can't be preempted.
+ *
+ * Calls to clk_rate_exclusive_get() should be balanced with calls to
+ * clk_rate_exclusive_put(). Calls to this function may sleep.
+ * Returns 0 on success, -EERROR otherwise
+ */
+int clk_rate_exclusive_get(struct clk *clk)
+{
+ if (!clk)
+ return 0;
+
+ clk_prepare_lock();
+ clk_core_rate_protect(clk->core);
+ clk->exclusive_count++;
+ clk_prepare_unlock();
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(clk_rate_exclusive_get);
+
static void clk_core_unprepare(struct clk_core *core)
{
lockdep_assert_held(&prepare_lock);
@@ -1001,6 +1073,12 @@ static int clk_core_determine_round_nolock(struct clk_core *core,
if (!core)
return 0;

+ /*
+ * At this point, core protection will be disabled if
+ * - if the provider is not protected at all
+ * - if the calling consumer is the only one which has exclusivity
+ * over the provider
+ */
if (clk_core_rate_is_protected(core)) {
req->rate = core->rate;
} else if (core->ops->determine_rate) {
@@ -1117,10 +1195,17 @@ long clk_round_rate(struct clk *clk, unsigned long rate)

clk_prepare_lock();

+ if (clk->exclusive_count)
+ clk_core_rate_unprotect(clk->core);
+
clk_core_get_boundaries(clk->core, &req.min_rate, &req.max_rate);
req.rate = rate;

ret = clk_core_round_rate_nolock(clk->core, &req);
+
+ if (clk->exclusive_count)
+ clk_core_rate_protect(clk->core);
+
clk_prepare_unlock();

if (ret)
@@ -1853,14 +1938,67 @@ int clk_set_rate(struct clk *clk, unsigned long rate)
/* prevent racing with updates to the clock topology */
clk_prepare_lock();

+ if (clk->exclusive_count)
+ clk_core_rate_unprotect(clk->core);
+
ret = clk_core_set_rate_nolock(clk->core, rate);

+ if (clk->exclusive_count)
+ clk_core_rate_protect(clk->core);
+
clk_prepare_unlock();

return ret;
}
EXPORT_SYMBOL_GPL(clk_set_rate);

+/**
+ * clk_set_rate_exclusive - specify a new rate get exclusive control
+ * @clk: the clk whose rate is being changed
+ * @rate: the new rate for clk
+ *
+ * This is a combination of clk_set_rate() and clk_rate_exclusive_get()
+ * within a critical section
+ *
+ * This can be used initially to ensure that at least 1 consumer is
+ * statisfied when several consumers are competing for exclusivity over the
+ * same clock provider.
+ *
+ * The exclusivity is not applied if setting the rate failed.
+ *
+ * Calls to clk_rate_exclusive_get() should be balanced with calls to
+ * clk_rate_exclusive_put().
+ *
+ * Returns 0 on success, -EERROR otherwise.
+ */
+int clk_set_rate_exclusive(struct clk *clk, unsigned long rate)
+{
+ int ret;
+
+ if (!clk)
+ return 0;
+
+ /* prevent racing with updates to the clock topology */
+ clk_prepare_lock();
+
+ /*
+ * The temporary protection removal is not here, on purpose
+ * This function is meant to be used instead of clk_rate_protect,
+ * so before the consumer code path protect the clock provider
+ */
+
+ ret = clk_core_set_rate_nolock(clk->core, rate);
+ if (!ret) {
+ clk_core_rate_protect(clk->core);
+ clk->exclusive_count++;
+ }
+
+ clk_prepare_unlock();
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(clk_set_rate_exclusive);
+
/**
* clk_set_rate_range - set a rate range for a clock source
* @clk: clock source
@@ -1885,12 +2023,18 @@ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max)

clk_prepare_lock();

+ if (clk->exclusive_count)
+ clk_core_rate_unprotect(clk->core);
+
if (min != clk->min_rate || max != clk->max_rate) {
clk->min_rate = min;
clk->max_rate = max;
ret = clk_core_set_rate_nolock(clk->core, clk->core->req_rate);
}

+ if (clk->exclusive_count)
+ clk_core_rate_protect(clk->core);
+
clk_prepare_unlock();

return ret;
@@ -2101,8 +2245,16 @@ int clk_set_parent(struct clk *clk, struct clk *parent)
return 0;

clk_prepare_lock();
+
+ if (clk->exclusive_count)
+ clk_core_rate_unprotect(clk->core);
+
ret = clk_core_set_parent_nolock(clk->core,
parent ? parent->core : NULL);
+
+ if (clk->exclusive_count)
+ clk_core_rate_protect(clk->core);
+
clk_prepare_unlock();

return ret;
@@ -2164,7 +2316,15 @@ int clk_set_phase(struct clk *clk, int degrees)
degrees += 360;

clk_prepare_lock();
+
+ if (clk->exclusive_count)
+ clk_core_rate_unprotect(clk->core);
+
ret = clk_core_set_phase_nolock(clk->core, degrees);
+
+ if (clk->exclusive_count)
+ clk_core_rate_protect(clk->core);
+
clk_prepare_unlock();

return ret;
@@ -3185,6 +3345,18 @@ void __clk_put(struct clk *clk)

clk_prepare_lock();

+ /*
+ * Before calling clk_put, all calls to clk_rate_exclusive_get() from a
+ * given user should be balanced with calls to clk_rate_exclusive_put()
+ * and by that same consumer
+ */
+ if (WARN_ON(clk->exclusive_count)) {
+ /* We voiced our concern, let's sanitize the situation */
+ clk->core->protect_count -= (clk->exclusive_count - 1);
+ clk_core_rate_unprotect(clk->core);
+ clk->exclusive_count = 0;
+ }
+
hlist_del(&clk->clks_node);
if (clk->min_rate > clk->core->req_rate ||
clk->max_rate < clk->core->req_rate)
diff --git a/include/linux/clk.h b/include/linux/clk.h
index 12c96d94d1fa..4c4ef9f34db3 100644
--- a/include/linux/clk.h
+++ b/include/linux/clk.h
@@ -331,6 +331,38 @@ struct clk *devm_clk_get(struct device *dev, const char *id);
*/
struct clk *devm_get_clk_from_child(struct device *dev,
struct device_node *np, const char *con_id);
+/**
+ * clk_rate_exclusive_get - get exclusivity over the rate control of a
+ * producer
+ * @clk: clock source
+ *
+ * This function allows drivers to get exclusive control over the rate of a
+ * provider. It prevents any other consumer to execute, even indirectly,
+ * opereation which could alter the rate of the provider or cause glitches
+ *
+ * If exlusivity is claimed more than once on clock, even by the same driver,
+ * the rate effectively gets locked as exclusivity can't be preempted.
+ *
+ * Must not be called from within atomic context.
+ *
+ * Returns success (0) or negative errno.
+ */
+int clk_rate_exclusive_get(struct clk *clk);
+
+/**
+ * clk_rate_exclusive_put - release exclusivity over the rate control of a
+ * producer
+ * @clk: clock source
+ *
+ * This function allows drivers to release the exclusivity it previously got
+ * from clk_rate_exclusive_get()
+ *
+ * The caller must balance the number of clk_rate_exclusive_get() and
+ * clk_rate_exclusive_put() calls.
+ *
+ * Must not be called from within atomic context.
+ */
+void clk_rate_exclusive_put(struct clk *clk);

/**
* clk_enable - inform the system when the clock source should be running.
@@ -472,6 +504,23 @@ long clk_round_rate(struct clk *clk, unsigned long rate);
*/
int clk_set_rate(struct clk *clk, unsigned long rate);

+/**
+ * clk_set_rate_exclusive- set the clock rate and claim exclusivity over
+ * clock source
+ * @clk: clock source
+ * @rate: desired clock rate in Hz
+ *
+ * This helper function allows drivers to atomically set the rate of a producer
+ * and claim exclusivity over the rate control of the producer.
+ *
+ * It is essentially a combination of clk_set_rate() and
+ * clk_rate_exclusite_get(). Caller must balance this call with a call to
+ * clk_rate_exclusive_put()
+ *
+ * Returns success (0) or negative errno.
+ */
+int clk_set_rate_exclusive(struct clk *clk, unsigned long rate);
+
/**
* clk_has_parent - check if a clock is a possible parent for another
* @clk: clock source
@@ -583,6 +632,14 @@ static inline void clk_bulk_put(int num_clks, struct clk_bulk_data *clks) {}

static inline void devm_clk_put(struct device *dev, struct clk *clk) {}

+
+static inline int clk_rate_exclusive_get(struct clk *clk)
+{
+ return 0;
+}
+
+static inline void clk_rate_exclusive_put(struct clk *clk) {}
+
static inline int clk_enable(struct clk *clk)
{
return 0;
@@ -609,6 +666,11 @@ static inline int clk_set_rate(struct clk *clk, unsigned long rate)
return 0;
}

+static inline int clk_set_rate_exclusive(struct clk *clk, unsigned long rate)
+{
+ return 0;
+}
+
static inline long clk_round_rate(struct clk *clk, unsigned long rate)
{
return 0;
--
2.14.3

2017-12-01 21:54:05

by Jerome Brunet

Subject: [PATCH v5 03/10] clk: add clk_core_set_phase_nolock function

Create a core function for set_phase, as it is done for set_rate and
set_parent.

This rework is done to ease the integration of "protected" clock
functionality.

Acked-by: Linus Walleij <[email protected]>
Tested-by: Quentin Schulz <[email protected]>
Tested-by: Maxime Ripard <[email protected]>
Acked-by: Michael Turquette <[email protected]>
Signed-off-by: Jerome Brunet <[email protected]>
---
drivers/clk/clk.c | 33 +++++++++++++++++++++------------
1 file changed, 21 insertions(+), 12 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index e60b2a26b10b..7946a069ba2e 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -1966,6 +1966,25 @@ int clk_set_parent(struct clk *clk, struct clk *parent)
}
EXPORT_SYMBOL_GPL(clk_set_parent);

+static int clk_core_set_phase_nolock(struct clk_core *core, int degrees)
+{
+ int ret = -EINVAL;
+
+ lockdep_assert_held(&prepare_lock);
+
+ if (!core)
+ return 0;
+
+ trace_clk_set_phase(core, degrees);
+
+ if (core->ops->set_phase)
+ ret = core->ops->set_phase(core->hw, degrees);
+
+ trace_clk_set_phase_complete(core, degrees);
+
+ return ret;
+}
+
/**
* clk_set_phase - adjust the phase shift of a clock signal
* @clk: clock signal source
@@ -1988,7 +2007,7 @@ EXPORT_SYMBOL_GPL(clk_set_parent);
*/
int clk_set_phase(struct clk *clk, int degrees)
{
- int ret = -EINVAL;
+ int ret;

if (!clk)
return 0;
@@ -1999,17 +2018,7 @@ int clk_set_phase(struct clk *clk, int degrees)
degrees += 360;

clk_prepare_lock();
-
- trace_clk_set_phase(clk->core, degrees);
-
- if (clk->core->ops->set_phase)
- ret = clk->core->ops->set_phase(clk->core->hw, degrees);
-
- trace_clk_set_phase_complete(clk->core, degrees);
-
- if (!ret)
- clk->core->phase = degrees;
-
+ ret = clk_core_set_phase_nolock(clk->core, degrees);
clk_prepare_unlock();

return ret;
--
2.14.3

2017-12-01 21:54:03

by Jerome Brunet

Subject: [PATCH v5 04/10] clk: rework calls to round and determine rate callbacks

Rework the way the callbacks round_rate() and determine_rate() are called.
The goal is to do this at a single point and make it easier to add
conditions before calling them.

Because of this factorization, the rate returned by determine_rate() is also
checked against the min and max rate values.

This rework is done to ease the integration of "protected" clock
functionality.

Acked-by: Linus Walleij <[email protected]>
Tested-by: Quentin Schulz <[email protected]>
Tested-by: Maxime Ripard <[email protected]>
Acked-by: Michael Turquette <[email protected]>
Signed-off-by: Jerome Brunet <[email protected]>
---
drivers/clk/clk.c | 82 +++++++++++++++++++++++++++++++++++--------------------
1 file changed, 52 insertions(+), 30 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 7946a069ba2e..322d9ba7e5cd 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -905,10 +905,9 @@ static int clk_disable_unused(void)
}
late_initcall_sync(clk_disable_unused);

-static int clk_core_round_rate_nolock(struct clk_core *core,
- struct clk_rate_request *req)
+static int clk_core_determine_round_nolock(struct clk_core *core,
+ struct clk_rate_request *req)
{
- struct clk_core *parent;
long rate;

lockdep_assert_held(&prepare_lock);
@@ -916,15 +915,6 @@ static int clk_core_round_rate_nolock(struct clk_core *core,
if (!core)
return 0;

- parent = core->parent;
- if (parent) {
- req->best_parent_hw = parent->hw;
- req->best_parent_rate = parent->rate;
- } else {
- req->best_parent_hw = NULL;
- req->best_parent_rate = 0;
- }
-
if (core->ops->determine_rate) {
return core->ops->determine_rate(core->hw, req);
} else if (core->ops->round_rate) {
@@ -934,15 +924,58 @@ static int clk_core_round_rate_nolock(struct clk_core *core,
return rate;

req->rate = rate;
- } else if (core->flags & CLK_SET_RATE_PARENT) {
- return clk_core_round_rate_nolock(parent, req);
} else {
- req->rate = core->rate;
+ return -EINVAL;
}

return 0;
}

+static void clk_core_init_rate_req(struct clk_core * const core,
+ struct clk_rate_request *req)
+{
+ struct clk_core *parent;
+
+ if (WARN_ON(!core || !req))
+ return;
+
+ parent = core->parent;
+ if (parent) {
+ req->best_parent_hw = parent->hw;
+ req->best_parent_rate = parent->rate;
+ } else {
+ req->best_parent_hw = NULL;
+ req->best_parent_rate = 0;
+ }
+}
+
+static bool clk_core_can_round(struct clk_core * const core)
+{
+ if (core->ops->determine_rate || core->ops->round_rate)
+ return true;
+
+ return false;
+}
+
+static int clk_core_round_rate_nolock(struct clk_core *core,
+ struct clk_rate_request *req)
+{
+ lockdep_assert_held(&prepare_lock);
+
+ if (!core)
+ return 0;
+
+ clk_core_init_rate_req(core, req);
+
+ if (clk_core_can_round(core))
+ return clk_core_determine_round_nolock(core, req);
+ else if (core->flags & CLK_SET_RATE_PARENT)
+ return clk_core_round_rate_nolock(core->parent, req);
+
+ req->rate = core->rate;
+ return 0;
+}
+
/**
* __clk_determine_rate - get the closest rate actually supported by a clock
* @hw: determine the rate of this clock
@@ -1432,34 +1465,23 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
clk_core_get_boundaries(core, &min_rate, &max_rate);

/* find the closest rate and parent clk/rate */
- if (core->ops->determine_rate) {
+ if (clk_core_can_round(core)) {
struct clk_rate_request req;

req.rate = rate;
req.min_rate = min_rate;
req.max_rate = max_rate;
- if (parent) {
- req.best_parent_hw = parent->hw;
- req.best_parent_rate = parent->rate;
- } else {
- req.best_parent_hw = NULL;
- req.best_parent_rate = 0;
- }

- ret = core->ops->determine_rate(core->hw, &req);
+ clk_core_init_rate_req(core, &req);
+
+ ret = clk_core_determine_round_nolock(core, &req);
if (ret < 0)
return NULL;

best_parent_rate = req.best_parent_rate;
new_rate = req.rate;
parent = req.best_parent_hw ? req.best_parent_hw->core : NULL;
- } else if (core->ops->round_rate) {
- ret = core->ops->round_rate(core->hw, rate,
- &best_parent_rate);
- if (ret < 0)
- return NULL;

- new_rate = ret;
if (new_rate < min_rate || new_rate > max_rate)
return NULL;
} else if (!parent || !(core->flags & CLK_SET_RATE_PARENT)) {
--
2.14.3

2017-12-01 21:54:37

by Jerome Brunet

Subject: [PATCH v5 01/10] clk: fix incorrect usage of ENOSYS

ENOSYS is special and should only be used for an incorrect syscall number,
which is not the case here.

Reported by checkpatch.pl while working on clock protection.

Acked-by: Linus Walleij <[email protected]>
Tested-by: Quentin Schulz <[email protected]>
Tested-by: Maxime Ripard <[email protected]>
Acked-by: Michael Turquette <[email protected]>
Signed-off-by: Jerome Brunet <[email protected]>
---
drivers/clk/clk.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 647d056df88c..5fe9e63b15c6 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -1888,7 +1888,7 @@ static int clk_core_set_parent(struct clk_core *core, struct clk_core *parent)

/* verify ops for for multi-parent clks */
if ((core->num_parents > 1) && (!core->ops->set_parent)) {
- ret = -ENOSYS;
+ ret = -EPERM;
goto out;
}

--
2.14.3

2017-12-20 17:27:10

by Michael Turquette

Subject: Re: [PATCH v5 00/10] clk: implement clock rate protection mechanism

Quoting Jerome Brunet (2017-12-01 13:51:50)
> This Patchset is related the RFC [0] and the discussion around
> CLK_SET_RATE_GATE available here [1]
>
> This patchset introduce clock protection to the CCF core. This can then
> be used for:
>
> * Provide a way for a consumer to claim exclusivity over the rate control
> of a provider. Some clock consumers require that a clock rate must not
> deviate from its selected frequency. There can be several reasons for
> this, not least of which is that some hardware may not be able to
> handle or recover from a glitch caused by changing the clock rate while
> the hardware is in operation. For such HW, The ability to get exclusive
> control of a clock's rate, and release that exclusivity, could be seen
> as a fundamental clock rate control primitive. The exclusivity is not
> preemptible, so when claimed more than once, is rate is effectively
> locked.
>
> * Provide a similar functionality to providers themselves, fixing
> CLK_SET_RATE_GATE flag (enforce clock gating along the tree). While
> there might still be a few platforms relying the broken implementation,
> tests done has shown this change to be pretty safe.

Applied to clk-protect-rate, with the exception that I did not apply
"clk: fix CLK_SET_RATE_GATE with clock rate protection" as it breaks
qcom clk code.

Stephen, do you plan to fix up the qcom clock code so that the
SET_RATE_GATE improvement can go in?

Thanks,
Mike

>
> Changes since v4: [4]
> - Fixup documentation comments
> - Fix error on exclusive API when CCF is disabled
>
> Changes since v3: [3]
> - Reorder patches following Stephen comments
> - Add before/after examples to the cosmetic change
> - Remove loops around protection where possible
> - Rename the API from "protect" to "exclusive" which decribe what the
> code better
>
> Changes since v2: [2]
> - Fix issues reported by Adriana Reus (Thanks !)
> - Dropped patch "clk: move CLK_SET_RATE_GATE protection from prepare
> to enable". This was broken as the protect count, like the prepare_count
> should only be accessed under the prepare_lock.
>
> Changes since v1: [1]
> - Check if the rate would actually change before continuing, and bail-out
> early if not.
>
> Changes since RFC: [0]
> - s/clk_protect/clk_rate_protect
> - Request rework around core_nolock function
> - Add clk_set_rate_protect
> - Reword clk_rate_protect and clk_unprotect documentation
> - Add few comments to explain the code
> - Add fixes for CLK_SET_RATE_GATE
>
> This was tested with the audio use case mentioned in [1]
>
> [0]: https://lkml.kernel.org/r/[email protected]
> [1]: https://lkml.kernel.org/r/148942423440.82235.17188153691656009029@resonance
> [2]: https://lkml.kernel.org/r/[email protected]
> [3]: https://lkml.kernel.org/r/[email protected]
> [4]: https://lkml.kernel.org/r/[email protected]
>
> Jerome Brunet (10):
> clk: fix incorrect usage of ENOSYS
> clk: take the prepare lock out of clk_core_set_parent
> clk: add clk_core_set_phase_nolock function
> clk: rework calls to round and determine rate callbacks
> clk: use round rate to bail out early in set_rate
> clk: add clock protection mechanism to clk core
> clk: cosmetic changes to clk_summary debugfs entry
> clk: fix CLK_SET_RATE_GATE with clock rate protection
> clk: add clk_rate_exclusive api
> clk: fix set_rate_range when current rate is out of range
>
> drivers/clk/clk.c | 509 +++++++++++++++++++++++++++++++++++++------
> include/linux/clk-provider.h | 1 +
> include/linux/clk.h | 62 ++++++
> 3 files changed, 502 insertions(+), 70 deletions(-)
>
> --
> 2.14.3
>

2017-12-20 17:45:15

by Jerome Brunet

Subject: Re: [PATCH v5 00/10] clk: implement clock rate protection mechanism

On Tue, 2017-12-19 at 16:38 -0800, Michael Turquette wrote:
> Applied to clk-protect-rate,

Thx !

> with the exception that I did not apply
> "clk: fix CLK_SET_RATE_GATE with clock rate protection" as it breaks
> qcom clk code.

Here is a reminder of what I found at the time (so you don't have to dig in
your mailbox for it):

Regressions reported by Kci on the following platforms:
* qcom-apq8064-cm-qs600
* qcom-apq8064-ifc6410

It seems the problem is coming from the clock used by the mmc driver
(drivers/mmc/host/mmci.c).

The driver does the following sequence:
* get_clk
* prepare_enable
* get_rate
* set_rate
* ...

with the clock SDCx_clk (qcom_apq8064.dtsi:1037). This clock has the
CLK_SET_RATE_PARENT flag, so it will transmit the request to its parent.
The parent of this clock is SDCx_src, which has the CLK_SET_RATE_GATE flag.
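
Presumably, once CLK_SET_RATE_GATE is enforced through rate protection, the
sequence above ends up looking like this (simplified sketch, the exact
failure mode is not detailed here):

        clk = devm_clk_get(dev, NULL);  /* SDCx_clk */
        clk_prepare_enable(clk);        /* also prepares SDCx_src, which becomes
                                         * rate-protected due to CLK_SET_RATE_GATE */
        clk_get_rate(clk);
        clk_set_rate(clk, rate);        /* propagated to SDCx_src through
                                         * CLK_SET_RATE_PARENT, but SDCx_src can
                                         * no longer change rate while prepared */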

>
> Stephen, do you plan to fix up the qcom clock code so that the
> SET_RATE_GATE improvement can go in?
>
> Thanks,
> Mike

2017-12-22 02:15:24

by Stephen Boyd

Subject: Re: [PATCH v5 00/10] clk: implement clock rate protection mechanism

On 12/19, Michael Turquette wrote:
> Quoting Jerome Brunet (2017-12-01 13:51:50)
> > This Patchset is related the RFC [0] and the discussion around
> > CLK_SET_RATE_GATE available here [1]
> >
> > This patchset introduce clock protection to the CCF core. This can then
> > be used for:
> >
> > * Provide a way for a consumer to claim exclusivity over the rate control
> > of a provider. Some clock consumers require that a clock rate must not
> > deviate from its selected frequency. There can be several reasons for
> > this, not least of which is that some hardware may not be able to
> > handle or recover from a glitch caused by changing the clock rate while
> > the hardware is in operation. For such HW, The ability to get exclusive
> > control of a clock's rate, and release that exclusivity, could be seen
> > as a fundamental clock rate control primitive. The exclusivity is not
> > preemptible, so when claimed more than once, is rate is effectively
> > locked.
> >
> > * Provide a similar functionality to providers themselves, fixing
> > CLK_SET_RATE_GATE flag (enforce clock gating along the tree). While
> > there might still be a few platforms relying the broken implementation,
> > tests done has shown this change to be pretty safe.
>
> Applied to clk-protect-rate, with the exception that I did not apply
> "clk: fix CLK_SET_RATE_GATE with clock rate protection" as it breaks
> qcom clk code.
>
> Stephen, do you plan to fix up the qcom clock code so that the
> SET_RATE_GATE improvement can go in?
>

I started working on it a while back. Let's see if I can finish
it off this weekend.

--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

2018-01-29 09:24:57

by Jerome Brunet

Subject: Re: [PATCH v5 00/10] clk: implement clock rate protection mechanism

On Thu, 2017-12-21 at 18:15 -0800, Stephen Boyd wrote:
> On 12/19, Michael Turquette wrote:
> > Quoting Jerome Brunet (2017-12-01 13:51:50)
> > > This Patchset is related the RFC [0] and the discussion around
> > > CLK_SET_RATE_GATE available here [1]
> > >
> > > This patchset introduce clock protection to the CCF core. This can then
> > > be used for:
> > >
> > > * Provide a way for a consumer to claim exclusivity over the rate control
> > > of a provider. Some clock consumers require that a clock rate must not
> > > deviate from its selected frequency. There can be several reasons for
> > > this, not least of which is that some hardware may not be able to
> > > handle or recover from a glitch caused by changing the clock rate while
> > > the hardware is in operation. For such HW, The ability to get exclusive
> > > control of a clock's rate, and release that exclusivity, could be seen
> > > as a fundamental clock rate control primitive. The exclusivity is not
> > > preemptible, so when claimed more than once, is rate is effectively
> > > locked.
> > >
> > > * Provide a similar functionality to providers themselves, fixing
> > > CLK_SET_RATE_GATE flag (enforce clock gating along the tree). While
> > > there might still be a few platforms relying the broken implementation,
> > > tests done has shown this change to be pretty safe.
> >
> > Applied to clk-protect-rate, with the exception that I did not apply
> > "clk: fix CLK_SET_RATE_GATE with clock rate protection" as it breaks
> > qcom clk code.
> >
> > Stephen, do you plan to fix up the qcom clock code so that the
> > SET_RATE_GATE improvement can go in?
> >
>
> I started working on it a while back. Let's see if I can finish
> it off this weekend.
>

Hi Stephen,

Have you been able to find something to fix the qcom code regarding this issue?

Cheers
Jerome

2018-02-01 17:44:27

by Stephen Boyd

Subject: Re: [PATCH v5 00/10] clk: implement clock rate protection mechanism

On 01/29, Jerome Brunet wrote:
> On Thu, 2017-12-21 at 18:15 -0800, Stephen Boyd wrote:
> > On 12/19, Michael Turquette wrote:
> > > Quoting Jerome Brunet (2017-12-01 13:51:50)
> > > > This Patchset is related the RFC [0] and the discussion around
> > > > CLK_SET_RATE_GATE available here [1]
> > > >
> > > > This patchset introduce clock protection to the CCF core. This can then
> > > > be used for:
> > > >
> > > > * Provide a way for a consumer to claim exclusivity over the rate control
> > > > of a provider. Some clock consumers require that a clock rate must not
> > > > deviate from its selected frequency. There can be several reasons for
> > > > this, not least of which is that some hardware may not be able to
> > > > handle or recover from a glitch caused by changing the clock rate while
> > > > the hardware is in operation. For such HW, The ability to get exclusive
> > > > control of a clock's rate, and release that exclusivity, could be seen
> > > > as a fundamental clock rate control primitive. The exclusivity is not
> > > > preemptible, so when claimed more than once, is rate is effectively
> > > > locked.
> > > >
> > > > * Provide a similar functionality to providers themselves, fixing
> > > > CLK_SET_RATE_GATE flag (enforce clock gating along the tree). While
> > > > there might still be a few platforms relying the broken implementation,
> > > > tests done has shown this change to be pretty safe.
> > >
> > > Applied to clk-protect-rate, with the exception that I did not apply
> > > "clk: fix CLK_SET_RATE_GATE with clock rate protection" as it breaks
> > > qcom clk code.
> > >
> > > Stephen, do you plan to fix up the qcom clock code so that the
> > > SET_RATE_GATE improvement can go in?
> > >
> >
> > I started working on it a while back. Let's see if I can finish
> > it off this weekend.
> >
>
> Hi Stephen,
>
> Have you been able find something to fix the qcom code regarding this issue ?
>

This is what I have. I'm unhappy with a few things. First, I made
a spinlock for each clk, which is overkill. Most likely, just a
single spinlock is needed per clk-controller device. Second, I
haven't finished off the branch/gate part, so gating/ungating of
branches needs to be locked as well to prevent branches from
turning on while rates change. And finally, the 'branches' list is
duplicating a bunch of information about the child clks of an
RCG, so it feels like we need a core framework API to enable and
disable clks forcibly while remembering what is enabled/disabled
or at least to walk the clk tree and call some function.

The spinlock per clk-controller is duplicating the regmap lock we
already have, so we may want a regmap API to grab the lock, and
then another regmap API to do reads/writes without grabbing the
lock, and then finally release the lock with a regmap unlock API.
This part is mostly an optimization, but it would be nice to have
so that multiple writes could be done in sequence. This way, the
RCG code could do the special locking sequence and the branch
code could do the fire and forget single bit update.

----8<----

drivers/clk/qcom/clk-rcg.c | 117 +++++++++++++++++++++++++++++++++++-----
drivers/clk/qcom/clk-rcg.h | 20 ++++++-
drivers/clk/qcom/gcc-ipq806x.c | 57 +++++++++++---------
drivers/clk/qcom/gcc-mdm9615.c | 41 +++++++-------
drivers/clk/qcom/gcc-msm8660.c | 72 ++++++++++++-------------
drivers/clk/qcom/gcc-msm8960.c | 83 ++++++++++++++--------------
drivers/clk/qcom/lcc-ipq806x.c | 6 +--
drivers/clk/qcom/lcc-mdm9615.c | 8 +--
drivers/clk/qcom/lcc-msm8960.c | 8 +--
drivers/clk/qcom/mmcc-msm8960.c | 20 +++++++
10 files changed, 282 insertions(+), 150 deletions(-)

diff --git a/drivers/clk/qcom/clk-rcg.c b/drivers/clk/qcom/clk-rcg.c
index 67ce7c146a6a..7748187cb21c 100644
--- a/drivers/clk/qcom/clk-rcg.c
+++ b/drivers/clk/qcom/clk-rcg.c
@@ -99,7 +99,7 @@ static u8 clk_dyn_rcg_get_parent(struct clk_hw *hw)
return 0;
}

-static int clk_rcg_set_parent(struct clk_hw *hw, u8 index)
+static int clk_rcg_set_parent_nolock(struct clk_hw *hw, u8 index)
{
struct clk_rcg *rcg = to_clk_rcg(hw);
u32 ns;
@@ -111,6 +111,50 @@ static int clk_rcg_set_parent(struct clk_hw *hw, u8 index)
return 0;
}

+static void clk_rcg_force_off(struct clk_rcg *rcg)
+{
+ struct rcg_branch_map *branch = rcg->branches;
+ const struct rcg_branch_map *end = branch + rcg->num_branches;
+
+ while (branch < end) {
+ regmap_update_bits_check(rcg->clkr.regmap, branch->reg,
+ branch->mask, 0, &branch->was_enabled);
+ branch++;
+ }
+}
+
+static void clk_rcg_force_on(struct clk_rcg *rcg)
+{
+ struct rcg_branch_map *branch = rcg->branches;
+ const struct rcg_branch_map *end = branch + rcg->num_branches;
+
+ while (branch < end) {
+ if (branch->was_enabled) {
+ regmap_update_bits(rcg->clkr.regmap, branch->reg,
+ branch->mask, branch->mask);
+ branch->was_enabled = false;
+ }
+ branch++;
+ }
+}
+
+static int clk_rcg_set_parent(struct clk_hw *hw, u8 index)
+{
+ struct clk_rcg *rcg = to_clk_rcg(hw);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&rcg->lock, flags);
+
+ clk_rcg_force_off(rcg);
+ ret = clk_rcg_set_parent_nolock(hw, index);
+ clk_rcg_force_on(rcg);
+
+ spin_unlock_irqrestore(&rcg->lock, flags);
+
+ return ret;
+}
+
static u32 md_to_m(struct mn *mn, u32 md)
{
md >>= mn->m_val_shift;
@@ -479,7 +523,8 @@ static int clk_rcg_bypass_determine_rate(struct clk_hw *hw,
return 0;
}

-static int __clk_rcg_set_rate(struct clk_rcg *rcg, const struct freq_tbl *f)
+static int
+clk_rcg_set_rate_nolock(struct clk_rcg *rcg, const struct freq_tbl *f)
{
u32 ns, md, ctl;
struct mn *mn = &rcg->mn;
@@ -521,6 +566,22 @@ static int __clk_rcg_set_rate(struct clk_rcg *rcg, const struct freq_tbl *f)
return 0;
}

+static int __clk_rcg_set_rate(struct clk_rcg *rcg, const struct freq_tbl *f)
+{
+ int ret;
+ unsigned long flags;
+
+ spin_lock_irqsave(&rcg->lock, flags);
+
+ clk_rcg_force_off(rcg);
+ ret = clk_rcg_set_rate_nolock(rcg, f);
+ clk_rcg_force_on(rcg);
+
+ spin_unlock_irqrestore(&rcg->lock, flags);
+
+ return ret;
+}
+
static int clk_rcg_set_rate(struct clk_hw *hw, unsigned long rate,
unsigned long parent_rate)
{
@@ -763,7 +824,7 @@ static int clk_rcg_lcc_set_rate(struct clk_hw *hw, unsigned long rate,

/* Switch to XO to avoid glitches */
regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, 0);
- ret = __clk_rcg_set_rate(rcg, f);
+ ret = clk_rcg_set_rate_nolock(rcg, f);
/* Switch back to M/N if it's clocking */
if (__clk_is_enabled(hw->clk))
regmap_update_bits(rcg->clkr.regmap, rcg->ns_reg, gfm, gfm);
@@ -813,9 +874,37 @@ static int clk_dyn_rcg_set_rate_and_parent(struct clk_hw *hw,
return __clk_dyn_rcg_set_rate(hw, rate);
}

+/*
+ * These enable/disable functions grab the lock to synchronize with
+ * clk_rcg_set_rate() and clk_rcg_set_parent() disabling all branches and the
+ * rcg itself.
+ */
+static int clk_rcg_enable(struct clk_hw *hw)
+{
+ struct clk_rcg *rcg = to_clk_rcg(hw);
+ int ret;
+ unsigned long flags;
+
+ spin_lock_irqsave(&rcg->lock, flags);
+ ret = clk_enable_regmap(hw);
+ spin_unlock_irqrestore(&rcg->lock, flags);
+
+ return ret;
+}
+
+static void clk_rcg_disable(struct clk_hw *hw)
+{
+ struct clk_rcg *rcg = to_clk_rcg(hw);
+ unsigned long flags;
+
+ spin_lock_irqsave(&rcg->lock, flags);
+ clk_disable_regmap(hw);
+ spin_unlock_irqrestore(&rcg->lock, flags);
+}
+
const struct clk_ops clk_rcg_ops = {
- .enable = clk_enable_regmap,
- .disable = clk_disable_regmap,
+ .enable = clk_rcg_enable,
+ .disable = clk_rcg_disable,
.get_parent = clk_rcg_get_parent,
.set_parent = clk_rcg_set_parent,
.recalc_rate = clk_rcg_recalc_rate,
@@ -825,8 +914,8 @@ const struct clk_ops clk_rcg_ops = {
EXPORT_SYMBOL_GPL(clk_rcg_ops);

const struct clk_ops clk_rcg_bypass_ops = {
- .enable = clk_enable_regmap,
- .disable = clk_disable_regmap,
+ .enable = clk_rcg_enable,
+ .disable = clk_rcg_disable,
.get_parent = clk_rcg_get_parent,
.set_parent = clk_rcg_set_parent,
.recalc_rate = clk_rcg_recalc_rate,
@@ -836,8 +925,8 @@ const struct clk_ops clk_rcg_bypass_ops = {
EXPORT_SYMBOL_GPL(clk_rcg_bypass_ops);

const struct clk_ops clk_rcg_bypass2_ops = {
- .enable = clk_enable_regmap,
- .disable = clk_disable_regmap,
+ .enable = clk_rcg_enable,
+ .disable = clk_rcg_disable,
.get_parent = clk_rcg_get_parent,
.set_parent = clk_rcg_set_parent,
.recalc_rate = clk_rcg_recalc_rate,
@@ -848,8 +937,8 @@ const struct clk_ops clk_rcg_bypass2_ops = {
EXPORT_SYMBOL_GPL(clk_rcg_bypass2_ops);

const struct clk_ops clk_rcg_pixel_ops = {
- .enable = clk_enable_regmap,
- .disable = clk_disable_regmap,
+ .enable = clk_rcg_enable,
+ .disable = clk_rcg_disable,
.get_parent = clk_rcg_get_parent,
.set_parent = clk_rcg_set_parent,
.recalc_rate = clk_rcg_recalc_rate,
@@ -860,8 +949,8 @@ const struct clk_ops clk_rcg_pixel_ops = {
EXPORT_SYMBOL_GPL(clk_rcg_pixel_ops);

const struct clk_ops clk_rcg_esc_ops = {
- .enable = clk_enable_regmap,
- .disable = clk_disable_regmap,
+ .enable = clk_rcg_enable,
+ .disable = clk_rcg_disable,
.get_parent = clk_rcg_get_parent,
.set_parent = clk_rcg_set_parent,
.recalc_rate = clk_rcg_recalc_rate,
@@ -875,7 +964,7 @@ const struct clk_ops clk_rcg_lcc_ops = {
.enable = clk_rcg_lcc_enable,
.disable = clk_rcg_lcc_disable,
.get_parent = clk_rcg_get_parent,
- .set_parent = clk_rcg_set_parent,
+ .set_parent = clk_rcg_set_parent_nolock,
.recalc_rate = clk_rcg_recalc_rate,
.determine_rate = clk_rcg_determine_rate,
.set_rate = clk_rcg_lcc_set_rate,
diff --git a/drivers/clk/qcom/clk-rcg.h b/drivers/clk/qcom/clk-rcg.h
index a2495457e564..3d9c502f91f5 100644
--- a/drivers/clk/qcom/clk-rcg.h
+++ b/drivers/clk/qcom/clk-rcg.h
@@ -15,6 +15,7 @@
#define __QCOM_CLK_RCG_H__

#include <linux/clk-provider.h>
+#include <linux/spinlock.h>
#include "clk-regmap.h"

struct freq_tbl {
@@ -78,6 +79,18 @@ struct src_sel {
const struct parent_map *parent_map;
};

+/**
+ * struct rcg_branch_map - branches under rcg that need to be force off/on
+ * @reg: Address of branch control
+ * @mask: Enable mask to enable branch
+ * @was_enabled: Indicates if the branch was enabled before forcing off
+ */
+struct rcg_branch_map {
+ u32 reg;
+ u32 mask;
+ bool was_enabled;
+};
+
/**
* struct clk_rcg - root clock generator
*
@@ -88,8 +101,8 @@ struct src_sel {
* @s: source selector
* @freq_tbl: frequency table
* @clkr: regmap clock handle
- * @lock: register lock
- *
+ * @lock: register lock for this RCG and its children to protect set_rate/parent
+ * @branches: list of registers to turn off when changing rate/parent
*/
struct clk_rcg {
u32 ns_reg;
@@ -100,6 +113,9 @@ struct clk_rcg {
struct src_sel s;

const struct freq_tbl *freq_tbl;
+ spinlock_t lock;
+ struct rcg_branch_map *branches;
+ unsigned int num_branches;

struct clk_regmap clkr;
};
diff --git a/drivers/clk/qcom/gcc-ipq806x.c b/drivers/clk/qcom/gcc-ipq806x.c
index 28eb200d0f1e..644672bc07f6 100644
--- a/drivers/clk/qcom/gcc-ipq806x.c
+++ b/drivers/clk/qcom/gcc-ipq806x.c
@@ -266,6 +266,8 @@ static struct freq_tbl clk_tbl_gsbi_uart[] = {
{ }
};

+static struct rcg_branch_map gsbi1_uart_branch = { 0x29d4, BIT(9) };
+
static struct clk_rcg gsbi1_uart_src = {
.ns_reg = 0x29d4,
.md_reg = 0x29d0,
@@ -286,6 +288,9 @@ static struct clk_rcg gsbi1_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi1_uart_src.lock),
+ .branches = &gsbi1_uart_branch,
+ .num_branches = 1,
.clkr = {
.enable_reg = 0x29d4,
.enable_mask = BIT(11),
@@ -294,7 +299,6 @@ static struct clk_rcg gsbi1_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -337,6 +341,7 @@ static struct clk_rcg gsbi2_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi2_uart_src.lock),
.clkr = {
.enable_reg = 0x29f4,
.enable_mask = BIT(11),
@@ -345,7 +350,6 @@ static struct clk_rcg gsbi2_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -388,6 +392,7 @@ static struct clk_rcg gsbi4_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi4_uart_src.lock),
.clkr = {
.enable_reg = 0x2a34,
.enable_mask = BIT(11),
@@ -396,7 +401,6 @@ static struct clk_rcg gsbi4_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -439,6 +443,7 @@ static struct clk_rcg gsbi5_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi5_uart_src.lock),
.clkr = {
.enable_reg = 0x2a54,
.enable_mask = BIT(11),
@@ -447,7 +452,6 @@ static struct clk_rcg gsbi5_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -490,6 +494,7 @@ static struct clk_rcg gsbi6_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi6_uart_src.lock),
.clkr = {
.enable_reg = 0x2a74,
.enable_mask = BIT(11),
@@ -498,7 +503,6 @@ static struct clk_rcg gsbi6_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -541,6 +545,7 @@ static struct clk_rcg gsbi7_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi7_uart_src.lock),
.clkr = {
.enable_reg = 0x2a94,
.enable_mask = BIT(11),
@@ -549,7 +554,6 @@ static struct clk_rcg gsbi7_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -605,6 +609,7 @@ static struct clk_rcg gsbi1_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi1_qup_src.lock),
.clkr = {
.enable_reg = 0x29cc,
.enable_mask = BIT(11),
@@ -613,7 +618,6 @@ static struct clk_rcg gsbi1_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -654,6 +658,7 @@ static struct clk_rcg gsbi2_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi2_qup_src.lock),
.clkr = {
.enable_reg = 0x29ec,
.enable_mask = BIT(11),
@@ -662,7 +667,6 @@ static struct clk_rcg gsbi2_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -703,6 +707,7 @@ static struct clk_rcg gsbi4_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi4_qup_src.lock),
.clkr = {
.enable_reg = 0x2a2c,
.enable_mask = BIT(11),
@@ -711,7 +716,6 @@ static struct clk_rcg gsbi4_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -752,6 +756,7 @@ static struct clk_rcg gsbi5_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi5_qup_src.lock),
.clkr = {
.enable_reg = 0x2a4c,
.enable_mask = BIT(11),
@@ -760,7 +765,6 @@ static struct clk_rcg gsbi5_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -801,6 +805,7 @@ static struct clk_rcg gsbi6_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi6_qup_src.lock),
.clkr = {
.enable_reg = 0x2a6c,
.enable_mask = BIT(11),
@@ -809,7 +814,6 @@ static struct clk_rcg gsbi6_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -850,6 +854,7 @@ static struct clk_rcg gsbi7_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi7_qup_src.lock),
.clkr = {
.enable_reg = 0x2a8c,
.enable_mask = BIT(11),
@@ -858,7 +863,6 @@ static struct clk_rcg gsbi7_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1000,6 +1004,7 @@ static struct clk_rcg gp0_src = {
.parent_map = gcc_pxo_pll8_cxo_map,
},
.freq_tbl = clk_tbl_gp,
+ .lock = __SPIN_LOCK_UNLOCKED(&gp0_src.lock),
.clkr = {
.enable_reg = 0x2d24,
.enable_mask = BIT(11),
@@ -1008,7 +1013,6 @@ static struct clk_rcg gp0_src = {
.parent_names = gcc_pxo_pll8_cxo,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
}
};
@@ -1049,6 +1053,7 @@ static struct clk_rcg gp1_src = {
.parent_map = gcc_pxo_pll8_cxo_map,
},
.freq_tbl = clk_tbl_gp,
+ .lock = __SPIN_LOCK_UNLOCKED(&gp1_src.lock),
.clkr = {
.enable_reg = 0x2d44,
.enable_mask = BIT(11),
@@ -1057,7 +1062,6 @@ static struct clk_rcg gp1_src = {
.parent_names = gcc_pxo_pll8_cxo,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1098,6 +1102,7 @@ static struct clk_rcg gp2_src = {
.parent_map = gcc_pxo_pll8_cxo_map,
},
.freq_tbl = clk_tbl_gp,
+ .lock = __SPIN_LOCK_UNLOCKED(&gp2_src.lock),
.clkr = {
.enable_reg = 0x2d64,
.enable_mask = BIT(11),
@@ -1106,7 +1111,6 @@ static struct clk_rcg gp2_src = {
.parent_names = gcc_pxo_pll8_cxo,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1152,6 +1156,7 @@ static struct clk_rcg prng_src = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_map,
},
+ .lock = __SPIN_LOCK_UNLOCKED(&prng_src.lock),
.clkr = {
.hw.init = &(struct clk_init_data){
.name = "prng_src",
@@ -1212,6 +1217,7 @@ static struct clk_rcg sdc1_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc1_src.lock),
.clkr = {
.enable_reg = 0x282c,
.enable_mask = BIT(11),
@@ -1220,7 +1226,6 @@ static struct clk_rcg sdc1_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1261,6 +1266,7 @@ static struct clk_rcg sdc3_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc3_src.lock),
.clkr = {
.enable_reg = 0x286c,
.enable_mask = BIT(11),
@@ -1269,7 +1275,6 @@ static struct clk_rcg sdc3_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1345,6 +1350,7 @@ static struct clk_rcg tsif_ref_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_tsif_ref,
+ .lock = __SPIN_LOCK_UNLOCKED(&tsif_ref_src.lock),
.clkr = {
.enable_reg = 0x2710,
.enable_mask = BIT(11),
@@ -1353,7 +1359,6 @@ static struct clk_rcg tsif_ref_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1508,6 +1513,7 @@ static struct clk_rcg pcie_ref_src = {
.parent_map = gcc_pxo_pll3_map,
},
.freq_tbl = clk_tbl_pcie_ref,
+ .lock = __SPIN_LOCK_UNLOCKED(&pcie_ref_src.lock),
.clkr = {
.enable_reg = 0x3860,
.enable_mask = BIT(11),
@@ -1516,7 +1522,6 @@ static struct clk_rcg pcie_ref_src = {
.parent_names = gcc_pxo_pll3,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -1600,6 +1605,7 @@ static struct clk_rcg pcie1_ref_src = {
.parent_map = gcc_pxo_pll3_map,
},
.freq_tbl = clk_tbl_pcie_ref,
+ .lock = __SPIN_LOCK_UNLOCKED(&pcie1_ref_src.lock),
.clkr = {
.enable_reg = 0x3aa0,
.enable_mask = BIT(11),
@@ -1608,7 +1614,6 @@ static struct clk_rcg pcie1_ref_src = {
.parent_names = gcc_pxo_pll3,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -1692,6 +1697,7 @@ static struct clk_rcg pcie2_ref_src = {
.parent_map = gcc_pxo_pll3_map,
},
.freq_tbl = clk_tbl_pcie_ref,
+ .lock = __SPIN_LOCK_UNLOCKED(&pcie2_ref_src.lock),
.clkr = {
.enable_reg = 0x3ae0,
.enable_mask = BIT(11),
@@ -1700,7 +1706,6 @@ static struct clk_rcg pcie2_ref_src = {
.parent_names = gcc_pxo_pll3,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -1789,6 +1794,7 @@ static struct clk_rcg sata_ref_src = {
.parent_map = gcc_pxo_pll3_sata_map,
},
.freq_tbl = clk_tbl_sata_ref,
+ .lock = __SPIN_LOCK_UNLOCKED(&sata_ref_src.lock),
.clkr = {
.enable_reg = 0x2c08,
.enable_mask = BIT(7),
@@ -1797,7 +1803,6 @@ static struct clk_rcg sata_ref_src = {
.parent_names = gcc_pxo_pll3,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -1926,6 +1931,7 @@ static struct clk_rcg usb30_master_clk_src = {
.parent_map = gcc_pxo_pll8_pll0,
},
.freq_tbl = clk_tbl_usb30_master,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb30_master_clk_src.lock),
.clkr = {
.enable_reg = 0x3b2c,
.enable_mask = BIT(11),
@@ -1934,7 +1940,6 @@ static struct clk_rcg usb30_master_clk_src = {
.parent_names = gcc_pxo_pll8_pll0_map,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -1996,6 +2001,7 @@ static struct clk_rcg usb30_utmi_clk = {
.parent_map = gcc_pxo_pll8_pll0,
},
.freq_tbl = clk_tbl_usb30_utmi,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb30_utmi_clk.lock),
.clkr = {
.enable_reg = 0x3b44,
.enable_mask = BIT(11),
@@ -2004,7 +2010,6 @@ static struct clk_rcg usb30_utmi_clk = {
.parent_names = gcc_pxo_pll8_pll0_map,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -2066,6 +2071,7 @@ static struct clk_rcg usb_hs1_xcvr_clk_src = {
.parent_map = gcc_pxo_pll8_pll0,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_hs1_xcvr_clk_src.lock),
.clkr = {
.enable_reg = 0x2968,
.enable_mask = BIT(11),
@@ -2074,7 +2080,6 @@ static struct clk_rcg usb_hs1_xcvr_clk_src = {
.parent_names = gcc_pxo_pll8_pll0_map,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -2130,6 +2135,7 @@ static struct clk_rcg usb_fs1_xcvr_clk_src = {
.parent_map = gcc_pxo_pll8_pll0,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_fs1_xcvr_clk_src.lock),
.clkr = {
.enable_reg = 0x2968,
.enable_mask = BIT(11),
@@ -2138,7 +2144,6 @@ static struct clk_rcg usb_fs1_xcvr_clk_src = {
.parent_names = gcc_pxo_pll8_pll0_map,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
diff --git a/drivers/clk/qcom/gcc-mdm9615.c b/drivers/clk/qcom/gcc-mdm9615.c
index b99dd406e907..be61d6cfe7e0 100644
--- a/drivers/clk/qcom/gcc-mdm9615.c
+++ b/drivers/clk/qcom/gcc-mdm9615.c
@@ -209,6 +209,7 @@ static struct clk_rcg gsbi1_uart_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi1_uart_src.lock),
.clkr = {
.enable_reg = 0x29d4,
.enable_mask = BIT(11),
@@ -217,7 +218,6 @@ static struct clk_rcg gsbi1_uart_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -260,6 +260,7 @@ static struct clk_rcg gsbi2_uart_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi2_uart_src.lock),
.clkr = {
.enable_reg = 0x29f4,
.enable_mask = BIT(11),
@@ -268,7 +269,6 @@ static struct clk_rcg gsbi2_uart_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -311,6 +311,7 @@ static struct clk_rcg gsbi3_uart_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi3_uart_src.lock),
.clkr = {
.enable_reg = 0x2a14,
.enable_mask = BIT(11),
@@ -319,7 +320,6 @@ static struct clk_rcg gsbi3_uart_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -362,6 +362,7 @@ static struct clk_rcg gsbi4_uart_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi4_uart_src.lock),
.clkr = {
.enable_reg = 0x2a34,
.enable_mask = BIT(11),
@@ -370,7 +371,6 @@ static struct clk_rcg gsbi4_uart_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -413,6 +413,7 @@ static struct clk_rcg gsbi5_uart_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi5_uart_src.lock),
.clkr = {
.enable_reg = 0x2a54,
.enable_mask = BIT(11),
@@ -421,7 +422,6 @@ static struct clk_rcg gsbi5_uart_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -476,6 +476,7 @@ static struct clk_rcg gsbi1_qup_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi1_qup_src.lock),
.clkr = {
.enable_reg = 0x29cc,
.enable_mask = BIT(11),
@@ -484,7 +485,6 @@ static struct clk_rcg gsbi1_qup_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -525,6 +525,7 @@ static struct clk_rcg gsbi2_qup_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi2_qup_src.lock),
.clkr = {
.enable_reg = 0x29ec,
.enable_mask = BIT(11),
@@ -533,7 +534,6 @@ static struct clk_rcg gsbi2_qup_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -574,6 +574,7 @@ static struct clk_rcg gsbi3_qup_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi3_qup_src.lock),
.clkr = {
.enable_reg = 0x2a0c,
.enable_mask = BIT(11),
@@ -582,7 +583,6 @@ static struct clk_rcg gsbi3_qup_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -623,6 +623,7 @@ static struct clk_rcg gsbi4_qup_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi4_qup_src.lock),
.clkr = {
.enable_reg = 0x2a2c,
.enable_mask = BIT(11),
@@ -631,7 +632,6 @@ static struct clk_rcg gsbi4_qup_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -672,6 +672,7 @@ static struct clk_rcg gsbi5_qup_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi5_qup_src.lock),
.clkr = {
.enable_reg = 0x2a4c,
.enable_mask = BIT(11),
@@ -680,7 +681,6 @@ static struct clk_rcg gsbi5_qup_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -727,6 +727,7 @@ static struct clk_rcg gp0_src = {
.parent_map = gcc_cxo_map,
},
.freq_tbl = clk_tbl_gp,
+ .lock = __SPIN_LOCK_UNLOCKED(&gp0_src.lock),
.clkr = {
.enable_reg = 0x2d24,
.enable_mask = BIT(11),
@@ -735,7 +736,6 @@ static struct clk_rcg gp0_src = {
.parent_names = gcc_cxo,
.num_parents = 1,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
}
};
@@ -776,6 +776,7 @@ static struct clk_rcg gp1_src = {
.parent_map = gcc_cxo_map,
},
.freq_tbl = clk_tbl_gp,
+ .lock = __SPIN_LOCK_UNLOCKED(&gp1_src.lock),
.clkr = {
.enable_reg = 0x2d44,
.enable_mask = BIT(11),
@@ -784,7 +785,6 @@ static struct clk_rcg gp1_src = {
.parent_names = gcc_cxo,
.num_parents = 1,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -825,6 +825,7 @@ static struct clk_rcg gp2_src = {
.parent_map = gcc_cxo_map,
},
.freq_tbl = clk_tbl_gp,
+ .lock = __SPIN_LOCK_UNLOCKED(&gp2_src.lock),
.clkr = {
.enable_reg = 0x2d64,
.enable_mask = BIT(11),
@@ -833,7 +834,6 @@ static struct clk_rcg gp2_src = {
.parent_names = gcc_cxo,
.num_parents = 1,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -879,6 +879,7 @@ static struct clk_rcg prng_src = {
.src_sel_shift = 0,
.parent_map = gcc_cxo_pll8_map,
},
+ .lock = __SPIN_LOCK_UNLOCKED(&prng_src.lock),
.clkr = {
.hw.init = &(struct clk_init_data){
.name = "prng_src",
@@ -939,6 +940,7 @@ static struct clk_rcg sdc1_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc1_src.lock),
.clkr = {
.enable_reg = 0x282c,
.enable_mask = BIT(11),
@@ -947,7 +949,6 @@ static struct clk_rcg sdc1_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -988,6 +989,7 @@ static struct clk_rcg sdc2_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc2_src.lock),
.clkr = {
.enable_reg = 0x284c,
.enable_mask = BIT(11),
@@ -996,7 +998,6 @@ static struct clk_rcg sdc2_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1042,6 +1043,7 @@ static struct clk_rcg usb_hs1_xcvr_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_hs1_xcvr_src.lock),
.clkr = {
.enable_reg = 0x290c,
.enable_mask = BIT(11),
@@ -1050,7 +1052,6 @@ static struct clk_rcg usb_hs1_xcvr_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1091,6 +1092,7 @@ static struct clk_rcg usb_hsic_xcvr_fs_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_hsic_xcvr_fs_src.lock),
.clkr = {
.enable_reg = 0x2928,
.enable_mask = BIT(11),
@@ -1099,7 +1101,6 @@ static struct clk_rcg usb_hsic_xcvr_fs_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1146,6 +1147,7 @@ static struct clk_rcg usb_hs1_system_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_usb_hs1_system,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_hs1_system_src.lock),
.clkr = {
.enable_reg = 0x36a4,
.enable_mask = BIT(11),
@@ -1154,7 +1156,6 @@ static struct clk_rcg usb_hs1_system_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1201,6 +1202,7 @@ static struct clk_rcg usb_hsic_system_src = {
.parent_map = gcc_cxo_pll8_map,
},
.freq_tbl = clk_tbl_usb_hsic_system,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_hsic_system_src.lock),
.clkr = {
.enable_reg = 0x2b58,
.enable_mask = BIT(11),
@@ -1209,7 +1211,6 @@ static struct clk_rcg usb_hsic_system_src = {
.parent_names = gcc_cxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1256,6 +1257,7 @@ static struct clk_rcg usb_hsic_hsic_src = {
.parent_map = gcc_cxo_pll14_map,
},
.freq_tbl = clk_tbl_usb_hsic_hsic,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_hsic_hsic_src.lock),
.clkr = {
.enable_reg = 0x2b50,
.enable_mask = BIT(11),
@@ -1264,7 +1266,6 @@ static struct clk_rcg usb_hsic_hsic_src = {
.parent_names = gcc_cxo_pll14,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
diff --git a/drivers/clk/qcom/gcc-msm8660.c b/drivers/clk/qcom/gcc-msm8660.c
index c347a0d44bc8..05157636a8da 100644
--- a/drivers/clk/qcom/gcc-msm8660.c
+++ b/drivers/clk/qcom/gcc-msm8660.c
@@ -125,6 +125,7 @@ static struct clk_rcg gsbi1_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi1_uart_src.lock),
.clkr = {
.enable_reg = 0x29d4,
.enable_mask = BIT(11),
@@ -133,7 +134,6 @@ static struct clk_rcg gsbi1_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -176,6 +176,7 @@ static struct clk_rcg gsbi2_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi2_uart_src.lock),
.clkr = {
.enable_reg = 0x29f4,
.enable_mask = BIT(11),
@@ -184,7 +185,6 @@ static struct clk_rcg gsbi2_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -227,6 +227,7 @@ static struct clk_rcg gsbi3_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi3_uart_src.lock),
.clkr = {
.enable_reg = 0x2a14,
.enable_mask = BIT(11),
@@ -235,7 +236,6 @@ static struct clk_rcg gsbi3_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -278,6 +278,7 @@ static struct clk_rcg gsbi4_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi4_uart_src.lock),
.clkr = {
.enable_reg = 0x2a34,
.enable_mask = BIT(11),
@@ -286,7 +287,6 @@ static struct clk_rcg gsbi4_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -329,6 +329,7 @@ static struct clk_rcg gsbi5_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi5_uart_src.lock),
.clkr = {
.enable_reg = 0x2a54,
.enable_mask = BIT(11),
@@ -337,7 +338,6 @@ static struct clk_rcg gsbi5_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -380,6 +380,7 @@ static struct clk_rcg gsbi6_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi6_uart_src.lock),
.clkr = {
.enable_reg = 0x2a74,
.enable_mask = BIT(11),
@@ -388,7 +389,6 @@ static struct clk_rcg gsbi6_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -431,6 +431,7 @@ static struct clk_rcg gsbi7_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi7_uart_src.lock),
.clkr = {
.enable_reg = 0x2a94,
.enable_mask = BIT(11),
@@ -439,7 +440,6 @@ static struct clk_rcg gsbi7_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -482,6 +482,7 @@ static struct clk_rcg gsbi8_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi8_uart_src.lock),
.clkr = {
.enable_reg = 0x2ab4,
.enable_mask = BIT(11),
@@ -490,7 +491,6 @@ static struct clk_rcg gsbi8_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -531,6 +531,7 @@ static struct clk_rcg gsbi9_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi9_uart_src.lock),
.clkr = {
.enable_reg = 0x2ad4,
.enable_mask = BIT(11),
@@ -539,7 +540,6 @@ static struct clk_rcg gsbi9_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -580,6 +580,7 @@ static struct clk_rcg gsbi10_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi10_uart_src.lock),
.clkr = {
.enable_reg = 0x2af4,
.enable_mask = BIT(11),
@@ -588,7 +589,6 @@ static struct clk_rcg gsbi10_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -629,6 +629,7 @@ static struct clk_rcg gsbi11_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi11_uart_src.lock),
.clkr = {
.enable_reg = 0x2b14,
.enable_mask = BIT(11),
@@ -637,7 +638,6 @@ static struct clk_rcg gsbi11_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -678,6 +678,7 @@ static struct clk_rcg gsbi12_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi12_uart_src.lock),
.clkr = {
.enable_reg = 0x2b34,
.enable_mask = BIT(11),
@@ -686,7 +687,6 @@ static struct clk_rcg gsbi12_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -740,6 +740,7 @@ static struct clk_rcg gsbi1_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi1_qup_src.lock),
.clkr = {
.enable_reg = 0x29cc,
.enable_mask = BIT(11),
@@ -748,7 +749,6 @@ static struct clk_rcg gsbi1_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -789,6 +789,7 @@ static struct clk_rcg gsbi2_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi2_qup_src.lock),
.clkr = {
.enable_reg = 0x29ec,
.enable_mask = BIT(11),
@@ -797,7 +798,6 @@ static struct clk_rcg gsbi2_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -838,6 +838,7 @@ static struct clk_rcg gsbi3_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi3_qup_src.lock),
.clkr = {
.enable_reg = 0x2a0c,
.enable_mask = BIT(11),
@@ -846,7 +847,6 @@ static struct clk_rcg gsbi3_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -887,6 +887,7 @@ static struct clk_rcg gsbi4_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi4_qup_src.lock),
.clkr = {
.enable_reg = 0x2a2c,
.enable_mask = BIT(11),
@@ -895,7 +896,6 @@ static struct clk_rcg gsbi4_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -936,6 +936,7 @@ static struct clk_rcg gsbi5_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi5_qup_src.lock),
.clkr = {
.enable_reg = 0x2a4c,
.enable_mask = BIT(11),
@@ -944,7 +945,6 @@ static struct clk_rcg gsbi5_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -985,6 +985,7 @@ static struct clk_rcg gsbi6_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi6_qup_src.lock),
.clkr = {
.enable_reg = 0x2a6c,
.enable_mask = BIT(11),
@@ -993,7 +994,6 @@ static struct clk_rcg gsbi6_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1034,6 +1034,7 @@ static struct clk_rcg gsbi7_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi7_qup_src.lock),
.clkr = {
.enable_reg = 0x2a8c,
.enable_mask = BIT(11),
@@ -1042,7 +1043,6 @@ static struct clk_rcg gsbi7_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1083,6 +1083,7 @@ static struct clk_rcg gsbi8_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi8_qup_src.lock),
.clkr = {
.enable_reg = 0x2aac,
.enable_mask = BIT(11),
@@ -1091,7 +1092,6 @@ static struct clk_rcg gsbi8_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1132,6 +1132,7 @@ static struct clk_rcg gsbi9_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi9_qup_src.lock),
.clkr = {
.enable_reg = 0x2acc,
.enable_mask = BIT(11),
@@ -1140,7 +1141,6 @@ static struct clk_rcg gsbi9_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1181,6 +1181,7 @@ static struct clk_rcg gsbi10_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi10_qup_src.lock),
.clkr = {
.enable_reg = 0x2aec,
.enable_mask = BIT(11),
@@ -1189,7 +1190,6 @@ static struct clk_rcg gsbi10_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1230,6 +1230,7 @@ static struct clk_rcg gsbi11_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi11_qup_src.lock),
.clkr = {
.enable_reg = 0x2b0c,
.enable_mask = BIT(11),
@@ -1238,7 +1239,6 @@ static struct clk_rcg gsbi11_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1279,6 +1279,7 @@ static struct clk_rcg gsbi12_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi12_qup_src.lock),
.clkr = {
.enable_reg = 0x2b2c,
.enable_mask = BIT(11),
@@ -1287,7 +1288,6 @@ static struct clk_rcg gsbi12_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1341,6 +1341,7 @@ static struct clk_rcg gp0_src = {
.parent_map = gcc_pxo_pll8_cxo_map,
},
.freq_tbl = clk_tbl_gp,
+ .lock = __SPIN_LOCK_UNLOCKED(&gp0_src.lock),
.clkr = {
.enable_reg = 0x2d24,
.enable_mask = BIT(11),
@@ -1349,7 +1350,6 @@ static struct clk_rcg gp0_src = {
.parent_names = gcc_pxo_pll8_cxo,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
}
};
@@ -1390,6 +1390,7 @@ static struct clk_rcg gp1_src = {
.parent_map = gcc_pxo_pll8_cxo_map,
},
.freq_tbl = clk_tbl_gp,
+ .lock = __SPIN_LOCK_UNLOCKED(&gp1_src.lock),
.clkr = {
.enable_reg = 0x2d44,
.enable_mask = BIT(11),
@@ -1398,7 +1399,6 @@ static struct clk_rcg gp1_src = {
.parent_names = gcc_pxo_pll8_cxo,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1439,6 +1439,7 @@ static struct clk_rcg gp2_src = {
.parent_map = gcc_pxo_pll8_cxo_map,
},
.freq_tbl = clk_tbl_gp,
+ .lock = __SPIN_LOCK_UNLOCKED(&gp2_src.lock),
.clkr = {
.enable_reg = 0x2d64,
.enable_mask = BIT(11),
@@ -1447,7 +1448,6 @@ static struct clk_rcg gp2_src = {
.parent_names = gcc_pxo_pll8_cxo,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1550,6 +1550,7 @@ static struct clk_rcg sdc1_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc1_src.lock),
.clkr = {
.enable_reg = 0x282c,
.enable_mask = BIT(11),
@@ -1558,7 +1559,6 @@ static struct clk_rcg sdc1_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1599,6 +1599,7 @@ static struct clk_rcg sdc2_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc2_src.lock),
.clkr = {
.enable_reg = 0x284c,
.enable_mask = BIT(11),
@@ -1607,7 +1608,6 @@ static struct clk_rcg sdc2_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1648,6 +1648,7 @@ static struct clk_rcg sdc3_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc3_src.lock),
.clkr = {
.enable_reg = 0x286c,
.enable_mask = BIT(11),
@@ -1656,7 +1657,6 @@ static struct clk_rcg sdc3_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1697,6 +1697,7 @@ static struct clk_rcg sdc4_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc4_src.lock),
.clkr = {
.enable_reg = 0x288c,
.enable_mask = BIT(11),
@@ -1705,7 +1706,6 @@ static struct clk_rcg sdc4_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1746,6 +1746,7 @@ static struct clk_rcg sdc5_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc5_src.lock),
.clkr = {
.enable_reg = 0x28ac,
.enable_mask = BIT(11),
@@ -1754,7 +1755,6 @@ static struct clk_rcg sdc5_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1800,6 +1800,7 @@ static struct clk_rcg tsif_ref_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_tsif_ref,
+ .lock = __SPIN_LOCK_UNLOCKED(&tsif_ref_src.lock),
.clkr = {
.enable_reg = 0x2710,
.enable_mask = BIT(11),
@@ -1808,7 +1809,6 @@ static struct clk_rcg tsif_ref_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1854,6 +1854,7 @@ static struct clk_rcg usb_hs1_xcvr_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_hs1_xcvr_src.lock),
.clkr = {
.enable_reg = 0x290c,
.enable_mask = BIT(11),
@@ -1862,7 +1863,6 @@ static struct clk_rcg usb_hs1_xcvr_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1903,6 +1903,7 @@ static struct clk_rcg usb_fs1_xcvr_fs_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_fs1_xcvr_fs_src.lock),
.clkr = {
.enable_reg = 0x2968,
.enable_mask = BIT(11),
@@ -1911,7 +1912,6 @@ static struct clk_rcg usb_fs1_xcvr_fs_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1970,6 +1970,7 @@ static struct clk_rcg usb_fs2_xcvr_fs_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_fs2_xcvr_fs_src.lock),
.clkr = {
.enable_reg = 0x2988,
.enable_mask = BIT(11),
@@ -1978,7 +1979,6 @@ static struct clk_rcg usb_fs2_xcvr_fs_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
diff --git a/drivers/clk/qcom/gcc-msm8960.c b/drivers/clk/qcom/gcc-msm8960.c
index eb551c75fba6..b34e24dae37c 100644
--- a/drivers/clk/qcom/gcc-msm8960.c
+++ b/drivers/clk/qcom/gcc-msm8960.c
@@ -192,6 +192,7 @@ static struct clk_rcg gsbi1_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi1_uart_src.lock),
.clkr = {
.enable_reg = 0x29d4,
.enable_mask = BIT(11),
@@ -200,7 +201,6 @@ static struct clk_rcg gsbi1_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -243,6 +243,7 @@ static struct clk_rcg gsbi2_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi2_uart_src.lock),
.clkr = {
.enable_reg = 0x29f4,
.enable_mask = BIT(11),
@@ -251,7 +252,6 @@ static struct clk_rcg gsbi2_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -294,6 +294,7 @@ static struct clk_rcg gsbi3_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi3_uart_src.lock),
.clkr = {
.enable_reg = 0x2a14,
.enable_mask = BIT(11),
@@ -302,7 +303,6 @@ static struct clk_rcg gsbi3_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -345,6 +345,7 @@ static struct clk_rcg gsbi4_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi4_uart_src.lock),
.clkr = {
.enable_reg = 0x2a34,
.enable_mask = BIT(11),
@@ -353,7 +354,6 @@ static struct clk_rcg gsbi4_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -396,6 +396,7 @@ static struct clk_rcg gsbi5_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi5_uart_src.lock),
.clkr = {
.enable_reg = 0x2a54,
.enable_mask = BIT(11),
@@ -404,7 +405,6 @@ static struct clk_rcg gsbi5_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -447,6 +447,7 @@ static struct clk_rcg gsbi6_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi6_uart_src.lock),
.clkr = {
.enable_reg = 0x2a74,
.enable_mask = BIT(11),
@@ -455,7 +456,6 @@ static struct clk_rcg gsbi6_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -498,6 +498,7 @@ static struct clk_rcg gsbi7_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi7_uart_src.lock),
.clkr = {
.enable_reg = 0x2a94,
.enable_mask = BIT(11),
@@ -506,7 +507,6 @@ static struct clk_rcg gsbi7_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -549,6 +549,7 @@ static struct clk_rcg gsbi8_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi8_uart_src.lock),
.clkr = {
.enable_reg = 0x2ab4,
.enable_mask = BIT(11),
@@ -557,7 +558,6 @@ static struct clk_rcg gsbi8_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -598,6 +598,7 @@ static struct clk_rcg gsbi9_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi9_uart_src.lock),
.clkr = {
.enable_reg = 0x2ad4,
.enable_mask = BIT(11),
@@ -606,7 +607,6 @@ static struct clk_rcg gsbi9_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -647,6 +647,7 @@ static struct clk_rcg gsbi10_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi10_uart_src.lock),
.clkr = {
.enable_reg = 0x2af4,
.enable_mask = BIT(11),
@@ -655,7 +656,6 @@ static struct clk_rcg gsbi10_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -696,6 +696,7 @@ static struct clk_rcg gsbi11_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi11_uart_src.lock),
.clkr = {
.enable_reg = 0x2b14,
.enable_mask = BIT(11),
@@ -704,7 +705,6 @@ static struct clk_rcg gsbi11_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -745,6 +745,7 @@ static struct clk_rcg gsbi12_uart_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_uart,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi12_uart_src.lock),
.clkr = {
.enable_reg = 0x2b34,
.enable_mask = BIT(11),
@@ -753,7 +754,6 @@ static struct clk_rcg gsbi12_uart_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -807,6 +807,7 @@ static struct clk_rcg gsbi1_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi1_qup_src.lock),
.clkr = {
.enable_reg = 0x29cc,
.enable_mask = BIT(11),
@@ -815,7 +816,6 @@ static struct clk_rcg gsbi1_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -856,6 +856,7 @@ static struct clk_rcg gsbi2_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi2_qup_src.lock),
.clkr = {
.enable_reg = 0x29ec,
.enable_mask = BIT(11),
@@ -864,7 +865,6 @@ static struct clk_rcg gsbi2_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -905,6 +905,7 @@ static struct clk_rcg gsbi3_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi3_qup_src.lock),
.clkr = {
.enable_reg = 0x2a0c,
.enable_mask = BIT(11),
@@ -913,7 +914,6 @@ static struct clk_rcg gsbi3_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -954,6 +954,7 @@ static struct clk_rcg gsbi4_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi4_qup_src.lock),
.clkr = {
.enable_reg = 0x2a2c,
.enable_mask = BIT(11),
@@ -962,7 +963,6 @@ static struct clk_rcg gsbi4_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1003,6 +1003,7 @@ static struct clk_rcg gsbi5_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi5_qup_src.lock),
.clkr = {
.enable_reg = 0x2a4c,
.enable_mask = BIT(11),
@@ -1011,7 +1012,6 @@ static struct clk_rcg gsbi5_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1052,6 +1052,7 @@ static struct clk_rcg gsbi6_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi6_qup_src.lock),
.clkr = {
.enable_reg = 0x2a6c,
.enable_mask = BIT(11),
@@ -1060,7 +1061,6 @@ static struct clk_rcg gsbi6_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1101,6 +1101,7 @@ static struct clk_rcg gsbi7_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi7_qup_src.lock),
.clkr = {
.enable_reg = 0x2a8c,
.enable_mask = BIT(11),
@@ -1109,7 +1110,6 @@ static struct clk_rcg gsbi7_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1150,6 +1150,7 @@ static struct clk_rcg gsbi8_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi8_qup_src.lock),
.clkr = {
.enable_reg = 0x2aac,
.enable_mask = BIT(11),
@@ -1158,7 +1159,6 @@ static struct clk_rcg gsbi8_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1199,6 +1199,7 @@ static struct clk_rcg gsbi9_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi9_qup_src.lock),
.clkr = {
.enable_reg = 0x2acc,
.enable_mask = BIT(11),
@@ -1207,7 +1208,6 @@ static struct clk_rcg gsbi9_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1248,6 +1248,7 @@ static struct clk_rcg gsbi10_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi10_qup_src.lock),
.clkr = {
.enable_reg = 0x2aec,
.enable_mask = BIT(11),
@@ -1256,7 +1257,6 @@ static struct clk_rcg gsbi10_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1297,6 +1297,7 @@ static struct clk_rcg gsbi11_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi11_qup_src.lock),
.clkr = {
.enable_reg = 0x2b0c,
.enable_mask = BIT(11),
@@ -1305,7 +1306,6 @@ static struct clk_rcg gsbi11_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1346,6 +1346,7 @@ static struct clk_rcg gsbi12_qup_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_gsbi_qup,
+ .lock = __SPIN_LOCK_UNLOCKED(&gsbi12_qup_src.lock),
.clkr = {
.enable_reg = 0x2b2c,
.enable_mask = BIT(11),
@@ -1354,7 +1355,6 @@ static struct clk_rcg gsbi12_qup_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
},
};
@@ -1408,6 +1408,7 @@ static struct clk_rcg gp0_src = {
.parent_map = gcc_pxo_pll8_cxo_map,
},
.freq_tbl = clk_tbl_gp,
+ .lock = __SPIN_LOCK_UNLOCKED(&gp0_src.lock),
.clkr = {
.enable_reg = 0x2d24,
.enable_mask = BIT(11),
@@ -1416,7 +1417,6 @@ static struct clk_rcg gp0_src = {
.parent_names = gcc_pxo_pll8_cxo,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_PARENT_GATE,
},
}
};
@@ -1457,6 +1457,7 @@ static struct clk_rcg gp1_src = {
.parent_map = gcc_pxo_pll8_cxo_map,
},
.freq_tbl = clk_tbl_gp,
+ .lock = __SPIN_LOCK_UNLOCKED(&gp1_src.lock),
.clkr = {
.enable_reg = 0x2d44,
.enable_mask = BIT(11),
@@ -1465,7 +1466,6 @@ static struct clk_rcg gp1_src = {
.parent_names = gcc_pxo_pll8_cxo,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1506,6 +1506,7 @@ static struct clk_rcg gp2_src = {
.parent_map = gcc_pxo_pll8_cxo_map,
},
.freq_tbl = clk_tbl_gp,
+ .lock = __SPIN_LOCK_UNLOCKED(&gp2_src.lock),
.clkr = {
.enable_reg = 0x2d64,
.enable_mask = BIT(11),
@@ -1514,7 +1515,6 @@ static struct clk_rcg gp2_src = {
.parent_names = gcc_pxo_pll8_cxo,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1560,6 +1560,7 @@ static struct clk_rcg prng_src = {
.src_sel_shift = 0,
.parent_map = gcc_pxo_pll8_map,
},
+ .lock = __SPIN_LOCK_UNLOCKED(&prng_src.lock),
.clkr = {
.hw.init = &(struct clk_init_data){
.name = "prng_src",
@@ -1620,6 +1621,7 @@ static struct clk_rcg sdc1_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc1_src.lock),
.clkr = {
.enable_reg = 0x282c,
.enable_mask = BIT(11),
@@ -1628,7 +1630,6 @@ static struct clk_rcg sdc1_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1669,6 +1670,7 @@ static struct clk_rcg sdc2_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc2_src.lock),
.clkr = {
.enable_reg = 0x284c,
.enable_mask = BIT(11),
@@ -1677,7 +1679,6 @@ static struct clk_rcg sdc2_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1718,6 +1719,7 @@ static struct clk_rcg sdc3_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc3_src.lock),
.clkr = {
.enable_reg = 0x286c,
.enable_mask = BIT(11),
@@ -1726,7 +1728,6 @@ static struct clk_rcg sdc3_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1767,6 +1768,7 @@ static struct clk_rcg sdc4_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc4_src.lock),
.clkr = {
.enable_reg = 0x288c,
.enable_mask = BIT(11),
@@ -1775,7 +1777,6 @@ static struct clk_rcg sdc4_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1816,6 +1817,7 @@ static struct clk_rcg sdc5_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_sdc,
+ .lock = __SPIN_LOCK_UNLOCKED(&sdc5_src.lock),
.clkr = {
.enable_reg = 0x28ac,
.enable_mask = BIT(11),
@@ -1824,7 +1826,6 @@ static struct clk_rcg sdc5_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1870,6 +1871,7 @@ static struct clk_rcg tsif_ref_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_tsif_ref,
+ .lock = __SPIN_LOCK_UNLOCKED(&tsif_ref_src.lock),
.clkr = {
.enable_reg = 0x2710,
.enable_mask = BIT(11),
@@ -1878,7 +1880,6 @@ static struct clk_rcg tsif_ref_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1924,6 +1925,7 @@ static struct clk_rcg usb_hs1_xcvr_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_hs1_xcvr_src.lock),
.clkr = {
.enable_reg = 0x290c,
.enable_mask = BIT(11),
@@ -1932,7 +1934,6 @@ static struct clk_rcg usb_hs1_xcvr_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -1973,6 +1974,7 @@ static struct clk_rcg usb_hs3_xcvr_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_hs3_xcvr_src.lock),
.clkr = {
.enable_reg = 0x370c,
.enable_mask = BIT(11),
@@ -1981,7 +1983,6 @@ static struct clk_rcg usb_hs3_xcvr_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -2022,6 +2023,7 @@ static struct clk_rcg usb_hs4_xcvr_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_hs4_xcvr_src.lock),
.clkr = {
.enable_reg = 0x372c,
.enable_mask = BIT(11),
@@ -2030,7 +2032,6 @@ static struct clk_rcg usb_hs4_xcvr_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -2071,6 +2072,7 @@ static struct clk_rcg usb_hsic_xcvr_fs_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_hsic_xcvr_fs_src.lock),
.clkr = {
.enable_reg = 0x2928,
.enable_mask = BIT(11),
@@ -2079,7 +2081,6 @@ static struct clk_rcg usb_hsic_xcvr_fs_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -2166,6 +2167,7 @@ static struct clk_rcg usb_fs1_xcvr_fs_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_fs1_xcvr_fs_src.lock),
.clkr = {
.enable_reg = 0x2968,
.enable_mask = BIT(11),
@@ -2174,7 +2176,6 @@ static struct clk_rcg usb_fs1_xcvr_fs_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -2233,6 +2234,7 @@ static struct clk_rcg usb_fs2_xcvr_fs_src = {
.parent_map = gcc_pxo_pll8_map,
},
.freq_tbl = clk_tbl_usb,
+ .lock = __SPIN_LOCK_UNLOCKED(&usb_fs2_xcvr_fs_src.lock),
.clkr = {
.enable_reg = 0x2988,
.enable_mask = BIT(11),
@@ -2241,7 +2243,6 @@ static struct clk_rcg usb_fs2_xcvr_fs_src = {
.parent_names = gcc_pxo_pll8,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
}
};
@@ -2721,6 +2722,7 @@ static struct clk_rcg ce3_src = {
.parent_map = gcc_pxo_pll8_pll3_map,
},
.freq_tbl = clk_tbl_ce3,
+ .lock = __SPIN_LOCK_UNLOCKED(&ce3_src.lock),
.clkr = {
.enable_reg = 0x36c0,
.enable_mask = BIT(7),
@@ -2729,7 +2731,6 @@ static struct clk_rcg ce3_src = {
.parent_names = gcc_pxo_pll8_pll3,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -2783,6 +2784,7 @@ static struct clk_rcg sata_clk_src = {
.parent_map = gcc_pxo_pll8_pll3_map,
},
.freq_tbl = clk_tbl_sata_ref,
+ .lock = __SPIN_LOCK_UNLOCKED(&sata_clk_src.lock),
.clkr = {
.enable_reg = 0x2c08,
.enable_mask = BIT(7),
@@ -2791,7 +2793,6 @@ static struct clk_rcg sata_clk_src = {
.parent_names = gcc_pxo_pll8_pll3,
.num_parents = 3,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
diff --git a/drivers/clk/qcom/lcc-ipq806x.c b/drivers/clk/qcom/lcc-ipq806x.c
index 977e98eadbeb..b4f3e9d68f68 100644
--- a/drivers/clk/qcom/lcc-ipq806x.c
+++ b/drivers/clk/qcom/lcc-ipq806x.c
@@ -133,6 +133,7 @@ static struct clk_rcg mi2s_osr_src = {
.parent_map = lcc_pxo_pll4_map,
},
.freq_tbl = clk_tbl_aif_mi2s,
+ .lock = __SPIN_LOCK_UNLOCKED(&mi2s_osr_src.lock),
.clkr = {
.enable_reg = 0x48,
.enable_mask = BIT(9),
@@ -141,7 +142,6 @@ static struct clk_rcg mi2s_osr_src = {
.parent_names = lcc_pxo_pll4,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -247,6 +247,7 @@ static struct clk_rcg pcm_src = {
.parent_map = lcc_pxo_pll4_map,
},
.freq_tbl = clk_tbl_pcm,
+ .lock = __SPIN_LOCK_UNLOCKED(&pcm_src.lock),
.clkr = {
.enable_reg = 0x54,
.enable_mask = BIT(9),
@@ -255,7 +256,6 @@ static struct clk_rcg pcm_src = {
.parent_names = lcc_pxo_pll4,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -327,6 +327,7 @@ static struct clk_rcg spdif_src = {
.parent_map = lcc_pxo_pll4_map,
},
.freq_tbl = clk_tbl_aif_osr,
+ .lock = __SPIN_LOCK_UNLOCKED(&spdif_src.lock),
.clkr = {
.enable_reg = 0xcc,
.enable_mask = BIT(9),
@@ -335,7 +336,6 @@ static struct clk_rcg spdif_src = {
.parent_names = lcc_pxo_pll4,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
diff --git a/drivers/clk/qcom/lcc-mdm9615.c b/drivers/clk/qcom/lcc-mdm9615.c
index 3237ef4c1197..782b9896d792 100644
--- a/drivers/clk/qcom/lcc-mdm9615.c
+++ b/drivers/clk/qcom/lcc-mdm9615.c
@@ -116,6 +116,7 @@ static struct clk_rcg mi2s_osr_src = {
.parent_map = lcc_cxo_pll4_map,
},
.freq_tbl = clk_tbl_aif_osr_393,
+ .lock = __SPIN_LOCK_UNLOCKED(&mi2s_osr_src.lock),
.clkr = {
.enable_reg = 0x48,
.enable_mask = BIT(9),
@@ -124,7 +125,6 @@ static struct clk_rcg mi2s_osr_src = {
.parent_names = lcc_cxo_pll4,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -222,6 +222,7 @@ static struct clk_rcg prefix##_osr_src = { \
.parent_map = lcc_cxo_pll4_map, \
}, \
.freq_tbl = clk_tbl_aif_osr_393, \
+ .lock = __SPIN_LOCK_UNLOCKED(&prefix##_osr_src.lock), \
.clkr = { \
.enable_reg = _ns, \
.enable_mask = BIT(9), \
@@ -230,7 +231,6 @@ static struct clk_rcg prefix##_osr_src = { \
.parent_names = lcc_cxo_pll4, \
.num_parents = 2, \
.ops = &clk_rcg_ops, \
- .flags = CLK_SET_RATE_GATE, \
}, \
}, \
}; \
@@ -366,6 +366,7 @@ static struct clk_rcg pcm_src = {
.parent_map = lcc_cxo_pll4_map,
},
.freq_tbl = clk_tbl_pcm_393,
+ .lock = __SPIN_LOCK_UNLOCKED(&pcm_src.lock),
.clkr = {
.enable_reg = 0x54,
.enable_mask = BIT(9),
@@ -374,7 +375,6 @@ static struct clk_rcg pcm_src = {
.parent_names = lcc_cxo_pll4,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -434,6 +434,7 @@ static struct clk_rcg slimbus_src = {
.parent_map = lcc_cxo_pll4_map,
},
.freq_tbl = clk_tbl_aif_osr_393,
+ .lock = __SPIN_LOCK_UNLOCKED(&slimbus_src.lock),
.clkr = {
.enable_reg = 0xcc,
.enable_mask = BIT(9),
@@ -442,7 +443,6 @@ static struct clk_rcg slimbus_src = {
.parent_names = lcc_cxo_pll4,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
diff --git a/drivers/clk/qcom/lcc-msm8960.c b/drivers/clk/qcom/lcc-msm8960.c
index 4fcf9d1d233c..7cdd2a7b0a31 100644
--- a/drivers/clk/qcom/lcc-msm8960.c
+++ b/drivers/clk/qcom/lcc-msm8960.c
@@ -114,6 +114,7 @@ static struct clk_rcg mi2s_osr_src = {
.parent_map = lcc_pxo_pll4_map,
},
.freq_tbl = clk_tbl_aif_osr_393,
+ .lock = __SPIN_LOCK_UNLOCKED(&mi2s_osr_src.lock),
.clkr = {
.enable_reg = 0x48,
.enable_mask = BIT(9),
@@ -122,7 +123,6 @@ static struct clk_rcg mi2s_osr_src = {
.parent_names = lcc_pxo_pll4,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -220,6 +220,7 @@ static struct clk_rcg prefix##_osr_src = { \
.parent_map = lcc_pxo_pll4_map, \
}, \
.freq_tbl = clk_tbl_aif_osr_393, \
+ .lock = __SPIN_LOCK_UNLOCKED(&prefix##_osr_src.lock), \
.clkr = { \
.enable_reg = _ns, \
.enable_mask = BIT(9), \
@@ -228,7 +229,6 @@ static struct clk_rcg prefix##_osr_src = { \
.parent_names = lcc_pxo_pll4, \
.num_parents = 2, \
.ops = &clk_rcg_ops, \
- .flags = CLK_SET_RATE_GATE, \
}, \
}, \
}; \
@@ -364,6 +364,7 @@ static struct clk_rcg pcm_src = {
.parent_map = lcc_pxo_pll4_map,
},
.freq_tbl = clk_tbl_pcm_393,
+ .lock = __SPIN_LOCK_UNLOCKED(&pcm_src.lock),
.clkr = {
.enable_reg = 0x54,
.enable_mask = BIT(9),
@@ -372,7 +373,6 @@ static struct clk_rcg pcm_src = {
.parent_names = lcc_pxo_pll4,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
@@ -432,6 +432,7 @@ static struct clk_rcg slimbus_src = {
.parent_map = lcc_pxo_pll4_map,
},
.freq_tbl = clk_tbl_aif_osr_393,
+ .lock = __SPIN_LOCK_UNLOCKED(&slimbus_src.lock),
.clkr = {
.enable_reg = 0xcc,
.enable_mask = BIT(9),
@@ -440,7 +441,6 @@ static struct clk_rcg slimbus_src = {
.parent_names = lcc_pxo_pll4,
.num_parents = 2,
.ops = &clk_rcg_ops,
- .flags = CLK_SET_RATE_GATE,
},
},
};
diff --git a/drivers/clk/qcom/mmcc-msm8960.c b/drivers/clk/qcom/mmcc-msm8960.c
index 7f21421c87d6..52e5f877f851 100644
--- a/drivers/clk/qcom/mmcc-msm8960.c
+++ b/drivers/clk/qcom/mmcc-msm8960.c
@@ -195,6 +195,7 @@ static struct clk_rcg camclk0_src = {
.parent_map = mmcc_pxo_pll8_pll2_map,
},
.freq_tbl = clk_tbl_cam,
+ .lock = __SPIN_LOCK_UNLOCKED(&camclk0_src.lock),
.clkr = {
.enable_reg = 0x0140,
.enable_mask = BIT(2),
@@ -244,6 +245,7 @@ static struct clk_rcg camclk1_src = {
.parent_map = mmcc_pxo_pll8_pll2_map,
},
.freq_tbl = clk_tbl_cam,
+ .lock = __SPIN_LOCK_UNLOCKED(&camclk1_src.lock),
.clkr = {
.enable_reg = 0x0154,
.enable_mask = BIT(2),
@@ -293,6 +295,7 @@ static struct clk_rcg camclk2_src = {
.parent_map = mmcc_pxo_pll8_pll2_map,
},
.freq_tbl = clk_tbl_cam,
+ .lock = __SPIN_LOCK_UNLOCKED(&camclk2_src.lock),
.clkr = {
.enable_reg = 0x0220,
.enable_mask = BIT(2),
@@ -348,6 +351,7 @@ static struct clk_rcg csi0_src = {
.parent_map = mmcc_pxo_pll8_pll2_map,
},
.freq_tbl = clk_tbl_csi,
+ .lock = __SPIN_LOCK_UNLOCKED(&csi0_src.lock),
.clkr = {
.enable_reg = 0x0040,
.enable_mask = BIT(2),
@@ -412,6 +416,7 @@ static struct clk_rcg csi1_src = {
.parent_map = mmcc_pxo_pll8_pll2_map,
},
.freq_tbl = clk_tbl_csi,
+ .lock = __SPIN_LOCK_UNLOCKED(&csi1_src.lock),
.clkr = {
.enable_reg = 0x0024,
.enable_mask = BIT(2),
@@ -476,6 +481,7 @@ static struct clk_rcg csi2_src = {
.parent_map = mmcc_pxo_pll8_pll2_map,
},
.freq_tbl = clk_tbl_csi,
+ .lock = __SPIN_LOCK_UNLOCKED(&csi2_src.lock),
.clkr = {
.enable_reg = 0x022c,
.enable_mask = BIT(2),
@@ -728,6 +734,7 @@ static struct clk_rcg csiphytimer_src = {
.parent_map = mmcc_pxo_pll8_pll2_map,
},
.freq_tbl = clk_tbl_csiphytimer,
+ .lock = __SPIN_LOCK_UNLOCKED(&csiphytimer_src.lock),
.clkr = {
.enable_reg = 0x0160,
.enable_mask = BIT(2),
@@ -1156,6 +1163,7 @@ static struct clk_rcg ijpeg_src = {
.parent_map = mmcc_pxo_pll8_pll2_map,
},
.freq_tbl = clk_tbl_ijpeg,
+ .lock = __SPIN_LOCK_UNLOCKED(&ijpeg_src.lock),
.clkr = {
.enable_reg = 0x0098,
.enable_mask = BIT(2),
@@ -1204,6 +1212,7 @@ static struct clk_rcg jpegd_src = {
.parent_map = mmcc_pxo_pll8_pll2_map,
},
.freq_tbl = clk_tbl_jpegd,
+ .lock = __SPIN_LOCK_UNLOCKED(&jpegd_src.lock),
.clkr = {
.enable_reg = 0x00a4,
.enable_mask = BIT(2),
@@ -1446,6 +1455,7 @@ static struct clk_rcg tv_src = {
.parent_map = mmcc_pxo_hdmi_map,
},
.freq_tbl = clk_tbl_tv,
+ .lock = __SPIN_LOCK_UNLOCKED(&tv_src.lock),
.clkr = {
.enable_reg = 0x00ec,
.enable_mask = BIT(2),
@@ -1668,6 +1678,7 @@ static struct clk_rcg vpe_src = {
.parent_map = mmcc_pxo_pll8_pll2_map,
},
.freq_tbl = clk_tbl_vpe,
+ .lock = __SPIN_LOCK_UNLOCKED(&vpe_src.lock),
.clkr = {
.enable_reg = 0x0110,
.enable_mask = BIT(2),
@@ -1736,6 +1747,7 @@ static struct clk_rcg vfe_src = {
.parent_map = mmcc_pxo_pll8_pll2_map,
},
.freq_tbl = clk_tbl_vfe,
+ .lock = __SPIN_LOCK_UNLOCKED(&vfe_src.lock),
.clkr = {
.enable_reg = 0x0104,
.enable_mask = BIT(2),
@@ -2070,6 +2082,7 @@ static struct clk_rcg dsi1_src = {
.src_sel_shift = 0,
.parent_map = mmcc_pxo_dsi2_dsi1_map,
},
+ .lock = __SPIN_LOCK_UNLOCKED(&dsi1_src.lock),
.clkr = {
.enable_reg = 0x004c,
.enable_mask = BIT(2),
@@ -2118,6 +2131,7 @@ static struct clk_rcg dsi2_src = {
.src_sel_shift = 0,
.parent_map = mmcc_pxo_dsi2_dsi1_map,
},
+ .lock = __SPIN_LOCK_UNLOCKED(&dsi2_src.lock),
.clkr = {
.enable_reg = 0x003c,
.enable_mask = BIT(2),
@@ -2157,6 +2171,7 @@ static struct clk_rcg dsi1_byte_src = {
.src_sel_shift = 0,
.parent_map = mmcc_pxo_dsi1_dsi2_byte_map,
},
+ .lock = __SPIN_LOCK_UNLOCKED(&dsi1_byte_src.lock),
.clkr = {
.enable_reg = 0x0090,
.enable_mask = BIT(2),
@@ -2196,6 +2211,7 @@ static struct clk_rcg dsi2_byte_src = {
.src_sel_shift = 0,
.parent_map = mmcc_pxo_dsi1_dsi2_byte_map,
},
+ .lock = __SPIN_LOCK_UNLOCKED(&dsi2_byte_src.lock),
.clkr = {
.enable_reg = 0x0130,
.enable_mask = BIT(2),
@@ -2235,6 +2251,7 @@ static struct clk_rcg dsi1_esc_src = {
.src_sel_shift = 0,
.parent_map = mmcc_pxo_dsi1_dsi2_byte_map,
},
+ .lock = __SPIN_LOCK_UNLOCKED(&dsi1_esc_src.lock),
.clkr = {
.enable_reg = 0x00cc,
.enable_mask = BIT(2),
@@ -2273,6 +2290,7 @@ static struct clk_rcg dsi2_esc_src = {
.src_sel_shift = 0,
.parent_map = mmcc_pxo_dsi1_dsi2_byte_map,
},
+ .lock = __SPIN_LOCK_UNLOCKED(&dsi2_esc_src.lock),
.clkr = {
.enable_reg = 0x013c,
.enable_mask = BIT(2),
@@ -2320,6 +2338,7 @@ static struct clk_rcg dsi1_pixel_src = {
.src_sel_shift = 0,
.parent_map = mmcc_pxo_dsi2_dsi1_map,
},
+ .lock = __SPIN_LOCK_UNLOCKED(&dsi1_pixel_src.lock),
.clkr = {
.enable_reg = 0x0130,
.enable_mask = BIT(2),
@@ -2367,6 +2386,7 @@ static struct clk_rcg dsi2_pixel_src = {
.src_sel_shift = 0,
.parent_map = mmcc_pxo_dsi2_dsi1_map,
},
+ .lock = __SPIN_LOCK_UNLOCKED(&dsi2_pixel_src.lock),
.clkr = {
.enable_reg = 0x0094,
.enable_mask = BIT(2),

--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

2018-02-02 12:53:26

by Jerome Brunet

[permalink] [raw]
Subject: Re: [PATCH v5 00/10] clk: implement clock rate protection mechanism

On Thu, 2018-02-01 at 09:43 -0800, Stephen Boyd wrote:
> > > > Applied to clk-protect-rate, with the exception that I did not apply
> > > > "clk: fix CLK_SET_RATE_GATE with clock rate protection" as it breaks
> > > > qcom clk code.
> > > >
> > > > Stephen, do you plan to fix up the qcom clock code so that the
> > > > SET_RATE_GATE improvement can go in?
> > > >
> > >
> > > I started working on it a while back. Let's see if I can finish
> > > it off this weekend.
> > >
> >
> > Hi Stephen,
> >
> > Have you been able find something to fix the qcom code regarding this issue ?
> >
>
> This is what I have. I'm unhappy with a few things. First, I made
> a spinlock for each clk, which is overkill. Most likely, just a
> single spinlock is needed per clk-controller device. Second, I
> haven't finished off the branch/gate part, so gating/ungating of
> branches needs to be locked as well to prevent branches from
> turning on while rates change. And finally, the 'branches' list is
> duplicating a bunch of information about the child clks of an
> RCG, so it feels like we need a core framework API to enable and
> disable clks forcibly while remembering what is enabled/disabled
> or at least to walk the clk tree and call some function.

Looks similar to Mike's CCR idea ;)

>
> The spinlock per clk-controller is duplicating the regmap lock we
> already have, so we may want a regmap API to grab the lock, and
> then another regmap API to do reads/writes without grabbing the
> lock, and then finally release the lock with a regmap unlock API.

There is 'regsequence' for multiple writes in a burst, but that only helps if you
do writes exclusively ... I suppose you are more in a read/update/write-back mode,
so it probably does not help much.

Maybe we could extend regmap's regsequence to do a sequence of
regmap_update_bits()?

Another possibility could be to provide your own lock/unlock ops when
registering the regmap. With this, you could supply your own spinlock
to regmap. This is already supported by regmap; would that help?
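
Something like this, as a rough sketch: a single controller-wide spinlock
handed to regmap through the existing .lock/.unlock/.lock_arg hooks of
regmap_config. The gcc_* names are made up for illustration, and this only
covers making the lock visible to the caller, not the lock-free accessors
you mention above.

#include <linux/regmap.h>
#include <linux/spinlock.h>

/* Hypothetical: one lock for the whole clock controller */
static DEFINE_SPINLOCK(gcc_regmap_lock);

static void gcc_regmap_lock_fn(void *arg)
{
	spin_lock((spinlock_t *)arg);
}

static void gcc_regmap_unlock_fn(void *arg)
{
	spin_unlock((spinlock_t *)arg);
}

static const struct regmap_config gcc_regmap_config = {
	.reg_bits	= 32,
	.reg_stride	= 4,
	.val_bits	= 32,
	/* regmap will take this lock around every access */
	.lock		= gcc_regmap_lock_fn,
	.unlock		= gcc_regmap_unlock_fn,
	.lock_arg	= &gcc_regmap_lock,
};

The RCG code would still need a way to do its read/modify/write sequence
without regmap re-taking the lock, which is the regmap API gap you point out.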

> This part is mostly an optimization, but it would be nice to have
> so that multiple writes could be done in sequence. This way, the
> RCG code could do the special locking sequence and the branch
> code could do the fire and forget single bit update.


2018-03-30 08:23:19

by Jerome Brunet

[permalink] [raw]
Subject: Re: [PATCH v5 08/10] clk: fix CLK_SET_RATE_GATE with clock rate protection

On Fri, 2017-12-01 at 22:51 +0100, Jerome Brunet wrote:
> Using clock rate protection, we can now enforce CLK_SET_RATE_GATE along the
> clock tree

Hi Mike, Stephen,

Gentle ping.
This patch has been waiting for a while now.
As far as I know, the only blocking point to merging this patch is the qcom mmc
driver. The clocks used in this driver rely on the broken implementation of
CLK_SET_RATE_GATE, effectively ignoring the flag.

Since the flag is ignored, removing it from this particular clock won't make any
difference. Can we just do that until a better fix is available for this qcom
mmc clock?

--- A bit of history ----

I managed to have a run on kci - based on v4.12-rc6:
https://kernelci.org/boot/all/job/khilman/branch/to-build/kernel/v4.12-rc6-10-gea373ddef830/

There was no build regression but kci did find one boot regression on qcom
platforms:
* qcom-apq8064-cm-qs600
* qcom-apq8064-ifc6410

It seems the problem is coming from the clock used by the mmc driver
(drivers/mmc/host/mmci.c).

The driver does the following sequence:
* get_clk
* prepare_enable
* get_rate
* set_rate
* ...

with clock SDCx_clk (qcom_apq8064.dtsi:1037). This clock has the CLK_SET_RATE_PARENT
flag, so it will forward the request to its parent.
The parent of this clock is SDCx_src, which has the CLK_SET_RATE_GATE flag.

So obviously, if CLK_SET_RATE_GATE was enforced, the sequence used by the mmci
driver would never have worked.
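
For illustration, here is a minimal sketch of that consumer-side sequence using
the public clk API. It is not the actual mmci code; the 400 kHz rate and the
error handling are assumptions, but it shows where the -EBUSY would surface once
the parent is rate protected while prepared:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

/*
 * Sketch of the mmci-like sequence described above. With CLK_SET_RATE_GATE
 * enforced, SDCx_src becomes rate protected once prepared, so the
 * clk_set_rate() call forwarded through CLK_SET_RATE_PARENT would fail
 * with -EBUSY.
 */
static int sdc_clk_sequence(struct device *dev)
{
	struct clk *clk;
	unsigned long rate;
	int ret;

	clk = devm_clk_get(dev, NULL);		/* get_clk */
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	ret = clk_prepare_enable(clk);		/* prepare_enable */
	if (ret)
		return ret;

	rate = clk_get_rate(clk);		/* get_rate */
	dev_dbg(dev, "boot rate: %lu Hz\n", rate);

	ret = clk_set_rate(clk, 400000);	/* set_rate: -EBUSY if enforced */
	if (ret)
		dev_warn(dev, "clk_set_rate failed: %d\n", ret);

	return ret;
}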

This particular driver relies on the fact that the clock rate can be changed
while the clock is on. Either it actually works, and we can remove
CLK_SET_RATE_GATE until a better solution comes,
or it does not work, which means nobody has been using this driver for a long
time.

>
> Acked-by: Linus Walleij <[email protected]>
> Tested-by: Quentin Schulz <[email protected]>
> Tested-by: Maxime Ripard <[email protected]>
> Acked-by: Michael Turquette <[email protected]>
> Signed-off-by: Jerome Brunet <[email protected]>
> ---
> drivers/clk/clk.c | 16 +++++++++++++---
> 1 file changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
> index f6fe5e5595ca..1af843ae20ff 100644
> --- a/drivers/clk/clk.c
> +++ b/drivers/clk/clk.c
> @@ -605,6 +605,9 @@ static void clk_core_unprepare(struct clk_core *core)
> if (WARN_ON(core->prepare_count == 1 && core->flags & CLK_IS_CRITICAL))
> return;
>
> + if (core->flags & CLK_SET_RATE_GATE)
> + clk_core_rate_unprotect(core);
> +
> if (--core->prepare_count > 0)
> return;
>
> @@ -679,6 +682,16 @@ static int clk_core_prepare(struct clk_core *core)
>
> core->prepare_count++;
>
> + /*
> + * CLK_SET_RATE_GATE is a special case of clock protection
> + * Instead of a consumer claiming exclusive rate control, it is
> + * actually the provider which prevents any consumer from making any
> + * operation which could result in a rate change or rate glitch while
> + * the clock is prepared.
> + */
> + if (core->flags & CLK_SET_RATE_GATE)
> + clk_core_rate_protect(core);
> +
> return 0;
> unprepare:
> clk_core_unprepare(core->parent);
> @@ -1780,9 +1793,6 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
> if (clk_core_rate_is_protected(core))
> return -EBUSY;
>
> - if ((core->flags & CLK_SET_RATE_GATE) && core->prepare_count)
> - return -EBUSY;
> -
> /* calculate new rates and get the topmost changed clock */
> top = clk_calc_new_rates(core, req_rate);
> if (!top)


2018-04-27 05:01:19

by Michael Turquette

[permalink] [raw]
Subject: Re: [PATCH v5 00/10] clk: implement clock rate protection mechanism

Quoting Jerome Brunet (2018-02-02 04:50:28)
> On Thu, 2018-02-01 at 09:43 -0800, Stephen Boyd wrote:
> > > > > Applied to clk-protect-rate, with the exception that I did not apply
> > > > > "clk: fix CLK_SET_RATE_GATE with clock rate protection" as it breaks
> > > > > qcom clk code.
> > > > >
> > > > > Stephen, do you plan to fix up the qcom clock code so that the
> > > > > SET_RATE_GATE improvement can go in?
> > > > >
> > > >
> > > > I started working on it a while back. Let's see if I can finish
> > > > it off this weekend.
> > > >
> > >
> > > Hi Stephen,
> > >
> > > Have you been able find something to fix the qcom code regarding this issue ?
> > >
> >
> > This is what I have. I'm unhappy with a few things. First, I made
> > a spinlock for each clk, which is overkill. Most likely, just a
> > single spinlock is needed per clk-controller device. Second, I
> > haven't finished off the branch/gate part, so gating/ungating of
> > branches needs to be locked as well to prevent branches from
> > turning on while rates change. And finally, the 'branches' list is
> > duplicating a bunch of information about the child clks of an
> > RCG, so it feels like we need a core framework API to enable and
> > disable clks forcibly while remembering what is enabled/disabled
> > or at least to walk the clk tree and call some function.
>
> Looks similar to Mike's CCR idea ;)

Giving clk provider drivers more control over the clocks that they
provide is a similar concept, but the ancient ccr series dealt almost
exclusively with set_rate and set_parent ops.

>
> >
> > The spinlock per clk-controller is duplicating the regmap lock we
> > already have, so we may want a regmap API to grab the lock, and
> > then another regmap API to do reads/writes without grabbing the
> > lock, and then finally release the lock with a regmap unlock API.
>
> There is 'regsequence' for multiple write in a burst, but that's only if you do
> write only ... I suppose you are more in read/update/writeback mode, so it
> probably does not help much.
>
> Maybe we could extend regmap's regsequence, to do a sequence of
> regmap_update_bits() ?
>
> Another possibility could be to provide your own lock/unlock ops when
> registering the regmap. With this, you might be able to supply your own spinlock
> to regmap. This is already supported by regmap, would it help ?

Stephen, was there ever an update on your end? This patch has been
dangling for a while and I thought it was time to ping on it.

Regards,
Mike

>
> > This part is mostly an optimization, but it would be nice to have
> > so that multiple writes could be done in sequence. This way, the
> > RCG code could do the special locking sequence and the branch
> > code could do the fire and forget single bit update.
>


2018-05-25 02:17:51

by Jerome Brunet

[permalink] [raw]
Subject: Re: [PATCH v5 00/10] clk: implement clock rate protection mechanism

On Mon, 2018-04-23 at 11:21 -0700, Michael Turquette wrote:
> Quoting Jerome Brunet (2018-02-02 04:50:28)
> > On Thu, 2018-02-01 at 09:43 -0800, Stephen Boyd wrote:
> > > > > > Applied to clk-protect-rate, with the exception that I did not apply
> > > > > > "clk: fix CLK_SET_RATE_GATE with clock rate protection" as it breaks
> > > > > > qcom clk code.
> > > > > >
> > > > > > Stephen, do you plan to fix up the qcom clock code so that the
> > > > > > SET_RATE_GATE improvement can go in?
> > > > > >
> > > > >
> > > > > I started working on it a while back. Let's see if I can finish
> > > > > it off this weekend.
> > > > >
> > > >
> > > > Hi Stephen,
> > > >
> > > > Have you been able find something to fix the qcom code regarding this issue ?
> > > >
> > >
> > > This is what I have. I'm unhappy with a few things. First, I made
> > > a spinlock for each clk, which is overkill. Most likely, just a
> > > single spinlock is needed per clk-controller device. Second, I
> > > haven't finished off the branch/gate part, so gating/ungating of
> > > branches needs to be locked as well to prevent branches from
> > > turning on while rates change. And finally, the 'branches' list is
> > > duplicating a bunch of information about the child clks of an
> > > RCG, so it feels like we need a core framework API to enable and
> > > disable clks forcibly while remembering what is enabled/disabled
> > > or at least to walk the clk tree and call some function.
> >
> > Looks similar to Mike's CCR idea ;)
>
> Giving clk provider drivers more control over the clocks that they
> provide is a similar concept, but the ancient ccr series dealt almost
> exclusively with set_rate and set_parent ops.
>
> >
> > >
> > > The spinlock per clk-controller is duplicating the regmap lock we
> > > already have, so we may want a regmap API to grab the lock, and
> > > then another regmap API to do reads/writes without grabbing the
> > > lock, and then finally release the lock with a regmap unlock API.
> >
> > There is 'regsequence' for multiple write in a burst, but that's only if you do
> > write only ... I suppose you are more in read/update/writeback mode, so it
> > probably does not help much.
> >
> > Maybe we could extend regmap's regsequence, to do a sequence of
> > regmap_update_bits() ?
> >
> > Another possibility could be to provide your own lock/unlock ops when
> > registering the regmap. With this, you might be able to supply your own spinlock
> > to regmap. This is already supported by regmap, would it help ?
>
> Stephen, was there ever an update on your end? This patch has been
> dangling for a while and I thought it was time to ping on it.
>
> Regards,
> Mike

Mike, Stephen,
The patch has been sitting on the list for 6 months now, and CLK_SET_RATE_GATE
still does not really do what it should, at least not completely.
How can we progress on this?

As explained in this mail [0] from March, I think there is a fairly simple way
to deal with platforms relying on the broken implementation, such as qcom and
the mmci driver.

[0]: https://lkml.kernel.org/r/[email protected]

Regards
Jerome

>
> >
> > > This part is mostly an optimization, but it would be nice to have
> > > so that multiple writes could be done in sequence. This way, the
> > > RCG code could do the special locking sequence and the branch
> > > code could do the fire and forget single bit update.
>
>