Commit f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for
arm64") introduced a Clang warning, because vcpu_is_preempted() compiles
away its argument and leaves node_cpu() unreferenced:
kernel/locking/osq_lock.c:25:19: warning: unused function 'node_cpu' [-Wunused-function]
static inline int node_cpu(struct optimistic_spin_node *node)
                  ^
1 warning generated.
Since vcpu_is_preempted() is already defined as false in
include/linux/sched.h, just comment out the redundant macro so that it
still serves a documentation purpose.
Fixes: f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for arm64")
Signed-off-by: Qian Cai <[email protected]>
---
arch/arm64/include/asm/spinlock.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index 102404dc1e13..b05f82e8ba19 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -17,7 +17,8 @@
  *
  * See:
  * https://lore.kernel.org/lkml/[email protected]
+ *
+ * #define vcpu_is_preempted(cpu) false
  */
-#define vcpu_is_preempted(cpu) false
 
 #endif /* __ASM_SPINLOCK_H */
--
2.21.0 (Apple Git-122.2)
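[ For context: osq_lock.c's only use of node_cpu() is inside
  vcpu_is_preempted(node_cpu(node->prev)). With the constant-false
  macro the argument vanishes at preprocessing time, so the function is
  never referenced, and Clang (unlike GCC) applies -Wunused-function to
  unreferenced "static inline" functions in .c files. A minimal
  standalone sketch of the pattern, with hypothetical names rather than
  the real kernel source:

  #include <stdbool.h>

  #define vcpu_is_preempted(cpu)	false	/* argument vanishes here */

  struct optimistic_spin_node { int cpu; };

  /* Unreferenced after macro expansion, hence Clang's warning. */
  static inline int node_cpu(struct optimistic_spin_node *node)
  {
  	return node->cpu - 1;
  }

  bool prev_preempted(struct optimistic_spin_node *prev)
  {
  	/* Expands to "return false;" -- node_cpu() is never called. */
  	return vcpu_is_preempted(node_cpu(prev));
  }

  Compiling this with "clang -Wunused-function -c" reproduces the
  diagnostic; GCC with the same flag stays quiet, since it exempts
  "static inline" from the warning. ]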
On Thu, Jan 23, 2020 at 11:29:45AM -0500, Qian Cai wrote:
> Commit f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for
> arm64") introduced a Clang warning, because vcpu_is_preempted() compiles
> away its argument and leaves node_cpu() unreferenced:
>
> kernel/locking/osq_lock.c:25:19: warning: unused function 'node_cpu' [-Wunused-function]
> static inline int node_cpu(struct optimistic_spin_node *node)
>                   ^
> 1 warning generated.
>
> Since vcpu_is_preempted() is already defined as false in
> include/linux/sched.h, just comment out the redundant macro so that it
> still serves a documentation purpose.
>
> Fixes: f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for arm64")
> Signed-off-by: Qian Cai <[email protected]>
> ---
> arch/arm64/include/asm/spinlock.h | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
> index 102404dc1e13..b05f82e8ba19 100644
> --- a/arch/arm64/include/asm/spinlock.h
> +++ b/arch/arm64/include/asm/spinlock.h
> @@ -17,7 +17,8 @@
>   *
>   * See:
>   * https://lore.kernel.org/lkml/[email protected]
> + *
> + * #define vcpu_is_preempted(cpu) false
>   */
> -#define vcpu_is_preempted(cpu) false
Damn, the whole point of this was to warn in the case that
vcpu_is_preempted() does get defined for arm64. Can we force it to evaluate
the macro argument instead (e.g. ({ (cpu), false; }) or something)?
Will
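[ A sketch of the statement-expression variant Will is floating here
  (an assumption of intent, not a tested patch): the argument is still
  evaluated and type-checked, keeping node_cpu() referenced at the call
  site, while the whole expression still yields false. A (void) cast
  would likely be needed to avoid a -Wunused-value warning on the
  discarded operand:

  #define vcpu_is_preempted(cpu)	({ (void)(cpu); false; })
  ]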
> On Jan 23, 2020, at 11:56 AM, Will Deacon <[email protected]> wrote:
>
> Damn, the whole point of this was to warn in the case that
> vcpu_is_preempted() does get defined for arm64. Can we force it to evaluate
> the macro argument instead (e.g. ({ (cpu), false; }) or something)?
That should work. Let me test it out and resend.
> On Jan 23, 2020, at 11:56 AM, Will Deacon <[email protected]> wrote:
>
> Damn, the whole point of this was to warn in the case that
> vcpu_is_preempted() does get defined for arm64. Can we force it to evaluate
> the macro argument instead (e.g. ({ (cpu), false; }) or something)?
Actually, a static inline should be better:

#define vcpu_is_preempted vcpu_is_preempted
static inline bool vcpu_is_preempted(int cpu)
{
	return false;
}
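[ The self-referential #define is what generic code keys off:
  include/linux/sched.h only installs its fallback when the name is not
  already defined, roughly (paraphrased, not verbatim):

  #ifndef vcpu_is_preempted
  # define vcpu_is_preempted(cpu)	false
  #endif

  Because the arch override is then a real function, its argument is
  always evaluated, so node_cpu() stays referenced and the Clang
  warning disappears. ]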
On 1/23/20 12:31 PM, Qian Cai wrote:
>
>> On Jan 23, 2020, at 11:56 AM, Will Deacon <[email protected]> wrote:
>>
>> Damn, the whole point of this was to warn in the case that
>> vcpu_is_preempted() does get defined for arm64. Can we force it to evaluate
>> the macro argument instead (e.g. ({ (cpu), false; }) or something)?
> Actually, a static inline should be better:
>
> #define vcpu_is_preempted vcpu_is_preempted
> static inline bool vcpu_is_preempted(int cpu)
> {
> 	return false;
> }
>
Yes, that may work.
Cheers,
Longman
On 1/23/20 11:29 AM, Qian Cai wrote:
> Commit f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for
> arm64") introduced a Clang warning, because vcpu_is_preempted() compiles
> away its argument and leaves node_cpu() unreferenced:
>
> kernel/locking/osq_lock.c:25:19: warning: unused function 'node_cpu' [-Wunused-function]
> static inline int node_cpu(struct optimistic_spin_node *node)
>                   ^
> 1 warning generated.
>
> Since vcpu_is_preempted() is already defined as false in
> include/linux/sched.h, just comment out the redundant macro so that it
> still serves a documentation purpose.
>
> Fixes: f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for arm64")
> Signed-off-by: Qian Cai <[email protected]>
> ---
> arch/arm64/include/asm/spinlock.h | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
> index 102404dc1e13..b05f82e8ba19 100644
> --- a/arch/arm64/include/asm/spinlock.h
> +++ b/arch/arm64/include/asm/spinlock.h
> @@ -17,7 +17,8 @@
>   *
>   * See:
>   * https://lore.kernel.org/lkml/[email protected]
> + *
> + * #define vcpu_is_preempted(cpu) false
>   */
> -#define vcpu_is_preempted(cpu) false
> 
>  #endif /* __ASM_SPINLOCK_H */
Does adding a __maybe_unused tag help prevent the warning? Like this:
diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index 6ef600aa0f47..0722655af34f 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -22,7 +22,7 @@ static inline int encode_cpu(int cpu_nr)
 	return cpu_nr + 1;
 }
 
-static inline int node_cpu(struct optimistic_spin_node *node)
+static inline int __maybe_unused node_cpu(struct optimistic_spin_node *node)
 {
 	return node->cpu - 1;
 }
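[ __maybe_unused is the kernel's wrapper around the compiler's
  "unused" attribute, roughly (paraphrased from
  include/linux/compiler_attributes.h):

  #define __maybe_unused	__attribute__((__unused__))

  It silences -Wunused-function at the definition rather than restoring
  a real use at the call site. ]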
Cheers,
Longman