From: Sebastian Andrzej Siewior
To: linux-kernel@vger.kernel.org
Cc: tglx@linutronix.de, Russell King, linux-arm-kernel@lists.infradead.org,
    Sebastian Andrzej Siewior
Subject: [PATCH 01/34] ARM: Use CONFIG_PREEMPTION
Date: Tue, 15 Oct 2019 21:17:48 +0200
Message-Id: <20191015191821.11479-2-bigeasy@linutronix.de>
In-Reply-To: <20191015191821.11479-1-bigeasy@linutronix.de>
References: <20191015191821.11479-1-bigeasy@linutronix.de>

From: Thomas Gleixner

CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by CONFIG_PREEMPT_RT.
Both PREEMPT and PREEMPT_RT require the same functionality which today
depends on CONFIG_PREEMPT.
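Concretely, with the PREEMPT_RT model selected, CONFIG_PREEMPT is not defined
while CONFIG_PREEMPTION is, so code still hidden behind the old symbol silently
drops out of an RT build. A minimal stand-alone sketch of that effect (plain C;
the CONFIG_* macros are supplied here as -D compiler flags to emulate the two
configurations, this is not kernel code):

/*
 * Hypothetical stand-alone demo (not kernel code): save as guard_demo.c and
 * build with the macros that Kconfig would otherwise provide, e.g.
 *   cc -DCONFIG_PREEMPT    -DCONFIG_PREEMPTION guard_demo.c   # PREEMPT model
 *   cc -DCONFIG_PREEMPT_RT -DCONFIG_PREEMPTION guard_demo.c   # PREEMPT_RT model
 */
#include <stdio.h>

int main(void)
{
#ifdef CONFIG_PREEMPT
	puts("old guard (#ifdef CONFIG_PREEMPT):    code included");
#else
	puts("old guard (#ifdef CONFIG_PREEMPT):    code left out");
#endif
#ifdef CONFIG_PREEMPTION
	puts("new guard (#ifdef CONFIG_PREEMPTION): code included");
#else
	puts("new guard (#ifdef CONFIG_PREEMPTION): code left out");
#endif
	return 0;
}

With the PREEMPT_RT defines the old guard reports "code left out" while the new
one reports "code included", which is the gap this series closes.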
Switch the entry code, cache over to use CONFIG_PREEMPTION and add output
in show_stack() for PREEMPT_RT.

Cc: Russell King
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Thomas Gleixner
[bigeasy: +traps.c]
Signed-off-by: Sebastian Andrzej Siewior
---
 arch/arm/include/asm/switch_to.h | 2 +-
 arch/arm/kernel/entry-armv.S     | 4 ++--
 arch/arm/kernel/traps.c          | 2 ++
 arch/arm/mm/cache-v7.S           | 4 ++--
 arch/arm/mm/cache-v7m.S          | 4 ++--
 5 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/arm/include/asm/switch_to.h b/arch/arm/include/asm/switch_to.h
index d3e937dcee4d0..007d8fea71572 100644
--- a/arch/arm/include/asm/switch_to.h
+++ b/arch/arm/include/asm/switch_to.h
@@ -10,7 +10,7 @@
  * to ensure that the maintenance completes in case we migrate to another
  * CPU.
  */
-#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP) && defined(CONFIG_CPU_V7)
+#if defined(CONFIG_PREEMPTION) && defined(CONFIG_SMP) && defined(CONFIG_CPU_V7)
 #define __complete_pending_tlbi()	dsb(ish)
 #else
 #define __complete_pending_tlbi()
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 858d4e5415326..77f54830554c3 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -211,7 +211,7 @@ ENDPROC(__dabt_svc)
 	svc_entry
 	irq_handler
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 	ldr	r8, [tsk, #TI_PREEMPT]		@ get preempt count
 	ldr	r0, [tsk, #TI_FLAGS]		@ get flags
 	teq	r8, #0				@ if preempt count != 0
@@ -226,7 +226,7 @@ ENDPROC(__irq_svc)
 
 	.ltorg
 
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 svc_preempt:
 	mov	r8, lr
 1:	bl	preempt_schedule_irq		@ irq en/disable is done inside
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index c053abd1fb539..abb7dd7e656fd 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -248,6 +248,8 @@ void show_stack(struct task_struct *tsk, unsigned long *sp)
 
 #ifdef CONFIG_PREEMPT
 #define S_PREEMPT " PREEMPT"
+#elif defined(CONFIG_PREEMPT_RT)
+#define S_PREEMPT " PREEMPT_RT"
 #else
 #define S_PREEMPT ""
 #endif
diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
index 0ee8fc4b4672c..dc8f152f35566 100644
--- a/arch/arm/mm/cache-v7.S
+++ b/arch/arm/mm/cache-v7.S
@@ -135,13 +135,13 @@ ENTRY(v7_flush_dcache_all)
 	and	r1, r1, #7			@ mask of the bits for current cache only
 	cmp	r1, #2				@ see what cache we have at this level
 	blt	skip				@ skip if no cache, or just i-cache
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 	save_and_disable_irqs_notrace r9	@ make cssr&csidr read atomic
 #endif
 	mcr	p15, 2, r10, c0, c0, 0		@ select current cache level in cssr
 	isb					@ isb to sych the new cssr&csidr
 	mrc	p15, 1, r1, c0, c0, 0		@ read the new csidr
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 	restore_irqs_notrace r9
 #endif
 	and	r2, r1, #7			@ extract the length of the cache lines
diff --git a/arch/arm/mm/cache-v7m.S b/arch/arm/mm/cache-v7m.S
index a0035c426ce63..1bc3a0a507539 100644
--- a/arch/arm/mm/cache-v7m.S
+++ b/arch/arm/mm/cache-v7m.S
@@ -183,13 +183,13 @@ ENTRY(v7m_flush_dcache_all)
 	and	r1, r1, #7			@ mask of the bits for current cache only
 	cmp	r1, #2				@ see what cache we have at this level
 	blt	skip				@ skip if no cache, or just i-cache
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 	save_and_disable_irqs_notrace r9	@ make cssr&csidr read atomic
 #endif
 	write_csselr r10, r1			@ set current cache level
 	isb					@ isb to sych the new cssr&csidr
 	read_ccsidr r1				@ read the new csidr
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPTION
 	restore_irqs_notrace r9
 #endif
 	and	r2, r1, #7			@ extract the length of the cache lines
-- 
2.23.0
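
As a footnote to the traps.c hunk: the added #elif only selects which literal
gets appended to the oops banner. A stand-alone sketch of that chain (again
plain C with the CONFIG_* macros supplied via -D flags, not the kernel build):

/*
 * Hypothetical demo mirroring the S_PREEMPT chain from the patch above.
 * Try: cc -DCONFIG_PREEMPT_RT s_preempt_demo.c && ./a.out
 */
#include <stdio.h>

#ifdef CONFIG_PREEMPT
#define S_PREEMPT " PREEMPT"
#elif defined(CONFIG_PREEMPT_RT)
#define S_PREEMPT " PREEMPT_RT"
#else
#define S_PREEMPT ""
#endif

int main(void)
{
	/* The kernel pastes tags like this into its error banner; here we
	 * only print the suffix that the chosen configuration would add. */
	printf("banner suffix: \"%s\"\n", S_PREEMPT);
	return 0;
}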