From: Julien Thierry <julien.thierry@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, daniel.thompson@linaro.org,
    joel@joelfernandes.org, marc.zyngier@arm.com, mark.rutland@arm.com,
    christoffer.dall@arm.com, james.morse@arm.com, catalin.marinas@arm.com,
    will.deacon@arm.com, Julien Thierry <julien.thierry@arm.com>
Subject: [PATCH v4 20/26] arm64: Switch to PMR masking when starting CPUs
Date: Fri, 25 May 2018 10:49:26 +0100
Message-Id: <1527241772-48007-21-git-send-email-julien.thierry@arm.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1527241772-48007-1-git-send-email-julien.thierry@arm.com>
References: <1527241772-48007-1-git-send-email-julien.thierry@arm.com>

Once the boot CPU has been prepared or a new secondary CPU has been
brought up, use ICC_PMR_EL1 to mask interrupts on that CPU and clear the
PSR.I bit.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/irqflags.h |  3 +++
 arch/arm64/kernel/head.S          | 35 +++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/smp.c           |  5 +++++
 3 files changed, 43 insertions(+)

diff --git a/arch/arm64/include/asm/irqflags.h b/arch/arm64/include/asm/irqflags.h
index 193cfd0..d31e9b6 100644
--- a/arch/arm64/include/asm/irqflags.h
+++ b/arch/arm64/include/asm/irqflags.h
@@ -153,5 +153,8 @@ static inline int arch_irqs_disabled_flags(unsigned long flags)
 	return (ARCH_FLAGS_GET_DAIF(flags) & (PSR_I_BIT)) |
 		!(ARCH_FLAGS_GET_PMR(flags) & ICC_PMR_EL1_EN_BIT);
 }
+
+void maybe_switch_to_sysreg_gic_cpuif(void);
+
 #endif
 #endif
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b085306..ba73690 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -648,6 +648,41 @@ set_cpu_boot_mode_flag:
 ENDPROC(set_cpu_boot_mode_flag)
 
 /*
+ * void maybe_switch_to_sysreg_gic_cpuif(void)
+ *
+ * Enable interrupt controller system register access if this feature
+ * has been detected by the alternatives system.
+ *
+ * Before we jump into generic code we must enable interrupt controller system
+ * register access because this is required by the irqflags macros. We must
+ * also mask interrupts at the PMR and unmask them within the PSR. That leaves
+ * us set up and ready for the kernel to make its first call to
+ * arch_local_irq_enable().
+ *
+ */
+ENTRY(maybe_switch_to_sysreg_gic_cpuif)
+alternative_if_not ARM64_HAS_IRQ_PRIO_MASKING
+	b	1f
+alternative_else
+	mrs_s	x0, SYS_ICC_SRE_EL1
+alternative_endif
+	orr	x0, x0, #1
+	msr_s	SYS_ICC_SRE_EL1, x0	// Set ICC_SRE_EL1.SRE==1
+	isb				// Make sure SRE is now set
+	mrs	x0, daif
+	tbz	x0, #7, no_mask_pmr	// Are interrupts on?
+	mov	x0, ICC_PMR_EL1_MASKED
+	msr_s	SYS_ICC_PMR_EL1, x0	// Prepare for unmask of I bit
+	msr	daifclr, #2		// Clear the I bit
+	b	1f
+no_mask_pmr:
+	mov	x0, ICC_PMR_EL1_UNMASKED
+	msr_s	SYS_ICC_PMR_EL1, x0
+1:
+	ret
+ENDPROC(maybe_switch_to_sysreg_gic_cpuif)
+
+/*
  * These values are written with the MMU off, but read with the MMU on.
  * Writers will invalidate the corresponding address, discarding up to a
  * 'Cache Writeback Granule' (CWG) worth of data. The linker script ensures
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index b7fb909..3f39d8c 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -185,6 +185,8 @@ asmlinkage void secondary_start_kernel(void)
 	struct mm_struct *mm = &init_mm;
 	unsigned int cpu;
 
+	maybe_switch_to_sysreg_gic_cpuif();
+
 	cpu = task_cpu(current);
 	set_my_cpu_offset(per_cpu_offset(cpu));
 
@@ -417,6 +419,9 @@ void __init smp_prepare_boot_cpu(void)
 	 * and/or scheduling is enabled.
 	 */
 	apply_boot_alternatives();
+
+	/* Conditionally switch to GIC PMR for interrupt masking */
+	maybe_switch_to_sysreg_gic_cpuif();
 }
 
 static u64 __init of_get_cpu_mpidr(struct device_node *dn)
-- 
1.9.1
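
For readers following the new assembly routine, below is a rough,
user-space C model of the decision it makes once system register access
to the GIC is enabled: if interrupts were masked via PSR.I, the masking
responsibility is handed over to ICC_PMR_EL1 before PSR.I is cleared;
otherwise the PMR is simply left open. This is only an illustrative
sketch, not kernel code: the struct, the helper name and the two PMR
constants are invented for the example, and the values are placeholders
rather than the definitions used by this series.

#include <stdbool.h>
#include <stdio.h>

/* Placeholder priority values, not the series' actual definitions. */
#define EXAMPLE_PMR_MASKED   0x70
#define EXAMPLE_PMR_UNMASKED 0xf0

/* Toy model of the per-CPU state touched by the routine. */
struct cpu_state {
	bool psr_i;        /* PSR.I: true = IRQs masked at the CPU */
	unsigned int pmr;  /* ICC_PMR_EL1 priority mask */
	bool sre;          /* ICC_SRE_EL1.SRE: sysreg GIC access */
};

/* Mirrors the assembly flow: enable SRE, then hand the masking role
 * from PSR.I over to the PMR when interrupts were disabled. */
static void switch_to_pmr_masking(struct cpu_state *cpu)
{
	cpu->sre = true;                        /* ICC_SRE_EL1.SRE = 1, isb */

	if (cpu->psr_i) {                       /* tbz x0, #7, no_mask_pmr */
		cpu->pmr = EXAMPLE_PMR_MASKED;  /* keep IRQs off via PMR */
		cpu->psr_i = false;             /* msr daifclr, #2 */
	} else {
		cpu->pmr = EXAMPLE_PMR_UNMASKED; /* leave IRQs enabled */
	}
}

int main(void)
{
	/* A freshly started CPU typically arrives with PSR.I set. */
	struct cpu_state cpu = { .psr_i = true, .pmr = 0, .sre = false };

	switch_to_pmr_masking(&cpu);
	printf("SRE=%d PSR.I=%d PMR=0x%x\n", cpu.sre, cpu.psr_i, cpu.pmr);
	return 0;
}

The ordering matters: the PMR is written to its masked value before
PSR.I is cleared, so there is no window in which an interrupt could be
taken while neither masking mechanism is in effect.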