Subject: Re: [GIT PULL] x86/topology changes for v5.3
From: Xi Ruoyao
To: Thomas Gleixner, Peter Zijlstra
Cc: Jiri Kosina, Kees Cook, Linus Torvalds, Ingo Molnar, Linux List Kernel Mailing, Borislav Petkov, Len Brown, Andrew Morton, "Rafael J. Wysocki", Tony Luck, Bob Moore, Erik Schmauss, Josh Poimboeuf, Daniel Bristot de Oliveira, xry111@mengyan1223.wang
Date: Wed, 10 Jul 2019 23:58:03 +0800
References: <201907091727.91CC6C72D8@keescook> <1ad2de95e694a29909801d022fe2d556df9a4bd5.camel@mengyan1223.wang> <768463eb26a2feb0fcc374fd7f9cc28b96976917.camel@mengyan1223.wang> <20190710134433.GN3402@hirez.programming.kicks-ass.net> <20190710142653.GJ3419@hirez.programming.kicks-ass.net>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2019-07-10 17:13 +0200, Thomas Gleixner wrote:
> Something like the below. Builds and boots, must be perfect.
>
> Thanks,
>
> 	tglx

Tested-by: Xi Ruoyao

> 8<----------------
>
>  arch/x86/include/asm/processor.h     |    1
>  arch/x86/include/asm/special_insns.h |   41 -------------------
>  arch/x86/kernel/cpu/common.c         |   72 +++++++++++++++++++++++++++--------
>  arch/x86/kernel/smpboot.c            |   14 ------
>  arch/x86/xen/smp_pv.c                |    1
>  5 files changed, 61 insertions(+), 68 deletions(-)
>
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -741,6 +741,7 @@ extern void load_direct_gdt(int);
>  extern void load_fixmap_gdt(int);
>  extern void load_percpu_segment(int);
>  extern void cpu_init(void);
> +extern void cr4_init(void);
>
>  static inline unsigned long get_debugctlmsr(void)
>  {
> --- a/arch/x86/include/asm/special_insns.h
> +++ b/arch/x86/include/asm/special_insns.h
> @@ -18,9 +18,7 @@
>   */
>  extern unsigned long __force_order;
>
> -/* Starts false and gets enabled once CPU feature detection is done. */
> -DECLARE_STATIC_KEY_FALSE(cr_pinning);
> -extern unsigned long cr4_pinned_bits;
> +void native_write_cr0(unsigned long val);
>
>  static inline unsigned long native_read_cr0(void)
>  {
> @@ -29,24 +27,6 @@ static inline unsigned long native_read_
>  	return val;
>  }
>
> -static inline void native_write_cr0(unsigned long val)
> -{
> -	unsigned long bits_missing = 0;
> -
> -set_register:
> -	asm volatile("mov %0,%%cr0": "+r" (val), "+m" (__force_order));
> -
> -	if (static_branch_likely(&cr_pinning)) {
> -		if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) {
> -			bits_missing = X86_CR0_WP;
> -			val |= bits_missing;
> -			goto set_register;
> -		}
> -		/* Warn after we've set the missing bits. */
> -		WARN_ONCE(bits_missing, "CR0 WP bit went missing!?\n");
> -	}
> -}
> -
>  static inline unsigned long native_read_cr2(void)
>  {
>  	unsigned long val;
> @@ -91,24 +71,7 @@ static inline unsigned long native_read_
>  	return val;
>  }
>
> -static inline void native_write_cr4(unsigned long val)
> -{
> -	unsigned long bits_missing = 0;
> -
> -set_register:
> -	asm volatile("mov %0,%%cr4": "+r" (val), "+m" (cr4_pinned_bits));
> -
> -	if (static_branch_likely(&cr_pinning)) {
> -		if (unlikely((val & cr4_pinned_bits) != cr4_pinned_bits)) {
> -			bits_missing = ~val & cr4_pinned_bits;
> -			val |= bits_missing;
> -			goto set_register;
> -		}
> -		/* Warn after we've set the missing bits. */
> -		WARN_ONCE(bits_missing, "CR4 bits went missing: %lx!?\n",
> -			  bits_missing);
> -	}
> -}
> +void native_write_cr4(unsigned long val);
>
>  #ifdef CONFIG_X86_64
>  static inline unsigned long native_read_cr8(void)
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -366,10 +366,62 @@ static __always_inline void setup_umip(s
>  	cr4_clear_bits(X86_CR4_UMIP);
>  }
>
> -DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
> -EXPORT_SYMBOL(cr_pinning);
> -unsigned long cr4_pinned_bits __ro_after_init;
> -EXPORT_SYMBOL(cr4_pinned_bits);
> +static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
> +static unsigned long cr4_pinned_bits __ro_after_init;
> +
> +void native_write_cr0(unsigned long val)
> +{
> +	unsigned long bits_missing = 0;
> +
> +set_register:
> +	asm volatile("mov %0,%%cr0": "+r" (val), "+m" (__force_order));
> +
> +	if (static_branch_likely(&cr_pinning)) {
> +		if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) {
> +			bits_missing = X86_CR0_WP;
> +			val |= bits_missing;
> +			goto set_register;
> +		}
> +		/* Warn after we've set the missing bits. */
> +		WARN_ONCE(bits_missing, "CR0 WP bit went missing!?\n");
> +	}
> +}
> +EXPORT_SYMBOL(native_write_cr0);
> +
> +void native_write_cr4(unsigned long val)
> +{
> +	unsigned long bits_missing = 0;
> +
> +set_register:
> +	asm volatile("mov %0,%%cr4": "+r" (val), "+m" (cr4_pinned_bits));
> +
> +	if (static_branch_likely(&cr_pinning)) {
> +		if (unlikely((val & cr4_pinned_bits) != cr4_pinned_bits)) {
> +			bits_missing = ~val & cr4_pinned_bits;
> +			val |= bits_missing;
> +			goto set_register;
> +		}
> +		/* Warn after we've set the missing bits. */
> +		WARN_ONCE(bits_missing, "CR4 bits went missing: %lx!?\n",
> +			  bits_missing);
> +	}
> +}
> +EXPORT_SYMBOL(native_write_cr4);
> +
> +void cr4_init(void)
> +{
> +	unsigned long cr4 = __read_cr4();
> +
> +	if (boot_cpu_has(X86_FEATURE_PCID))
> +		cr4 |= X86_CR4_PCIDE;
> +	if (static_branch_likely(&cr_pinning))
> +		cr4 |= cr4_pinned_bits;
> +
> +	__write_cr4(cr4);
> +
> +	/* Initialize cr4 shadow for this CPU. */
> +	this_cpu_write(cpu_tlbstate.cr4, cr4);
> +}
>
>  /*
>   * Once CPU feature detection is finished (and boot params have been
> @@ -1723,12 +1775,6 @@ void cpu_init(void)
>
>  	wait_for_master_cpu(cpu);
>
> -	/*
> -	 * Initialize the CR4 shadow before doing anything that could
> -	 * try to read it.
> -	 */
> -	cr4_init_shadow();
> -
>  	if (cpu)
>  		load_ucode_ap();
>
> @@ -1823,12 +1869,6 @@ void cpu_init(void)
>
>  	wait_for_master_cpu(cpu);
>
> -	/*
> -	 * Initialize the CR4 shadow before doing anything that could
> -	 * try to read it.
> -	 */
> -	cr4_init_shadow();
> -
>  	show_ucode_info_early();
>
>  	pr_info("Initializing CPU#%d\n", cpu);
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -210,28 +210,16 @@ static int enable_start_cpu0;
>   */
>  static void notrace start_secondary(void *unused)
>  {
> -	unsigned long cr4 = __read_cr4();
> -
>  	/*
>  	 * Don't put *anything* except direct CPU state initialization
>  	 * before cpu_init(), SMP booting is too fragile that we want to
>  	 * limit the things done here to the most necessary things.
>  	 */
> -	if (boot_cpu_has(X86_FEATURE_PCID))
> -		cr4 |= X86_CR4_PCIDE;
> -	if (static_branch_likely(&cr_pinning))
> -		cr4 |= cr4_pinned_bits;
> -
> -	__write_cr4(cr4);
> +	cr4_init();
>
>  #ifdef CONFIG_X86_32
>  	/* switch away from the initial page table */
>  	load_cr3(swapper_pg_dir);
> -	/*
> -	 * Initialize the CR4 shadow before doing anything that could
> -	 * try to read it.
> -	 */
> -	cr4_init_shadow();
>  	__flush_tlb_all();
>  #endif
>  	load_current_idt();
> --- a/arch/x86/xen/smp_pv.c
> +++ b/arch/x86/xen/smp_pv.c
> @@ -58,6 +58,7 @@ static void cpu_bringup(void)
>  {
>  	int cpu;
>
> +	cr4_init();
>  	cpu_init();
>  	touch_softlockup_watchdog();
>  	preempt_disable();

-- 
Xi Ruoyao
School of Aerospace Science and Technology, Xidian University