Subject: Re: [patch 05/15] x86/tlb: Move __flush_tlb() out of line
To: Tom Lendacky, Thomas Gleixner, LKML
Cc: x86@kernel.org, Kees Cook, Paolo Bonzini, Boris Ostrovsky
References: <20200419203137.214111265@linutronix.de>
 <20200419203336.134117165@linutronix.de>
 <5857df01-abeb-c6cd-8e92-64eb365dc835@amd.com>
 <852c1af7-bc1c-a86c-d390-7547563bdde4@amd.com>
From: Jürgen Groß
Message-ID: <76124c5d-d5ab-6ad0-b24b-4127894107e5@suse.com>
Date: Mon, 20 Apr 2020 16:38:17 +0200
In-Reply-To: <852c1af7-bc1c-a86c-d390-7547563bdde4@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 20.04.20 16:26, Tom Lendacky wrote:
> On 4/20/20 9:03 AM, Jürgen Groß wrote:
>> On 20.04.20 15:48, Tom Lendacky wrote:
>>> On 4/19/20 3:31 PM, Thomas Gleixner wrote:
>>>> cpu_tlbstate is exported because various TLB related functions need
>>>> access to it, but cpu_tlbstate is sensitive information which should
>>>> only be accessed by well contained kernel functions and not be
>>>> directly exposed to modules.
>>>>
>>>> The various TLB flush functions need access to cpu_tlbstate. As a
>>>> first step move __flush_tlb() out of line and hide the native
>>>> function. The latter can be static when CONFIG_PARAVIRT is disabled.
>>>>
>>>> Consolidate the name space while at it and remove the pointless
>>>> extra wrapper in the paravirt code.
>>>>
>>>> No functional change.
>>>>
>>>> Signed-off-by: Thomas Gleixner
>>>> Cc: Thomas Lendacky
>>>> Cc: Juergen Gross
>>>> ---
>>>>   arch/x86/include/asm/paravirt.h    |    4 +++-
>>>>   arch/x86/include/asm/tlbflush.h    |   29 +++++------------------------
>>>>   arch/x86/kernel/cpu/mtrr/generic.c |    4 ++--
>>>>   arch/x86/kernel/paravirt.c         |    7 +------
>>>>   arch/x86/mm/mem_encrypt.c          |    2 +-
>>>>   arch/x86/mm/tlb.c                  |   33 ++++++++++++++++++++++++++++++++-
>>>>   arch/x86/platform/uv/tlb_uv.c      |    2 +-
>>>>   7 files changed, 45 insertions(+), 36 deletions(-)
>>>>
>>>> --- a/arch/x86/include/asm/paravirt.h
>>>> +++ b/arch/x86/include/asm/paravirt.h
>>>> @@ -47,7 +47,9 @@ static inline void slow_down_io(void)
>>>>   #endif
>>>>   }
>>>> -static inline void __flush_tlb(void)
>>>> +void native_flush_tlb_local(void);
>>>> +
>>>> +static inline void __flush_tlb_local(void)
>>>>   {
>>>>       PVOP_VCALL0(mmu.flush_tlb_user);
>>>>   }
>>>> --- a/arch/x86/include/asm/tlbflush.h
>>>> +++ b/arch/x86/include/asm/tlbflush.h
>>>> @@ -140,12 +140,13 @@ static inline unsigned long build_cr3_no
>>>>       return __sme_pa(pgd) | kern_pcid(asid) | CR3_NOFLUSH;
>>>>   }
>>>> +void flush_tlb_local(void);
>>>> +
>>>>   #ifdef CONFIG_PARAVIRT
>>>>   #include <asm/paravirt.h>
>>>>   #else
>>>> -#define __flush_tlb() __native_flush_tlb()
>>>> -#define __flush_tlb_global() __native_flush_tlb_global()
>>>> -#define __flush_tlb_one_user(addr) __native_flush_tlb_one_user(addr)
>>>> +#define __flush_tlb_global()		__native_flush_tlb_global()
>>>> +#define __flush_tlb_one_user(addr)	__native_flush_tlb_one_user(addr)
>>>>   #endif
>>>>   struct tlb_context {
>>>> @@ -371,24 +372,6 @@ static inline void invalidate_user_asid(
>>>>   }
>>>>   /*
>>>> - * flush the entire current user mapping
>>>> - */
>>>> -static inline void __native_flush_tlb(void)
>>>> -{
>>>> -    /*
>>>> -     * Preemption or interrupts must be disabled to protect the access
>>>> -     * to the per CPU variable and to prevent being preempted between
>>>> -     * read_cr3() and write_cr3().
>>>> -     */
>>>> -    WARN_ON_ONCE(preemptible());
>>>> -
>>>> -    invalidate_user_asid(this_cpu_read(cpu_tlbstate.loaded_mm_asid));
>>>> -
>>>> -    /* If current->mm == NULL then the read_cr3() "borrows" an mm */
>>>> -    native_write_cr3(__native_read_cr3());
>>>> -}
>>>> -
>>>> -/*
>>>>    * flush everything
>>>>    */
>>>>   static inline void __native_flush_tlb_global(void)
>>>> @@ -461,7 +444,7 @@ static inline void __flush_tlb_all(void)
>>>>           /*
>>>>            * !PGE -> !PCID (setup_pcid()), thus every flush is total.
>>>>            */
>>>> -        __flush_tlb();
>>>> +        flush_tlb_local();
>>>>       }
>>>>   }
>>>> @@ -537,8 +520,6 @@ struct flush_tlb_info {
>>>>       bool            freed_tables;
>>>>   };
>>>> -#define local_flush_tlb() __flush_tlb()
>>>> -
>>>>   #define flush_tlb_mm(mm)                        \
>>>>           flush_tlb_mm_range(mm, 0UL, TLB_FLUSH_ALL, 0UL, true)
>>>> --- a/arch/x86/kernel/cpu/mtrr/generic.c
>>>> +++ b/arch/x86/kernel/cpu/mtrr/generic.c
>>>> @@ -761,7 +761,7 @@ static void prepare_set(void) __acquires
>>>>       /* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
>>>>       count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
>>>> -    __flush_tlb();
>>>> +    flush_tlb_local();
>>>>       /* Save MTRR state */
>>>>       rdmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
>>>> @@ -778,7 +778,7 @@ static void post_set(void) __releases(se
>>>>   {
>>>>       /* Flush TLBs (no need to flush caches - they are disabled) */
>>>>       count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
>>>> -    __flush_tlb();
>>>> +    flush_tlb_local();
>>>>       /* Intel (P6) standard MTRRs */
>>>>       mtrr_wrmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
>>>> --- a/arch/x86/kernel/paravirt.c
>>>> +++ b/arch/x86/kernel/paravirt.c
>>>> @@ -160,11 +160,6 @@ unsigned paravirt_patch_insns(void *insn
>>>>       return insn_len;
>>>>   }
>>>> -static void native_flush_tlb(void)
>>>> -{
>>>> -    __native_flush_tlb();
>>>> -}
>>>> -
>>>>   /*
>>>>    * Global pages have to be flushed a bit differently. Not a real
>>>>    * performance problem because this does not happen often.
>>>> @@ -359,7 +354,7 @@ struct paravirt_patch_template pv_ops =
>>>>   #endif /* CONFIG_PARAVIRT_XXL */
>>>>       /* Mmu ops. */
>>>> -    .mmu.flush_tlb_user    = native_flush_tlb,
>>>> +    .mmu.flush_tlb_user    = native_flush_tlb_local,
>>>>       .mmu.flush_tlb_kernel    = native_flush_tlb_global,
>>>>       .mmu.flush_tlb_one_user    = native_flush_tlb_one_user,
>>>>       .mmu.flush_tlb_others    = native_flush_tlb_others,
>>>> --- a/arch/x86/mm/mem_encrypt.c
>>>> +++ b/arch/x86/mm/mem_encrypt.c
>>>> @@ -134,7 +134,7 @@ static void __init __sme_early_map_unmap
>>>>           size = (size <= PMD_SIZE) ? 0 : size - PMD_SIZE;
>>>>       } while (size);
>>>> -    __native_flush_tlb();
>>>> +    flush_tlb_local();
>>>
>>> This invoked __native_flush_tlb() because of how early it is called
>>> and the paravirt ops support isn't set up yet, resulting in a crash
>>> if not invoking the native version directly. So this needs a "native"
>>> version of the tlb flush to invoke.
>>
>> I don't think this is still true. With my rework of pvops to have all
>> functions in one struct which is initialized statically, initially
>> everything should work from the time the kernel is mapped.
>>
>> In case it doesn't there is something very wrong IMO.
>
> The memory encryption support was implemented in 4.14, so it's quite
> possible that this isn't an issue now. I'll test out the patch and
> verify it. What release did your pvops rework land in?

4.20.


Juergen