Subject: Re: [patch 05/15] x86/tlb: Move __flush_tlb() out of line
To: Tom Lendacky, Thomas Gleixner, LKML
Cc: x86@kernel.org, Kees Cook, Paolo Bonzini, Boris Ostrovsky
References: <20200419203137.214111265@linutronix.de> <20200419203336.134117165@linutronix.de> <5857df01-abeb-c6cd-8e92-64eb365dc835@amd.com>
From: Jürgen Groß
Date: Mon, 20 Apr 2020 16:03:17 +0200
In-Reply-To: <5857df01-abeb-c6cd-8e92-64eb365dc835@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 20.04.20 15:48, Tom Lendacky wrote:
> On 4/19/20 3:31 PM, Thomas Gleixner wrote:
>> cpu_tlbstate is exported because various TLB related functions need
>> access to it, but cpu_tlbstate is sensitive
>> information which should only be accessed by well contained kernel
>> functions and not be directly exposed to modules.
>>
>> The various TLB flush functions need access to cpu_tlbstate. As a first
>> step move __flush_tlb() out of line and hide the native function. The
>> latter can be static when CONFIG_PARAVIRT is disabled.
>>
>> Consolidate the name space while at it and remove the pointless extra
>> wrapper in the paravirt code.
>>
>> No functional change.
>>
>> Signed-off-by: Thomas Gleixner
>> Cc: Thomas Lendacky
>> Cc: Juergen Gross
>> ---
>>  arch/x86/include/asm/paravirt.h    |    4 +++-
>>  arch/x86/include/asm/tlbflush.h    |   29 +++++------------------------
>>  arch/x86/kernel/cpu/mtrr/generic.c |    4 ++--
>>  arch/x86/kernel/paravirt.c         |    7 +------
>>  arch/x86/mm/mem_encrypt.c          |    2 +-
>>  arch/x86/mm/tlb.c                  |   33 ++++++++++++++++++++++++++++++++-
>>  arch/x86/platform/uv/tlb_uv.c      |    2 +-
>>  7 files changed, 45 insertions(+), 36 deletions(-)
>>
>> --- a/arch/x86/include/asm/paravirt.h
>> +++ b/arch/x86/include/asm/paravirt.h
>> @@ -47,7 +47,9 @@ static inline void slow_down_io(void)
>>  #endif
>>  }
>>
>> -static inline void __flush_tlb(void)
>> +void native_flush_tlb_local(void);
>> +
>> +static inline void __flush_tlb_local(void)
>>  {
>>  	PVOP_VCALL0(mmu.flush_tlb_user);
>>  }
>> --- a/arch/x86/include/asm/tlbflush.h
>> +++ b/arch/x86/include/asm/tlbflush.h
>> @@ -140,12 +140,13 @@ static inline unsigned long build_cr3_no
>>  	return __sme_pa(pgd) | kern_pcid(asid) | CR3_NOFLUSH;
>>  }
>>
>> +void flush_tlb_local(void);
>> +
>>  #ifdef CONFIG_PARAVIRT
>>  #include <asm/paravirt.h>
>>  #else
>> -#define __flush_tlb() __native_flush_tlb()
>> -#define __flush_tlb_global() __native_flush_tlb_global()
>> -#define __flush_tlb_one_user(addr) __native_flush_tlb_one_user(addr)
>> +#define __flush_tlb_global()		__native_flush_tlb_global()
>> +#define __flush_tlb_one_user(addr)	__native_flush_tlb_one_user(addr)
>>  #endif
>>
>>  struct tlb_context {
>> @@ -371,24 +372,6 @@ static inline void invalidate_user_asid(
>>  }
>>
>>  /*
>> - * flush the entire current user mapping
>> - */
>> -static inline void __native_flush_tlb(void)
>> -{
>> -	/*
>> -	 * Preemption or interrupts must be disabled to protect the access
>> -	 * to the per CPU variable and to prevent being preempted between
>> -	 * read_cr3() and write_cr3().
>> -	 */
>> -	WARN_ON_ONCE(preemptible());
>> -
>> -	invalidate_user_asid(this_cpu_read(cpu_tlbstate.loaded_mm_asid));
>> -
>> -	/* If current->mm == NULL then the read_cr3() "borrows" an mm */
>> -	native_write_cr3(__native_read_cr3());
>> -}
>> -
>> -/*
>>   * flush everything
>>   */
>>  static inline void __native_flush_tlb_global(void)
>> @@ -461,7 +444,7 @@ static inline void __flush_tlb_all(void)
>>  		/*
>>  		 * !PGE -> !PCID (setup_pcid()), thus every flush is total.
>>  		 */
>> -		__flush_tlb();
>> +		flush_tlb_local();
>>  	}
>>  }
>>
>> @@ -537,8 +520,6 @@ struct flush_tlb_info {
>>  	bool			freed_tables;
>>  };
>>
>> -#define local_flush_tlb() __flush_tlb()
>> -
>>  #define flush_tlb_mm(mm)						\
>>  		flush_tlb_mm_range(mm, 0UL, TLB_FLUSH_ALL, 0UL, true)
>>
>> --- a/arch/x86/kernel/cpu/mtrr/generic.c
>> +++ b/arch/x86/kernel/cpu/mtrr/generic.c
>> @@ -761,7 +761,7 @@ static void prepare_set(void) __acquires
>>  	/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
>>  	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
>> -	__flush_tlb();
>> +	flush_tlb_local();
>>
>>  	/* Save MTRR state */
>>  	rdmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
>> @@ -778,7 +778,7 @@ static void post_set(void) __releases(se
>>  {
>>  	/* Flush TLBs (no need to flush caches - they are disabled) */
>>  	count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
>> -	__flush_tlb();
>> +	flush_tlb_local();
>>
>>  	/* Intel (P6) standard MTRRs */
>>  	mtrr_wrmsr(MSR_MTRRdefType, deftype_lo, deftype_hi);
>> --- a/arch/x86/kernel/paravirt.c
>> +++ b/arch/x86/kernel/paravirt.c
>> @@ -160,11 +160,6 @@ unsigned paravirt_patch_insns(void *insn
>>  	return insn_len;
>>  }
>>
>> -static void native_flush_tlb(void)
>> -{
>> -	__native_flush_tlb();
>> -}
>> -
>>  /*
>>   * Global pages have to be flushed a bit differently. Not a real
>>   * performance problem because this does not happen often.
>> @@ -359,7 +354,7 @@ struct paravirt_patch_template pv_ops =
>>  #endif /* CONFIG_PARAVIRT_XXL */
>>
>>  	/* Mmu ops. */
>> -	.mmu.flush_tlb_user	= native_flush_tlb,
>> +	.mmu.flush_tlb_user	= native_flush_tlb_local,
>>  	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
>>  	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
>>  	.mmu.flush_tlb_others	= native_flush_tlb_others,
>> --- a/arch/x86/mm/mem_encrypt.c
>> +++ b/arch/x86/mm/mem_encrypt.c
>> @@ -134,7 +134,7 @@ static void __init __sme_early_map_unmap
>>  		size = (size <= PMD_SIZE) ? 0 : size - PMD_SIZE;
>>  	} while (size);
>>
>> -	__native_flush_tlb();
>> +	flush_tlb_local();
>
> This invoked __native_flush_tlb() because of how early it is called and
> the paravirt ops support isn't set up yet, resulting in a crash if not
> invoking the native version directly. So this needs a "native" version
> of the TLB flush to invoke.

I don't think this is still true. With my rework of pvops, all functions
are now in one struct which is statically initialized, so everything
should work from the time the kernel is mapped. In case it doesn't,
something is very wrong IMO.


Juergen
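Juergen's point - that a statically initialized ops struct makes the indirect call safe even before any boot-time paravirt setup has run - can be sketched in plain C. This is a user-space toy, not kernel code: the names mirror the patch (pv_ops, .mmu.flush_tlb_user, native_flush_tlb_local), but the function body and the counter are stand-ins, and the real PVOP_VCALL0 machinery is elided.

```c
/* Stand-in for the real native local TLB flush; here it just counts. */
static int tlb_flush_count;

static void native_flush_tlb_local(void)
{
	tlb_flush_count++;
}

struct mmu_ops {
	void (*flush_tlb_user)(void);
};

struct paravirt_patch_template {
	struct mmu_ops mmu;
};

/*
 * Statically initialized, as pv_ops is in arch/x86/kernel/paravirt.c
 * after the rework: the function pointer is valid from the moment the
 * image is mapped, with no runtime patching needed before the first call.
 */
static struct paravirt_patch_template pv_ops = {
	.mmu.flush_tlb_user = native_flush_tlb_local,
};

/* Analogue of the patch's inline wrapper, minus the PVOP_VCALL0 magic. */
static void __flush_tlb_local(void)
{
	pv_ops.mmu.flush_tlb_user();
}
```

Because pv_ops is filled in at link time, __flush_tlb_local() here dispatches correctly even if it were the very first thing called; a hypervisor guest would simply overwrite .mmu.flush_tlb_user with its own function during early setup.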