From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Arvind Sankar,
	Borislav Petkov, Kees Cook, Miguel Ojeda, Nathan Chancellor,
	Sedat Dilek, Sasha Levin
Subject: [PATCH 5.9 645/757] x86/asm: Replace __force_order with a memory clobber
Date: Tue, 27 Oct 2020 14:54:55 +0100
Message-Id: <20201027135520.817341403@linuxfoundation.org>
X-Mailer: git-send-email 2.29.1
In-Reply-To: <20201027135450.497324313@linuxfoundation.org>
References: <20201027135450.497324313@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

From: Arvind Sankar

[ Upstream commit aa5cacdc29d76a005cbbee018a47faa6e724dd2d ]

The CRn accessor functions use __force_order as a dummy operand to
prevent the compiler from reordering CRn reads/writes with respect to
each other.

The fact that the asm is volatile should be enough to prevent this:
volatile asm statements should be executed in program order. However GCC
4.9.x and 5.x have a bug that might result in reordering. This was fixed
in 8.1, 7.3 and 6.5. Versions prior to these, including 5.x and 4.9.x,
may reorder volatile asm statements with respect to each other.

There are some issues with __force_order as implemented:
- It is used only as an input operand for the write functions, and hence
  doesn't do anything additional to prevent reordering writes.
- It allows memory accesses to be cached/reordered across write
  functions, but CRn writes affect the semantics of memory accesses, so
  this could be dangerous.
- __force_order is not actually defined in the kernel proper, but the
  LLVM toolchain can in some cases require a definition: LLVM (as well
  as GCC 4.9) requires it for PIE code, which is why the compressed
  kernel has a definition, but also the clang integrated assembler may
  consider the address of __force_order to be significant, resulting in
  a reference that requires a definition.

Fix this by:
- Using a memory clobber for the write functions to additionally prevent
  caching/reordering memory accesses across CRn writes.
- Using a dummy input operand with an arbitrary constant address for the
  read functions, instead of a global variable. This will prevent reads
  from being reordered across writes, while allowing memory loads to be
  cached/reordered across CRn reads, which should be safe.

Signed-off-by: Arvind Sankar
Signed-off-by: Borislav Petkov
Reviewed-by: Kees Cook
Reviewed-by: Miguel Ojeda
Tested-by: Nathan Chancellor
Tested-by: Sedat Dilek
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82602
Link: https://lore.kernel.org/lkml/20200527135329.1172644-1-arnd@arndb.de/
Link: https://lkml.kernel.org/r/20200902232152.3709896-1-nivedita@alum.mit.edu
Signed-off-by: Sasha Levin
---
 arch/x86/boot/compressed/pgtable_64.c |  9 ---------
 arch/x86/include/asm/special_insns.h  | 28 +++++++++++++++-------------
 arch/x86/kernel/cpu/common.c          |  4 ++--
 3 files changed, 17 insertions(+), 24 deletions(-)

diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
index c8862696a47b9..7d0394f4ebf97 100644
--- a/arch/x86/boot/compressed/pgtable_64.c
+++ b/arch/x86/boot/compressed/pgtable_64.c
@@ -5,15 +5,6 @@
 #include "pgtable.h"
 #include "../string.h"
 
-/*
- * __force_order is used by special_insns.h asm code to force instruction
- * serialization.
- *
- * It is not referenced from the code, but GCC < 5 with -fPIE would fail
- * due to an undefined symbol. Define it to make these ancient GCCs work.
- */
-unsigned long __force_order;
-
 #define BIOS_START_MIN		0x20000U	/* 128K, less than this is insane */
 #define BIOS_START_MAX		0x9f000U	/* 640K, absolute maximum */
 
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 59a3e13204c34..d6e3bb9363d22 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -11,45 +11,47 @@
 #include
 
 /*
- * Volatile isn't enough to prevent the compiler from reordering the
- * read/write functions for the control registers and messing everything up.
- * A memory clobber would solve the problem, but would prevent reordering of
- * all loads stores around it, which can hurt performance. Solution is to
- * use a variable and mimic reads and writes to it to enforce serialization
+ * The compiler should not reorder volatile asm statements with respect to each
+ * other: they should execute in program order. However GCC 4.9.x and 5.x have
+ * a bug (which was fixed in 8.1, 7.3 and 6.5) where they might reorder
+ * volatile asm. The write functions are not affected since they have memory
+ * clobbers preventing reordering. To prevent reads from being reordered with
+ * respect to writes, use a dummy memory operand.
  */
-extern unsigned long __force_order;
+
+#define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)
 
 void native_write_cr0(unsigned long val);
 
 static inline unsigned long native_read_cr0(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr0,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 	return val;
 }
 
 static __always_inline unsigned long native_read_cr2(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr2,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 	return val;
 }
 
 static __always_inline void native_write_cr2(unsigned long val)
 {
-	asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
+	asm volatile("mov %0,%%cr2": : "r" (val) : "memory");
 }
 
 static inline unsigned long __native_read_cr3(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr3,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr3,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 	return val;
 }
 
 static inline void native_write_cr3(unsigned long val)
 {
-	asm volatile("mov %0,%%cr3": : "r" (val), "m" (__force_order));
+	asm volatile("mov %0,%%cr3": : "r" (val) : "memory");
 }
 
 static inline unsigned long native_read_cr4(void)
@@ -64,10 +66,10 @@ static inline unsigned long native_read_cr4(void)
 	asm volatile("1: mov %%cr4, %0\n"
 		     "2:\n"
 		     _ASM_EXTABLE(1b, 2b)
-		     : "=r" (val), "=m" (__force_order) : "0" (0));
+		     : "=r" (val) : "0" (0), __FORCE_ORDER);
 #else
 	/* CR4 always exists on x86_64. */
-	asm volatile("mov %%cr4,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr4,%0\n\t" : "=r" (val) : __FORCE_ORDER);
 #endif
 	return val;
 }
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index c5d6f17d9b9d3..178499f903661 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -359,7 +359,7 @@ void native_write_cr0(unsigned long val)
 	unsigned long bits_missing = 0;
 
 set_register:
-	asm volatile("mov %0,%%cr0": "+r" (val), "+m" (__force_order));
+	asm volatile("mov %0,%%cr0": "+r" (val) : : "memory");
 
 	if (static_branch_likely(&cr_pinning)) {
 		if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) {
@@ -378,7 +378,7 @@ void native_write_cr4(unsigned long val)
 	unsigned long bits_changed = 0;
 
 set_register:
-	asm volatile("mov %0,%%cr4": "+r" (val), "+m" (cr4_pinned_bits));
+	asm volatile("mov %0,%%cr4": "+r" (val) : : "memory");
 
 	if (static_branch_likely(&cr_pinning)) {
 		if (unlikely((val & cr4_pinned_mask) != cr4_pinned_bits)) {
-- 
2.25.1
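
[ Editor's illustration, not part of the patch: a minimal user-space sketch
  of the constraint idiom the patch switches to. All names here (DUMMY_ORDER,
  demo_write, demo_read, shadow) are made up for the example; only the
  constraint/clobber pattern mirrors what special_insns.h now does. The
  "write" side uses a "memory" clobber, so no memory access may be cached or
  reordered across it; the "read" side uses a dummy "m" input at an arbitrary
  constant address, which keeps reads ordered against those writes without
  forbidding the compiler from caching unrelated loads across the reads.
  Builds with GCC or Clang on x86-64; it only models the pattern and never
  touches real control registers. ]

#include <stdio.h>

/* Dummy input operand: arbitrary constant address, never actually accessed. */
#define DUMMY_ORDER "m"(*(unsigned int *)0x1000UL)

static unsigned long shadow;	/* stands in for a control register */

static void demo_write(unsigned long val)
{
	/*
	 * Like the CRn write helpers: the "memory" clobber tells the compiler
	 * this asm may change any memory, so nothing is cached/moved across it.
	 */
	asm volatile("mov %1, %0" : "=m" (shadow) : "r" (val) : "memory");
}

static unsigned long demo_read(void)
{
	unsigned long val;

	/*
	 * Like the CRn read helpers: the dummy "m" input may be clobbered by
	 * the writes above, so this read cannot be hoisted past them, yet no
	 * full memory clobber is needed here.
	 */
	asm volatile("mov %1, %0" : "=r" (val) : "m" (shadow), DUMMY_ORDER);
	return val;
}

int main(void)
{
	demo_write(42);
	printf("shadow = %lu\n", demo_read());
	return 0;
}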