From: Nadav Amit
To: linux-kernel@vger.kernel.org, Thomas Gleixner
Cc: Nadav Amit, Ingo Molnar, Borislav Petkov, x86@kernel.org,
    "H. Peter Anvin", Peter Zijlstra, Andy Lutomirski, Kees Cook,
    stable@vger.kernel.org
Subject: [PATCH] x86/special_insn: reverse __force_order logic
Date: Tue, 1 Sep 2020 09:18:57 -0700
Message-Id: <20200901161857.566142-1-namit@vmware.com>

From: Nadav Amit

The __force_order logic seems to be inverted. __force_order is
supposedly used to manipulate the compiler into using its
memory-dependency analysis to enforce ordering between CR writes and
reads. The memory should therefore behave like the CR itself: when the
CR is read, the inline assembly should "read" the memory, so
__force_order should be an input; when the CR is written, the inline
assembly should "write" the memory, so __force_order should be an
output. This change should make it possible to remove the "volatile"
qualifier from CR reads in a later patch.

While at it, remove the extra new-line from the inline assembly, as it
only confuses GCC when it estimates the cost of the inline assembly.

Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x86@kernel.org
Cc: "H. Peter Anvin"
Cc: Peter Zijlstra
Cc: Andy Lutomirski
Cc: Kees Cook
Cc: stable@vger.kernel.org
Signed-off-by: Nadav Amit
---
Unless I misunderstand the logic, __force_order should also be used by
rdpkru() and wrpkru(), which currently have no dependency on it. I also
did not understand why native_write_cr0() has a read/write dependency
on __force_order, or why native_write_cr4() no longer has any
dependency on __force_order.
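
To illustrate the intended __force_order pattern outside the kernel,
here is a minimal user-space sketch. It is illustrative only: fake_cr
and order_token are invented stand-ins for a control register and
__force_order, and an ordinary memory move replaces the privileged CR
access.

#include <stdio.h>

/* fake_cr stands in for a control register and order_token for
 * __force_order; both names are invented for this sketch. */
static unsigned long fake_cr;
static unsigned long order_token;

static void fake_write_cr(unsigned long val)
{
	/* Write side: order_token is an output ("=m"), so this asm is a
	 * producer in the compiler's memory-dependency analysis. */
	asm volatile("mov %2, %0"
		     : "=m" (fake_cr), "=m" (order_token)
		     : "r" (val));
}

static unsigned long fake_read_cr(void)
{
	unsigned long val;

	/* Read side: order_token is an input ("m"), so the compiler may
	 * not hoist this asm above a preceding fake_write_cr(). */
	asm volatile("mov %1, %0"
		     : "=r" (val)
		     : "m" (fake_cr), "m" (order_token));
	return val;
}

int main(void)
{
	fake_write_cr(42);
	printf("%lu\n", fake_read_cr());	/* prints 42 */
	return 0;
}

Note that order_token is never actually written or read by the
assembly; its only job is to give the compiler a dependency chain
between the two statements.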
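
As for the rdpkru() note above: assuming the .byte-encoded helper in
the current header, attaching the same dependency would presumably look
something like the following (hypothetical, not part of this patch; the
"m" (__force_order) operand is the only change from the existing
helper):

static inline u32 rdpkru(void)
{
	u32 ecx = 0;
	u32 edx, pkru;

	/*
	 * Hypothetical, not in this patch: make the PKRU read a consumer
	 * of __force_order, mirroring the CR reads above.
	 */
	asm volatile(".byte 0x0f,0x01,0xee\n\t"
		     : "=a" (pkru), "=d" (edx)
		     : "c" (ecx), "m" (__force_order));
	return pkru;
}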
---
 arch/x86/include/asm/special_insns.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 5999b0b3dd4a..dff5e5b01a3c 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -24,32 +24,32 @@ void native_write_cr0(unsigned long val);
 static inline unsigned long native_read_cr0(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr0,%0" : "=r" (val) : "m" (__force_order));
 	return val;
 }
 
 static __always_inline unsigned long native_read_cr2(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr2,%0" : "=r" (val) : "m" (__force_order));
 	return val;
 }
 
 static __always_inline void native_write_cr2(unsigned long val)
 {
-	asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
+	asm volatile("mov %1,%%cr2" : "=m" (__force_order) : "r" (val));
 }
 
 static inline unsigned long __native_read_cr3(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr3,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr3,%0" : "=r" (val) : "m" (__force_order));
 	return val;
 }
 
 static inline void native_write_cr3(unsigned long val)
 {
-	asm volatile("mov %0,%%cr3": : "r" (val), "m" (__force_order));
+	asm volatile("mov %1,%%cr3" : "=m" (__force_order) : "r" (val));
 }
 
 static inline unsigned long native_read_cr4(void)
@@ -64,10 +64,10 @@ static inline unsigned long native_read_cr4(void)
 	asm volatile("1: mov %%cr4, %0\n"
 		     "2:\n"
 		     _ASM_EXTABLE(1b, 2b)
-		     : "=r" (val), "=m" (__force_order) : "0" (0));
+		     : "=r" (val) : "m" (__force_order), "0" (0));
 #else
 	/* CR4 always exists on x86_64. */
-	asm volatile("mov %%cr4,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr4,%0" : "=r" (val) : "m" (__force_order));
 #endif
 	return val;
 }
-- 
2.25.1