From: Arvind Sankar
Date: Wed, 2 Sep 2020 12:08:28 -0400
To: Arvind Sankar
Cc: Linus Torvalds, Miguel Ojeda, Sedat Dilek, Segher Boessenkool, Thomas Gleixner, Nick Desaulniers, "Paul E. McKenney", Ingo Molnar, Arnd Bergmann, Borislav Petkov, "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)", "H. Peter Anvin", "Kirill A. Shutemov", Kees Cook, Peter Zijlstra, Juergen Gross, Andy Lutomirski, Andrew Cooper, LKML, clang-built-linux, Will Deacon, nadav.amit@gmail.com, Nathan Chancellor
Subject: Re: [PATCH v2] x86/asm: Replace __force_order with memory clobber
Message-ID: <20200902160828.GA3297881@rani.riverdale.lan>
References: <20200823212550.3377591-1-nivedita@alum.mit.edu> <20200902153346.3296117-1-nivedita@alum.mit.edu>
In-Reply-To: <20200902153346.3296117-1-nivedita@alum.mit.edu>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Sep 02, 2020 at 11:33:46AM -0400, Arvind Sankar wrote:
> Fix this by:
> - Using a memory clobber for the write functions to additionally prevent
>   caching/reordering memory accesses across CRn writes.
> - Using a dummy input operand with an arbitrary constant address for the
>   read functions, instead of a global variable. This will prevent reads
>   from being reordered across writes, while allowing memory loads to be
>   cached/reordered across CRn reads, which should be safe.

Any thoughts on whether FORCE_ORDER is worth it just for CRn? MSRs don't
use it, and Nadav pointed out that PKRU doesn't use it (PKRU doesn't have
a memory clobber on write either). I would guess that most of the volatile
asm has not been written with the assumption that the compiler might
decide to reorder it, so protecting just CRn access doesn't mitigate the
impact of this bug.

Thanks.
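For readers following along, the scheme the quoted patch describes can be
sketched roughly as below. This is not the exact patch text; the helper
names follow the kernel's existing native_read_cr0()/native_write_cr0(),
and the dummy address 0x1000 is one arbitrary choice for the "arbitrary
constant address" the description mentions:

```c
/*
 * Sketch of the approach described above (illustrative, not the patch
 * itself). The dummy input operand gives every CRn read a fake memory
 * input; since the CRn writes carry a "memory" clobber (and so may
 * write any memory, including the dummy location), the compiler cannot
 * reorder a CRn read across a CRn write. Ordinary loads, which don't
 * alias the dummy operand, may still be cached across a CRn read.
 */
#define __FORCE_ORDER "m"(*(unsigned int *)0x1000UL)

static inline unsigned long native_read_cr0(void)
{
	unsigned long val;

	/* Read CR0; the dummy operand orders this against CRn writes. */
	asm volatile("mov %%cr0,%0" : "=r" (val) : __FORCE_ORDER);
	return val;
}

static inline void native_write_cr0(unsigned long val)
{
	/*
	 * "memory" clobber: no memory access may be cached across or
	 * reordered past the CR0 write.
	 */
	asm volatile("mov %0,%%cr0" : : "r" (val) : "memory");
}
```

The asymmetry is deliberate: writes to control registers can change memory
semantics (e.g. paging via CR0/CR3), so they get the full clobber, while
reads only need to be kept in order relative to those writes.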