From: Arvind Sankar
Date: Wed, 2 Sep 2020 13:36:55 -0400
To: Segher Boessenkool
Cc: Arvind Sankar, Linus Torvalds, Miguel Ojeda, Sedat Dilek, Thomas Gleixner, Nick Desaulniers, "Paul E. McKenney", Ingo Molnar, Arnd Bergmann, Borislav Petkov, "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)", "H. Peter Anvin", "Kirill A. Shutemov", Kees Cook, Peter Zijlstra, Juergen Gross, Andy Lutomirski, Andrew Cooper, LKML, clang-built-linux, Will Deacon, nadav.amit@gmail.com, Nathan Chancellor
Subject: Re: [PATCH v2] x86/asm: Replace __force_order with memory clobber
Message-ID: <20200902173655.GA3469316@rani.riverdale.lan>
References: <20200823212550.3377591-1-nivedita@alum.mit.edu> <20200902153346.3296117-1-nivedita@alum.mit.edu> <20200902171624.GX28786@gate.crashing.org>
In-Reply-To: <20200902171624.GX28786@gate.crashing.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Sep 02, 2020 at 12:16:24PM -0500, Segher Boessenkool wrote:
> On Wed, Sep 02, 2020 at 11:33:46AM -0400, Arvind Sankar wrote:
> > The CRn accessor functions use __force_order as a dummy operand to
> > prevent the compiler from reordering the inline asm.
> >
> > The fact that the asm is volatile should be enough to prevent this
> > already, however older versions of GCC had a bug that could sometimes
> > result in reordering. This was fixed in 8.1, 7.3 and 6.5. Versions prior
> > to these, including 5.x and 4.9.x, may reorder volatile asm.
>
> Reordering them amongst themselves. Yes, that is bad. Reordering them
> with "random" code is Just Fine.

Right, that's what I meant, but the text isn't clear. I will edit to
clarify.

> Volatile asm should be executed on the real machine exactly as often as
> on the C abstract machine, and in the same order. That is all.
>
> > + * The compiler should not reorder volatile asm,
>
> So, this comment needs work. And perhaps the rest of the patch as well?
>
> Segher

I think the patch itself is ok, we only want to avoid reordering
volatile asm vs volatile asm. But the comment needs clarification.

Thanks.