From: Miguel Ojeda
Date: Mon, 24 Aug 2020 21:13:34 +0200
Subject: Re: [PATCH] x86/asm: Replace __force_order with memory clobber
To: Arvind Sankar
Cc: Linus Torvalds, Sedat Dilek, Segher Boessenkool, Thomas Gleixner,
  Nick Desaulniers, "Paul E. McKenney", Ingo Molnar, Arnd Bergmann,
  Borislav Petkov, "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)",
  "H. Peter Anvin", "Kirill A. Shutemov", Kees Cook, Peter Zijlstra,
  Juergen Gross, Andy Lutomirski, Andrew Cooper, LKML,
  clang-built-linux, Will Deacon
References: <20200823011652.GA1910689@rani.riverdale.lan> <20200823212550.3377591-1-nivedita@alum.mit.edu>
In-Reply-To: <20200823212550.3377591-1-nivedita@alum.mit.edu>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Arvind,

On Sun, Aug 23, 2020 at 11:25 PM Arvind Sankar wrote:
>
> - Using a dummy input operand with an arbitrary constant address for the
>   read functions, instead of a global variable. This will prevent reads
>   from being reordered across writes, while allowing memory loads to be
>   cached/reordered across CRn reads, which should be safe.

Assuming no surprises from compilers, this looks better than dealing
with different code for each compiler.
> Signed-off-by: Arvind Sankar
> Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82602

A lore link to the other discussion would be nice here for context.

> + * The compiler should not reorder volatile asm, however older versions of GCC
> + * had a bug (which was fixed in 8.1, 7.3 and 6.5) where they could sometimes

I'd mention the state of GCC 5 here.

> + * reorder volatile asm. The write functions are not a problem since they have
> + * memory clobbers preventing reordering. To prevent reads from being reordered
> + * with respect to writes, use a dummy memory operand.
> */
> -extern unsigned long __force_order;
> +

Spurious newline?

Cheers,
Miguel