From: Arvind Sankar
Date: Sat, 22 Aug 2020 21:16:52 -0400
To: Linus Torvalds
Cc: Arvind Sankar, Miguel Ojeda, Sedat Dilek, Segher Boessenkool,
	Thomas Gleixner, Nick Desaulniers, "Paul E. McKenney", Ingo Molnar,
	Arnd Bergmann, Borislav Petkov,
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)", "H. Peter Anvin",
	"Kirill A. Shutemov", Zhenzhong Duan, Kees Cook, Peter Zijlstra,
	Juergen Gross, Andy Lutomirski, Andrew Cooper, LKML,
	clang-built-linux, Will Deacon
Subject: Re: [PATCH] x86: work around clang IAS bug referencing __force_order
Message-ID: <20200823011652.GA1910689@rani.riverdale.lan>
References: <87eenzqzmr.fsf@nanos.tec.linutronix.de>
	<20200822035552.GA104886@rani.riverdale.lan>
	<20200822084133.GL28786@gate.crashing.org>
	<20200822231055.GA1871205@rani.riverdale.lan>

On Sat, Aug 22, 2020 at 05:10:21PM -0700, Linus Torvalds wrote:
> On Sat, Aug 22, 2020 at 4:11 PM Arvind Sankar wrote:
> >
> > Actually, is a memory clobber required for correctness? Memory accesses
> > probably shouldn't be reordered across a CRn write. Is asm volatile
> > enough to stop that or do you need a memory clobber?
>
> You do need a memory clobber if you really care about ordering wrt
> normal memory references.
>
> That said, I'm not convinced we do care here. Normal memory accesses
> (as seen by the compiler) should be entirely immune to any changes we
> do wrt CRx registers.
>
> Because code that really fundamentally changes kernel mappings or
> access rules is already written in low-level assembler (eg the entry
> routines or bootup).
>
> Anything that relies on the more subtle changes (ie user space
> accesses etc) should already be ordered by other things - usually by
> the fact that they are also "asm volatile".
>
> But hey, maybe somebody can come up with an exception to that.
>
>               Linus

I'm sure in practice it can't happen, as any memory accesses happening
immediately around write_cr3() are probably mapped the same in both
page tables anyway, but e.g. cleanup_trampoline() in
arch/x86/boot/compressed/pgtable_64.c does:

	memcpy(pgtable, trampoline_pgtable, PAGE_SIZE);
	native_write_cr3((unsigned long)pgtable);

There would probably be trouble if the compiler were to reverse the
order here.

We could actually make write_crn() use a "memory" clobber, and
read_crn() use "m"(*(int *)0x1000) as an input operand. A bit hacky,
but no global variable needed. And maybe read_crn() doesn't even have
to be volatile.

Also, if we look at the rdmsr/wrmsr pair, there's no force_order
equivalent AFAICT. wrmsr has a memory clobber, but the asm
volatile-ness is the only thing enforcing read/write ordering.