Date: Mon, 14 Oct 2019 16:27:17 +0100
From: Mark Rutland
To: Daniel Axtens
Cc: Andrey Ryabinin, kasan-dev@googlegroups.com, linux-mm@kvack.org,
 x86@kernel.org, glider@google.com, luto@kernel.org,
 linux-kernel@vger.kernel.org, dvyukov@google.com, christophe.leroy@c-s.fr,
 linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com
Subject: Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory
Message-ID: <20191014152717.GA20438@lakrids.cambridge.arm.com>
References: <20191001065834.8880-1-dja@axtens.net>
 <20191001065834.8880-2-dja@axtens.net>
 <352cb4fa-2e57-7e3b-23af-898e113bbe22@virtuozzo.com>
 <87ftjvtoo7.fsf@dja-thinkpad.axtens.net>
In-Reply-To: <87ftjvtoo7.fsf@dja-thinkpad.axtens.net>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Oct 15, 2019 at 12:57:44AM +1100, Daniel Axtens wrote:
> Hi Andrey,
> 
> >> +	/*
> >> +	 * Ensure poisoning is visible before the shadow is made visible
> >> +	 * to other CPUs.
> >> +	 */
> >> +	smp_wmb();
> >
> > I don't quite understand what this barrier does and why it is needed.
> > And if it's really needed, there should be a pairing barrier
> > on the other side, which I don't see.
> 
> Mark might be better able to answer this, but my understanding is that
> we want to make sure that we never have a situation where the writes are
> reordered so that the PTE is installed before all the poisoning is written
> out. I think it follows the logic in __pte_alloc() in mm/memory.c:
> 
> 	/*
> 	 * Ensure all pte setup (eg. pte page lock and page clearing) are
> 	 * visible before the pte is made visible to other CPUs by being
> 	 * put into page tables.

Yup. We need to ensure that if a thread sees a populated shadow PTE, the
corresponding shadow memory has been zeroed. Thus, we need to ensure that
the zeroing is observed by other CPUs before we update the PTE.

We're relying on the absence of a TLB entry preventing another CPU from
loading the corresponding shadow memory until its PTE has been populated
(after the zeroing is visible). Consequently there is no barrier on the
other side, only a control dependency (which would be insufficient on its
own).

There is a potential problem here, as Will Deacon wrote up at:

  https://lore.kernel.org/linux-arm-kernel/20190827131818.14724-1-will@kernel.org/

... in the section starting:

| *** Other architecture maintainers -- start here! ***

... whereby the CPU can spuriously fault on an access after observing a
valid PTE. For arm64 we handle the spurious fault, and it looks like x86
would need something like its vmalloc_fault() applying to the shadow
region to cater for this.

Thanks,
Mark.
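[Editor's illustration] A minimal sketch of the ordering pattern discussed
above, not the actual patch code. The helper name kasan_populate_shadow_pte(),
the use of zero as the poison value, and the page_table_lock-based race
handling are all illustrative assumptions; the point is only the placement of
smp_wmb() between initialising the shadow page and installing its PTE.

```c
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <asm/pgtable.h>

/* Hypothetical helper: populate one shadow PTE, preserving the ordering
 * Mark describes -- zero/poison first, smp_wmb(), then install the PTE. */
static int kasan_populate_shadow_pte(unsigned long addr, pte_t *ptep)
{
	unsigned long page = __get_free_page(GFP_KERNEL);
	pte_t pte;

	if (!page)
		return -ENOMEM;

	/* Zero the new shadow page while no other CPU can reach it yet.
	 * (The real patch may use a dedicated poison value; zero is an
	 * assumption here.) */
	memset((void *)page, 0, PAGE_SIZE);

	/*
	 * Ensure the zeroing above is visible to other CPUs before the PTE
	 * below is. There is no pairing barrier on the reader side: a CPU
	 * cannot load from the shadow page until its PTE is visible, so a
	 * CPU that reaches the page must also observe the zeroed contents.
	 */
	smp_wmb();

	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

	spin_lock(&init_mm.page_table_lock);
	if (pte_none(*ptep))
		set_pte_at(&init_mm, addr, ptep, pte);
	else
		free_page(page);	/* raced with another populater */
	spin_unlock(&init_mm.page_table_lock);

	return 0;
}
```

On the reader side, a KASAN check simply loads from the shadow address; the
only thing standing in for a barrier is that the load cannot complete until
the page-table walk finds the newly installed PTE. The spurious-fault issue
Mark references is precisely where that reasoning needs architecture-specific
help (arm64's spurious-fault handling, or a vmalloc_fault()-style fixup on
x86).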