Subject: Re: [RFC PATCH v2 1/3] mm/gup: fix gup_fast with dynamic page table folding
To: Jason Gunthorpe, Linus Torvalds
CC: Alexander Gordeev, Gerald Schaefer, Dave Hansen, LKML, linux-mm, linux-arch, Andrew Morton, Russell King, Mike Rapoport, Catalin Marinas, Will Deacon, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, Jeff Dike, Richard Weinberger, Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Arnd Bergmann, Andrey Ryabinin, linux-x86, linux-arm, linux-power, linux-sparc, linux-um, linux-s390, Vasily Gorbik, Heiko Carstens, Christian Borntraeger, Claudio Imbrenda
References: <20200907180058.64880-1-gerald.schaefer@linux.ibm.com> <20200907180058.64880-2-gerald.schaefer@linux.ibm.com> <0dbc6ec8-45ea-0853-4856-2bc1e661a5a5@intel.com> <20200909142904.00b72921@thinkpad> <20200909192534.442f8984@thinkpad> <20200909180324.GI87483@ziepe.ca> <20200910093925.GB29166@oc3871087118.ibm.com> <20200910181319.GO87483@ziepe.ca>
From: John Hubbard
Message-ID: <0c9bcb54-914b-e582-dd6d-3861267b6c94@nvidia.com>
Date: Thu, 10 Sep 2020 14:22:37 -0700
In-Reply-To: <20200910181319.GO87483@ziepe.ca>

On 9/10/20 11:13 AM, Jason Gunthorpe wrote:
> On Thu, Sep 10, 2020 at 10:35:38AM -0700, Linus Torvalds wrote:
>> On Thu, Sep 10, 2020 at 2:40 AM Alexander Gordeev wrote:
>>>
>>> It is only gup_fast case that exposes the issue. It hits because
>>> pointers to stack copies are passed to gup_pXd_range iterators, not
>>> pointers to real page tables itself.
>>
>> Can we possibly change fast-gup to not do the stack copies?
>>
>> I'd actually rather do something like that, than the "addr_end" thing.
>
>> As you say, none of the other page table walking code does what the
>> GUP code does, and I don't think it's required.
>
> As I understand it, the requirement is because fast-gup walks without
> the page table spinlock, or mmap_sem held so it must READ_ONCE the
> *pXX.
>
> It then checks that it is a valid page table pointer, then calls
> pXX_offset().
>
> The arch implementation of pXX_offset() derefs again the passed pXX
> pointer. So it defeats the READ_ONCE and the 2nd load could observe
> something that is no longer a page table pointer and crash.

Just to be clear, though, that makes it sound a little wilder and more
reckless than it really is, right? Because actually, the page tables
cannot be freed while gup_fast is walking them, due to either IPI
blocking during the walk, or the moral equivalent
(MMU_GATHER_RCU_TABLE_FREE) for non-IPI architectures.

So the page tables can *change* underneath gup_fast, and for example
pages can be unmapped. But they remain valid page tables; it's just that
their contents are unstable. Even if pXd_none()==true. Or am I way off
here, and it really is possible (aside from the current s390 situation)
to observe something that "is no longer a page table"?

thanks,
--
John Hubbard
NVIDIA
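
P.S.: for anyone who wants to see the problem in isolation, here is a
minimal, self-contained userspace sketch of the pattern Jason describes.
Everything in it (pud_t as a plain unsigned long, the simplified
READ_ONCE(), and the pmd_offset_like()/pmd_offset_fixed() helpers) is a
made-up stand-in for illustration, not the real gup_fast or arch
pXX_offset() code:

/*
 * Minimal userspace model (NOT kernel code) of the hazard described above:
 * a lockless walker snapshots a page-table entry with READ_ONCE(), but a
 * helper that takes a pointer to the original entry loads it a second time,
 * so a concurrent update between the two loads becomes visible.
 */
#include <stdio.h>

/* Simplified stand-in for the kernel's READ_ONCE(). */
#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

typedef unsigned long pud_t;    /* pretend page-table entry */

/* Models an arch helper that re-dereferences the caller's entry pointer. */
static unsigned long *pmd_offset_like(pud_t *pudp)
{
        /* Second load of *pudp: races with any concurrent writer. */
        return (unsigned long *)(*pudp & ~0xfffUL);
}

/* Models the fixed variant: works only on the caller's snapshot. */
static unsigned long *pmd_offset_fixed(pud_t pud)
{
        return (unsigned long *)(pud & ~0xfffUL);
}

int main(void)
{
        static unsigned long fake_pmd_table[512];
        /* Pretend entry: next-level table "address" plus a low present-ish bit. */
        pud_t pud_entry = (unsigned long)fake_pmd_table | 0x1;
        pud_t *pudp = &pud_entry;

        pud_t pud = READ_ONCE(*pudp);   /* the one load gup_fast intends to do */

        /*
         * If another thread rewrote *pudp right here, the call below would
         * observe the new value, not the snapshot taken above...
         */
        unsigned long *racy = pmd_offset_like(pudp);

        /* ...whereas this variant can only ever see the snapshot. */
        unsigned long *safe = pmd_offset_fixed(pud);

        printf("racy=%p safe=%p\n", (void *)racy, (void *)safe);
        return 0;
}

Single-threaded, both pointers of course come out the same; the point is
only that pmd_offset_like() performs a second load of *pudp that a
concurrent writer could race with, while pmd_offset_fixed() works purely
on the value the caller already read once.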