Date: Thu, 19 Sep 2019 16:41:43 +0100
From: Catalin Marinas
To: "Kirill A. Shutemov"
Cc: Jia He, Will Deacon, Mark Rutland, James Morse, Marc Zyngier, Matthew Wilcox,
    "Kirill A. Shutemov", linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Suzuki Poulose,
    Punit Agrawal, Anshuman Khandual, Jun Yao, Alex Van Brunt,
    Robin Murphy, Thomas Gleixner, Andrew Morton, Jérôme Glisse,
    Ralph Campbell, hejianet@gmail.com, Kaly Xin
Subject: Re: [PATCH v4 3/3] mm: fix double page fault on arm64 if PTE_AF is cleared
Message-ID: <20190919154143.GA6472@arrakis.emea.arm.com>
References: <20190918131914.38081-1-justin.he@arm.com>
 <20190918131914.38081-4-justin.he@arm.com>
 <20190918140027.ckj32xnryyyesc23@box>
 <20190918180029.GB20601@iMac.local>
 <20190919150007.k7scjplcya53j7r4@box>
In-Reply-To: <20190919150007.k7scjplcya53j7r4@box>

On Thu, Sep 19, 2019 at 06:00:07PM +0300, Kirill A. Shutemov wrote:
> On Wed, Sep 18, 2019 at 07:00:30PM +0100, Catalin Marinas wrote:
> > On Wed, Sep 18, 2019 at 05:00:27PM +0300, Kirill A. Shutemov wrote:
> > > On Wed, Sep 18, 2019 at 09:19:14PM +0800, Jia He wrote:
> > > > @@ -2152,20 +2163,34 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> > > >  	 */
> > > >  	if (unlikely(!src)) {
> > > >  		void *kaddr = kmap_atomic(dst);
> > > > -		void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > > > +		void __user *uaddr = (void __user *)(addr & PAGE_MASK);
> > > > +		pte_t entry;
> > > >
> > > >  		/*
> > > >  		 * This really shouldn't fail, because the page is there
> > > >  		 * in the page tables. But it might just be unreadable,
> > > >  		 * in which case we just give up and fill the result with
> > > > -		 * zeroes.
> > > > +		 * zeroes. On architectures with software "accessed" bits,
> > > > +		 * we would take a double page fault here, so mark it
> > > > +		 * accessed here.
> > > >  		 */
> > > > +		if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> > > > +			spin_lock(vmf->ptl);
> > > > +			if (likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> > > > +				entry = pte_mkyoung(vmf->orig_pte);
> > > > +				if (ptep_set_access_flags(vma, addr,
> > > > +							  vmf->pte, entry, 0))
> > > > +					update_mmu_cache(vma, addr, vmf->pte);
> > > > +			}
> > >
> > > I don't follow.
> > >
> > > So if pte has changed under you, you don't set the accessed bit, but never
> > > the less copy from the user.
> > >
> > > What makes you think it will not trigger the same problem?
> > >
> > > I think we need to make cow_user_page() fail in this case and caller --
> > > wp_page_copy() -- return zero. If the fault was solved by other thread, we
> > > are fine. If not userspace would re-fault on the same address and we will
> > > handle the fault from the second attempt.
> >
> > It would be nice to clarify the semantics of this function and do as
> > you suggest but the current comment is slightly confusing:
> >
> > 	/*
> > 	 * If the source page was a PFN mapping, we don't have
> > 	 * a "struct page" for it. We do a best-effort copy by
> > 	 * just copying from the original user address. If that
> > 	 * fails, we just zero-fill it. Live with it.
> > 	 */
> >
> > Would any user-space rely on getting a zero-filled page here instead of
> > a recursive fault?
>
> I don't see the point in zero-filled page in this case. SIGBUS sounds like
> more appropriate response, no?

I think I misunderstood your comment. So, if !pte_same(), we should let
userspace re-fault. This wouldn't be a user ABI change, and it is bounded:
it can't end up in an infinite re-fault loop.
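To make that concrete, here is a rough sketch of the direction being
suggested (illustrative only: the bool return, the exact cow_user_page()
arguments and the caller-side cleanup are assumptions, not a finished
patch):

static inline bool cow_user_page(struct page *dst, struct page *src,
				 struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	unsigned long addr = vmf->address;
	void __user *uaddr;
	void *kaddr;

	if (likely(src)) {
		copy_user_highpage(dst, src, addr, vma);
		return true;
	}

	/*
	 * The source is a PFN mapping without a struct page, so copy via
	 * the user virtual address instead.
	 */
	kaddr = kmap_atomic(dst);
	uaddr = (void __user *)(addr & PAGE_MASK);

	/*
	 * On architectures with a software "accessed" bit, mark the PTE
	 * young first so the in-kernel copy below does not fault.
	 */
	if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
		spin_lock(vmf->ptl);
		if (!pte_same(*vmf->pte, vmf->orig_pte)) {
			/* PTE changed under us: give up, let user space re-fault. */
			spin_unlock(vmf->ptl);
			kunmap_atomic(kaddr);
			return false;
		}
		if (ptep_set_access_flags(vma, addr, vmf->pte,
					  pte_mkyoung(vmf->orig_pte), 0))
			update_mmu_cache(vma, addr, vmf->pte);
		spin_unlock(vmf->ptl);
	}

	/* Keep the current behaviour for an unreadable source: zero-fill. */
	if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
		memset(kaddr, 0, PAGE_SIZE);

	kunmap_atomic(kaddr);
	flush_dcache_page(dst);
	return true;
}

	/* Caller side, in wp_page_copy(): */
	if (!cow_user_page(new_page, old_page, vmf)) {
		/*
		 * The copy raced with a PTE change. Either another thread
		 * already resolved the fault, or user space re-faults on
		 * the same address and we handle it on the second attempt.
		 * (Page reference and charge cleanup elided here.)
		 */
		put_page(new_page);
		if (old_page)
			put_page(old_page);
		return 0;
	}

Returning 0 from wp_page_copy() keeps this invisible to user space: the
access is simply retried and, at worst, costs one extra fault.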
In case of a __copy_from_user_inatomic() error, SIGBUS would make more
sense, but it changes the current behaviour (zero-filling the page). This
can be left for a separate patch; it doesn't affect the arm64 case here.

-- 
Catalin