From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar", Vedvyas Shanbhogue, Dave Martin, Weijiang Yang
Cc: Yu-cheng Yu
Subject: [PATCH v10 18/26] mm: Update can_follow_write_pte() for shadow stack
Date: Wed, 29 Apr 2020 15:07:24 -0700
Message-Id: <20200429220732.31602-19-yu-cheng.yu@intel.com>
In-Reply-To: <20200429220732.31602-1-yu-cheng.yu@intel.com>
References: <20200429220732.31602-1-yu-cheng.yu@intel.com>

can_follow_write_pte() ensures that a read-only page is COWed by checking
the FOLL_COW flag, and uses pte_dirty() to validate that the flag is still
valid.

Like a writable data page, a shadow stack page is writable and becomes
read-only during copy-on-write, but it is always dirty.  Thus, in the
can_follow_write_pte() check, it belongs with the writable pages and
should be excluded from the read-only pte_dirty() check.  Apply the same
change to can_follow_write_pmd().

Signed-off-by: Yu-cheng Yu
---
v10:
- Reverse the name changes to can_follow_write_*().
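Note for readers of this patch in isolation: the diff below calls
arch_shadow_stack_mapping(), which is introduced earlier in this series.
The following is only a minimal illustrative sketch of what such a helper
is assumed to look like; the VM_SHSTK flag name and the exact definition
are assumptions taken from the earlier patches, not something added here:

static inline bool arch_shadow_stack_mapping(unsigned long vm_flags)
{
        /* Assumed: VM_SHSTK marks a VMA as a shadow stack mapping. */
        return !!(vm_flags & VM_SHSTK);
}

With a helper along these lines, a FOLL_FORCE/FOLL_COW follow of a shadow
stack pte no longer falls through to pte_dirty(), so an always-dirty
shadow stack page cannot be mistaken for a normal page that has already
gone through a COW cycle.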
Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H.J. Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , "Ravi V. Shankar" , Vedvyas Shanbhogue , Dave Martin , Weijiang Yang Cc: Yu-cheng Yu Subject: [PATCH v10 18/26] mm: Update can_follow_write_pte() for shadow stack Date: Wed, 29 Apr 2020 15:07:24 -0700 Message-Id: <20200429220732.31602-19-yu-cheng.yu@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200429220732.31602-1-yu-cheng.yu@intel.com> References: <20200429220732.31602-1-yu-cheng.yu@intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Can_follow_write_pte() ensures a read-only page is COWed by checking the FOLL_COW flag, and uses pte_dirty() to validate the flag is still valid. Like a writable data page, a shadow stack page is writable, and becomes read-only during copy-on-write, but it is always dirty. Thus, in the can_follow_write_pte() check, it belongs to the writable page case and should be excluded from the read-only page pte_dirty() check. Apply the same changes to can_follow_write_pmd(). Signed-off-by: Yu-cheng Yu --- v10: - Reverse name changes to can_follow_write_*(). mm/gup.c | 8 +++++--- mm/huge_memory.c | 8 +++++--- 2 files changed, 10 insertions(+), 6 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index 50681f0286de..c737782403ee 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -385,10 +385,12 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address, * FOLL_FORCE can write to even unwritable pte's, but only * after we've gone through a COW cycle and they are dirty. */ -static inline bool can_follow_write_pte(pte_t pte, unsigned int flags) +static inline bool can_follow_write_pte(pte_t pte, unsigned int flags, + struct vm_area_struct *vma) { return pte_write(pte) || - ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte)); + ((flags & FOLL_FORCE) && (flags & FOLL_COW) && + !arch_shadow_stack_mapping(vma->vm_flags) && pte_dirty(pte)); } static struct page *follow_page_pte(struct vm_area_struct *vma, @@ -431,7 +433,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma, } if ((flags & FOLL_NUMA) && pte_protnone(pte)) goto no_page; - if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) { + if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags, vma)) { pte_unmap_unlock(ptep, ptl); return NULL; } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 608746bb9d19..cb1b0cb4b4eb 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1520,10 +1520,12 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd) * FOLL_FORCE can write to even unwritable pmd's, but only * after we've gone through a COW cycle and they are dirty. 
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags,
+					struct vm_area_struct *vma)
 {
 	return pmd_write(pmd) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
+		 !arch_shadow_stack_mapping(vma->vm_flags) && pmd_dirty(pmd));
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1536,7 +1538,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags, vma))
 		goto out;
 
 	/* Avoid dumping huge zero page */
-- 
2.21.0