From: Yu-cheng Yu <yu-cheng.yu@intel.com>
To: x86@kernel.org,
Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Borislav Petkov , Cyrill Gorcunov , Dave Hansen , Eugene Syromiatnikov , Florian Weimer , "H.J. Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek , Peter Zijlstra , Randy Dunlap , "Ravi V. Shankar" , Vedvyas Shanbhogue , Dave Martin , Weijiang Yang , Pengfei Xu , Haitao Huang Cc: Yu-cheng Yu , "Kirill A . Shutemov" Subject: [PATCH v26 19/30] mm: Update can_follow_write_pte() for shadow stack Date: Tue, 27 Apr 2021 13:43:04 -0700 Message-Id: <20210427204315.24153-20-yu-cheng.yu@intel.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20210427204315.24153-1-yu-cheng.yu@intel.com> References: <20210427204315.24153-1-yu-cheng.yu@intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Can_follow_write_pte() ensures a read-only page is COWed by checking the FOLL_COW flag, and uses pte_dirty() to validate the flag is still valid. Like a writable data page, a shadow stack page is writable, and becomes read-only during copy-on-write, but it is always dirty. Thus, in the can_follow_write_pte() check, it belongs to the writable page case and should be excluded from the read-only page pte_dirty() check. Apply the same changes to can_follow_write_pmd(). While at it, also split the long line into smaller ones. Signed-off-by: Yu-cheng Yu Reviewed-by: Kirill A. Shutemov Cc: Kees Cook --- v26: - Instead of passing vm_flags, pass down vma pointer to can_follow_write_*(). v25: - Split long line into smaller ones. v24: - Change arch_shadow_stack_mapping() to is_shadow_stack_mapping(). mm/gup.c | 16 ++++++++++++---- mm/huge_memory.c | 16 ++++++++++++---- 2 files changed, 24 insertions(+), 8 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index ef7d2da9f03f..f9705281e853 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -356,10 +356,18 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address, * FOLL_FORCE can write to even unwritable pte's, but only * after we've gone through a COW cycle and they are dirty. */ -static inline bool can_follow_write_pte(pte_t pte, unsigned int flags) +static inline bool can_follow_write_pte(pte_t pte, unsigned int flags, + struct vm_area_struct *vma) { - return pte_write(pte) || - ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte)); + if (pte_write(pte)) + return true; + if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW)) + return false; + if (!pte_dirty(pte)) + return false; + if (is_shadow_stack_mapping(vma->vm_flags)) + return false; + return true; } static struct page *follow_page_pte(struct vm_area_struct *vma, @@ -402,7 +410,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma, } if ((flags & FOLL_NUMA) && pte_protnone(pte)) goto no_page; - if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) { + if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags, vma)) { pte_unmap_unlock(ptep, ptl); return NULL; } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 044029ef45cd..cf10c3822853 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1338,10 +1338,18 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd) * FOLL_FORCE can write to even unwritable pmd's, but only * after we've gone through a COW cycle and they are dirty. 
diff --git a/mm/gup.c b/mm/gup.c
index ef7d2da9f03f..f9705281e853 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -356,10 +356,18 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write_pte(pte_t pte, unsigned int flags,
+					struct vm_area_struct *vma)
 {
-	return pte_write(pte) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+	if (pte_write(pte))
+		return true;
+	if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW))
+		return false;
+	if (!pte_dirty(pte))
+		return false;
+	if (is_shadow_stack_mapping(vma->vm_flags))
+		return false;
+	return true;
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -402,7 +410,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags, vma)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 044029ef45cd..cf10c3822853 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1338,10 +1338,18 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags,
+					struct vm_area_struct *vma)
 {
-	return pmd_write(pmd) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+	if (pmd_write(pmd))
+		return true;
+	if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW))
+		return false;
+	if (!pmd_dirty(pmd))
+		return false;
+	if (is_shadow_stack_mapping(vma->vm_flags))
+		return false;
+	return true;
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1354,7 +1362,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags, vma))
 		goto out;
 
 	/* Avoid dumping huge zero page */
-- 
2.21.0