From: Yu-cheng Yu <yu-cheng.yu@intel.com>
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
	Balbir Singh, Cyrill Gorcunov, Dave Hansen, Florian Weimer,
	"H.J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
	Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra,
	"Ravi V. Shankar", Vedvyas Shanbhogue
Cc: Yu-cheng Yu
Subject: [RFC PATCH v2 16/27] mm: Modify can_follow_write_pte/pmd for shadow stack
Date: Tue, 10 Jul 2018 15:26:28 -0700
Message-Id: <20180710222639.8241-17-yu-cheng.yu@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180710222639.8241-1-yu-cheng.yu@intel.com>
References: <20180710222639.8241-1-yu-cheng.yu@intel.com>

There are three possible shadow stack PTE settings:

  Normal SHSTK PTE:             (R/O + DIRTY_HW)
  SHSTK PTE COW'ed:             (R/O + DIRTY_HW)
  SHSTK PTE shared as R/O data: (R/O + DIRTY_SW)

Update can_follow_write_pte/pmd for the shadow stack.

Signed-off-by: Yu-cheng Yu
---
 mm/gup.c         | 11 ++++++++---
 mm/huge_memory.c | 10 +++++++---
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index b70d7ba7cc13..00171ee847af 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -64,10 +64,13 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write_pte(pte_t pte, unsigned int flags,
+					bool shstk)
 {
+	bool pte_cowed = shstk ? is_shstk_pte(pte) : pte_dirty(pte);
+
 	return pte_write(pte) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_cowed);
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -78,7 +81,9 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	struct page *page;
 	spinlock_t *ptl;
 	pte_t *ptep, pte;
+	bool shstk;
 
+	shstk = is_shstk_mapping(vma->vm_flags);
 retry:
 	if (unlikely(pmd_bad(*pmd)))
 		return no_page_table(vma, flags);
@@ -105,7 +110,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags, shstk)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7f3e11d3b64a..db4c689a960a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1389,10 +1389,13 @@ int do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags,
+					bool shstk)
 {
+	bool pmd_cowed = shstk ? is_shstk_pmd(pmd) : pmd_dirty(pmd);
+
 	return pmd_write(pmd) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_cowed);
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1402,10 +1405,11 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page *page = NULL;
+	bool shstk = is_shstk_mapping(vma->vm_flags);
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags, shstk))
 		goto out;
 
 	/* Avoid dumping huge zero page */
-- 
2.17.1