From: Dave Hansen
Date: Wed, 9 Feb 2022 14:50:27 -0800
Subject: Re: [PATCH 20/35] mm: Update can_follow_write_pte() for shadow stack
To: Rick Edgecombe, x86@kernel.org, "H. Peter Anvin", Thomas Gleixner,
    Ingo Molnar, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
    Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
    Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
    "H. J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
    Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
    "Ravi V. Shankar", Dave Martin, Weijiang Yang, "Kirill A. Shutemov",
    joao.moreira@intel.com, John Allen, kcc@google.com, eranian@google.com
Cc: Yu-cheng Yu
Message-ID: <74038286-6ff3-7eb2-ea65-2e223a894900@intel.com>
In-Reply-To: <20220130211838.8382-21-rick.p.edgecombe@intel.com>
References: <20220130211838.8382-1-rick.p.edgecombe@intel.com>
    <20220130211838.8382-21-rick.p.edgecombe@intel.com>
List-ID: linux-kernel@vger.kernel.org

On 1/30/22 13:18, Rick Edgecombe wrote:
> From: Yu-cheng Yu
>
> Can_follow_write_pte() ensures a read-only page is COWed by checking the
> FOLL_COW flag, and uses pte_dirty() to validate the flag is still valid.
>
> Like a writable data page, a shadow stack page is writable, and becomes
> read-only during copy-on-write,

I thought we could not have read-only shadow stack pages. What does a
read-only shadow stack PTE look like? ;)

> but it is always dirty. Thus, in the
> can_follow_write_pte() check, it belongs to the writable page case and
> should be excluded from the read-only page pte_dirty() check. Apply
> the same changes to can_follow_write_pmd().
>
> While at it, also split the long line into smaller ones.

FWIW, I probably would have had a preparatory patch for this part (a
split-only sketch follows at the end of this reply). The advantage is
that if you break existing code, it's a lot easier to figure it out if
you have a separate refactoring patch. Also, for a patch like this, the
refactoring might result in the same exact binary. It's a pretty good
sign that your patch won't cause regressions if it results in the same
binary.

> diff --git a/mm/gup.c b/mm/gup.c
> index f0af462ac1e2..95b7d1084c44 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -464,10 +464,18 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
>   * FOLL_FORCE can write to even unwritable pte's, but only
>   * after we've gone through a COW cycle and they are dirty.
>   */
> -static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
> +static inline bool can_follow_write_pte(pte_t pte, unsigned int flags,
> +					struct vm_area_struct *vma)
>  {
> -	return pte_write(pte) ||
> -		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
> +	if (pte_write(pte))
> +		return true;
> +	if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW))
> +		return false;
> +	if (!pte_dirty(pte))
> +		return false;
> +	if (is_shadow_stack_mapping(vma->vm_flags))
> +		return false;

You had me up until this is_shadow_stack_mapping(). It wasn't mentioned
at all in the changelog. Logically, I think it's trying to say that a
shadow stack VMA never allows a FOLL_FORCE override. That makes some
sense, but it's a pretty big point not to mention in the changelog.

> +	return true;
>  }
>
>  static struct page *follow_page_pte(struct vm_area_struct *vma,
> @@ -510,7 +518,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>  	}
>  	if ((flags & FOLL_NUMA) && pte_protnone(pte))
>  		goto no_page;
> -	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
> +	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags, vma)) {
>  		pte_unmap_unlock(ptep, ptl);
>  		return NULL;
>  	}