Date: Mon, 3 Oct 2022 17:17:39 +0300
From: "Kirill A. Shutemov"
To: Rick Edgecombe
Cc: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org,
    linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann,
    Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
    Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H. J. Lu", Jann Horn,
    Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov,
    Pavel Machek, Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar",
    Weijiang Yang, joao.moreira@intel.com, John Allen, kcc@google.com,
    eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com,
    dethoma@microsoft.com, Yu-cheng Yu, Christoph Hellwig
Subject: Re: [PATCH v2 08/39] x86/mm: Remove _PAGE_DIRTY from kernel RO pages
Message-ID: <20221003141739.qdgdgfr67cycadgs@box.shutemov.name>
References: <20220929222936.14584-1-rick.p.edgecombe@intel.com>
 <20220929222936.14584-9-rick.p.edgecombe@intel.com>
In-Reply-To: <20220929222936.14584-9-rick.p.edgecombe@intel.com>

On Thu, Sep 29, 2022 at 03:29:05PM -0700, Rick Edgecombe wrote:
> From: Yu-cheng Yu
> 
> Processors do not directly create Write=0,Dirty=1 PTEs; these PTEs are
> created by software. One such case is that kernel read-only pages are
> historically set up as Dirty.
> 
> New processors that support Shadow Stack regard Write=0,Dirty=1 PTEs as
> shadow stack pages. When CR4.CET=1 and IA32_S_CET.SH_STK_EN=1, some
> instructions can write to such supervisor memory. The kernel does not set
> IA32_S_CET.SH_STK_EN, but to reduce ambiguity between shadow stack and
> regular Write=0 pages, remove Dirty=1 from any kernel Write=0 PTEs.
> 
> Signed-off-by: Yu-cheng Yu
> Co-developed-by: Rick Edgecombe
> Signed-off-by: Rick Edgecombe
> Cc: "H. Peter Anvin"
> Cc: Kees Cook
> Cc: Thomas Gleixner
> Cc: Dave Hansen
> Cc: Christoph Hellwig
> Cc: Andy Lutomirski
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Cc: Peter Zijlstra
> 
> ---
> 
> v2:
>  - Normalize PTE bit descriptions between patches
> 
>  arch/x86/include/asm/pgtable_types.h | 6 +++---
>  arch/x86/mm/pat/set_memory.c         | 2 +-
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index aa174fed3a71..ff82237e7b6b 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -192,10 +192,10 @@ enum page_cache_mode {
>  #define _KERNPG_TABLE            (__PP|__RW|   0|___A|   0|___D|   0|   0| _ENC)
>  #define _PAGE_TABLE_NOENC        (__PP|__RW|_USR|___A|   0|___D|   0|   0)
>  #define _PAGE_TABLE              (__PP|__RW|_USR|___A|   0|___D|   0|   0| _ENC)
> -#define __PAGE_KERNEL_RO         (__PP|   0|   0|___A|__NX|___D|   0|___G)
> -#define __PAGE_KERNEL_ROX        (__PP|   0|   0|___A|   0|___D|   0|___G)
> +#define __PAGE_KERNEL_RO         (__PP|   0|   0|___A|__NX|   0|   0|___G)
> +#define __PAGE_KERNEL_ROX        (__PP|   0|   0|___A|   0|   0|   0|___G)
>  #define __PAGE_KERNEL_NOCACHE    (__PP|__RW|   0|___A|__NX|___D|   0|___G| __NC)
> -#define __PAGE_KERNEL_VVAR       (__PP|   0|_USR|___A|__NX|___D|   0|___G)
> +#define __PAGE_KERNEL_VVAR       (__PP|   0|_USR|___A|__NX|   0|   0|___G)
>  #define __PAGE_KERNEL_LARGE      (__PP|__RW|   0|___A|__NX|___D|_PSE|___G)
>  #define __PAGE_KERNEL_LARGE_EXEC (__PP|__RW|   0|___A|   0|___D|_PSE|___G)
>  #define __PAGE_KERNEL_WP         (__PP|__RW|   0|___A|__NX|___D|   0|___G| __WP)
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 1abd5438f126..ed9193b469ba 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -1977,7 +1977,7 @@ int set_memory_nx(unsigned long addr, int numpages)
> 
>  int set_memory_ro(unsigned long addr, int numpages)
>  {
> -        return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW), 0);
> +        return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW | _PAGE_DIRTY), 0);
>  }

Hm. Do we also need to modify the *_wrprotect() helpers to clear the dirty
bit? I guess not (at least not without a lot of auditing), as we would risk
losing the dirty bit on page cache pages. But why is it safe? Do we only care
about kernel PTEs here? Are userspace Write=0,Dirty=1 PTEs handled as before?

> 
>  int set_memory_rw(unsigned long addr, int numpages)
> -- 
> 2.17.1
> 

-- 
 Kiryl Shutsemau / Kirill A. Shutemov
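
For illustration, here is a minimal standalone sketch of the ambiguity the
patch removes. It is not taken from the patch or the kernel; the macro names
and the helper looks_like_shadow_stack() are local to this example, though the
bit positions follow the x86 PTE layout. A CET-capable CPU interprets
Write=0,Dirty=1 as a shadow stack mapping, so the old __PAGE_KERNEL_RO
encoding matches that pattern while the new one does not.

/* Standalone sketch (not kernel code); names are local to this example. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_PRESENT  (1ULL << 0)
#define PTE_RW       (1ULL << 1)
#define PTE_ACCESSED (1ULL << 5)
#define PTE_DIRTY    (1ULL << 6)
#define PTE_GLOBAL   (1ULL << 8)
#define PTE_NX       (1ULL << 63)

/* A CET-capable CPU treats Write=0,Dirty=1 as a shadow stack mapping. */
static bool looks_like_shadow_stack(uint64_t pte)
{
        return !(pte & PTE_RW) && (pte & PTE_DIRTY);
}

int main(void)
{
        /* Kernel RO encoding before the patch: Dirty=1 on a read-only page. */
        uint64_t ro_old = PTE_PRESENT | PTE_ACCESSED | PTE_NX | PTE_DIRTY | PTE_GLOBAL;
        /* Kernel RO encoding after the patch: Dirty removed. */
        uint64_t ro_new = PTE_PRESENT | PTE_ACCESSED | PTE_NX | PTE_GLOBAL;

        printf("old kernel RO ambiguous with shadow stack: %d\n",
               looks_like_shadow_stack(ro_old));        /* prints 1 */
        printf("new kernel RO ambiguous with shadow stack: %d\n",
               looks_like_shadow_stack(ro_new));        /* prints 0 */
        return 0;
}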
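
On the *_wrprotect() question, a second standalone sketch (again not proposed
kernel code; the helper names are invented for this example) of why blindly
clearing Dirty in a generic write-protect path is risky: the hardware Dirty
bit is what records that a page cache page still needs writeback, so dropping
it together with the Write bit would lose that information.

/* Standalone sketch; helper names are invented for this example. */
#include <stdint.h>
#include <stdio.h>

#define PTE_RW    (1ULL << 1)
#define PTE_DIRTY (1ULL << 6)

/* Mirrors what a plain write-protect does today: clear only the Write bit. */
static uint64_t wrprotect(uint64_t pte)
{
        return pte & ~PTE_RW;
}

/* Hypothetical variant that also clears Dirty. */
static uint64_t wrprotect_clear_dirty(uint64_t pte)
{
        return pte & ~(PTE_RW | PTE_DIRTY);
}

int main(void)
{
        uint64_t dirty_file_pte = PTE_RW | PTE_DIRTY;   /* written page cache page */

        printf("wrprotect() keeps Dirty:            %d\n",
               !!(wrprotect(dirty_file_pte) & PTE_DIRTY));              /* 1 */
        printf("dirty-clearing variant keeps Dirty: %d\n",
               !!(wrprotect_clear_dirty(dirty_file_pte) & PTE_DIRTY));  /* 0 */
        return 0;
}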