From: Anshuman Khandual
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, Anshuman Khandual, Christoph Hellwig, linux-arch@vger.kernel.org
Subject: [PATCH V2 11/30] mm/mmap: Drop protection_map[]
Date: Mon, 21 Feb 2022 12:08:20 +0530
Message-Id: <1645425519-9034-12-git-send-email-anshuman.khandual@arm.com>
In-Reply-To: <1645425519-9034-1-git-send-email-anshuman.khandual@arm.com>
References: <1645425519-9034-1-git-send-email-anshuman.khandual@arm.com>
There are no other users of protection_map[]. Drop the array and instead define __vm_get_page_prot(), which returns the page protection map for a given vm_flags combination via a switch statement.

Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
 include/linux/mm.h |  6 ------
 mm/mmap.c          | 61 +++++++++++++++++++++++++++++++---------------
 2 files changed, 41 insertions(+), 26 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 213cc569b192..ff74bd2d7850 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -418,12 +418,6 @@ extern unsigned int kobjsize(const void *objp);
 #endif
 #define VM_FLAGS_CLEAR	(ARCH_VM_PKEY_FLAGS | VM_ARCH_CLEAR)
 
-/*
- * mapping from the currently active vm_flags protection bits (the
- * low four bits) to a page protection mask..
- */
-extern pgprot_t protection_map[16];
-
 /*
  * The default fault flags that should be used by most of the
  * arch-specific page fault handlers.
diff --git a/mm/mmap.c b/mm/mmap.c
index ffd70a0c8ddf..f61f74a61f62 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -102,24 +102,6 @@ static void unmap_region(struct mm_struct *mm,
  * w: (no) no
  * x: (yes) yes
  */
-pgprot_t protection_map[16] __ro_after_init = {
-	[VM_NONE]					= __P000,
-	[VM_READ]					= __P001,
-	[VM_WRITE]					= __P010,
-	[VM_WRITE | VM_READ]				= __P011,
-	[VM_EXEC]					= __P100,
-	[VM_EXEC | VM_READ]				= __P101,
-	[VM_EXEC | VM_WRITE]				= __P110,
-	[VM_EXEC | VM_WRITE | VM_READ]			= __P111,
-	[VM_SHARED]					= __S000,
-	[VM_SHARED | VM_READ]				= __S001,
-	[VM_SHARED | VM_WRITE]				= __S010,
-	[VM_SHARED | VM_WRITE | VM_READ]		= __S011,
-	[VM_SHARED | VM_EXEC]				= __S100,
-	[VM_SHARED | VM_EXEC | VM_READ]			= __S101,
-	[VM_SHARED | VM_EXEC | VM_WRITE]		= __S110,
-	[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ]	= __S111
-};
 
 #ifndef CONFIG_ARCH_HAS_FILTER_PGPROT
 static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
@@ -128,10 +110,49 @@ static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
 }
 #endif
 
+static inline pgprot_t __vm_get_page_prot(unsigned long vm_flags)
+{
+	switch (vm_flags & (VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)) {
+	case VM_NONE:
+		return __P000;
+	case VM_READ:
+		return __P001;
+	case VM_WRITE:
+		return __P010;
+	case VM_READ | VM_WRITE:
+		return __P011;
+	case VM_EXEC:
+		return __P100;
+	case VM_EXEC | VM_READ:
+		return __P101;
+	case VM_EXEC | VM_WRITE:
+		return __P110;
+	case VM_EXEC | VM_READ | VM_WRITE:
+		return __P111;
+	case VM_SHARED:
+		return __S000;
+	case VM_SHARED | VM_READ:
+		return __S001;
+	case VM_SHARED | VM_WRITE:
+		return __S010;
+	case VM_SHARED | VM_READ | VM_WRITE:
+		return __S011;
+	case VM_SHARED | VM_EXEC:
+		return __S100;
+	case VM_SHARED | VM_EXEC | VM_READ:
+		return __S101;
+	case VM_SHARED | VM_EXEC | VM_WRITE:
+		return __S110;
+	case VM_SHARED | VM_EXEC | VM_READ | VM_WRITE:
+		return __S111;
+	default:
+		BUILD_BUG();
+	}
+}
+
 pgprot_t vm_get_page_prot(unsigned long vm_flags)
 {
-	pgprot_t ret = __pgprot(pgprot_val(protection_map[vm_flags &
-				(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]) |
+	pgprot_t ret = __pgprot(pgprot_val(__vm_get_page_prot(vm_flags)) |
 			pgprot_val(arch_vm_get_page_prot(vm_flags)));
 
 	return arch_filter_pgprot(ret);
-- 
2.25.1