From: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
To: linux-snps-arc@lists.infradead.org
Cc: Vineet.Gupta1@synopsys.com, Alexey.Brodkin@synopsys.com, linux-kernel@vger.kernel.org
Subject: [PATCH v2] ARC: mm: Fix invalid page mapping in kernel with PAE40
Date: Tue, 29 Nov 2016 17:30:17 +0300
Message-Id: <1480429817-16163-1-git-send-email-yuriy.kolerov@synopsys.com>

The pfn_pte(pfn, prot) macro is implemented incorrectly: it truncates the
most significant byte of the PTE (Page Table Entry) value. This leads to
the creation of invalid page mappings in a kernel with PAE40 whenever the
physical page frame resides above the 4GB boundary. The behaviour of a
system with such corrupted mappings is undefined: the kernel can crash
when such pages are unmapped, because it may try to access a bad address.
For example, if a kernel with 8KB pages tries to map a virtual page to
the physical frame at pfn 0x110000, the shift is performed in 32-bit
arithmetic, so the PTE value is truncated to 0x20000000 and an invalid
mapping is created.
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
---
 arch/arc/include/asm/pgtable.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 89eeb37..e94ca72 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -280,7 +280,7 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
 
 #define pte_page(pte)		pfn_to_page(pte_pfn(pte))
 #define mk_pte(page, prot)	pfn_pte(page_to_pfn(page), prot)
-#define pfn_pte(pfn, prot)	__pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+#define pfn_pte(pfn, prot)	__pte(__pfn_to_phys(pfn) | pgprot_val(prot))
 
 /* Don't use virt_to_pfn for macros below: could cause truncations for PAE40*/
 #define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)
-- 
2.7.4