Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932466AbcLGHLX (ORCPT );
	Wed, 7 Dec 2016 02:11:23 -0500
Received: from mail.linuxfoundation.org ([140.211.169.12]:58428 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S1752999AbcLGHKT (ORCPT );
	Wed, 7 Dec 2016 02:10:19 -0500
From: Greg Kroah-Hartman 
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman ,
	stable@vger.kernel.org,
	Yuriy Kolerov ,
	Vineet Gupta 
Subject: [PATCH 4.8 04/35] ARC: mm: PAE40: Fix crash at munmap
Date: Wed, 7 Dec 2016 08:08:20 +0100
Message-Id: <20161207070722.586185356@linuxfoundation.org>
X-Mailer: git-send-email 2.10.2
In-Reply-To: <20161207070722.410336250@linuxfoundation.org>
References: <20161207070722.410336250@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 1408
Lines: 37

4.8-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Yuriy Kolerov 

commit 6a8b2ca702b279bea0e8f0363056439352e2081c upstream.

commit 1c3c90930392 broke PAE40. Macro pfn_pte(pfn, prot) creates paddr
from pfn, but the page shift was getting truncated to 32 bits since we
lost the proper cast to 64 bits (for PAE40).

Instead of reverting that commit, use a better helper which is 32/64
bits safe, just like the ARM implementation.

Fixes: 1c3c90930392 ("ARC: mm: fix build breakage with STRICT_MM_TYPECHECKS")
Signed-off-by: Yuriy Kolerov 
[vgupta: massaged changelog]
Signed-off-by: Vineet Gupta 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/arc/include/asm/pgtable.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -280,7 +280,7 @@ static inline void pmd_set(pmd_t *pmdp,
 
 #define pte_page(pte)		pfn_to_page(pte_pfn(pte))
 #define mk_pte(page, prot)	pfn_pte(page_to_pfn(page), prot)
-#define pfn_pte(pfn, prot)	__pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+#define pfn_pte(pfn, prot)	__pte(__pfn_to_phys(pfn) | pgprot_val(prot))
 
 /* Don't use virt_to_pfn for macros below: could cause truncations for PAE40*/
 #define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)
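
[Editor's note, not part of the patch: the sketch below is a standalone
user-space illustration of the failure mode the changelog describes. On
32-bit ARC the pfn is a 32-bit value, so "(pfn) << PAGE_SHIFT" is
evaluated in 32-bit arithmetic and the upper bits of a 40-bit PAE40
physical address are lost, whereas a helper in the spirit of
__pfn_to_phys() widens the pfn to the physical address type before
shifting. The PAGE_SHIFT value (13, i.e. 8 KB pages), the function
names, and the example pfn are assumptions made only for this sketch.]

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	13			/* assumed: 8 KB pages */

typedef uint64_t phys_addr_t;			/* PAE40 paddr does not fit in 32 bits */

/* Mirrors the broken form: the shift is done in 32-bit arithmetic and wraps. */
static phys_addr_t pfn_to_paddr_truncating(uint32_t pfn)
{
	return pfn << PAGE_SHIFT;
}

/* Mirrors the fix: widen the pfn to the physical address type before shifting. */
static phys_addr_t pfn_to_paddr_safe(uint32_t pfn)
{
	return (phys_addr_t)pfn << PAGE_SHIFT;
}

int main(void)
{
	uint32_t pfn = 0x90000;			/* paddr 0x1_2000_0000 needs bit 32 */

	printf("truncating: 0x%llx\n",
	       (unsigned long long)pfn_to_paddr_truncating(pfn));
	printf("safe:       0x%llx\n",
	       (unsigned long long)pfn_to_paddr_safe(pfn));
	return 0;
}

[Built with any C compiler, the first printf shows the wrapped value
0x20000000 while the second prints the full 40-bit address 0x120000000,
which is exactly the kind of truncation that led to the munmap crash.]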