From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Aneesh Kumar K.V", "Peter Zijlstra (Intel)", Michael Ellerman, Andrew Morton, Linus Torvalds
Subject: [PATCH 5.5 083/367] powerpc/mmu_gather: enable RCU_TABLE_FREE even for !SMP case
Date: Mon, 10 Feb 2020 04:29:56 -0800
Message-Id: <20200210122432.005300978@linuxfoundation.org>
In-Reply-To: <20200210122423.695146547@linuxfoundation.org>
References: <20200210122423.695146547@linuxfoundation.org>

From: Aneesh Kumar K.V

commit 12e4d53f3f04e81f9e83d6fc10edc7314ab9f6b9 upstream.

Patch series "Fixup page directory freeing", v4.

This is a repost of Peter's patch series, with the arch-specific changes other than ppc64 dropped. The ppc64 changes are included here because we are redoing the series on top of them, which makes these changes easy to backport. Only the first two patches need to be backported to stable.

The thing is, on anything SMP, freeing page directories should observe the exact same order as normal page freeing:

 1) unhook page/directory
 2) TLB invalidate
 3) free page/directory

Without this, any concurrent page-table walk could end up with a use-after-free. This is especially easy to hit on anything that has software page-table walkers (HAVE_FAST_GUP / software TLB fill) or hardware that caches partial page walks (i.e. caches page directories).

Even on UP this can cause problems, since mmu_gather is preemptible these days: an interrupt or a preempted task accessing user pages might stumble into the freed page if the hardware caches page directories.

This patch series fixes ppc64 and adds the generic MMU_GATHER changes needed to support converting the other architectures. I haven't added patches for the other architectures because they are yet to be acked.

This patch (of 9):

A followup patch is going to make sure we correctly invalidate the page-walk cache before we free page-table pages. In order to keep things simple, enable RCU_TABLE_FREE even for !SMP, so that we don't have to fix up the !SMP case differently in the followup patch.

The !SMP case is currently broken for radix translation w.r.t. the page-walk cache flush: we can get interrupted between the page-table free and the flush, which would leave page-walk cache entries pointing to tables that have already been freed.

Michael said "both our platforms that run on Power9 force SMP on in Kconfig, so the !SMP case is unlikely to be a problem for anyone in practice, unless they've hacked their kernel to build it !SMP."
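[Editor's note] To make the 1-2-3 ordering above concrete, here is a minimal illustrative sketch. It is not code from this patch; lockless_walk() and teardown() are made-up names, though READ_ONCE(), pmd_none(), pte_offset_kernel(), pmd_clear() and tlb_remove_table() are real kernel interfaces:

/*
 * Sketch only: CPU 0 walks page tables locklessly (fast-GUP style,
 * IRQs disabled); CPU 1 tears the same tables down concurrently.
 */

/* CPU 0: snapshot the pmd without taking the page-table lock */
static pte_t *lockless_walk(pmd_t *pmdp, unsigned long addr)
{
	pmd_t pmd = READ_ONCE(*pmdp);	/* may race with CPU 1's unhook */

	if (pmd_none(pmd))
		return NULL;
	/*
	 * CPU 1 may already have cleared *pmdp, but the PTE page this
	 * snapshot points at must still be valid memory; that is what
	 * RCU_TABLE_FREE guarantees until a grace period has elapsed.
	 */
	return pte_offset_kernel(&pmd, addr);
}

/* CPU 1: teardown must keep exactly the 1-2-3 order from above */
static void teardown(struct mmu_gather *tlb, pmd_t *pmdp, void *ptepage)
{
	pmd_clear(pmdp);		/* 1) unhook the directory */
	/* 2) TLB / page-walk-cache invalidate, driven by the mmu_gather */
	tlb_remove_table(tlb, ptepage);	/* 3) free deferred past a grace period */
}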
Link: http://lkml.kernel.org/r/20200116064531.483522-2-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V
Acked-by: Peter Zijlstra (Intel)
Acked-by: Michael Ellerman
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 arch/powerpc/Kconfig                         |    2 +-
 arch/powerpc/include/asm/book3s/32/pgalloc.h |    8 --------
 arch/powerpc/include/asm/book3s/64/pgalloc.h |    2 --
 arch/powerpc/include/asm/nohash/pgalloc.h    |    8 --------
 arch/powerpc/mm/book3s64/pgtable.c           |    7 -------
 5 files changed, 1 insertion(+), 26 deletions(-)

--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -222,7 +222,7 @@ config PPC
 	select HAVE_HARDLOCKUP_DETECTOR_PERF	if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select HAVE_RCU_TABLE_FREE		if SMP
+	select HAVE_RCU_TABLE_FREE
 	select HAVE_RCU_TABLE_NO_INVALIDATE	if HAVE_RCU_TABLE_FREE
 	select HAVE_MMU_GATHER_PAGE_SIZE
 	select HAVE_REGS_AND_STACK_ACCESS_API
--- a/arch/powerpc/include/asm/book3s/32/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/32/pgalloc.h
@@ -49,7 +49,6 @@ static inline void pgtable_free(void *table, unsigned index_size)
 
 #define get_hugepd_cache_index(x)	(x)
 
-#ifdef CONFIG_SMP
 static inline void pgtable_free_tlb(struct mmu_gather *tlb,
 				    void *table, int shift)
 {
@@ -66,13 +65,6 @@ static inline void __tlb_remove_table(void *_table)
 
 	pgtable_free(table, shift);
 }
-#else
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
-				    void *table, int shift)
-{
-	pgtable_free(table, shift);
-}
-#endif
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 				  unsigned long address)
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -19,9 +19,7 @@ extern struct vmemmap_backing *vmemmap_list;
 extern pmd_t *pmd_fragment_alloc(struct mm_struct *, unsigned long);
 extern void pmd_fragment_free(unsigned long *);
 extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
-#ifdef CONFIG_SMP
 extern void __tlb_remove_table(void *_table);
-#endif
 void pte_frag_destroy(void *pte_frag);
 
 static inline pgd_t *radix__pgd_alloc(struct mm_struct *mm)
--- a/arch/powerpc/include/asm/nohash/pgalloc.h
+++ b/arch/powerpc/include/asm/nohash/pgalloc.h
@@ -46,7 +46,6 @@ static inline void pgtable_free(void *table, unsigned index_size)
 
 #define get_hugepd_cache_index(x)	(x)
 
-#ifdef CONFIG_SMP
 static inline void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
 {
 	unsigned long pgf = (unsigned long)table;
@@ -64,13 +63,6 @@ static inline void __tlb_remove_table(void *_table)
 
 	pgtable_free(table, shift);
 }
-#else
-static inline void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
-{
-	pgtable_free(table, shift);
-}
-#endif
-
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 				  unsigned long address)
 {
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -378,7 +378,6 @@ static inline void pgtable_free(void *table, int index)
 	}
 }
 
-#ifdef CONFIG_SMP
 void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int index)
 {
 	unsigned long pgf = (unsigned long)table;
@@ -395,12 +394,6 @@ void __tlb_remove_table(void *_table)
 
 	return pgtable_free(table, index);
 }
-#else
-void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int index)
-{
-	return pgtable_free(table, index);
-}
-#endif
 
 #ifdef CONFIG_PROC_FS
 atomic_long_t direct_pages_count[MMU_PAGE_COUNT];
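
[Editor's note] For orientation, the generic machinery this now feeds into unconditionally looks roughly like the following; this is simplified and paraphrased from memory of 5.5-era mm/mmu_gather.c, not copied from the tree. The arch's pgtable_free_tlb() queues tables via tlb_remove_table(); the generic code batches them and only invokes the arch's __tlb_remove_table() from an RCU callback, i.e. after any concurrent lockless walker is guaranteed to have finished:

/*
 * Simplified sketch of the CONFIG_HAVE_RCU_TABLE_FREE path
 * (paraphrased from mm/mmu_gather.c around v5.5):
 */
static void tlb_remove_table_rcu(struct rcu_head *head)
{
	struct mmu_table_batch *batch =
		container_of(head, struct mmu_table_batch, rcu);
	unsigned int i;

	/* grace period elapsed: no lockless walker can still hold a pointer */
	for (i = 0; i < batch->nr; i++)
		__tlb_remove_table(batch->tables[i]);

	free_page((unsigned long)batch);
}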