From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Aneesh Kumar K.V",
	Hugh Dickins, "Kirill A. Shutemov", Christophe Leroy,
	Joel Fernandes, Kalesh Singh, "Kirill A. Shutemov",
	Michael Ellerman, Nicholas Piggin, Stephen Rothwell,
	Andrew Morton, Linus Torvalds, Jann Horn
Subject: [PATCH 5.4 018/389] mm/mremap: hold the rmap lock in write mode when moving page table entries.
Date: Tue, 23 Aug 2022 10:21:36 +0200
Message-Id: <20220823080116.470675684@linuxfoundation.org>
In-Reply-To: <20220823080115.331990024@linuxfoundation.org>
References: <20220823080115.331990024@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Aneesh Kumar K.V

commit 97113eb39fa7972722ff490b947d8af023e1f6a2 upstream.

To avoid a race between the rmap walk and mremap, mremap does
take_rmap_locks(). The lock is taken to ensure that the rmap walk does
not miss a page table entry while PTEs are being moved via
move_page_tables().

The kernel further optimizes this locking: if the newly added vma will
be found after the old vma during an rmap walk, the rmap lock is not
taken. This works because the rmap walk visits the vmas in the same
order, so if the walk does not find the page table attached to the
older vma, it will find it attached to the new vma, which it iterates
later.

As explained in commit eb66ae030829 ("mremap: properly flush TLB before
releasing the page"), mremap is special in that it doesn't take
ownership of the page.
The optimized version for PUD/PMD-aligned mremap also doesn't hold the
ptl lock. This can result in stale TLB entries, as shown below. This
patch updates the rmap locking requirement in mremap to handle the race
condition explained below with optimized mremap::

Optimized PMD move

    CPU 1                           CPU 2                                   CPU 3

    mremap(old_addr, new_addr)      page_shrinker/try_to_unmap_one

    mmap_write_lock_killable()

                                    addr = old_addr
                                    lock(pte_ptl)
    lock(pmd_ptl)
    pmd = *old_pmd
    pmd_clear(old_pmd)
    flush_tlb_range(old_addr)

    *new_pmd = pmd
                                                                            *new_addr = 10; and fills
                                                                            TLB with new addr
                                                                            and old pfn

    unlock(pmd_ptl)
                                    ptep_clear_flush()
                                    old pfn is free.
                                                                            Stale TLB entry

Optimized PUD move also suffers from a similar race. Both of the above
race conditions can be fixed by forcing the mremap path to take the
rmap lock.

Link: https://lkml.kernel.org/r/20210616045239.370802-7-aneesh.kumar@linux.ibm.com
Fixes: 2c91bd4a4e2e ("mm: speed up mremap by 20x on large regions")
Fixes: c49dd3401802 ("mm: speedup mremap on 1GB or larger regions")
Link: https://lore.kernel.org/linux-mm/CAHk-=wgXVR04eBNtxQfevontWnP6FDm+oj5vauQXP3S-huwbPw@mail.gmail.com
Signed-off-by: Aneesh Kumar K.V
Acked-by: Hugh Dickins
Acked-by: Kirill A. Shutemov
Cc: Christophe Leroy
Cc: Joel Fernandes
Cc: Kalesh Singh
Cc: Kirill A. Shutemov
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Stephen Rothwell
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
[patch rewritten for backport since the code was refactored since]
Signed-off-by: Jann Horn
Signed-off-by: Greg Kroah-Hartman
---
 mm/mremap.c |    6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -293,12 +293,10 @@ unsigned long move_page_tables(struct vm
 		 */
 		bool moved;
 
-		if (need_rmap_locks)
-			take_rmap_locks(vma);
+		take_rmap_locks(vma);
 		moved = move_normal_pmd(vma, old_addr, new_addr,
 				old_end, old_pmd, new_pmd);
-		if (need_rmap_locks)
-			drop_rmap_locks(vma);
+		drop_rmap_locks(vma);
 		if (moved)
 			continue;
 #endif
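For readers following along outside the kernel tree, the change boils down to making the rmap locking unconditional around the PMD move. A minimal userspace sketch of the before/after control flow is below; the counter, the `move_pmd_*` function names, and their return values are stand-ins invented for illustration, not kernel APIs:

```c
/*
 * Illustrative userspace sketch (NOT kernel code). In the real kernel,
 * take_rmap_locks() acquires the anon_vma and i_mmap locks in write mode;
 * here a simple depth counter stands in for the lock so the pre- and
 * post-patch shapes can be compared.
 */
#include <stdbool.h>

static int rmap_lock_depth;   /* stand-in for the write-mode rmap locks */

static void take_rmap_locks(void) { rmap_lock_depth++; }
static void drop_rmap_locks(void) { rmap_lock_depth--; }

/* Pre-patch shape: the lock was skipped when need_rmap_locks was false --
 * exactly the window the stale-TLB race above exploits. */
static int move_pmd_prepatch(bool need_rmap_locks)
{
    int held;
    if (need_rmap_locks)
        take_rmap_locks();
    held = rmap_lock_depth;   /* lock state while the PMD entry moves */
    /* ... move_normal_pmd() would run here ... */
    if (need_rmap_locks)
        drop_rmap_locks();
    return held;
}

/* Post-patch shape: the rmap lock is always held across the PMD move, so
 * a concurrent rmap walk (e.g. try_to_unmap_one()) is serialized with it. */
static int move_pmd_postpatch(void)
{
    int held;
    take_rmap_locks();
    held = rmap_lock_depth;
    /* ... move_normal_pmd() would run here ... */
    drop_rmap_locks();
    return held;
}
```

The write mode matters because an rmap walker holding the lock in read mode could otherwise run concurrently and observe the page tables mid-move; forcing the writer side closes that window at a modest cost to mremap throughput.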