Received: from vger.kernel.org ([209.132.180.67]) by mx.google.com with ESMTP; Fri, 28 Dec 2018 14:37:08 -0800 (PST)
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Peter Xu, Konstantin Khlebnikov, William Kucharski, "Kirill A. Shutemov", Andrea Arcangeli, Matthew Wilcox, Michal Hocko, Dave Jiang, "Aneesh Kumar K.V", Souptick Joarder, Zi Yan, Andrew Morton, Linus Torvalds
Subject: [PATCH 4.19 41/46] mm: thp: fix flags for pmd migration when split
Date: Fri, 28 Dec 2018 12:52:35 +0100
Message-Id: <20181228113127.324147093@linuxfoundation.org>
In-Reply-To: <20181228113124.971620049@linuxfoundation.org>
References: <20181228113124.971620049@linuxfoundation.org>
X-Mailer: git-send-email 2.20.1
User-Agent: quilt/0.65
X-stable: review
X-Patchwork-Hint: ignore
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

4.19-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Peter Xu

commit 2e83ee1d8694a61d0d95a5b694f2e61e8dde8627 upstream.

When splitting a huge migrating PMD, we transfer all of the existing PMD
bits and apply them again to the small PTEs.  However, we fetch those
bits unconditionally via pmd_soft_dirty(), pmd_write() and pmd_young(),
even though they are meaningless when the PMD is a migration entry.  Fix
this by reading the corresponding bits from the swap entry instead.
While at it, drop the #ifdef, which is no longer needed.

Note that without this patch there is a chance of losing some of the
dirty bits in the migrating pmd pages (on x86_64 we fetch bit 11, which
is part of the swap offset, instead of bit 2), which could potentially
corrupt the memory of a userspace program that depends on the dirty bit.

Link: http://lkml.kernel.org/r/20181213051510.20306-1-peterx@redhat.com
Signed-off-by: Peter Xu
Reviewed-by: Konstantin Khlebnikov
Reviewed-by: William Kucharski
Acked-by: Kirill A. Shutemov
Cc: Andrea Arcangeli
Cc: Matthew Wilcox
Cc: Michal Hocko
Cc: Dave Jiang
Cc: "Aneesh Kumar K.V"
Cc: Souptick Joarder
Cc: Konstantin Khlebnikov
Cc: Zi Yan
Cc: stable@vger.kernel.org [4.14+]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

---
 mm/huge_memory.c |   20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2127,23 +2127,25 @@ static void __split_huge_pmd_locked(stru
 	 */
 	old_pmd = pmdp_invalidate(vma, haddr, pmd);
 
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	pmd_migration = is_pmd_migration_entry(old_pmd);
-	if (pmd_migration) {
+	if (unlikely(pmd_migration)) {
 		swp_entry_t entry;
 
 		entry = pmd_to_swp_entry(old_pmd);
 		page = pfn_to_page(swp_offset(entry));
-	} else
-#endif
+		write = is_write_migration_entry(entry);
+		young = false;
+		soft_dirty = pmd_swp_soft_dirty(old_pmd);
+	} else {
 		page = pmd_page(old_pmd);
+		if (pmd_dirty(old_pmd))
+			SetPageDirty(page);
+		write = pmd_write(old_pmd);
+		young = pmd_young(old_pmd);
+		soft_dirty = pmd_soft_dirty(old_pmd);
+	}
 	VM_BUG_ON_PAGE(!page_count(page), page);
 	page_ref_add(page, HPAGE_PMD_NR - 1);
-	if (pmd_dirty(old_pmd))
-		SetPageDirty(page);
-	write = pmd_write(old_pmd);
-	young = pmd_young(old_pmd);
-	soft_dirty = pmd_soft_dirty(old_pmd);
 
 	/*
 	 * Withdraw the table only after we mark the pmd entry invalid.