From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Peter Xu, John Hubbard, David Hildenbrand, Hugh Dickins, Alistair Popple, Andrea Arcangeli, "Kirill A. Shutemov", Matthew Wilcox, Vlastimil Babka, Yang Shi, Andrew Morton, Linus Torvalds
Subject: [PATCH 4.14 254/284] mm: don't skip swap entry even if zap_details specified
Date: Mon, 18 Apr 2022 14:13:55 +0200
Message-Id: <20220418121219.567288515@linuxfoundation.org>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220418121210.689577360@linuxfoundation.org>
References: <20220418121210.689577360@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Peter Xu

commit 5abfd71d936a8aefd9f9ccd299dea7a164a5d455 upstream.

Patch series "mm: Rework zap ptes on swap entries", v5.

Patch 1 should fix a long-standing bug in zap_pte_range() regarding
zap_details usage.  The risk is that some swap entries could be skipped
when they should have been zapped.

Migration entries are not the major concern, because file-backed memory is
always zapped in the pattern "first time without the page lock, then
re-zap with the page lock", hence the second zap will always make sure all
migration entries have already been recovered.

However, genuine swap entries could be skipped erroneously.  There's a
reproducer in the commit message of patch 1.

Patches 2-4 are cleanups based on patch 1.  After the whole patchset is
applied, we have a very clean view of zap_pte_range().
Only patch 1 needs to be backported to stable if necessary.

This patch (of 4):

The "details" pointer shouldn't be the token that decides whether we
should skip swap entries.  For example, when the caller specifies
details->zap_mapping==NULL, it means the user wants to zap all the pages
(including COWed pages); then we need to look into swap entries, because
there can be private COWed pages that were swapped out.

Skipping some swap entries when details is non-NULL may wrongly leave
behind some of the swap entries that we should have zapped.

A reproducer of the problem:

===8<===
#define _GNU_SOURCE         /* See feature_test_macros(7) */
#include <stdio.h>
#include <assert.h>
#include <unistd.h>
#include <sys/mman.h>
#include <string.h>

int page_size;
int shmem_fd;
char *buffer;

void main(void)
{
	int ret;
	char val;

	page_size = getpagesize();
	shmem_fd = memfd_create("test", 0);
	assert(shmem_fd >= 0);

	ret = ftruncate(shmem_fd, page_size * 2);
	assert(ret == 0);

	buffer = mmap(NULL, page_size * 2, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE, shmem_fd, 0);
	assert(buffer != MAP_FAILED);

	/* Write private page, swap it out */
	buffer[page_size] = 1;
	madvise(buffer, page_size * 2, MADV_PAGEOUT);

	/* This should drop private buffer[page_size] already */
	ret = ftruncate(shmem_fd, page_size);
	assert(ret == 0);

	/* Recover the size */
	ret = ftruncate(shmem_fd, page_size * 2);
	assert(ret == 0);

	/* Re-read the data, it should be all zero */
	val = buffer[page_size];
	if (val == 0)
		printf("Good\n");
	else
		printf("BUG\n");
}
===8<===

We don't need to touch up the pmd path, because pmd never had an issue
with swap entries.  For example, a shmem pmd migration entry will always
be split to pte level, and the same applies to swapping on anonymous
memory.

Add another helper, should_zap_cows(), so that we can also check whether
we should zap private mappings when there's no page pointer specified.

This patch drops the old "if (unlikely(details)) continue;" shortcut, so
we handle swap ptes coherently.  Meanwhile we do the same check upon
migration entries, hwpoison entries and genuine swap entries too.
To be explicit, we should still remember to keep the private entries if
even_cows==false, and always zap them when even_cows==true.

The issue seems to exist starting from the initial commit of git.

[peterx@redhat.com: comment tweaks]
Link: https://lkml.kernel.org/r/20220217060746.71256-2-peterx@redhat.com
Link: https://lkml.kernel.org/r/20220217060746.71256-1-peterx@redhat.com
Link: https://lkml.kernel.org/r/20220216094810.60572-1-peterx@redhat.com
Link: https://lkml.kernel.org/r/20220216094810.60572-2-peterx@redhat.com
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Peter Xu
Reviewed-by: John Hubbard
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: Alistair Popple
Cc: Andrea Arcangeli
Cc: "Kirill A. Shutemov"
Cc: Matthew Wilcox
Cc: Vlastimil Babka
Cc: Yang Shi
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/memory.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1306,6 +1306,17 @@ int copy_page_range(struct mm_struct *ds
 	return ret;
 }
 
+/* Whether we should zap all COWed (private) pages too */
+static inline bool should_zap_cows(struct zap_details *details)
+{
+	/* By default, zap all pages */
+	if (!details)
+		return true;
+
+	/* Or, we zap COWed pages only if the caller wants to */
+	return !details->check_mapping;
+}
+
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				struct vm_area_struct *vma, pmd_t *pmd,
 				unsigned long addr, unsigned long end,
@@ -1394,17 +1405,19 @@ again:
 			continue;
 		}
 
-		/* If details->check_mapping, we leave swap entries. */
-		if (unlikely(details))
-			continue;
-
 		entry = pte_to_swp_entry(ptent);
-		if (!non_swap_entry(entry))
+		if (!non_swap_entry(entry)) {
+			/* Genuine swap entry, hence a private anon page */
+			if (!should_zap_cows(details))
+				continue;
 			rss[MM_SWAPENTS]--;
-		else if (is_migration_entry(entry)) {
+		} else if (is_migration_entry(entry)) {
 			struct page *page;
 
 			page = migration_entry_to_page(entry);
+			if (details && details->check_mapping &&
+			    details->check_mapping != page_rmapping(page))
+				continue;
 			rss[mm_counter(page)]--;
 		}
 		if (unlikely(!free_swap_and_cache(entry)))