Subject: Re: [PATCH v2 1/8] mm/swap: remember PG_anon_exclusive via a swp pte bit
To: David Hildenbrand
CC: linux-kernel, Linux-MM, Minchan Kim
References: <20220329164329.208407-1-david@redhat.com>
 <20220329164329.208407-2-david@redhat.com>
 <28142e3e-2556-0ca2-7ac5-7420ef862259@huawei.com>
 <374d2be1-e13d-e605-ff80-b9d5eee4c40e@redhat.com>
 <3b9c6cc6-c5f5-8a8d-0b0f-9ca903cfab20@huawei.com>
From: Miaohe Lin
Message-ID: <7bc96de0-b8e8-ecfd-82d2-95b345dc7864@huawei.com>
Date: Thu, 14 Apr 2022 10:40:18 +0800

On 2022/4/13 20:31, David Hildenbrand wrote:
> On 13.04.22 11:38, Miaohe Lin wrote:
>> On 2022/4/13 17:30, David Hildenbrand wrote:
>>> On 13.04.22 10:58, Miaohe Lin wrote:
>>>> On 2022/3/30 0:43, David Hildenbrand wrote:
>>>>> Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
>>>> ...
>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>> index 14618f446139..9060cc7f2123 100644
>>>>> --- a/mm/memory.c
>>>>> +++ b/mm/memory.c
>>>>> @@ -792,6 +792,11 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>>>>>  					&src_mm->mmlist);
>>>>>  			spin_unlock(&mmlist_lock);
>>>>>  		}
>>>>> +		/* Mark the swap entry as shared. */
>>>>> +		if (pte_swp_exclusive(*src_pte)) {
>>>>> +			pte = pte_swp_clear_exclusive(*src_pte);
>>>>> +			set_pte_at(src_mm, addr, src_pte, pte);
>>>>> +		}
>>>>>  		rss[MM_SWAPENTS]++;
>>>>>  	} else if (is_migration_entry(entry)) {
>>>>>  		page = pfn_swap_entry_to_page(entry);
>>>>> @@ -3559,6 +3564,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>>  	struct page *page = NULL, *swapcache;
>>>>>  	struct swap_info_struct *si = NULL;
>>>>>  	rmap_t rmap_flags = RMAP_NONE;
>>>>> +	bool exclusive = false;
>>>>>  	swp_entry_t entry;
>>>>>  	pte_t pte;
>>>>>  	int locked;
>>>>> @@ -3724,6 +3730,46 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>>  	BUG_ON(!PageAnon(page) && PageMappedToDisk(page));
>>>>>  	BUG_ON(PageAnon(page) && PageAnonExclusive(page));
>>>>> 
>>>>> +	/*
>>>>> +	 * Check under PT lock (to protect against concurrent fork() sharing
>>>>> +	 * the swap entry concurrently) for certainly exclusive pages.
>>>>> +	 */
>>>>> +	if (!PageKsm(page)) {
>>>>> +		/*
>>>>> +		 * Note that pte_swp_exclusive() == false for architectures
>>>>> +		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
>>>>> +		 */
>>>>> +		exclusive = pte_swp_exclusive(vmf->orig_pte);
>>>>> +		if (page != swapcache) {
>>>>> +			/*
>>>>> +			 * We have a fresh page that is not exposed to the
>>>>> +			 * swapcache -> certainly exclusive.
>>>>> +			 */
>>>>> +			exclusive = true;
>>>>> +		} else if (exclusive && PageWriteback(page) &&
>>>>> +			   !(swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
>>>>
>>>> Really sorry for the late response, and a newbie question. IIUC, if
>>>> SWP_STABLE_WRITES is set, it means concurrent page modifications while
>>>> under writeback are not supported. For these problematic swap backends,
>>>> the exclusive marker is dropped. So the above if statement is meant to
>>>> filter out these problematic swap backends which have SWP_STABLE_WRITES
>>>> set. If so, the above check should be
>>>> && (swp_swap_info(entry)->flags & SWP_STABLE_WRITES), i.e. no "!".
>>>> Or am I missing something?
>>>
>>> Oh, thanks for your careful eyes!
>>>
>>> Indeed, SWP_STABLE_WRITES indicates that the backend *requires* stable
>>> writes, meaning, we must not modify the page while writeback is active.
>>>
>>> So if and only if that is set, we must drop the exclusive marker.
>>>
>>> This essentially corresponds to the previous reuse_swap_page() logic:
>>>
>>> bool reuse_swap_page(struct page *page)
>>> {
>>> 	...
>>> 	if (!PageWriteback(page)) {
>>> 		...
>>> 	} else {
>>> 		...
>>> 		if (p->flags & SWP_STABLE_WRITES) {
>>> 			spin_unlock(&p->lock);
>>> 			return false;
>>> 		}
>>> 		...
>>> 	}
>>>
>>> Fortunately, this only affects such backends. For backends without
>>> SWP_STABLE_WRITES, the current code is simply sub-optimal.
>>>
>>>
>>> So yes, this has to be
>>>
>>> } else if (exclusive && PageWriteback(page) &&
>>> 	   (swp_swap_info(entry)->flags & SWP_STABLE_WRITES)) {
>>>
>>
>> I am glad that my question helped. :)
>>
>>>
>>> Let me try finding a way to test this; the tests I was running so far
>>> were apparently not using a backend with SWP_STABLE_WRITES.
>>>
>>
>> That will be really helpful. Many thanks for your hard work!
>>
> 
> FWIW, I tried with zram, which sets SWP_STABLE_WRITES ... but it seems
> to always do a synchronous writeback, so it cannot really trigger this
> code path.

That's a pity. We really need an asynchronous writeback to trigger this
code path.
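For reference, the corrected decision above can be modeled in a few
lines of user-space C (a sketch with illustrative names, not kernel
API; the real check lives in do_swap_page()):

#include <stdbool.h>
#include <stdio.h>

/*
 * Model of the corrected check: drop the exclusive marker only when
 * the backend *requires* stable writes (SWP_STABLE_WRITES) and the
 * page is under writeback, since reusing the page would modify it
 * while writeback is in flight.
 */
static bool keep_exclusive(bool swp_pte_exclusive, bool under_writeback,
			   bool requires_stable_writes)
{
	bool exclusive = swp_pte_exclusive;

	/* Corrected condition: no "!" in front of the flag test. */
	if (exclusive && under_writeback && requires_stable_writes)
		exclusive = false;

	return exclusive;
}

int main(void)
{
	/* zram-like backend (stable writes required): marker dropped. */
	printf("%d\n", keep_exclusive(true, true, true));	/* 0 */
	/* ordinary backend: marker survives writeback. */
	printf("%d\n", keep_exclusive(true, true, false));	/* 1 */
	return 0;
}

With a zram-like backend the marker is dropped while writeback is in
flight; for other backends it survives, matching the reuse_swap_page()
behavior quoted above.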
> 
> commit f05714293a591038304ddae7cb0dd747bb3786cc
> Author: Minchan Kim
> Date:   Tue Jan 10 16:58:15 2017 -0800
> 
>     mm: support anonymous stable page
> 
> 
> mentions "During developemnt for zram-swap asynchronous writeback,";
> maybe that can be activated somehow? Putting Minchan on CC.
> 

ZRAM_WRITEBACK might need to be configured to enable asynchronous IO:

+
+config ZRAM_WRITEBACK
+	bool "Write back incompressible page to backing device"
+	depends on ZRAM
+	default n
+	help
+	  With incompressible page, there is no memory saving to keep it
+	  in memory. Instead, write it out to backing device.
+	  For this feature, admin should set up backing device via
+	  /sys/block/zramX/backing_dev.
+
+	  See zram.txt for more infomration.

It seems there is only asynchronous IO on the swap-in side: browsing
the source code, I could only find read_from_bdev_async. But I'm not
familiar with the zram code. Minchan might kindly help us answer this
question. Thanks!
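In case it helps with testing, here is a rough, untested sketch of
setting up a zram swap device with a backing device, based on the
/sys/block/zramX/backing_dev knob from the Kconfig help above (assumes
CONFIG_ZRAM_WRITEBACK=y; /dev/sdX1 is a placeholder for a spare
partition):

# /dev/sdX1 is a placeholder; pick an otherwise unused block device
modprobe zram num_devices=1

# the backing device must be set before disksize is configured
echo /dev/sdX1 > /sys/block/zram0/backing_dev
echo 1G > /sys/block/zram0/disksize

mkswap /dev/zram0
swapon /dev/zram0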