Date: Wed, 13 Mar 2024 12:02:42 +0000
Subject: Re: [PATCH v4 6/6] mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD
X-Mailing-List: linux-kernel@vger.kernel.org
To: Barry Song <21cnbao@gmail.com>
Cc: Lance Yang, Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
 Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang, Chris Li,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240311150058.1122862-1-ryan.roberts@arm.com>
 <20240311150058.1122862-7-ryan.roberts@arm.com>
 <00a3ba1d-98e1-409b-ae6e-7fbcbdcd74d5@arm.com>
From: Ryan Roberts

On 13/03/2024 11:37, Barry Song wrote:
> On Wed, Mar 13, 2024 at 7:08 PM Ryan Roberts wrote:
>>
>> On 13/03/2024 10:37, Barry Song wrote:
>>> On Wed, Mar 13, 2024 at 10:36 PM Ryan Roberts wrote:
>>>>
>>>> On 13/03/2024 09:16, Barry Song wrote:
>>>>> On Wed, Mar 13, 2024 at 10:03 PM Ryan Roberts wrote:
>>>>>>
>>>>>> On 13/03/2024 07:19, Barry Song wrote:
>>>>>>> On Tue, Mar 12, 2024 at 4:01 AM Ryan Roberts wrote:
>>>>>>>>
>>>>>>>> Rework madvise_cold_or_pageout_pte_range() to avoid splitting any large
>>>>>>>> folio that is fully and contiguously mapped in the pageout/cold vm
>>>>>>>> range. This change means that large folios will be maintained all the
>>>>>>>> way to swap storage. This both improves performance during swap-out, by
>>>>>>>> eliding the cost of splitting the folio, and sets us up nicely for
>>>>>>>> maintaining the large folio when it is swapped back in (to be covered in
>>>>>>>> a separate series).
>>>>>>>>
>>>>>>>> Folios that are not fully mapped in the target range are still split,
>>>>>>>> but note that behavior is changed so that if the split fails for any
>>>>>>>> reason (folio locked, shared, etc) we now leave it as is and move to the
>>>>>>>> next pte in the range and continue work on the proceeding folios.
>>>>>>>> Previously any failure of this sort would cause the entire operation to
>>>>>>>> give up and no folios mapped at higher addresses were paged out or made
>>>>>>>> cold. Given large folios are becoming more common, this old behavior
>>>>>>>> would have likely led to wasted opportunities.
>>>>>>>>
>>>>>>>> While we are at it, change the code that clears young from the ptes to
>>>>>>>> use ptep_test_and_clear_young(), which is more efficient than
>>>>>>>> get_and_clear/modify/set, especially for contpte mappings on arm64,
>>>>>>>> where the old approach would require unfolding/refolding and the new
>>>>>>>> approach can be done in place.
>>>>>>>>
>>>>>>>> Signed-off-by: Ryan Roberts
>>>>>>>
>>>>>>> This looks so much better than our initial RFC.
>>>>>>> Thank you for your excellent work!
>>>>>>
>>>>>> Thanks - it's a team effort - I had your PoC and David's previous batching work
>>>>>> to use as a template.
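The young-clearing change described in the last paragraph of the commit message
amounts to replacing the old per-pte read-modify-write sequence with a single
in-place test-and-clear. Both fragments below are lifted from the diff that
follows and are repeated here only for illustration:

        /* old: read the pte, clear young, write it back */
        ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
        ptent = pte_mkold(ptent);
        set_pte_at(mm, addr, pte, ptent);
        tlb_remove_tlb_entry(tlb, pte, addr);

        /* new: clear young in place, per pte of the batch */
        if (ptep_test_and_clear_young(vma, addr, pte))
                tlb_remove_tlb_entry(tlb, pte, addr);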
>>>>>>
>>>>>>>
>>>>>>>> ---
>>>>>>>>  mm/madvise.c | 89 ++++++++++++++++++++++++++++++----------------------
>>>>>>>>  1 file changed, 51 insertions(+), 38 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/mm/madvise.c b/mm/madvise.c
>>>>>>>> index 547dcd1f7a39..56c7ba7bd558 100644
>>>>>>>> --- a/mm/madvise.c
>>>>>>>> +++ b/mm/madvise.c
>>>>>>>> @@ -336,6 +336,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>>>>>          LIST_HEAD(folio_list);
>>>>>>>>          bool pageout_anon_only_filter;
>>>>>>>>          unsigned int batch_count = 0;
>>>>>>>> +        int nr;
>>>>>>>>
>>>>>>>>          if (fatal_signal_pending(current))
>>>>>>>>                  return -EINTR;
>>>>>>>> @@ -423,7 +424,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>>>>>                  return 0;
>>>>>>>>          flush_tlb_batched_pending(mm);
>>>>>>>>          arch_enter_lazy_mmu_mode();
>>>>>>>> -        for (; addr < end; pte++, addr += PAGE_SIZE) {
>>>>>>>> +        for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
>>>>>>>> +                nr = 1;
>>>>>>>>                  ptent = ptep_get(pte);
>>>>>>>>
>>>>>>>>                  if (++batch_count == SWAP_CLUSTER_MAX) {
>>>>>>>> @@ -447,55 +449,66 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>>>>>                          continue;
>>>>>>>>
>>>>>>>>                  /*
>>>>>>>> -                 * Creating a THP page is expensive so split it only if we
>>>>>>>> -                 * are sure it's worth. Split it if we are only owner.
>>>>>>>> +                 * If we encounter a large folio, only split it if it is not
>>>>>>>> +                 * fully mapped within the range we are operating on. Otherwise
>>>>>>>> +                 * leave it as is so that it can be swapped out whole. If we
>>>>>>>> +                 * fail to split a folio, leave it in place and advance to the
>>>>>>>> +                 * next pte in the range.
>>>>>>>>                   */
>>>>>>>>                  if (folio_test_large(folio)) {
>>>>>>>> -                        int err;
>>>>>>>> -
>>>>>>>> -                        if (folio_estimated_sharers(folio) > 1)
>>>>>>>> -                                break;
>>>>>>>> -                        if (pageout_anon_only_filter && !folio_test_anon(folio))
>>>>>>>> -                                break;
>>>>>>>> -                        if (!folio_trylock(folio))
>>>>>>>> -                                break;
>>>>>>>> -                        folio_get(folio);
>>>>>>>> -                        arch_leave_lazy_mmu_mode();
>>>>>>>> -                        pte_unmap_unlock(start_pte, ptl);
>>>>>>>> -                        start_pte = NULL;
>>>>>>>> -                        err = split_folio(folio);
>>>>>>>> -                        folio_unlock(folio);
>>>>>>>> -                        folio_put(folio);
>>>>>>>> -                        if (err)
>>>>>>>> -                                break;
>>>>>>>> -                        start_pte = pte =
>>>>>>>> -                                pte_offset_map_lock(mm, pmd, addr, &ptl);
>>>>>>>> -                        if (!start_pte)
>>>>>>>> -                                break;
>>>>>>>> -                        arch_enter_lazy_mmu_mode();
>>>>>>>> -                        pte--;
>>>>>>>> -                        addr -= PAGE_SIZE;
>>>>>>>> -                        continue;
>>>>>>>> +                        const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
>>>>>>>> +                                                FPB_IGNORE_SOFT_DIRTY;
>>>>>>>> +                        int max_nr = (end - addr) / PAGE_SIZE;
>>>>>>>> +
>>>>>>>> +                        nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
>>>>>>>> +                                             fpb_flags, NULL);
>>>>>>>
>>>>>>> I wonder if we have a quick way to avoid folio_pte_batch() if users
>>>>>>> are doing madvise() on a portion of a large folio.
>>>>>>
>>>>>> Good idea. Something like this?:
>>>>>>
>>>>>> if (pte_pfn(pte) == folio_pfn(folio)
>>>>>
>>>>> what about
>>>>>
>>>>> "If (pte_pfn(pte) == folio_pfn(folio) && max_nr >= nr_pages)"
>>>>>
>>>>> just to account for cases where the user's end address falls within
>>>>> the middle of a large folio?
>>>>
>>>> yes, even better. I'll add this for the next version.
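As a rough sketch of the check being discussed (not the posted code), it could
sit just ahead of the batching call, reusing the locals from the hunk above.
nr_pages is shorthand for folio_nr_pages(folio), and pte_pfn() is applied to
ptent since it takes the pte value rather than the pointer:

        const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
        int max_nr = (end - addr) / PAGE_SIZE;
        int nr_pages = folio_nr_pages(folio);

        /*
         * Only walk the ptes if this pte maps the first page of the folio
         * and the remainder of the range is big enough to cover the whole
         * folio; otherwise it cannot be a full mapping, so leave nr == 1
         * and fall through to the split path.
         */
        if (pte_pfn(ptent) == folio_pfn(folio) && max_nr >= nr_pages)
                nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
                                     fpb_flags, NULL);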
>>>>
>>>>>
>>>>> BTW, another minor issue is here:
>>>>>
>>>>>         if (++batch_count == SWAP_CLUSTER_MAX) {
>>>>>                 batch_count = 0;
>>>>>                 if (need_resched()) {
>>>>>                         arch_leave_lazy_mmu_mode();
>>>>>                         pte_unmap_unlock(start_pte, ptl);
>>>>>                         cond_resched();
>>>>>                         goto restart;
>>>>>                 }
>>>>>         }
>>>>>
>>>>> We are increasing batch_count by 1 for nr ptes, thus we are holding the PTL
>>>>> longer than in the small folios case? We used to increase it by 1 for each
>>>>> PTE. Does it matter?
>>>>
>>>> I thought about that, but the vast majority of the work is per-folio, not
>>>> per-pte. So I concluded it would be best to continue to increment per-folio.
>>>
>>> Okay. The original patch, commit b2f557a21bc8 ("mm/madvise: add
>>> cond_resched() in madvise_cold_or_pageout_pte_range()"),
>>> primarily addressed the real-time wake-up latency issue. MADV_PAGEOUT
>>> and MADV_COLD are much less critical compared
>>> to other scenarios where operations like do_anon_page or do_swap_page
>>> necessarily need the PTL to progress. Therefore, adopting
>>> an approach that relatively aggressively releases the PTL seems to
>>> neither harm MADV_PAGEOUT/COLD nor disadvantage
>>> others.
>>>
>>> We are slightly increasing the duration of holding the PTL due to the
>>> iteration of folio_pte_batch() potentially taking longer than
>>> the case of small folios, which do not require it.
>>
>> If we can't scan all the PTEs in a page table without dropping the PTL
>> intermittently we have bigger problems. This all works perfectly fine in all the
>> other PTE iterators; see zap_pte_range() for example.
>
> I've no doubt about folio_pte_batch(). I was just talking about the
> original RT issue it might affect.
>
>>
>>> However, compared
>>> to operations like folio_isolate_lru() and folio_deactivate(),
>>> this increase seems negligible. Recently, we have actually removed
>>> ptep_test_and_clear_young() for MADV_PAGEOUT,
>>> which should also benefit real-time scenarios. Nonetheless, there is a
>>> small risk with large folios, such as 1 MiB mTHP, where
>>> we may need to loop 256 times in folio_pte_batch().
>>
>> As I understand it, RT and THP are mutually exclusive. RT can't handle the extra
>> latencies THPs can cause in the allocation path, etc. So I don't think you will
>> see a problem here.
>
> I was actually taking a different approach on the phones, as obviously we have
> some UX (user-experience)/UI/audio related tasks which cannot tolerate
> allocation latency. With a TAO-like optimization (we did it via an
> ext_migratetype for some pageblocks), we don't push the buddy allocator to do
> compaction or reclamation to form a 64KiB folio. We immediately fall back to
> small folios if a zero-latency 64KiB allocation can't be obtained from those
> pools - the ext_migratetype pageblocks.

You can opt in to avoiding latency due to compaction, etc. with various settings
in /sys/kernel/mm/transparent_hugepage/defrag. That applies to mTHP as well. See
Documentation/admin-guide/mm/transhuge.rst. Obviously this is not as useful as
the TAO approach because it does nothing to avoid fragmentation in the first
place. The other source of latency for THP allocation, which I believe RT
doesn't like, is the cost of zeroing the huge page, IIRC.

>
>>
>>>
>>> I would vote for increasing batch_count by 'nr', or maybe by max(log2(nr), 1),
>>> rather than by 1, for two reasons:
>>>
>>> 1. We are not making MADV_PAGEOUT/COLD worse; in fact, we are
>>> improving them by reducing the time taken to put the same
>>> number of pages into the reclaim list.
>>>
>>> 2. MADV_PAGEOUT/COLD scenarios are not urgent compared to others that
>>> genuinely require the PTL to progress. Moreover,
>>> the majority of time spent on PAGEOUT is actually reclaim_pages().
>>
>> I understand your logic. But I'd rather optimize for fewer lock acquisitions for
>> the !RT+THP case, since RT+THP is not supported.
>
> Fair enough. I agree we can postpone this until RT and THP become an
> available option.
> For now, keeping this patch simpler seems to be better.

OK thanks.

>
>>
>>>
>>>>>
>>>>>>                         nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
>>>>>>                                              fpb_flags, NULL);
>>>>>>
>>>>>> If we are not mapping the first page of the folio, then it can't be a full
>>>>>> mapping, so no need to call folio_pte_batch(). Just split it.
>>>>>>
>>>>>>>
>>>>>>>> +
>>>>>>>> +                        if (nr < folio_nr_pages(folio)) {
>>>>>>>> +                                int err;
>>>>>>>> +
>>>>>>>> +                                if (folio_estimated_sharers(folio) > 1)
>>>>>>>> +                                        continue;
>>>>>>>> +                                if (pageout_anon_only_filter && !folio_test_anon(folio))
>>>>>>>> +                                        continue;
>>>>>>>> +                                if (!folio_trylock(folio))
>>>>>>>> +                                        continue;
>>>>>>>> +                                folio_get(folio);
>>>>>>>> +                                arch_leave_lazy_mmu_mode();
>>>>>>>> +                                pte_unmap_unlock(start_pte, ptl);
>>>>>>>> +                                start_pte = NULL;
>>>>>>>> +                                err = split_folio(folio);
>>>>>>>> +                                folio_unlock(folio);
>>>>>>>> +                                folio_put(folio);
>>>>>>>> +                                if (err)
>>>>>>>> +                                        continue;
>>>>>>>> +                                start_pte = pte =
>>>>>>>> +                                        pte_offset_map_lock(mm, pmd, addr, &ptl);
>>>>>>>> +                                if (!start_pte)
>>>>>>>> +                                        break;
>>>>>>>> +                                arch_enter_lazy_mmu_mode();
>>>>>>>> +                                nr = 0;
>>>>>>>> +                                continue;
>>>>>>>> +                        }
>>>>>>>>                  }
>>>>>>>>
>>>>>>>>                  /*
>>>>>>>>                   * Do not interfere with other mappings of this folio and
>>>>>>>> -                 * non-LRU folio.
>>>>>>>> +                 * non-LRU folio. If we have a large folio at this point, we
>>>>>>>> +                 * know it is fully mapped so if its mapcount is the same as its
>>>>>>>> +                 * number of pages, it must be exclusive.
>>>>>>>>                   */
>>>>>>>> -                if (!folio_test_lru(folio) || folio_mapcount(folio) != 1)
>>>>>>>> +                if (!folio_test_lru(folio) ||
>>>>>>>> +                    folio_mapcount(folio) != folio_nr_pages(folio))
>>>>>>>>                          continue;
>>>>>>>
>>>>>>> This looks so perfect and is exactly what I wanted to achieve.
>>>>>>>
>>>>>>>>
>>>>>>>>                  if (pageout_anon_only_filter && !folio_test_anon(folio))
>>>>>>>>                          continue;
>>>>>>>>
>>>>>>>> -                VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>>>>> -
>>>>>>>> -                if (!pageout && pte_young(ptent)) {
>>>>>>>> -                        ptent = ptep_get_and_clear_full(mm, addr, pte,
>>>>>>>> -                                                        tlb->fullmm);
>>>>>>>> -                        ptent = pte_mkold(ptent);
>>>>>>>> -                        set_pte_at(mm, addr, pte, ptent);
>>>>>>>> -                        tlb_remove_tlb_entry(tlb, pte, addr);
>>>>>>>> +                if (!pageout) {
>>>>>>>> +                        for (; nr != 0; nr--, pte++, addr += PAGE_SIZE) {
>>>>>>>> +                                if (ptep_test_and_clear_young(vma, addr, pte))
>>>>>>>> +                                        tlb_remove_tlb_entry(tlb, pte, addr);
>>>>>>>> +                        }
>>>>>>>
>>>>>>> This looks so smart. If it is not pageout, we have increased pte
>>>>>>> and addr here, so nr is 0 and we don't need to increase them again in
>>>>>>> for (; addr < end; pte += nr, addr += nr * PAGE_SIZE).
>>>>>>>
>>>>>>> Otherwise, nr won't be 0, so we will increase addr and
>>>>>>> pte by nr.
>>>>>>
>>>>>> Indeed. I'm hoping that Lance is able to follow a similar pattern for
>>>>>> madvise_free_pte_range().
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>                  }
>>>>>>>>
>>>>>>>>                  /*
>>>>>>>> --
>>>>>>>> 2.25.1
>>>>>>>>
>>>>>>>
>>>>>>> Overall, LGTM,
>>>>>>>
>>>>>>> Reviewed-by: Barry Song
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>
>>> Thanks
>>> Barry
>>
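For reference, the nr bookkeeping discussed above condenses to roughly the
following loop shape; this is abbreviated from the hunks earlier in the thread
(the ellipses stand for elided code) and is not meant to compile as-is:

        for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
                nr = 1;
                ptent = ptep_get(pte);
                ...
                if (folio_test_large(folio)) {
                        /* batch over the whole folio, or split and continue */
                        nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
                                             fpb_flags, NULL);
                        ...
                }
                ...
                if (!pageout) {
                        /*
                         * This inner loop advances pte/addr itself and leaves
                         * nr == 0, so the outer "pte += nr" does not advance a
                         * second time; in the pageout case nr is untouched and
                         * the outer loop steps over the whole folio in one go.
                         */
                        for (; nr != 0; nr--, pte++, addr += PAGE_SIZE) {
                                if (ptep_test_and_clear_young(vma, addr, pte))
                                        tlb_remove_tlb_entry(tlb, pte, addr);
                        }
                }
        }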