Message-ID: <9930c86a-c0c8-4112-9122-0e4faca475f5@arm.com>
Date: Thu, 21 Mar 2024 15:24:33 +0000
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 6/6] mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD
To: Lance Yang
Cc: Barry Song <21cnbao@gmail.com>, Andrew Morton, David Hildenbrand,
 Matthew Wilcox, Huang Ying, Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko,
 Kefeng Wang, Chris Li, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20240311150058.1122862-1-ryan.roberts@arm.com>
 <20240311150058.1122862-7-ryan.roberts@arm.com>
 <7ba06704-2090-4eb2-9534-c4d467cc085a@arm.com>
 <269375a4-78a3-4c22-8e6e-570368a2c053@arm.com>
From: Ryan Roberts

On 21/03/2024 14:55, Lance Yang wrote:
> On Thu, Mar 21, 2024 at 9:38 PM Ryan Roberts wrote:
>>
>>>>>>>>>> -		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>>>>>>> -
>>>>>>>>>> -		if (!pageout && pte_young(ptent)) {
>>>>>>>>>> -			ptent = ptep_get_and_clear_full(mm, addr, pte,
>>>>>>>>>> -							tlb->fullmm);
>>>>>>>>>> -			ptent = pte_mkold(ptent);
>>>>>>>>>> -			set_pte_at(mm, addr, pte, ptent);
>>>>>>>>>> -			tlb_remove_tlb_entry(tlb, pte, addr);
>>>>>>>>>> +		if (!pageout) {
>>>>>>>>>> +			for (; nr != 0; nr--, pte++, addr += PAGE_SIZE) {
>>>>>>>>>> +				if (ptep_test_and_clear_young(vma, addr, pte))
>>>>>>>>>> +					tlb_remove_tlb_entry(tlb, pte, addr);
>>>>>>>
>>>>>>> IIRC, some architectures (e.g., PPC) don't update the TLB with
>>>>>>> set_pte_at and tlb_remove_tlb_entry. So shouldn't we consider
>>>>>>> remapping the PTE as old after clearing it?
>>>>>>
>>>>>> Sorry Lance, I don't understand this question, can you rephrase? Are
>>>>>> you saying there is a good reason to do the original clear-mkold-set
>>>>>> for some arches?
>>>>>
>>>>> IIRC, some architectures (e.g., PPC) don't update the TLB with
>>>>> ptep_test_and_clear_young()
>>>>> and tlb_remove_tlb_entry().
>>
>> Afraid I'm still struggling with this comment. Do you mean to say that
>> powerpc invalidates the TLB entry as part of the call to
>> ptep_test_and_clear_young()? So tlb_remove_tlb_entry() would be redundant
>> here, and would likely cause performance degradation on that architecture?
>
> I just thought that using ptep_test_and_clear_young() instead of
> ptep_get_and_clear_full() + pte_mkold() might not be correct.
> However, it's most likely that I was mistaken :(

OK, I'm pretty confident that my usage is correct.

> I also have a question: why aren't we using ptep_test_and_clear_young() in
> madvise_cold_or_pageout_pte_range(), instead of
> ptep_get_and_clear_full() + pte_mkold() as we did previously?
>
> /*
>  * Some of architecture(ex, PPC) don't update TLB
>  * with set_pte_at and tlb_remove_tlb_entry so for
>  * the portability, remap the pte with old|clean
>  * after pte clearing.
>  */

Ahh, I see; this is a comment from madvise_free_pte_range().

I don't quite understand that comment. I suspect it might be out of date, or
saying that doing set_pte_at(pte_mkold(ptep_get(ptent))) is not correct
because it is not atomic and the HW could set the dirty bit between the get
and the set. Doing the atomic ptep_get_and_clear_full() means you go via a
pte_none() state, so if the TLB is racing it will see the entry isn't valid
and fault.

Note that madvise_free_pte_range() is trying to clear both the access and
dirty bits, whereas madvise_cold_or_pageout_pte_range() is only trying to
clear the access bit. There is a special helper to clear the access bit
atomically - ptep_test_and_clear_young() - but I don't believe there is a
helper to clear the access *and* dirty bits. There is
ptep_set_access_flags(), but that sets flags to a "more permissive setting"
(i.e. it allows setting the flags, not clearing them). Perhaps this
constraint can be relaxed given we will follow up with an explicit TLBI - it
would require auditing all the implementations.
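To make that concrete, here is a minimal sketch of the two patterns being
compared - my illustration, not code from the patch, and it assumes the
generic (non-arch-overridden) definitions of these helpers:

/*
 * madvise_free_pte_range() style: clear access *and* dirty. The PTE
 * passes through pte_none(), so a racing HW walker faults instead of
 * setting the dirty bit between our read and our write-back.
 */
ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
ptent = pte_mkold(ptent);
ptent = pte_mkclean(ptent);
set_pte_at(mm, addr, pte, ptent);
tlb_remove_tlb_entry(tlb, pte, addr);

/*
 * madvise_cold_or_pageout_pte_range() style: only the access bit needs
 * clearing, and ptep_test_and_clear_young() does that atomically in
 * place - no transition through pte_none(), so no contpte unfold on
 * arm64.
 */
if (ptep_test_and_clear_young(vma, addr, pte))
	tlb_remove_tlb_entry(tlb, pte, addr);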
> According to this comment from madvise_free_pte_range(), IIUC, we need to
> call ptep_get_and_clear_full() to clear the PTE, and then remap the
> PTE with old|clean.
>
> Thanks,
> Lance
>
>> IMHO, ptep_test_and_clear_young() really shouldn't be invalidating the
>> TLB entry; that's what ptep_clear_flush_young() is for.
>>
>> But I do see that for some cases of 32-bit ppc, there appears to be a
>> flush:
>>
>> #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
>> static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
>>                                               unsigned long addr, pte_t *ptep)
>> {
>>         unsigned long old;
>>         old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
>>         if (old & _PAGE_HASHPTE)
>>                 flush_hash_entry(mm, ptep, addr);   <<<<<<<<
>>
>>         return (old & _PAGE_ACCESSED) != 0;
>> }
>> #define ptep_test_and_clear_young(__vma, __addr, __ptep) \
>>         __ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep)
>>
>> Is that what you are describing? Does anyone know why flush_hash_entry()
>> is called? I'd say that's a bug in ppc and not a reason to avoid
>> ptep_test_and_clear_young() in the common code!
>>
>> Thanks,
>> Ryan
>>
>>>> Err, I assumed tlb_remove_tlb_entry() meant "invalidate the TLB entry
>>>> for this address please" - albeit it's deferred and batched. I'll look
>>>> into this.
>>>>
>>>>> In my new patch[1], I use refresh_full_ptes() and
>>>>> tlb_remove_tlb_entries() to batch-update the
>>>>> access and dirty bits.
>>>>
>>>> I want to avoid the per-pte clear-modify-set approach, because this
>>>> doesn't perform well on arm64 when using contpte mappings; it will
>>>> cause the contpte mapping to be unfolded by the first clear that
>>>> touches the contpte block, then refolded by the last set to touch the
>>>> block. That's expensive. ptep_test_and_clear_young() doesn't suffer
>>>> that problem.
>>>
>>> Thanks for explaining. I got it.
>>>
>>> I think that other architectures will benefit from the per-pte
>>> clear-modify-set approach. IMO, refresh_full_ptes() can be overridden
>>> by arm64.
>>>
>>> Thanks,
>>> Lance
>>>>
>>>>> [1] https://lore.kernel.org/linux-mm/20240316102952.39233-1-ioworker0@gmail.com
>>>>>
>>>>> Thanks,
>>>>> Lance
>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Lance
>>>>>>>
>>>>>>>>>> +			}
>>>>>>>>>
>>>>>>>>> This looks so smart. If it is not pageout, we have increased pte
>>>>>>>>> and addr here, so nr is 0 and we don't need to increase them again
>>>>>>>>> in for (; addr < end; pte += nr, addr += nr * PAGE_SIZE);
>>>>>>>>> otherwise, nr won't be 0, so we will increase addr and pte by nr.
>>>>>>>>
>>>>>>>> Indeed. I'm hoping that Lance is able to follow a similar pattern
>>>>>>>> for madvise_free_pte_range().
>>>>>>>>
>>>>>>>>>
>>>>>>>>>>  	}
>>>>>>>>>>
>>>>>>>>>>  	/*
>>>>>>>>>> --
>>>>>>>>>> 2.25.1
>>>>>>>>>
>>>>>>>>> Overall, LGTM,
>>>>>>>>>
>>>>>>>>> Reviewed-by: Barry Song
>>>>>>>>
>>>>>>>> Thanks!