From: Muhammad Usama Anjum <usama.anjum@collabora.com>
To: Michał Mirosław, Andrei Vagin, Danylo Mocherniuk, Alexander Viro,
    Andrew Morton, Suren Baghdasaryan, Greg KH, Christian Brauner,
    Peter Xu, Yang Shi, Vlastimil Babka, Zach O'Keefe,
    Matthew Wilcox (Oracle),
Silva" , Dan Williams , Muhammad Usama Anjum , kernel@collabora.com, Gabriel Krisman Bertazi , David Hildenbrand , Peter Enderborg , "open list : KERNEL SELFTEST FRAMEWORK" , Shuah Khan , open list , "open list : PROC FILESYSTEM" , "open list : MEMORY MANAGEMENT" , Paul Gofman Subject: [PATCH v6 1/3] fs/proc/task_mmu: update functions to clear the soft-dirty PTE bit Date: Wed, 9 Nov 2022 15:23:01 +0500 Message-Id: <20221109102303.851281-2-usama.anjum@collabora.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20221109102303.851281-1-usama.anjum@collabora.com> References: <20221109102303.851281-1-usama.anjum@collabora.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Update the clear_soft_dirty() and clear_soft_dirty_pmd() to optionally clear and return the status if page is dirty. Signed-off-by: Muhammad Usama Anjum --- Changes in v2: - Move back the functions back to their original file --- fs/proc/task_mmu.c | 82 ++++++++++++++++++++++++++++------------------ 1 file changed, 51 insertions(+), 31 deletions(-) diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 8a74cdcc9af0..8235c536ac70 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -1095,8 +1095,8 @@ static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr, return page_maybe_dma_pinned(page); } -static inline void clear_soft_dirty(struct vm_area_struct *vma, - unsigned long addr, pte_t *pte) +static inline bool check_soft_dirty(struct vm_area_struct *vma, + unsigned long addr, pte_t *pte, bool clear) { /* * The soft-dirty tracker uses #PF-s to catch writes @@ -1105,55 +1105,75 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma, * of how soft-dirty works. 
 	 */
 	pte_t ptent = *pte;
+	int dirty = 0;
 
 	if (pte_present(ptent)) {
 		pte_t old_pte;
 
-		if (pte_is_pinned(vma, addr, ptent))
-			return;
-		old_pte = ptep_modify_prot_start(vma, addr, pte);
-		ptent = pte_wrprotect(old_pte);
-		ptent = pte_clear_soft_dirty(ptent);
-		ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
+		dirty = pte_soft_dirty(ptent);
+
+		if (dirty && clear && !pte_is_pinned(vma, addr, ptent)) {
+			old_pte = ptep_modify_prot_start(vma, addr, pte);
+			ptent = pte_wrprotect(old_pte);
+			ptent = pte_clear_soft_dirty(ptent);
+			ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
+		}
 	} else if (is_swap_pte(ptent)) {
-		ptent = pte_swp_clear_soft_dirty(ptent);
-		set_pte_at(vma->vm_mm, addr, pte, ptent);
+		dirty = pte_swp_soft_dirty(ptent);
+
+		if (dirty && clear) {
+			ptent = pte_swp_clear_soft_dirty(ptent);
+			set_pte_at(vma->vm_mm, addr, pte, ptent);
+		}
 	}
+
+	return !!dirty;
 }
 #else
-static inline void clear_soft_dirty(struct vm_area_struct *vma,
-		unsigned long addr, pte_t *pte)
+static inline bool check_soft_dirty(struct vm_area_struct *vma,
+				    unsigned long addr, pte_t *pte, bool clear)
 {
+	return false;
 }
 #endif
 
 #if defined(CONFIG_MEM_SOFT_DIRTY) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
-static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
-		unsigned long addr, pmd_t *pmdp)
+static inline bool check_soft_dirty_pmd(struct vm_area_struct *vma,
+					unsigned long addr, pmd_t *pmdp, bool clear)
 {
 	pmd_t old, pmd = *pmdp;
+	int dirty = 0;
 
 	if (pmd_present(pmd)) {
-		/* See comment in change_huge_pmd() */
-		old = pmdp_invalidate(vma, addr, pmdp);
-		if (pmd_dirty(old))
-			pmd = pmd_mkdirty(pmd);
-		if (pmd_young(old))
-			pmd = pmd_mkyoung(pmd);
-
-		pmd = pmd_wrprotect(pmd);
-		pmd = pmd_clear_soft_dirty(pmd);
-
-		set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
+		dirty = pmd_soft_dirty(pmd);
+		if (dirty && clear) {
+			/* See comment in change_huge_pmd() */
+			old = pmdp_invalidate(vma, addr, pmdp);
+			if (pmd_dirty(old))
+				pmd = pmd_mkdirty(pmd);
+			if (pmd_young(old))
+				pmd = pmd_mkyoung(pmd);
+
+			pmd = pmd_wrprotect(pmd);
+			pmd = pmd_clear_soft_dirty(pmd);
+
+			set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
+		}
 	} else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
-		pmd = pmd_swp_clear_soft_dirty(pmd);
-		set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
+		dirty = pmd_swp_soft_dirty(pmd);
+
+		if (dirty && clear) {
+			pmd = pmd_swp_clear_soft_dirty(pmd);
+			set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
+		}
 	}
+	return !!dirty;
 }
 #else
-static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
-		unsigned long addr, pmd_t *pmdp)
+static inline bool check_soft_dirty_pmd(struct vm_area_struct *vma,
+					unsigned long addr, pmd_t *pmdp, bool clear)
 {
+	return false;
 }
 #endif
 
@@ -1169,7 +1189,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
 		if (cp->type == CLEAR_REFS_SOFT_DIRTY) {
-			clear_soft_dirty_pmd(vma, addr, pmd);
+			check_soft_dirty_pmd(vma, addr, pmd, true);
 			goto out;
 		}
 
@@ -1195,7 +1215,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 		ptent = *pte;
 
 		if (cp->type == CLEAR_REFS_SOFT_DIRTY) {
-			clear_soft_dirty(vma, addr, pte);
+			check_soft_dirty(vma, addr, pte, true);
 			continue;
 		}
-- 
2.30.2
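
Not part of the patch, for reviewers who want to exercise the paths being
reworked: a minimal userspace sketch of the existing soft-dirty interface,
assuming a kernel built with CONFIG_MEM_SOFT_DIRTY. Writing "4" to
/proc/PID/clear_refs drives the clearing path touched above, and bit 55 of
a /proc/PID/pagemap entry reports whether the page has been written since
the last clear. Helper names and the printed expectations are illustrative
only.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Read the soft-dirty flag (pagemap bit 55) for the page containing addr. */
static int soft_dirty(void *addr)
{
	uint64_t ent;
	off_t off = (uintptr_t)addr / getpagesize() * sizeof(ent);
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0 || pread(fd, &ent, sizeof(ent), off) != sizeof(ent))
		return -1;
	close(fd);
	return (ent >> 55) & 1;
}

/* Writing "4" to clear_refs clears soft-dirty bits for the whole mm. */
static int clear_soft_dirty_bits(void)
{
	int fd = open("/proc/self/clear_refs", O_WRONLY);

	if (fd < 0 || write(fd, "4", 1) != 1)
		return -1;
	close(fd);
	return 0;
}

int main(void)
{
	char *p = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	p[0] = 1;					/* make the page present */
	clear_soft_dirty_bits();
	printf("after clear: %d\n", soft_dirty(p));	/* expected: 0 */
	p[0] = 2;					/* write fault sets the bit again */
	printf("after write: %d\n", soft_dirty(p));	/* expected: 1 */
	return 0;
}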