From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, mike.kravetz@oracle.com
Cc: almasrymina@google.com, songmuchun@bytedance.com,
 baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/3] mm: rmap: Use flush_cache_range() to flush cache for hugetlb pages
Date: Wed, 27 Apr 2022 18:52:07 +0800
X-Mailer: git-send-email 1.8.3.1

Now we use flush_cache_page() to flush the cache for anonymous hugetlb
pages when unmapping or migrating a hugetlb page mapping. However,
flush_cache_page() only handles a PAGE_SIZE range on some architectures
(such as arm32 and arc), which can cause cache issues for the rest of
the huge page. Thus change to use flush_cache_range() to cover the whole
size of a hugetlb page.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/rmap.c | 90 ++++++++++++++++++++++++++++++++++-----------------------------
 1 file changed, 48 insertions(+), 42 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 4f0d115..6fdd198 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1528,13 +1528,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		anon_exclusive = folio_test_anon(folio) &&
 				 PageAnonExclusive(subpage);
 
-		if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
-			/*
-			 * To call huge_pmd_unshare, i_mmap_rwsem must be
-			 * held in write mode. Caller needs to explicitly
-			 * do this outside rmap routines.
-			 */
-			VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+		if (folio_test_hugetlb(folio)) {
 			/*
 			 * huge_pmd_unshare may unmap an entire PMD page.
 			 * There is no way of knowing exactly which PMDs may
@@ -1544,22 +1538,31 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			flush_cache_range(vma, range.start, range.end);
 
-			if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
-				flush_tlb_range(vma, range.start, range.end);
-				mmu_notifier_invalidate_range(mm, range.start,
-							      range.end);
-
+			if (!folio_test_anon(folio)) {
 				/*
-				 * The ref count of the PMD page was dropped
-				 * which is part of the way map counting
-				 * is done for shared PMDs. Return 'true'
-				 * here. When there is no other sharing,
-				 * huge_pmd_unshare returns false and we will
-				 * unmap the actual page and drop map count
-				 * to zero.
+				 * To call huge_pmd_unshare, i_mmap_rwsem must be
+				 * held in write mode. Caller needs to explicitly
+				 * do this outside rmap routines.
 				 */
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+
+				if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
+					flush_tlb_range(vma, range.start, range.end);
+					mmu_notifier_invalidate_range(mm, range.start,
+								      range.end);
+
+					/*
+					 * The ref count of the PMD page was dropped
+					 * which is part of the way map counting
+					 * is done for shared PMDs. Return 'true'
+					 * here. When there is no other sharing,
+					 * huge_pmd_unshare returns false and we will
+					 * unmap the actual page and drop map count
+					 * to zero.
+					 */
+					page_vma_mapped_walk_done(&pvmw);
+					break;
+				}
 			}
 		} else {
 			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
@@ -1885,13 +1888,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 		anon_exclusive = folio_test_anon(folio) &&
 				 PageAnonExclusive(subpage);
 
-		if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
-			/*
-			 * To call huge_pmd_unshare, i_mmap_rwsem must be
-			 * held in write mode. Caller needs to explicitly
-			 * do this outside rmap routines.
-			 */
-			VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+		if (folio_test_hugetlb(folio)) {
 			/*
 			 * huge_pmd_unshare may unmap an entire PMD page.
 			 * There is no way of knowing exactly which PMDs may
@@ -1901,22 +1898,31 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			flush_cache_range(vma, range.start, range.end);
 
-			if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
-				flush_tlb_range(vma, range.start, range.end);
-				mmu_notifier_invalidate_range(mm, range.start,
-							      range.end);
-
+			if (!folio_test_anon(folio)) {
 				/*
-				 * The ref count of the PMD page was dropped
-				 * which is part of the way map counting
-				 * is done for shared PMDs. Return 'true'
-				 * here. When there is no other sharing,
-				 * huge_pmd_unshare returns false and we will
-				 * unmap the actual page and drop map count
-				 * to zero.
+				 * To call huge_pmd_unshare, i_mmap_rwsem must be
+				 * held in write mode. Caller needs to explicitly
+				 * do this outside rmap routines.
 				 */
-				page_vma_mapped_walk_done(&pvmw);
-				break;
+				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+
+				if (huge_pmd_unshare(mm, vma, &address, pvmw.pte)) {
+					flush_tlb_range(vma, range.start, range.end);
+					mmu_notifier_invalidate_range(mm, range.start,
+								      range.end);
+
+					/*
+					 * The ref count of the PMD page was dropped
+					 * which is part of the way map counting
+					 * is done for shared PMDs. Return 'true'
+					 * here. When there is no other sharing,
+					 * huge_pmd_unshare returns false and we will
+					 * unmap the actual page and drop map count
+					 * to zero.
+					 */
+					page_vma_mapped_walk_done(&pvmw);
+					break;
+				}
 			}
 		} else {
 			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
--
1.8.3.1
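
P.S. A quick standalone sketch of the arithmetic behind this change, assuming a
4 KiB base page and a 2 MiB hugetlb page (illustrative values, not taken from
the patch): a flush_cache_page()-style flush only covers one PAGE_SIZE window
around the given address, while the patch hands the whole hugetlb range to
flush_cache_range(vma, range.start, range.end). The sizes, addresses and the
program itself are assumptions for illustration only, not kernel code.

/*
 * Illustration only (userspace): coverage of a single PAGE_SIZE flush
 * versus a range flush over a whole hugetlb page. Sizes are assumed.
 */
#include <stdio.h>

#define PAGE_SIZE_EX   (4UL * 1024)          /* assumed 4 KiB base page */
#define HPAGE_SIZE_EX  (2UL * 1024 * 1024)   /* assumed 2 MiB hugetlb page */

int main(void)
{
	/* Some address inside a hugetlb mapping (made up for the example). */
	unsigned long address = 0x40000000UL + 0x123000UL;

	/* What a PAGE_SIZE-only cache flush covers. */
	unsigned long page_start = address & ~(PAGE_SIZE_EX - 1);
	unsigned long page_end   = page_start + PAGE_SIZE_EX;

	/* The range a whole-huge-page flush has to cover. */
	unsigned long range_start = address & ~(HPAGE_SIZE_EX - 1);
	unsigned long range_end   = range_start + HPAGE_SIZE_EX;

	printf("single-page flush: [%#lx, %#lx) = %lu bytes\n",
	       page_start, page_end, page_end - page_start);
	printf("whole-page flush : [%#lx, %#lx) = %lu bytes\n",
	       range_start, range_end, range_end - range_start);
	printf("left uncovered by the single-page flush: %lu bytes (%lu base pages)\n",
	       HPAGE_SIZE_EX - PAGE_SIZE_EX, HPAGE_SIZE_EX / PAGE_SIZE_EX - 1);
	return 0;
}

With these assumed sizes a 2 MiB hugetlb page spans 512 base pages, so a
single-page flush leaves 511 of them untouched; that is why the hugetlb path
flushes the whole range.start..range.end window instead.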