Date: Wed, 29 Nov 2023 13:33:37 +0000
From: Alexandru Elisei
To: Hyesoo Yu
Cc: catalin.marinas@arm.com, will@kernel.org, oliver.upton@linux.dev,
    maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
    yuzenghui@huawei.com, arnd@arndb.de,
    akpm@linux-foundation.org, mingo@redhat.com, peterz@infradead.org,
    juri.lelli@redhat.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
    rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, bristot@redhat.com,
    vschneid@redhat.com, mhiramat@kernel.org, rppt@kernel.org, hughd@google.com,
    pcc@google.com, steven.price@arm.com, anshuman.khandual@arm.com,
    vincenzo.frascino@arm.com, david@redhat.com, eugenis@google.com, kcc@google.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    kvmarm@lists.linux.dev, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: Re: [PATCH RFC v2 16/27] arm64: mte: Manage tag storage on page allocation
References: <20231119165721.9849-1-alexandru.elisei@arm.com>
 <20231119165721.9849-17-alexandru.elisei@arm.com>
 <20231129091040.GC2988384@tiffany>
In-Reply-To: <20231129091040.GC2988384@tiffany>

Hi,

On Wed, Nov 29, 2023 at 06:10:40PM +0900, Hyesoo Yu wrote:
> On Sun, Nov 19, 2023 at 04:57:10PM +0000, Alexandru Elisei wrote:
> > [..]
> > +static int order_to_num_blocks(int order)
> > +{
> > +	return max((1 << order) / 32, 1);
> > +}
> > [..]
> > +int reserve_tag_storage(struct page *page, int order, gfp_t gfp)
> > +{
> > +	unsigned long start_block, end_block;
> > +	struct tag_region *region;
> > +	unsigned long block;
> > +	unsigned long flags;
> > +	unsigned int tries;
> > +	int ret = 0;
> > +
> > +	VM_WARN_ON_ONCE(!preemptible());
> > +
> > +	if (page_tag_storage_reserved(page))
> > +		return 0;
> > +
> > +	/*
> > +	 * __alloc_contig_migrate_range() ignores gfp when allocating the
> > +	 * destination page for migration. Regardless, massage gfp flags and
> > +	 * remove __GFP_TAGGED to avoid recursion in case gfp stops being
> > +	 * ignored.
> > +	 */
> > +	gfp &= ~__GFP_TAGGED;
> > +	if (!(gfp & __GFP_NORETRY))
> > +		gfp |= __GFP_RETRY_MAYFAIL;
> > +
> > +	ret = tag_storage_find_block(page, &start_block, &region);
> > +	if (WARN_ONCE(ret, "Missing tag storage block for pfn 0x%lx", page_to_pfn(page)))
> > +		return 0;
> > +	end_block = start_block + order_to_num_blocks(order) * region->block_size;
> > +
> 
> Hello.
> 
> If the page size is 4K, the block size is 2 pages (8K in bytes), and the order
> is 6, then we need 2 pages for the tags. However, according to the equation,
> order_to_num_blocks is 2 and block_size is also 2, so end_block will be
> incremented by 4.
> 
> But we only actually need 8K of tag storage for 256K of data, right?
> Could you explain order_to_num_blocks * region->block_size in more detail?

I think you are correct, thank you for pointing it out. The formula should
probably be something like:

static int order_to_num_blocks(int order, u32 block_size)
{
	int num_tag_pages = max((1 << order) / 32, 1);

	return DIV_ROUND_UP(num_tag_pages, block_size);
}

and that will make end_block = start_block + 2 in your scenario.

Does that look correct to you?

Thanks,
Alex

> 
> Thanks,
> Regards.
> 
> > +	mutex_lock(&tag_blocks_lock);
> > +
> > +	/* Check again, this time with the lock held.
> > +	 */
> > +	if (page_tag_storage_reserved(page))
> > +		goto out_unlock;
> > +
> > +	/* Make sure existing entries are not freed from under our feet. */
> > +	xa_lock_irqsave(&tag_blocks_reserved, flags);
> > +	for (block = start_block; block < end_block; block += region->block_size) {
> > +		if (tag_storage_block_is_reserved(block))
> > +			block_ref_add(block, region, order);
> > +	}
> > +	xa_unlock_irqrestore(&tag_blocks_reserved, flags);
> > +
> > +	for (block = start_block; block < end_block; block += region->block_size) {
> > +		/* Refcount incremented above. */
> > +		if (tag_storage_block_is_reserved(block))
> > +			continue;
> > +
> > +		tries = 3;
> > +		while (tries--) {
> > +			ret = alloc_contig_range(block, block + region->block_size, MIGRATE_CMA, gfp);
> > +			if (ret == 0 || ret != -EBUSY)
> > +				break;
> > +		}
> > +
> > +		if (ret)
> > +			goto out_error;
> > +
> > +		ret = tag_storage_reserve_block(block, region, order);
> > +		if (ret) {
> > +			free_contig_range(block, region->block_size);
> > +			goto out_error;
> > +		}
> > +
> > +		count_vm_events(CMA_ALLOC_SUCCESS, region->block_size);
> > +	}
> > +
> > +	page_set_tag_storage_reserved(page, order);
> > +out_unlock:
> > +	mutex_unlock(&tag_blocks_lock);
> > +
> > +	return 0;
> > +
> > +out_error:
> > +	xa_lock_irqsave(&tag_blocks_reserved, flags);
> > +	for (block = start_block; block < end_block; block += region->block_size) {
> > +		if (tag_storage_block_is_reserved(block) &&
> > +		    block_ref_sub_return(block, region, order) == 1) {
> > +			__xa_erase(&tag_blocks_reserved, block);
> > +			free_contig_range(block, region->block_size);
> > +		}
> > +	}
> > +	xa_unlock_irqrestore(&tag_blocks_reserved, flags);
> > +
> > +	mutex_unlock(&tag_blocks_lock);
> > +
> > +	count_vm_events(CMA_ALLOC_FAIL, region->block_size);
> > +
> > +	return ret;
> > +}
> > +
> > +void free_tag_storage(struct page *page, int order)
> > +{
> > +	unsigned long block, start_block, end_block;
> > +	struct tag_region *region;
> > +	unsigned long flags;
> > +	int ret;
> > +
> > +	ret = tag_storage_find_block(page, &start_block, &region);
> > +	if (WARN_ONCE(ret, "Missing tag storage block for pfn 0x%lx", page_to_pfn(page)))
> > +		return;
> > +
> > +	end_block = start_block + order_to_num_blocks(order) * region->block_size;
> > +
> > +	xa_lock_irqsave(&tag_blocks_reserved, flags);
> > +	for (block = start_block; block < end_block; block += region->block_size) {
> > +		if (WARN_ONCE(!tag_storage_block_is_reserved(block),
> > +		    "Block 0x%lx is not reserved for pfn 0x%lx", block, page_to_pfn(page)))
> > +			continue;
> > +
> > +		if (block_ref_sub_return(block, region, order) == 1) {
> > +			__xa_erase(&tag_blocks_reserved, block);
> > +			free_contig_range(block, region->block_size);
> > +		}
> > +	}
> > +	xa_unlock_irqrestore(&tag_blocks_reserved, flags);
> > +}
> > diff --git a/fs/proc/page.c b/fs/proc/page.c
> > index 195b077c0fac..e7eb584a9234 100644
> > --- a/fs/proc/page.c
> > +++ b/fs/proc/page.c
> > @@ -221,6 +221,7 @@ u64 stable_page_flags(struct page *page)
> >  #ifdef CONFIG_ARCH_USES_PG_ARCH_X
> >  	u |= kpf_copy_bit(k, KPF_ARCH_2, PG_arch_2);
> >  	u |= kpf_copy_bit(k, KPF_ARCH_3, PG_arch_3);
> > +	u |= kpf_copy_bit(k, KPF_ARCH_4, PG_arch_4);
> >  #endif
> >  
> >  	return u;
> > diff --git a/include/linux/kernel-page-flags.h b/include/linux/kernel-page-flags.h
> > index 859f4b0c1b2b..4a0d719ffdd4 100644
> > --- a/include/linux/kernel-page-flags.h
> > +++ b/include/linux/kernel-page-flags.h
> > @@ -19,5 +19,6 @@
> >  #define KPF_SOFTDIRTY		40
> >  #define KPF_ARCH_2		41
> >  #define KPF_ARCH_3		42
> > +#define KPF_ARCH_4		43
> >  
> >  #endif /* LINUX_KERNEL_PAGE_FLAGS_H */
> > diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> > index a88e64acebfe..7915165a51bd 100644
> > --- a/include/linux/page-flags.h
> > +++ b/include/linux/page-flags.h
> > @@ -135,6 +135,7 @@ enum pageflags {
> >  #ifdef CONFIG_ARCH_USES_PG_ARCH_X
> >  	PG_arch_2,
> >  	PG_arch_3,
> > +	PG_arch_4,
> >  #endif
> >  	__NR_PAGEFLAGS,
> >  
> > diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
> > index 6ca0d5ed46c0..ba962fd10a2c 100644
> > --- a/include/trace/events/mmflags.h
> > +++ b/include/trace/events/mmflags.h
> > @@ -125,7 +125,8 @@ IF_HAVE_PG_HWPOISON(hwpoison)		\
> >  IF_HAVE_PG_IDLE(idle)			\
> >  IF_HAVE_PG_IDLE(young)			\
> >  IF_HAVE_PG_ARCH_X(arch_2)		\
> > -IF_HAVE_PG_ARCH_X(arch_3)
> > +IF_HAVE_PG_ARCH_X(arch_3)		\
> > +IF_HAVE_PG_ARCH_X(arch_4)
> >  
> >  #define show_page_flags(flags)						\
> >  	(flags) ? __print_flags(flags, "|",				\
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index f31f02472396..9beead961a65 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -2474,6 +2474,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
> >  #ifdef CONFIG_ARCH_USES_PG_ARCH_X
> >  			 (1L << PG_arch_2) |
> >  			 (1L << PG_arch_3) |
> > +			 (1L << PG_arch_4) |
> >  #endif
> >  			 (1L << PG_dirty) |
> >  			 LRU_GEN_MASK | LRU_REFS_MASK));
> > -- 
> > 2.42.1
> > 
> > 