From: Yunsheng Lin <linyunsheng@huawei.com>
Cc: Yunsheng Lin, Andrew Morton
Subject: [PATCH RFC 07/10] mm: page_frag: reuse existing bit field of 'va' for pagecnt_bias
Date: Thu, 28 Mar 2024 21:38:36 +0800
Message-ID: <20240328133839.13620-8-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240328133839.13620-1-linyunsheng@huawei.com>
References: <20240328133839.13620-1-linyunsheng@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

As 'va' is always aligned to the order of the page allocated, its LSB
bits are guaranteed to be zero, so we can reuse them for the pagecount
bias and remove the original space needed by 'pagecnt_bias'.

Also limit 'fragsz' to be at least sizeof(unsigned int), so that the
number of fragments carved out of one page, and hence the pagecnt_bias
consumed between refills, always fits in those LSB bits.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/linux/page_frag_cache.h | 20 +++++++----
 mm/page_frag_alloc.c            | 63 +++++++++++++++++++--------------
 2 files changed, 50 insertions(+), 33 deletions(-)
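A note to make the pointer-tagging idiom concrete before the diff. The
following is a minimal user-space sketch, not kernel code; FRAG_MASK,
encode_bias() and decode_va() are made-up names for this note only.
Because 'va' is aligned to the size of the page(s) backing the cache,
its low bits are always zero, so a counter can live in them and the
usable pointer is recovered with a single mask:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define FRAG_MASK	(4096UL - 1)	/* PAGE_SIZE - 1 on a 4K-page system */

/* Pack a page-aligned pointer and a small counter into one word. */
static uintptr_t encode_bias(void *va, unsigned long bias)
{
	assert(((uintptr_t)va & FRAG_MASK) == 0);	/* low bits must be free */
	return (uintptr_t)va | (bias & FRAG_MASK);
}

/* Recover the aligned pointer by masking the counter back off. */
static void *decode_va(uintptr_t w)
{
	return (void *)(w & ~FRAG_MASK);
}

int main(void)
{
	void *va;

	/* posix_memalign() stands in for the kernel's page allocator. */
	if (posix_memalign(&va, 4096, 4096))
		return 1;

	uintptr_t w = encode_bias(va, FRAG_MASK);	/* bias starts at the mask */

	w--;	/* like 'nc->pagecnt_bias--': only the low bits change... */
	printf("va=%p bias=%lu\n",
	       decode_va(w), (unsigned long)(w & FRAG_MASK));	/* ...va intact */

	free(va);
	return 0;
}

The decrement can never carry into the pointer bits because fragsz is
capped below at sizeof(unsigned int): at most PAGE_SIZE / 4 fragments
(1024 on 4K pages) are handed out per refill, while the bias starts at
PAGE_SIZE - 1 (4095).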
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 40a7d6da9ef0..a97a1ac017d6 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -9,7 +9,18 @@
 #define PAGE_FRAG_CACHE_MAX_ORDER	get_order(PAGE_FRAG_CACHE_MAX_SIZE)
 
 struct page_frag_cache {
-	void *va;
+	union {
+		void *va;
+		/* we maintain a pagecount bias, so that we dont dirty cache
+		 * line containing page->_refcount every time we allocate a
+		 * fragment. As 'va' is always aligned with the order of the
+		 * page allocated, we can reuse the LSB bits for the pagecount
+		 * bias, and its bit width happens to be indicated by the
+		 * 'size_mask' below.
+		 */
+		unsigned long pagecnt_bias;
+
+	};
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	__u16 offset;
 	__u16 size_mask:15;
@@ -18,10 +29,6 @@ struct page_frag_cache {
 	__u32 offset:31;
 	__u32 pfmemalloc:1;
 #endif
-	/* we maintain a pagecount bias, so that we dont dirty cache line
-	 * containing page->_refcount every time we allocate a fragment.
-	 */
-	unsigned int pagecnt_bias;
 };
 
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
@@ -56,7 +63,8 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 					     gfp_t gfp_mask,
 					     unsigned int align)
 {
-	WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE);
+	WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE ||
+		     fragsz < sizeof(unsigned int));
 
 	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, align);
 }
diff --git a/mm/page_frag_alloc.c b/mm/page_frag_alloc.c
index a02e57a439f0..ae1393d0619a 100644
--- a/mm/page_frag_alloc.c
+++ b/mm/page_frag_alloc.c
@@ -18,8 +18,8 @@
 #include <linux/page_frag_cache.h>
 #include "internal.h"
 
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
-					     gfp_t gfp_mask)
+static bool __page_frag_cache_refill(struct page_frag_cache *nc,
+				     gfp_t gfp_mask)
 {
 	struct page *page = NULL;
 	gfp_t gfp = gfp_mask;
@@ -35,9 +35,26 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 	if (unlikely(!page))
 		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
 
-	nc->va = page ? page_address(page) : NULL;
+	if (unlikely(!page)) {
+		nc->va = NULL;
+		return false;
+	}
+
+	nc->va = page_address(page);
 
-	return page;
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	VM_BUG_ON(nc->pagecnt_bias & nc->size_mask);
+	page_ref_add(page, nc->size_mask - 1);
+	nc->pagecnt_bias |= nc->size_mask;
+#else
+	VM_BUG_ON(nc->pagecnt_bias & (PAGE_SIZE - 1));
+	page_ref_add(page, PAGE_SIZE - 2);
+	nc->pagecnt_bias |= (PAGE_SIZE - 1);
+#endif
+
+	nc->pfmemalloc = page_is_pfmemalloc(page);
+	nc->offset = 0;
+	return true;
 }
 
 void page_frag_cache_drain(struct page_frag_cache *nc)
@@ -67,38 +84,31 @@ EXPORT_SYMBOL(__page_frag_cache_drain);
 
 void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 			 gfp_t gfp_mask)
 {
-	unsigned int size, offset;
+	unsigned long size_mask;
+	unsigned int offset;
 	struct page *page;
+	void *va;
 
 	if (unlikely(!nc->va)) {
 refill:
-		page = __page_frag_cache_refill(nc, gfp_mask);
-		if (!page)
+		if (!__page_frag_cache_refill(nc, gfp_mask))
 			return NULL;
-
-		/* Even if we own the page, we do not use atomic_set().
-		 * This would break get_page_unless_zero() users.
-		 */
-		page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
-
-		/* reset page count bias and offset to start of new frag */
-		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-		nc->offset = 0;
 	}
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	/* if size can vary use size else just use PAGE_SIZE */
-	size = nc->size_mask + 1;
+	size_mask = nc->size_mask;
 #else
-	size = PAGE_SIZE;
+	size_mask = PAGE_SIZE - 1;
 #endif
 
+	va = (void *)((unsigned long)nc->va & ~size_mask);
 	offset = nc->offset;
-	if (unlikely(offset + fragsz > size)) {
-		page = virt_to_page(nc->va);
-		if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
+	if (unlikely(offset + fragsz > (size_mask + 1))) {
+		page = virt_to_page(va);
+
+		if (!page_ref_sub_and_test(page, nc->pagecnt_bias & size_mask))
 			goto refill;
 
 		if (unlikely(nc->pfmemalloc)) {
@@ -107,12 +117,11 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 		}
 
 		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		set_page_count(page, size_mask);
+		nc->pagecnt_bias |= size_mask;
 
-		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
 		offset = 0;
-		if (unlikely(fragsz > size)) {
+		if (unlikely(fragsz > (size_mask + 1))) {
 			/*
 			 * The caller is trying to allocate a fragment
 			 * with fragsz > PAGE_SIZE but the cache isn't big
@@ -129,7 +138,7 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 	nc->pagecnt_bias--;
 	nc->offset = offset + fragsz;
 
-	return nc->va + offset;
+	return va + offset;
 }
 EXPORT_SYMBOL(page_frag_alloc_va);
 
-- 
2.33.0
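A note for reviewers tracing the refcount arithmetic in
__page_frag_cache_refill() and page_frag_alloc_va() above: the
bookkeeping reduces to the following user-space simulation. The
page_ref_add()/page_ref_sub_and_test() here are plain-integer
stand-ins for the kernel helpers, and it assumes the #else branch
where size_mask == PAGE_SIZE - 1 on a 4K page.

#include <stdbool.h>
#include <stdio.h>

static int refcount = 1;	/* a freshly allocated page holds one ref */

static void page_ref_add(int n) { refcount += n; }
static bool page_ref_sub_and_test(int n) { refcount -= n; return refcount == 0; }

int main(void)
{
	const unsigned long size_mask = 4096 - 1;	/* 4K page */
	unsigned long bias;
	int i;

	/* Refill: one bulk bump pre-pays for up to size_mask fragments. */
	page_ref_add(size_mask - 1);	/* refcount: 1 -> 4095 */
	bias = size_mask;		/* bias:     4095, lives in va's LSBs */

	/* Hot path: each fragment consumes bias only; page->_refcount
	 * (and the cache line holding it) is never touched here.
	 */
	for (i = 0; i < 3; i++)
		bias--;

	/* Exhaustion: settle the books in a single refcount operation.
	 * refcount - bias == number of fragments still held by callers.
	 */
	if (page_ref_sub_and_test(bias))
		printf("no fragments outstanding, page can be recycled\n");
	else
		printf("fragments still hold %d refs, must refill\n", refcount);

	return 0;
}

The invariant is refcount == bias + (fragments handed out), so
page_ref_sub_and_test(bias) returns true exactly when every fragment
has been freed, and the page can then be reused in place via
set_page_count().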