From: Yunsheng Lin
CC: Yunsheng Lin, Andrew Morton
Subject: [PATCH net-next v1 09/12] mm: page_frag: introduce prepare/commit API for page_frag
Date: Sun, 7 Apr 2024 21:08:46 +0800
Message-ID: <20240407130850.19625-10-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240407130850.19625-1-linyunsheng@huawei.com>
References: <20240407130850.19625-1-linyunsheng@huawei.com>

There are many use cases that need a minimum amount of memory in order
to make forward progress, but that can do better if more memory is
available.

Currently the skb_page_frag_refill() API is used to handle the above
use cases, and as mentioned in [1], its implementation is similar to
the one in the mm subsystem.

To unify those two page_frag implementations, introduce a prepare API
that ensures the minimum amount of memory is available and returns how
much memory is actually available to the caller. The caller can then
decide how much memory to use by calling the commit API, or use no
memory at all by simply not calling the commit API.

Note: it seems hard to decide which header file is needed for calling
virt_to_page() in an inline helper, so a macro is used instead of an
inline helper to avoid dealing with that.

1. https://lore.kernel.org/all/20240228093013.8263-1-linyunsheng@huawei.com/

Signed-off-by: Yunsheng Lin
---
 include/linux/page_frag_cache.h | 141 +++++++++++++++++++++++++++++++-
 mm/page_frag_cache.c            |  13 ++-
 2 files changed, 144 insertions(+), 10 deletions(-)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index a97a1ac017d6..28185969cd2c 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -43,8 +43,25 @@ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
-			 gfp_t gfp_mask);
+void *page_frag_cache_refill(struct page_frag_cache *nc, unsigned int fragsz,
+			     gfp_t gfp_mask);
+
+static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
+				       unsigned int fragsz, gfp_t gfp_mask)
+{
+	unsigned int offset;
+	void *va;
+
+	va = page_frag_cache_refill(nc, fragsz, gfp_mask);
+	if (unlikely(!va))
+		return NULL;
+
+	offset = nc->offset;
+	nc->pagecnt_bias--;
+	nc->offset = offset + fragsz;
+
+	return va + offset;
+}
 
 static inline void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 					       unsigned int fragsz,
@@ -69,6 +86,126 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, align);
 }
 
+static inline void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
+					       unsigned int *offset,
+					       unsigned int *size,
+					       gfp_t gfp_mask)
+{
+	void *va;
+
+	va = page_frag_cache_refill(nc, *size, gfp_mask);
+	if (unlikely(!va))
+		return NULL;
+
+	*offset = nc->offset;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	*size = nc->size_mask - *offset + 1;
+#else
+	*size = PAGE_SIZE - *offset;
+#endif
+
+	return va + *offset;
+}
+
+static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc,
+						     unsigned int *offset,
+						     unsigned int *size,
+						     unsigned int align,
+						     gfp_t gfp_mask)
+{
+	WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE ||
+		     *size < sizeof(unsigned int));
+
+	*offset = nc->offset;
+	nc->offset = ALIGN(*offset, align);
+	return page_frag_alloc_va_prepare(nc, offset, size, gfp_mask);
+}
+
+static inline void *__page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
+						 unsigned int *offset,
+						 unsigned int *size,
+						 gfp_t gfp_mask)
+{
+	void *va;
+
+	va = page_frag_cache_refill(nc, *size, gfp_mask);
+	if (unlikely(!va))
+		return NULL;
+
+	*offset = nc->offset;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	*size = nc->size_mask - *offset + 1;
+#else
+	*size = PAGE_SIZE - *offset;
+#endif
+
+	return va;
+}
+
+#define page_frag_alloc_pg_prepare(nc, offset, size, gfp)		\
+({									\
+	struct page *__page = NULL;					\
+	void *__va;							\
+									\
+	__va = __page_frag_alloc_pg_prepare(nc, offset, size, gfp);	\
+	if (likely(__va))						\
+		__page = virt_to_page(__va);				\
+									\
+	__page;								\
+})
+
+static inline void *__page_frag_alloc_prepare(struct page_frag_cache *nc,
+					      unsigned int *offset,
+					      unsigned int *size,
+					      void **va, gfp_t gfp_mask)
+{
+	void *nc_va;
+
+	nc_va = page_frag_cache_refill(nc, *size, gfp_mask);
+	if (unlikely(!nc_va))
+		return NULL;
+
+	*offset = nc->offset;
+	*va = nc_va + *offset;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+	*size = nc->size_mask - *offset + 1;
+#else
+	*size = PAGE_SIZE - *offset;
+#endif
+
+	return nc_va;
+}
+
+#define page_frag_alloc_prepare(nc, offset, size, va, gfp)		\
+({									\
+	struct page *__page = NULL;					\
+	void *__va;							\
+									\
+	__va = __page_frag_alloc_prepare(nc, offset, size, va, gfp);	\
+	if (likely(__va))						\
+		__page = virt_to_page(__va);				\
+									\
+	__page;								\
+})
+
+static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
+					   unsigned int offset,
+					   unsigned int size)
+{
+	nc->pagecnt_bias--;
+	nc->offset = offset + size;
+}
+
+static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
+						unsigned int offset,
+						unsigned int size)
+{
+	nc->offset = offset + size;
+}
+
 void page_frag_free_va(void *addr);
 
 #endif
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index ae1393d0619a..cbd0ed82a596 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -81,8 +81,8 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
-void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
-			 gfp_t gfp_mask)
+void *page_frag_cache_refill(struct page_frag_cache *nc, unsigned int fragsz,
+			     gfp_t gfp_mask)
 {
 	unsigned long size_mask;
 	unsigned int offset;
@@ -120,7 +120,7 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 		set_page_count(page, size_mask);
 		nc->pagecnt_bias |= size_mask;
 
-		offset = 0;
+		nc->offset = 0;
 		if (unlikely(fragsz > (size_mask + 1))) {
 			/*
 			 * The caller is trying to allocate a fragment
@@ -135,12 +135,9 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 		}
 	}
 
-	nc->pagecnt_bias--;
-	nc->offset = offset + fragsz;
-
-	return va + offset;
+	return va;
 }
-EXPORT_SYMBOL(page_frag_alloc_va);
+EXPORT_SYMBOL(page_frag_cache_refill);
 
 /*
  * Frees a page fragment allocated out of either a compound or order 0 page.
-- 
2.33.0
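
As a rough illustration of the intended calling pattern, here is a minimal,
hypothetical caller-side sketch (not part of this patch; example_fill_frag()
and MIN_FRAG_SZ are invented names): the caller asks for at least a minimum
size, gets back how much space is really available, and commits only what it
actually uses.

/* Hypothetical caller sketch, not from the patch itself. */
#include <linux/gfp.h>
#include <linux/string.h>
#include <linux/page_frag_cache.h>

#define MIN_FRAG_SZ	128	/* minimum needed for forward progress */

static void *example_fill_frag(struct page_frag_cache *nc,
			       const void *src, unsigned int len)
{
	unsigned int offset, size = MIN_FRAG_SZ;
	void *va;

	/* ensure at least MIN_FRAG_SZ bytes are available; on return,
	 * size holds the full space left in the current page fragment
	 */
	va = page_frag_alloc_va_prepare(nc, &offset, &size, GFP_ATOMIC);
	if (unlikely(!va))
		return NULL;

	/* deciding not to use any memory is just skipping the commit */
	if (len > size)
		return NULL;

	memcpy(va, src, len);

	/* commit only the bytes actually used; this also consumes one
	 * pagecnt_bias reference for the fragment
	 */
	page_frag_alloc_commit(nc, offset, len);

	return va;
}

page_frag_alloc_commit_noref() advances the offset in the same way but without
touching pagecnt_bias, presumably for callers that account the page reference
themselves.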
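
The page-based variant follows the same pattern and is where the virt_to_page()
note above matters: keeping page_frag_alloc_pg_prepare() as a macro avoids
pulling the header that virt_to_page() needs into page_frag_cache.h. A
hypothetical sketch (example_prepare_pg() is an invented name) of a caller that
wants a page + offset pair, in the style of skb_page_frag_refill() users:

/* Hypothetical caller sketch, not from the patch itself. */
#include <linux/gfp.h>
#include <linux/page_frag_cache.h>

static struct page *example_prepare_pg(struct page_frag_cache *nc,
				       unsigned int min_sz,
				       unsigned int *offset,
				       unsigned int *avail)
{
	/* request at least min_sz; *avail is updated to the space
	 * actually available behind the returned page at *offset
	 */
	*avail = min_sz;
	return page_frag_alloc_pg_prepare(nc, offset, avail, GFP_ATOMIC);
}

The caller would then hand the returned page and *offset to whatever consumes
them and finish with page_frag_alloc_commit() (or page_frag_alloc_commit_noref())
for the bytes it actually used, exactly as in the virtual-address case.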