From: Yunsheng Lin <linyunsheng@huawei.com>
Cc: Yunsheng Lin, Andrew Morton
Subject: [PATCH RFC 06/10] mm: page_frag: reuse MSB of 'size' field for pfmemalloc
Date: Thu, 28 Mar 2024 21:38:35 +0800
Message-ID: <20240328133839.13620-7-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240328133839.13620-1-linyunsheng@huawei.com>
References:
 <20240328133839.13620-1-linyunsheng@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

The '(PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)' case is for systems with a
page size smaller than 32KB. 32KB is 0x8000 bytes and would require 16
bits of space, so change 'size' to 'size_mask', which needs only 15
bits, and change the 'pfmemalloc' field to reuse the freed MSB, so that
the space originally needed by 'pfmemalloc' is removed. For the other
case, the MSB of 'offset' is reused for 'pfmemalloc'.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/linux/page_frag_cache.h | 13 ++++++++-----
 mm/page_frag_alloc.c            |  5 +++--
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index fe5faa80b6c3..40a7d6da9ef0 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -12,15 +12,16 @@ struct page_frag_cache {
 	void *va;
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	__u16 offset;
-	__u16 size;
+	__u16 size_mask:15;
+	__u16 pfmemalloc:1;
 #else
-	__u32 offset;
+	__u32 offset:31;
+	__u32 pfmemalloc:1;
 #endif
 	/* we maintain a pagecount bias, so that we dont dirty cache line
 	 * containing page->_refcount every time we allocate a fragment.
 	 */
 	unsigned int pagecnt_bias;
-	bool pfmemalloc;
 };
 
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
@@ -43,7 +44,9 @@ static inline void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 					       gfp_t gfp_mask,
 					       unsigned int align)
 {
-	nc->offset = ALIGN(nc->offset, align);
+	unsigned int offset = nc->offset;
+
+	nc->offset = ALIGN(offset, align);
 
 	return page_frag_alloc_va(nc, fragsz, gfp_mask);
 }
@@ -53,7 +56,7 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 					     gfp_t gfp_mask,
 					     unsigned int align)
 {
-	WARN_ON_ONCE(!is_power_of_2(align));
+	WARN_ON_ONCE(!is_power_of_2(align) || align >= PAGE_SIZE);
 
 	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, align);
 }
diff --git a/mm/page_frag_alloc.c b/mm/page_frag_alloc.c
index 7f639af4e518..a02e57a439f0 100644
--- a/mm/page_frag_alloc.c
+++ b/mm/page_frag_alloc.c
@@ -29,7 +29,8 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
 		    __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
 	page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
 				PAGE_FRAG_CACHE_MAX_ORDER);
-	nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+	nc->size_mask = page ? PAGE_FRAG_CACHE_MAX_SIZE - 1 : PAGE_SIZE - 1;
+	VM_BUG_ON(page && nc->size_mask != PAGE_FRAG_CACHE_MAX_SIZE - 1);
 #endif
 	if (unlikely(!page))
 		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
@@ -88,7 +89,7 @@ void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
 
 #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
 	/* if size can vary use size else just use PAGE_SIZE */
-	size = nc->size;
+	size = nc->size_mask + 1;
 #else
 	size = PAGE_SIZE;
 #endif
-- 
2.33.0