Subject: Re: [PATCH rfc v6 2/4] page_pool: add interface to manipulate frag count in page pool
To: Alexander Duyck
Cc: David Miller, Jakub Kicinski, Russell King - ARM Linux, Marcin Wojtas,
 Salil Mehta, Ilias Apalodimas, Alexei Starovoitov, Daniel Borkmann, John
 Fastabend, Andrew Morton, Peter Zijlstra, Will Deacon, Matthew Wilcox,
 Vlastimil Babka, Peter Xu, Feng Tang, Jason Gunthorpe, Matteo Croce,
 Hugh Dickins, Jonathan Lemon, Alexander Lobakin, Willem de Bruijn,
 Cong Wang, Kevin Hao, Marco Elver, Yonghong Song, Martin KaFai Lau,
 Netdev, LKML, bpf
References: <1626752145-27266-1-git-send-email-linyunsheng@huawei.com>
 <1626752145-27266-3-git-send-email-linyunsheng@huawei.com>
From: Yunsheng Lin
Message-ID: <92e68f4e-49a4-568c-a281-2865b54a146e@huawei.com>
Date: Wed, 21 Jul 2021 16:15:23 +0800
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021/7/20 23:43, Alexander Duyck wrote:
> On Mon, Jul 19, 2021 at 8:36 PM Yunsheng Lin wrote:
>>
>> For 32 bit systems with 64 bit dma, dma_addr[1] is used to
>> store the upper 32 bits of the dma addr; such systems should
>> be rare these days.
>>
>> For normal systems, dma_addr[1] in 'struct page' is not used,
>> so we can reuse dma_addr[1] to store the frag count, which
>> means how many frags this page might be split into.
>>
>> In order to simplify the page frag support in the page pool,
>> the PAGE_POOL_DMA_USE_PP_FRAG_COUNT macro is added to indicate
>> 32 bit systems with 64 bit dma, and the page frag support in
>> the page pool is disabled for such systems.
>>
>> The newly added page_pool_set_frag_count() is called to reserve
>> the maximum frag count before any page frag is passed to the
>> user. page_pool_atomic_sub_frag_count_return() is called when
>> the user is done with the page frag.
>>
>> Signed-off-by: Yunsheng Lin
>> ---
>>  include/linux/mm_types.h | 18 +++++++++++++-----
>>  include/net/page_pool.h  | 41 ++++++++++++++++++++++++++++++++++-------
>>  net/core/page_pool.c     |  4 ++++
>>  3 files changed, 51 insertions(+), 12 deletions(-)
>>
>
>
>
>> +static inline long page_pool_atomic_sub_frag_count_return(struct page *page,
>> +							   long nr)
>> +{
>> +	long frag_count = atomic_long_read(&page->pp_frag_count);
>> +	long ret;
>> +
>> +	if (frag_count == nr)
>> +		return 0;
>> +
>> +	ret = atomic_long_sub_return(nr, &page->pp_frag_count);
>> +	WARN_ON(ret < 0);
>> +	return ret;
>> +}
>>
>
> So this should just be an atomic_long_sub_return call. You should get
> rid of the atomic_long_read portion of this as it can cover up
> reference count errors.

The atomic_long_read() check is there to avoid the cache bouncing and
barrier that the atomic_long_sub_return() would otherwise cause for the
last user.

You are right that it may cover up reference count errors.
How about something like below:

static inline long page_pool_atomic_sub_frag_count_return(struct page *page,
							   long nr)
{
#ifdef CONFIG_DEBUG_PAGE_REF
	long ret = atomic_long_sub_return(nr, &page->pp_frag_count);

	WARN_ON(ret < 0);
	return ret;
#else
	if (atomic_long_read(&page->pp_frag_count) == nr)
		return 0;

	return atomic_long_sub_return(nr, &page->pp_frag_count);
#endif
}

Or any better suggestion?
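
To make the trade-off concrete, below is a rough user-space model of the
two variants (plain C11 atomics rather than the kernel's atomic_long_t;
the struct and helper names are made up for illustration and are not part
of the patch). It shows how the early read-and-compare lets the last user
skip the atomic RMW, but also how a buggy caller passing a too-large nr
can slip through unnoticed, while the always-sub_return variant eventually
trips the underflow check:

#include <stdatomic.h>
#include <stdio.h>

struct fake_page {
	atomic_long pp_frag_count;	/* stands in for page->pp_frag_count */
};

/* Variant from the patch: skip the atomic RMW when the caller holds all
 * remaining frags, so the last user avoids the sub_return (and the cache
 * bouncing/barrier that goes with it). */
static long sub_frag_count_return_fast(struct fake_page *page, long nr)
{
	long frag_count = atomic_load(&page->pp_frag_count);
	long ret;

	if (frag_count == nr)
		return 0;

	ret = atomic_fetch_sub(&page->pp_frag_count, nr) - nr;
	if (ret < 0)
		fprintf(stderr, "fast variant: underflow, ret=%ld\n", ret);
	return ret;
}

/* Variant suggested in the review: always do the RMW, so dropping more
 * frags than were reserved shows up as a negative result. */
static long sub_frag_count_return_checked(struct fake_page *page, long nr)
{
	long ret = atomic_fetch_sub(&page->pp_frag_count, nr) - nr;

	if (ret < 0)
		fprintf(stderr, "checked variant: underflow, ret=%ld\n", ret);
	return ret;
}

int main(void)
{
	struct fake_page page;

	/* Three frags reserved, as page_pool_set_frag_count() would do. */
	atomic_init(&page.pp_frag_count, 3);
	sub_frag_count_return_fast(&page, 1);	/* user 1 drops its frag */
	sub_frag_count_return_fast(&page, 2);	/* buggy user 2 drops 2 instead of 1:
						 * count == nr, shortcut returns 0,
						 * count is never decremented */
	sub_frag_count_return_fast(&page, 1);	/* user 3 drops its frag: result stays
						 * >= 0, the bug is never reported */

	atomic_store(&page.pp_frag_count, 3);
	sub_frag_count_return_checked(&page, 1);
	sub_frag_count_return_checked(&page, 2);	/* same buggy drop of 2 */
	sub_frag_count_return_checked(&page, 1);	/* goes negative -> reported */
	return 0;
}

With the debug config selected (whatever name it ends up with), the kernel
helper proposed above would behave like the "checked" variant; otherwise it
keeps the "fast" shortcut.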