From: Yunsheng Lin <linyunsheng@huawei.com>
Subject: [PATCH rfc v4 2/4] page_pool: add interface to manipulate bias in page pool
Date: Tue, 13 Jul 2021 17:24:30 +0800
Message-ID: <1626168272-25622-3-git-send-email-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1626168272-25622-1-git-send-email-linyunsheng@huawei.com>
References: <1626168272-25622-1-git-send-email-linyunsheng@huawei.com>

As suggested by Alexander, "A DMA mapping should be page aligned anyway
so the lower 12 bits would be reserved 0", so it might make more sense
to repurpose the lower 12 bits of the dma address to store the bias
needed for frag page support in page pool on 32-bit systems with
64-bit dma, which should be rare these days.

On normal systems, dma_addr[1] in 'struct page' is not used, so we can
reuse it to store the bias.

The PAGE_POOL_USE_DMA_ADDR_1 macro decides where the bias is stored:
as "sizeof(dma_addr_t) > sizeof(unsigned long)" is false on normal
systems, the compiler should optimize out the unused branch there.

The newly added page_pool_set_bias() is not atomic, so it should only
be called before the page is passed to any user; once the page may
have concurrent users, use the newly added
page_pool_atomic_sub_bias_return() instead.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/net/page_pool.h | 70 ++++++++++++++++++++++++++++++++++++++++++++++---
 net/core/page_pool.c    | 10 +++++++
 2 files changed, 77 insertions(+), 3 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 8d7744d..315b9f2 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -198,21 +198,85 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 	page_pool_put_full_page(pool, page, true);
 }
 
+#define PAGE_POOL_USE_DMA_ADDR_1	(sizeof(dma_addr_t) > sizeof(unsigned long))
+
 static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
 {
-	dma_addr_t ret = page->dma_addr[0];
-	if (sizeof(dma_addr_t) > sizeof(unsigned long))
+	dma_addr_t ret;
+
+	if (PAGE_POOL_USE_DMA_ADDR_1) {
+		ret = READ_ONCE(page->dma_addr[0]) & PAGE_MASK;
 		ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
+	} else {
+		ret = page->dma_addr[0];
+	}
+
 	return ret;
 }
 
 static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
 {
 	page->dma_addr[0] = addr;
-	if (sizeof(dma_addr_t) > sizeof(unsigned long))
+	if (PAGE_POOL_USE_DMA_ADDR_1)
 		page->dma_addr[1] = upper_32_bits(addr);
 }
 
+static inline int page_pool_atomic_sub_bias_return(struct page *page, int nr)
+{
+	int bias;
+
+	if (PAGE_POOL_USE_DMA_ADDR_1) {
+		unsigned long *bias_ptr = &page->dma_addr[0];
+		unsigned long old_bias = READ_ONCE(*bias_ptr);
+		unsigned long new_bias;
+
+		do {
+			bias = (int)(old_bias & ~PAGE_MASK);
+
+			/* Warn when page_pool_dev_alloc_pages() is called
+			 * with the PP_FLAG_PAGE_FRAG flag in the driver.
+			 */
+			WARN_ON(!bias);
+
+			/* already the last user */
+			if (!(bias - nr))
+				return 0;
+
+			new_bias = old_bias - nr;
+		} while (!try_cmpxchg(bias_ptr, &old_bias, new_bias));
+
+		WARN_ON((new_bias & PAGE_MASK) != (old_bias & PAGE_MASK));
+
+		bias = new_bias & ~PAGE_MASK;
+	} else {
+		atomic_t *v = (atomic_t *)&page->dma_addr[1];
+
+		/* already the last user */
+		if (atomic_read(v) == nr)
+			return 0;
+
+		bias = atomic_sub_return(nr, v);
+		WARN_ON(bias < 0);
+	}
+
+	return bias;
+}
+
+static inline void page_pool_set_bias(struct page *page, int bias)
+{
+	if (PAGE_POOL_USE_DMA_ADDR_1) {
+		unsigned long dma_addr_0 = READ_ONCE(page->dma_addr[0]);
+
+		dma_addr_0 &= PAGE_MASK;
+		dma_addr_0 |= bias;
+
+		WRITE_ONCE(page->dma_addr[0], dma_addr_0);
+	} else {
+		atomic_t *v = (atomic_t *)&page->dma_addr[1];
+
+		atomic_set(v, bias);
+	}
+}
+
 static inline bool is_page_pool_compiled_in(void)
 {
 #ifdef CONFIG_PAGE_POOL
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 78838c6..6ac5b00 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -198,6 +198,16 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 	if (dma_mapping_error(pool->p.dev, dma))
 		return false;
 
+	if (PAGE_POOL_USE_DMA_ADDR_1 &&
+	    WARN_ON(pool->p.flags & PP_FLAG_PAGE_FRAG &&
+		    dma & ~PAGE_MASK)) {
+		dma_unmap_page_attrs(pool->p.dev, dma,
+				     PAGE_SIZE << pool->p.order,
+				     pool->p.dma_dir,
+				     DMA_ATTR_SKIP_CPU_SYNC);
+		return false;
+	}
+
 	page_pool_set_dma_addr(page, dma);
 
 	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
-- 
2.7.4
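
For illustration, below is a minimal standalone userspace sketch of the
bias-packing scheme used above: the low 12 bits of a page-aligned
address hold the bias, and concurrent users drop their share with a
compare-exchange loop, mirroring page_pool_atomic_sub_bias_return().
The SIM_PAGE_* macros and C11 atomics are stand-ins for the kernel's
PAGE_MASK and try_cmpxchg(); none of this is part of the patch.

#include <stdatomic.h>
#include <stdio.h>

#define SIM_PAGE_SHIFT	12
#define SIM_PAGE_SIZE	(1UL << SIM_PAGE_SHIFT)
#define SIM_PAGE_MASK	(~(SIM_PAGE_SIZE - 1))

/* stands in for page->dma_addr[0]: dma address in the upper bits,
 * bias in the lower 12 bits
 */
static _Atomic unsigned long packed;

static void sim_set_bias(unsigned long dma, int bias)
{
	/* the dma address is page aligned, so the low 12 bits are free */
	atomic_store(&packed, (dma & SIM_PAGE_MASK) | (unsigned long)bias);
}

/* returns the remaining bias, 0 when the caller was the last user */
static int sim_sub_bias_return(int nr)
{
	unsigned long old = atomic_load(&packed), new;

	do {
		int bias = (int)(old & ~SIM_PAGE_MASK);

		/* like the kernel helper, the last user returns 0 without
		 * writing; the owner resets the bias before reusing the page
		 */
		if (bias == nr)
			return 0;

		new = old - nr;
	} while (!atomic_compare_exchange_weak(&packed, &old, new));

	return (int)(new & ~SIM_PAGE_MASK);
}

int main(void)
{
	sim_set_bias(0x5000, 3);			/* 3 users of one page */
	printf("%d\n", sim_sub_bias_return(1));		/* 2 */
	printf("%d\n", sim_sub_bias_return(1));		/* 1 */
	printf("%d\n", sim_sub_bias_return(1));		/* 0: last user */
	return 0;
}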
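
Similarly, a hedged sketch of how a driver might consume the two new
helpers, assuming each pool page is split into fixed-size frags;
drv_alloc_frag_page(), drv_put_frag(), FRAG_SIZE and DRV_BIAS_MAX are
hypothetical driver-side names, not part of this series:

#include <net/page_pool.h>

#define FRAG_SIZE	2048			/* hypothetical frag size */
#define DRV_BIAS_MAX	(PAGE_SIZE / FRAG_SIZE)	/* users per page */

/* pool is assumed to be created with PP_FLAG_PAGE_FRAG set */
static struct page *drv_alloc_frag_page(struct page_pool *pool)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	/* page_pool_set_bias() is not atomic: set it before the page
	 * is visible to any other user
	 */
	if (page)
		page_pool_set_bias(page, DRV_BIAS_MAX);

	return page;
}

static void drv_put_frag(struct page_pool *pool, struct page *page)
{
	/* a return value of 0 means we were the last user */
	if (!page_pool_atomic_sub_bias_return(page, 1))
		page_pool_put_full_page(pool, page, false);
}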