From: Gregory CLEMENT
Miller" , linux-kernel@vger.kernel.org, netdev@vger.kernel.org Cc: Thomas Petazzoni , linux-arm-kernel@lists.infradead.org, Jason Cooper , Andrew Lunn , Sebastian Hesselbarth , Gregory CLEMENT , Yelena Krivosheev , Nadav Haklai , Marcin Wojtas , Dmitri Epshtein , Antoine Tenart , =?UTF-8?q?Miqu=C3=A8l=20Raynal?= , Maxime Chevallier Subject: [PATCH net-next v2 5/7] net: mvneta: Allocate page for the descriptor Date: Fri, 13 Jul 2018 18:18:39 +0200 Message-Id: <20180713161841.11202-6-gregory.clement@bootlin.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180713161841.11202-1-gregory.clement@bootlin.com> References: <20180713161841.11202-1-gregory.clement@bootlin.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Instead of trying to allocate the exact amount of memory for each descriptor use a page for each of them, it allows to simplify the allocation management and increase the performance of the driver. Based on the work of Yelena Krivosheev Signed-off-by: Gregory CLEMENT --- drivers/net/ethernet/marvell/mvneta.c | 66 ++++++++++-------------- drivers/net/ethernet/marvell/mvneta_bm.h | 3 -- 2 files changed, 26 insertions(+), 43 deletions(-) diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index 196205c79995..c203ea061ab9 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -1795,47 +1795,30 @@ static void mvneta_txq_done(struct mvneta_port *pp, } } -void *mvneta_frag_alloc(unsigned int frag_size) -{ - if (likely(frag_size <= PAGE_SIZE)) - return netdev_alloc_frag(frag_size); - else - return kmalloc(frag_size, GFP_ATOMIC); -} -EXPORT_SYMBOL_GPL(mvneta_frag_alloc); - -void mvneta_frag_free(unsigned int frag_size, void *data) -{ - if (likely(frag_size <= PAGE_SIZE)) - skb_free_frag(data); - else - kfree(data); -} -EXPORT_SYMBOL_GPL(mvneta_frag_free); - /* Refill processing for SW buffer management */ -static int mvneta_rx_refill(struct mvneta_port *pp, - struct mvneta_rx_desc *rx_desc, - struct mvneta_rx_queue *rxq) - +/* Allocate page per descriptor */ +static inline int mvneta_rx_refill(struct mvneta_port *pp, + struct mvneta_rx_desc *rx_desc, + struct mvneta_rx_queue *rxq, + gfp_t gfp_mask) { dma_addr_t phys_addr; - void *data; + struct page *page; - data = mvneta_frag_alloc(pp->frag_size); - if (!data) + page = __dev_alloc_page(gfp_mask); + if (!page) return -ENOMEM; - phys_addr = dma_map_single(pp->dev->dev.parent, data, - MVNETA_RX_BUF_SIZE(pp->pkt_size), - DMA_FROM_DEVICE); + /* map page for use */ + phys_addr = dma_map_page(pp->dev->dev.parent, page, 0, PAGE_SIZE, + DMA_FROM_DEVICE); if (unlikely(dma_mapping_error(pp->dev->dev.parent, phys_addr))) { - mvneta_frag_free(pp->frag_size, data); + __free_page(page); return -ENOMEM; } phys_addr += pp->rx_offset_correction; - mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq); + mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq); return 0; } @@ -1901,7 +1884,7 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp, dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr, MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE); - mvneta_frag_free(pp->frag_size, data); + __free_page(data); } } @@ -1928,6 +1911,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi, struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq); struct sk_buff *skb; unsigned char *data; + struct page *page; dma_addr_t phys_addr; u32 rx_status, frag_size; int rx_bytes, err, index; @@ -1936,7 +1920,10 
 		rx_status = rx_desc->status;
 		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
 		index = rx_desc - rxq->descs;
-		data = rxq->buf_virt_addr[index];
+		page = (struct page *)rxq->buf_virt_addr[index];
+		data = page_address(page);
+		/* Prefetch header */
+		prefetch(data);
 		phys_addr = rx_desc->buf_phys_addr - pp->rx_offset_correction;
 
 		if (!mvneta_rxq_desc_is_first_last(rx_status) ||
@@ -1979,7 +1966,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		}
 
 		/* Refill processing */
-		err = mvneta_rx_refill(pp, rx_desc, rxq);
+		err = mvneta_rx_refill(pp, rx_desc, rxq, GFP_KERNEL);
 		if (err) {
 			netdev_err(dev, "Linux processing - Can't refill\n");
 			rxq->refill_err++;
@@ -2773,9 +2760,11 @@ static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 	for (i = 0; i < num; i++) {
 		memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));
-		if (mvneta_rx_refill(pp, rxq->descs + i, rxq) != 0) {
-			netdev_err(pp->dev, "%s:rxq %d, %d of %d buffs filled\n",
-				   __func__, rxq->id, i, num);
+		if (mvneta_rx_refill(pp, rxq->descs + i, rxq,
+				     GFP_KERNEL) != 0) {
+			netdev_err(pp->dev,
+				   "%s:rxq %d, %d of %d buffs filled\n",
+				   __func__, rxq->id, i, num);
 			break;
 		}
 	}
@@ -3189,8 +3178,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 		mvneta_bm_update_mtu(pp, mtu);
 
 	pp->pkt_size = MVNETA_RX_PKT_SIZE(dev->mtu);
-	pp->frag_size = SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(pp->pkt_size)) +
-			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 
 	ret = mvneta_setup_rxqs(pp);
 	if (ret) {
@@ -3678,8 +3665,7 @@ static int mvneta_open(struct net_device *dev)
 	int ret;
 
 	pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
-	pp->frag_size = SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(pp->pkt_size)) +
-			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	pp->frag_size = PAGE_SIZE;
 
 	ret = mvneta_setup_rxqs(pp);
 	if (ret)
diff --git a/drivers/net/ethernet/marvell/mvneta_bm.h b/drivers/net/ethernet/marvell/mvneta_bm.h
index 9358626e51ec..c8425d35c049 100644
--- a/drivers/net/ethernet/marvell/mvneta_bm.h
+++ b/drivers/net/ethernet/marvell/mvneta_bm.h
@@ -130,9 +130,6 @@ struct mvneta_bm_pool {
 };
 
 /* Declarations and definitions */
-void *mvneta_frag_alloc(unsigned int frag_size);
-void mvneta_frag_free(unsigned int frag_size, void *data);
-
 #if IS_ENABLED(CONFIG_MVNETA_BM)
 struct mvneta_bm *mvneta_bm_get(struct device_node *node);
 void mvneta_bm_put(struct mvneta_bm *priv);
-- 
2.18.0
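
For readers who want the new allocation scheme in isolation, here is a minimal
sketch of the page-per-descriptor refill pattern the patch switches to, built
only on the standard kernel page allocation and DMA mapping APIs. The
my_port/my_rx_desc structures and the my_* function names are hypothetical
stand-ins, not the real mvneta layout, and the teardown helper shown is simply
the symmetric inverse of the mapping rather than the driver's exact drop path.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/skbuff.h>

/* Hypothetical, simplified stand-ins for the driver's port/descriptor. */
struct my_rx_desc {
	dma_addr_t buf_phys_addr;	/* address programmed into the HW */
	void *buf_cookie;		/* struct page * kept for teardown */
};

struct my_port {
	struct device *dma_dev;			/* device doing the DMA */
	unsigned int rx_offset_correction;	/* HW-specific offset, as in mvneta */
};

/* Refill one RX descriptor with a freshly allocated, DMA-mapped page. */
static int my_rx_refill(struct my_port *pp, struct my_rx_desc *desc,
			gfp_t gfp_mask)
{
	struct page *page;
	dma_addr_t phys_addr;

	/* One full page per descriptor: no MTU-derived size bookkeeping. */
	page = __dev_alloc_page(gfp_mask);
	if (!page)
		return -ENOMEM;

	phys_addr = dma_map_page(pp->dma_dev, page, 0, PAGE_SIZE,
				 DMA_FROM_DEVICE);
	if (dma_mapping_error(pp->dma_dev, phys_addr)) {
		__free_page(page);
		return -ENOMEM;
	}

	desc->buf_phys_addr = phys_addr + pp->rx_offset_correction;
	desc->buf_cookie = page;
	return 0;
}

/* Symmetric teardown: unmap the whole page and return it to the allocator. */
static void my_rx_desc_free(struct my_port *pp, struct my_rx_desc *desc)
{
	struct page *page = desc->buf_cookie;

	dma_unmap_page(pp->dma_dev,
		       desc->buf_phys_addr - pp->rx_offset_correction,
		       PAGE_SIZE, DMA_FROM_DEVICE);
	__free_page(page);
}

Because every buffer is exactly one page, pp->frag_size collapses to PAGE_SIZE
(last mvneta.c hunk above) and the mvneta_frag_alloc()/mvneta_frag_free()
helpers become dead code, which is why the patch also removes their
declarations from mvneta_bm.h.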