From: Gregory CLEMENT
To: Marcin Wojtas
Cc: "David S. Miller", linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
    Jisheng Zhang, Arnd Bergmann, Jason Cooper, Andrew Lunn,
    Sebastian Hesselbarth, Thomas Petazzoni,
    linux-arm-kernel@lists.infradead.org, Nadav Haklai, Dmitri Epshtein,
    Yelena Krivosheev
Subject: Re: [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address
Date: Tue, 29 Nov 2016 11:39:03 +0100
Message-ID: <87oa0yfu48.fsf@free-electrons.com>

Hi Marcin,

On Tue, Nov 29 2016, Marcin Wojtas wrote:

> Gregory,
>
> 2016-11-29 11:19 GMT+01:00 Gregory CLEMENT:
>> Hi Marcin,
>>
>> On Tue, Nov 29 2016, Marcin Wojtas wrote:
>>
>>> Hi Gregory,
>>>
>>> Another remark below, sorry for the noise.
>>>
>>> 2016-11-29 10:37 GMT+01:00 Gregory CLEMENT:
>>>> Until now, the virtual address of the received buffer was stored in
>>>> the cookie field of the rx descriptor. However, this field is only
>>>> 32 bits wide, which prevents the driver from being used on a 64-bit
>>>> architecture.
>>>>
>>>> With this patch, the virtual address is stored in an array that is
>>>> not shared with the hardware (so there is no need to use the DMA
>>>> API for it). Thanks to this, these accesses can be cached, contrary
>>>> to accesses to the rx descriptor members.
>>>>
>>>> The change is done in the swbm path only, because the hwbm path uses
>>>> the cookie field; this also means that currently hwbm is not usable
>>>> on 64-bit.
>>>>
>>>> Signed-off-by: Gregory CLEMENT
>>>> ---
>>>>  drivers/net/ethernet/marvell/mvneta.c | 93 ++++++++++++++++++++++++----
>>>>  1 file changed, 81 insertions(+), 12 deletions(-)
>>>>
>>>> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
>>>> index 1b84f746d748..32b142d0e44e 100644
>>>> --- a/drivers/net/ethernet/marvell/mvneta.c
>>>> +++ b/drivers/net/ethernet/marvell/mvneta.c
>>>> @@ -561,6 +561,9 @@ struct mvneta_rx_queue {
>>>>         u32 pkts_coal;
>>>>         u32 time_coal;
>>>>
>>>> +       /* Virtual address of the RX buffer */
>>>> +       void **buf_virt_addr;
>>>> +
>>>>         /* Virtual address of the RX DMA descriptors array */
>>>>         struct mvneta_rx_desc *descs;
>>>>
>>>> @@ -1573,10 +1576,14 @@ static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp,
>>>>
>>>>  /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */
>>>>  static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc,
>>>> -                               u32 phys_addr, u32 cookie)
>>>> +                               u32 phys_addr, void *virt_addr,
>>>> +                               struct mvneta_rx_queue *rxq)
>>>>  {
>>>> -       rx_desc->buf_cookie = cookie;
>>>> +       int i;
>>>> +
>>>>         rx_desc->buf_phys_addr = phys_addr;
>>>> +       i = rx_desc - rxq->descs;
>>>> +       rxq->buf_virt_addr[i] = virt_addr;
>>>>  }
>>>>
>>>>  /* Decrement sent descriptors counter */
>>>>
>>>> @@ -1781,7 +1788,8 @@ EXPORT_SYMBOL_GPL(mvneta_frag_free);
>>>>
>>>>  /* Refill processing for SW buffer management */
>>>>  static int mvneta_rx_refill(struct mvneta_port *pp,
>>>> -                           struct mvneta_rx_desc *rx_desc)
>>>> +                           struct mvneta_rx_desc *rx_desc,
>>>> +                           struct mvneta_rx_queue *rxq)
>>>>
>>>>  {
>>>>         dma_addr_t phys_addr;
>>>>
>>>> @@ -1799,7 +1807,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
>>>>                 return -ENOMEM;
>>>>         }
>>>>
>>>> -       mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)data);
>>>> +       mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
>>>>         return 0;
>>>>  }
>>>>
>>>> @@ -1861,7 +1869,12 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
>>>>
>>>>         for (i = 0; i < rxq->size; i++) {
>>>>                 struct mvneta_rx_desc *rx_desc = rxq->descs + i;
>>>> -               void *data = (void *)rx_desc->buf_cookie;
>>>> +               void *data;
>>>> +
>>>> +               if (!pp->bm_priv)
>>>> +                       data = rxq->buf_virt_addr[i];
>>>> +               else
>>>> +                       data = (void *)(uintptr_t)rx_desc->buf_cookie;
>>>
>>> Dropping packets for HWBM (in fact returning dropped buffers to the
>>> pool) is done a couple of lines above. This point will never be
>>
>> Indeed, I changed the code at every place buf_cookie was used and
>> missed the fact that for HWBM this code is never reached.
>>
>>> reached with HWBM enabled (and it's also incorrect).
>>
>> What is incorrect?
>
> Possible dma_unmapping + mvneta_frag_free for buffers in HWBM, when
> dropping packets.

Yes, sure, but as you mentioned, this code is never reached when HWBM is
enabled. I thought there was another part of the code to fix.

Thanks,

Gregory

>
> Thanks,
> Marcin

--
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com