Subject: Re: [PATCH net-next 1/4] net: mvneta: Convert to be 64 bits compatible
From: Florian Fainelli
To: Gregory CLEMENT, Arnd Bergmann
Cc: Jisheng Zhang, Thomas Petazzoni, Jason Cooper, Andrew Lunn,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Marcin Wojtas,
 "David S. Miller", linux-arm-kernel@lists.infradead.org,
 Sebastian Hesselbarth
Date: Thu, 24 Nov 2016 11:04:38 -0800
In-Reply-To: <8760ncly5s.fsf@free-electrons.com>
References: <20161122164844.19566-1-gregory.clement@free-electrons.com>
 <20161124163327.1cc261ab@xhacker> <21520380.oWTKcrq8DS@wuerfel>
 <8760ncly5s.fsf@free-electrons.com>

On 24/11/2016 07:01, Gregory CLEMENT wrote:
> Hi Arnd,
>
> On Thu, Nov 24 2016, Arnd Bergmann wrote:
>
>> On Thursday, November 24, 2016 4:37:36 PM CET Jisheng Zhang wrote:
>>> solB (a SW shadow cookie) perhaps gives better performance: in the
>>> hot path, such as mvneta_rx(), the driver accesses buf_cookie and
>>> buf_phys_addr of rx_desc, which is allocated by dma_alloc_coherent
>>> and is non-cacheable if the device isn't cache-coherent. I didn't
>>> measure the performance difference, because in fact we took solA
>>> internally as well. From your experience, is the performance gain
>>> worth the added code complexity?
>>
>> Yes, a read from uncached memory is fairly slow, so if you have a
>> chance to avoid that it will probably help. When adding complexity
>> to the code, it probably makes sense to take a runtime profile anyway
>> to quantify how much it gains.
>>
>> On machines that have cache-coherent DMA, accessing the descriptor
>> should be fine, as you already have to load the entire cache line
>> to read the status field.
>>
>> Looking at this snippet:
>>
>>	rx_status = rx_desc->status;
>>	rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
>>	data = (unsigned char *)rx_desc->buf_cookie;
>>	phys_addr = rx_desc->buf_phys_addr;
>>	pool_id = MVNETA_RX_GET_BM_POOL_ID(rx_desc);
>>	bm_pool = &pp->bm_priv->bm_pools[pool_id];
>>
>>	if (!mvneta_rxq_desc_is_first_last(rx_status) ||
>>	    (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
>> err_drop_frame_ret_pool:
>>		/* Return the buffer to the pool */
>>		mvneta_bm_pool_put_bp(pp->bm_priv, bm_pool,
>>				      rx_desc->buf_phys_addr);
>> err_drop_frame:
>>
>> I think there is more room for optimizing once you start: you read
>> the status field twice (the second time inside
>> MVNETA_RX_GET_BM_POOL_ID) and you can cache buf_phys_addr along with
>> the virtual address once you add that.
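For illustration, a minimal sketch of the single-read approach Arnd
describes. It assumes MVNETA_RXD_BM_POOL_MASK and
MVNETA_RXD_BM_POOL_SHIFT are the mask/shift pair behind
MVNETA_RX_GET_BM_POOL_ID(); a sketch, not the actual patch:

	/* Sketch only: read each descriptor field from (possibly
	 * uncached) coherent memory exactly once, then work from the
	 * local copies held in registers.
	 */
	u32 rx_status = rx_desc->status;	/* single uncached load */
	int rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
	unsigned char *data = (unsigned char *)rx_desc->buf_cookie;
	dma_addr_t phys_addr = rx_desc->buf_phys_addr;

	/* Derive the pool id from the status value we already hold,
	 * rather than letting MVNETA_RX_GET_BM_POOL_ID() dereference
	 * rx_desc a second time.
	 */
	u8 pool_id = (rx_status & MVNETA_RXD_BM_POOL_MASK) >>
		     MVNETA_RXD_BM_POOL_SHIFT;
	struct mvneta_bm_pool *bm_pool = &pp->bm_priv->bm_pools[pool_id];

	if (!mvneta_rxq_desc_is_first_last(rx_status) ||
	    (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
		/* The error path reuses the cached copy instead of
		 * re-reading rx_desc->buf_phys_addr from uncached memory.
		 */
		mvneta_bm_pool_put_bp(pp->bm_priv, bm_pool, phys_addr);
	}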
> I agree we can optimize this code, but it is not related to the 64-bit
> conversion. Indeed, this part runs when we use the HW buffer
> management; however, currently this part is not ready at all for 64
> bits. The virtual address is handled directly by the hardware, but
> there are only 32 bits to store it in the cookie. So if we want to use
> HWBM on 64 bits we need to redesign the code (maybe by storing the
> virtual address in an array and passing the index in the cookie).

Can't you make sure that skb->data is aligned to a value big enough
that you can still cover the physical address space of the adapter
within a 32-bit quantity if you drop the low bits that would be all
zeroes? That way, even though you only have 32 bits of storage per
cookie, those don't have to be the actual low 32 bits of your original
address, but could be addr >> 8 for instance.

As you indicate, using an index stored in the cookie might be a better
scheme though, since you could attach a lot more metadata to an index
into a local array (which could be in cached memory) as opposed to just
an address.
-- 
Florian
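To make the shift idea concrete, a minimal sketch, assuming buffers are
aligned to 256 bytes so a 32-bit cookie can cover a 40-bit physical
address space (the names are illustrative, not the driver's):

	#define MVNETA_COOKIE_SHIFT	8	/* 256-byte buffer alignment */

	/* Pack a DMA address into the 32-bit hardware cookie.  With the
	 * low 8 bits known to be zero, 32 + 8 = 40 address bits fit.
	 */
	static inline u32 mvneta_addr_to_cookie(dma_addr_t addr)
	{
		return (u32)(addr >> MVNETA_COOKIE_SHIFT);
	}

	static inline dma_addr_t mvneta_cookie_to_addr(u32 cookie)
	{
		return (dma_addr_t)cookie << MVNETA_COOKIE_SHIFT;
	}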
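And a sketch of the index-in-cookie scheme Gregory suggests; everything
here (mvneta_buf_info, buf_tbl, MVNETA_MAX_BUFS) is hypothetical, the
point being that the per-buffer metadata lives in cacheable memory and
only a small index crosses the hardware interface:

	struct mvneta_buf_info {
		void		*virt;	/* full 64-bit virtual address */
		dma_addr_t	phys;	/* cached copy of buf_phys_addr */
	};

	static struct mvneta_buf_info buf_tbl[MVNETA_MAX_BUFS];

	/* Record a buffer and return the cookie the hardware will carry. */
	static u32 mvneta_buf_to_cookie(u32 idx, void *virt, dma_addr_t phys)
	{
		buf_tbl[idx].virt = virt;
		buf_tbl[idx].phys = phys;
		return idx;		/* an index trivially fits in 32 bits */
	}

	/* Resolve a cookie back to the buffer; both reads hit cached RAM,
	 * not uncached descriptor memory.
	 */
	static void *mvneta_cookie_to_buf(u32 cookie, dma_addr_t *phys)
	{
		*phys = buf_tbl[cookie].phys;
		return buf_tbl[cookie].virt;
	}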