Subject: Re: [RFC PATCH 2/2] mtd: devices: m25p80: Enable spi-nor bounce buffer support
From: Cyrille Pitchen
To: Boris Brezillon, Vignesh R
Cc: Frode Isaksen, Mark Brown, Richard Weinberger, David Woodhouse, Brian Norris, Marek Vasut, ...
Date: Thu, 2 Mar 2017 17:45:41 +0100
Message-ID: <5e83b0c0-f1c3-399f-f4f5-afb92af7d7ae@atmel.com>
In-Reply-To: <20170302152921.1c031b57@bbrezillon>
References: <20170227120839.16545-1-vigneshr@ti.com> <20170227120839.16545-3-vigneshr@ti.com> <8f999a27-c3ce-2650-452c-b21c3e44989d@ti.com> <20170301175506.202cb478@bbrezillon> <09ffe06d-565d-afe8-8b7d-d1a0b575595b@baylibre.com> <4cd22ddd-b108-f697-0bde-ad844a386e62@ti.com> <20170302152921.1c031b57@bbrezillon>
X-Mailing-List: linux-kernel@vger.kernel.org

On 02/03/2017 at 15:29, Boris Brezillon wrote:
> On Thu, 2 Mar 2017 19:24:43 +0530
> Vignesh R wrote:
>
>>>>>>
>>>>> Not really, I am debugging another issue with UBIFS on a DRA74 EVM (ARM
>>>>> Cortex-A15) wherein pages allocated by vmalloc are in the highmem region,
>>>>> are backed by LPAE, and are not addressable using 32-bit addresses, so a
>>>>> 32-bit DMA cannot access these buffers at all.
>>>>> When dma_map_sg() is called by spi_map_buf() to map these pages, the
>>>>> physical address is simply truncated to 32 bits in pfn_to_dma() (as part
>>>>> of the dma_map_sg() call). This results in random crashes as the DMA
>>>>> starts accessing random memory during the SPI read.
>>>>>
>>>>> IMO, there may be more undiscovered caveats with using dma_map_sg() for
>>>>> non-kmalloc'd buffers, and it's better that spi-nor starts handling these
>>>>> buffers instead of relying on spi_map_msg() and working around it every
>>>>> time something pops up.
>>>>>
>>>> Ok, I had a closer look at the SPI framework, and it seems there's a
>>>> way to tell the core that a specific transfer cannot use DMA
>>>> (->can_dma()). The first thing you should do is fix the spi-davinci
>>>> driver:
>>>>
>>>> 1/ implement ->can_dma()
>>>> 2/ patch davinci_spi_bufs() to take the decision to do DMA or not on a
>>>> per-xfer basis and not on a per-device basis
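Just to make 1/ concrete, here is an untested sketch of what such a
->can_dma() hook could look like for spi-davinci. Only the vmalloc check
is the point here; the DAVINCI_SPI_DMA_MIN_BYTES cut-off is my
assumption, not the actual driver code:

#include <linux/mm.h>
#include <linux/spi/spi.h>

#define DAVINCI_SPI_DMA_MIN_BYTES	16	/* assumed PIO/DMA cut-off */

static bool davinci_spi_can_dma(struct spi_master *master,
				struct spi_device *spi,
				struct spi_transfer *xfer)
{
	/* Small transfers are cheaper in PIO anyway. */
	if (xfer->len < DAVINCI_SPI_DMA_MIN_BYTES)
		return false;

	/*
	 * vmalloc'ed buffers (UBIFS, JFFS2, ...) may sit in highmem and
	 * are not guaranteed to be DMA-addressable: let the core fall
	 * back to PIO for those transfers.
	 */
	if ((xfer->tx_buf && is_vmalloc_addr(xfer->tx_buf)) ||
	    (xfer->rx_buf && is_vmalloc_addr(xfer->rx_buf)))
		return false;

	return true;
}

The probe would then set master->can_dma = davinci_spi_can_dma, and for
2/ davinci_spi_bufs() would apply the same per-transfer test instead of
a per-device decision.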
>>
>> This would lead to poor performance, defeating the entire purpose of
>> using DMA.
>
> Hm, that's not really true. For all cases where you have a DMA-able
> buffer it would still use DMA. For the other cases (like the UBI+SPI-NOR
> case we're talking about here), yes, it will be slower, but slower is
> still better than buggy.
> So, in any case, I think the fixes pointed out by Frode are needed.
>
>>>> Then we can start thinking about how to improve performance by using
>>>> a bounce buffer for large transfers, but I'm still not sure this
>>>> should be done at the MTD level...
>>
>> If it's at the SPI level, then I guess each individual driver which
>> cannot handle vmalloc'd buffers will have to implement the bounce
>> buffer logic.
>
> Well, that's my opinion. The only one that can decide when to do PIO,
> when to use DMA, or when to use a bounce buffer+DMA is the SPI
> controller.
> If you move this logic to the SPI NOR layer, you'll have to guess what
> the best approach is, and I fear the decision will be wrong on some
> platforms (leading to perf degradation).

True. For instance, Atmel SAMA5* SoCs don't need this bounce buffer
since their L1 data cache uses a PIPT scheme.

http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0433c/CHDFAHBD.html

"""
2.1.4. Data side memory system

Data Cache Unit

The Data Cache Unit (DCU) consists of the following sub-blocks:

The Level 1 (L1) data cache controller, which generates the control
signals for the associated embedded tag, data, and dirty memory (RAMs)
and arbitrates between the different sources requesting access to the
memory resources. The data cache is 4-way set associative and uses a
Physically Indexed Physically Tagged (PIPT) scheme for lookup, which
enables unambiguous address management in the system.
"""

So for those SoCs, spi_map_msg() should be safe to handle vmalloc'ed
buffers, since they don't have to worry about cache aliasing issues or
address truncation.

That's why I don't think setting SNOR_F_USE_BOUNCE_BUFFER in *all* cases
in m25p80 is the right solution: it would not be fair to degrade the
performance of some devices when it's not needed, hence not justified.

I still agree with the idea of patch 1, but regarding patch 2, if m25p80
users want to take advantage of this new spi-nor bounce buffer, we have
to agree on a reliable mechanism that clearly tells whether or not the
SNOR_F_USE_BOUNCE_BUFFER flag is to be set from m25p80.
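For instance, something along these lines (purely hypothetical, just to
illustrate the kind of mechanism I mean: SPI_MASTER_HANDLES_VMALLOC does
not exist today and stands for whatever reliable flag we would agree on;
SNOR_F_USE_BOUNCE_BUFFER is the flag introduced by patch 1 of this
series):

#include <linux/bitops.h>
#include <linux/mtd/spi-nor.h>
#include <linux/spi/spi.h>

/*
 * Invented flag, not in spi.h: would be set by controller drivers whose
 * DMA path copes with vmalloc'ed buffers (PIPT D-cache, no address
 * truncation, ...).
 */
#define SPI_MASTER_HANDLES_VMALLOC	BIT(7)

static void m25p80_select_bounce(struct spi_nor *nor, struct spi_device *spi)
{
	struct spi_master *master = spi->master;

	/* Only request the spi-nor bounce buffer when actually needed. */
	if (!(master->flags & SPI_MASTER_HANDLES_VMALLOC))
		nor->flags |= SNOR_F_USE_BOUNCE_BUFFER;
}

That way only the controllers that actually need the bounce buffer would
pay for it.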
> You're mentioning code duplication in each SPI controller; I agree,
> this is far from ideal, but what you're suggesting is not necessarily
> better. What if another SPI user starts passing vmalloc'ed buffers to
> the SPI controller? You'll have to duplicate the bounce-buffer logic in
> this user as well.
>
>> Or the SPI core can be extended in a way similar to this RFC. That is,
>> the SPI master driver will set a flag to request that the SPI core use
>> a bounce buffer for vmalloc'd buffers. And, based on that flag,
>> spi_map_buf() just uses the bounce buffer whenever the buffer does not
>> belong to the kmalloc region.
>
> That's a better approach IMHO. Note that the decision should not only
> be based on the buffer type, but also on the transfer length and/or
> whether the controller supports transferring buffers that are not
> physically contiguous.
>
> Maybe we should just extend ->can_dma() to let the core know if it
> should use a bounce buffer.
>
> Regarding the bounce buffer allocation logic, I'm not sure how it
> should be done. The SPI user should be able to determine a max transfer
> length (at least this is the case for SPI NORs) and inform the SPI
> layer about this boundary so that the SPI core can allocate a bounce
> buffer of this size. But we also have limitations at the SPI master
> level (->max_transfer_size(), ->max_message_size()).
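For what it's worth, the core-side TX half could look roughly like this.
A sketch only: the bounce_buf/bounce_len extension of spi_master is
invented to keep the snippet self-contained, and the RX direction would
need the mirror copy-back once the DMA completes:

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/spi/spi.h>
#include <linux/string.h>

/* Invented extension, not in spi.h: a kmalloc'ed (hence DMA-able)
 * bounce buffer, sized from the max transfer length advertised by the
 * SPI user, as discussed above.
 */
struct spi_master_ext {
	struct spi_master master;
	void *bounce_buf;
	size_t bounce_len;
};

/*
 * Substitute the bounce buffer for a vmalloc'ed TX buffer before the
 * transfer is DMA-mapped, so spi_map_buf() never sees the vmalloc'ed
 * pages. The original tx_buf would have to be restored after the
 * message completes.
 */
static int spi_bounce_tx(struct spi_master_ext *ext,
			 struct spi_transfer *xfer)
{
	if (!xfer->tx_buf || !is_vmalloc_addr(xfer->tx_buf))
		return 0;

	if (xfer->len > ext->bounce_len)
		return -EMSGSIZE;	/* the caller must split the transfer */

	memcpy(ext->bounce_buf, xfer->tx_buf, xfer->len);
	xfer->tx_buf = ext->bounce_buf;

	return 0;
}

The RX path is the tricky part, since the data must be copied back to
the vmalloc'ed buffer after the DMA completes, and that is also where
the max transfer length negotiation you mention becomes necessary.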