Subject: Re: [PATCH v1 3/7] spi: qspi-tegra: Add support for Tegra210 QSPI controller
From: Sowjanya Komatineni <skomatineni@nvidia.com>
To: Mark Brown
Date: Fri, 4 Dec 2020 20:10:18 -0800
Message-ID: <77693de0-4e9e-79b9-0f73-0eaa35ace8e0@nvidia.com>
In-Reply-To: <20201204224648.GI4558@sirena.org.uk>
References: <1606857168-5839-1-git-send-email-skomatineni@nvidia.com> <1606857168-5839-4-git-send-email-skomatineni@nvidia.com> <20201202172721.GL5560@sirena.org.uk> <2257bc33-80ef-a6d8-8542-480defa32937@nvidia.com> <20201204185223.GF4558@sirena.org.uk> <20201204224648.GI4558@sirena.org.uk>
List-ID: linux-kernel@vger.kernel.org

On 12/4/20 2:46 PM, Mark Brown wrote:
> On Fri, Dec 04, 2020 at 01:04:46PM -0800, Sowjanya Komatineni wrote:
>> On 12/4/20 10:52 AM, Mark Brown wrote:
>>> On Thu, Dec 03, 2020 at 04:22:54PM -0800, Sowjanya Komatineni wrote:
>>>> Also, unpacked mode needs to manually put the bytes together from the
>>>> read data into the SPI core rx buffer.
>>> Could you be more explicit here? I don't know what "unpack mode" is.
>> The Tegra SPI/QSPI controller supports packed mode and unpacked mode,
>> selected based on the bits per word of a transfer.
>> Packed mode: when enabled, all 32 bits of each FIFO word contain valid
>> data, as packed 8-bit/16-bit/32-bit units.
>> Unpacked mode: for transfers such as 24 bits per word, we disable packed
>> mode; only 24 bits of each FIFO word are valid and the other bits are
>> zeros. So when filling the FIFO for TX, and when reading FIFO data on
>> receive, software needs to skip the invalid bits and align the byte
>> order from/to the SPI core tx/rx buffers.
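The unpacked-mode RX fixup described above can be sketched in plain C. This is an illustration only, not code from the driver under review: the function and buffer names are hypothetical, and it assumes each 32-bit FIFO word carries one data word in its low bits with the remaining bits reading as zero.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of unpacked-mode RX: copy only the valid low bytes
 * of each 32-bit FIFO word into the SPI core rx buffer, preserving byte
 * order. For 24 bits per word, 3 of every 4 FIFO bytes are valid. */
static void unpacked_fifo_to_rxbuf(const uint32_t *fifo, size_t nwords,
                                   unsigned int bits_per_word,
                                   uint8_t *rxbuf)
{
    size_t bytes_per_word = (bits_per_word + 7) / 8; /* 3 for 24 bpw */

    for (size_t i = 0; i < nwords; i++) {
        uint32_t w = fifo[i];

        /* Little-endian extraction: skip the zero-filled high bytes. */
        for (size_t b = 0; b < bytes_per_word; b++)
            *rxbuf++ = (uint8_t)(w >> (8 * b));
    }
}
```

For 8/16/32 bits per word the controller can stay in packed mode and no such copy is needed, which is the overhead trade-off discussed below.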
> That's pretty surprising - is it really worth the overhead of using
> non-packed mode compared to just doing the transfer in 8 bit mode? In
> any case it seems better to only do the memcpy() stuff in the cases
> where it's actually required since it looks like fairly obvious overhead
> otherwise, and the code could use some comments explaining why we're
> doing this. It may actually be that the implementation is already doing
> the most sensible thing and it just needs more comments explaining why
> that's the case.

I understand the overhead, but if any device-specific transfer uses/needs
24 bits per word, then without unpacked mode we would have to fail the
transfer. Tegra HW provides unpacked mode for exactly such cases.

OK. Will use dma_map/unmap for packed-mode transfers; for unpacked mode,
will use a DMA buffer for the FIFO data and then fill the SPI core rx_buf
with the valid bytes from the DMA buffer contents.

Sure, will add comments explaining the unpacked-mode logic.

>>> This is not a good idea, attempting to reverse engineer the message and
>>> guess at the contents isn't going to be robust and if it's useful it
>>> will if nothing else lead to a bunch of duplicated code in drivers as
>>> every device that has this feature will need to reimplement it. Instead
>>> we should extend the framework so there's explicit support for
>>> specifying transfers that are padding bytes, then there's no guesswork
>>> that can go wrong and no duplicated code between drivers. A flag in the
>>> transfer struct might work?
>> As per the QSPI spec, dummy bytes for initial read latency are always
>> 0xFF, so it's not like guessing the contents.
> The guesswork I was thinking of was deciding to do this rather than the
> pattern being output - the bit where the driver figures out that the
> intent of that transfer is to provide dummy bytes.
>
>> The Tegra QSPI controller HW supports transferring dummy bytes (sending
>> 0xFFs after the address) based on the number of dummy clock cycles
>> programmed.
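For concreteness, the heuristic being objected to above (which the thread later agrees to drop in favour of an explicit field) amounts to something like the following hypothetical check, where a tx-only transfer whose payload is all 0xFF is guessed to be the dummy phase of a flash read:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of the "guess the dummy phase" heuristic under
 * discussion: scan a tx buffer and report whether every byte is 0xFF.
 * This is exactly the kind of intent-guessing the framework flag / field
 * proposal is meant to replace. */
static bool looks_like_dummy_phase(const uint8_t *tx, size_t len)
{
    if (!tx || len == 0)
        return false;

    for (size_t i = 0; i < len; i++)
        if (tx[i] != 0xFF)
            return false;

    return true;
}
```

The weakness Mark points out is not the 0xFF pattern itself but the inference: a transfer that happens to carry 0xFF data would be misclassified, and every driver wanting the feature would duplicate this scan.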
>> To allow the Tegra QSPI HW to transfer dummy bytes directly, the
>> controller driver needs the number of dummy bytes / actual dummy clock
>> cycles, which the core driver gets from the flash SFDP read data.
> Sure, the use case makes sense.
>
>> So, we can add a flag to the transfer, and based on this flag, if the
>> controller HW supports it, we can skip filling dummy bytes in
>> spi_mem_exec_op() - but the controller driver still needs the
>> dummy_cycles value. So, along with the flag, do you agree to have
>> dummy_cycles as part of the transfer struct, which can be set to the
>> nor->read_dummy value?
> Yeah, or given that perhaps just skip the flag and do this by specifying
> dummy_cycles. Or if this is always a multiple of 8 (which I guess it
> must be to do it using normal byte transfers) perhaps just have the flag
> and use the existing length field to infer the number of cycles? I've
> not actually looked at the details at all so one or both of those
> suggestions may not actually make sense, please take them with a grain
> of salt.
>
> I'd recommend doing this as a followup to introducing the base driver,
> start off with the less efficient explicit writes and then add the API
> and add the support in the driver - that way the new API can be
> reviewed without it holding up the rest of the driver.

OK. I think adding dummy_cycles to the transfer is enough, without a flag:
if dummy_cycles is 0, there is definitely no dummy-byte transfer. So I will
get rid of the code that detects the dummy-byte transfer phase from the
list of transfers.

Thanks, Mark.

Regards,
Sowjanya
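The cycles-versus-bytes relationship behind the proposal can be sketched as follows. This is an illustration of the arithmetic only (the helper name is hypothetical, and the eventual mainline API may differ from what is agreed here): a dummy phase of N clock cycles on an x1/x2/x4 bus corresponds to N * buswidth / 8 filler bytes, which is what a controller without hardware dummy-cycle support would have to clock out explicitly.

```c
/* Hypothetical helper: convert a transfer's dummy clock cycles into the
 * equivalent number of 0xFF filler bytes for a given bus width (1, 2, or
 * 4 data lines). A controller that can program dummy cycles in hardware
 * would use the cycle count directly and skip the filler bytes. */
static unsigned int dummy_cycles_to_bytes(unsigned int dummy_cycles,
                                          unsigned int buswidth)
{
    /* e.g. 8 dummy cycles on a quad (x4) bus move 8 * 4 = 32 bits,
     * i.e. 4 bytes */
    return dummy_cycles * buswidth / 8;
}
```

This also shows why Mark's "infer cycles from the length field" idea only works when the cycle count maps to a whole number of bytes, hence his own caveat about multiples of 8.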