Subject: Re: LKFT: arm x15: mmc1: cache flush error -110
From: Ulf Hansson
Date: Wed, 26 Feb 2020 16:21:42 +0100
To: Jon Hunter, Faiz Abbas, Bitan Biswas
Cc: Sowjanya Komatineni, Adrian Hunter, Naresh Kamboju, Jens Axboe,
    Alexei Starovoitov, linux-block, lkft-triage@lists.linaro.org,
    open list, linux-mmc@vger.kernel.org, Arnd Bergmann, John Stultz,
    Thierry Reding, Anders Roxell, Kishon

+ Anders, Kishon

On Tue, 25 Feb 2020 at 17:24, Jon Hunter wrote:
>
>
> On 25/02/2020 14:26, Ulf Hansson wrote:
>
> ...
>
> > However, from the core point of view, the response is still requested,
> > only that we don't want the driver to wait for the card to stop
> > signaling busy.
> > Instead we want to deal with that via "polling" from the core.
> >
> > This is a rather worrying behaviour, as it seems like the host driver
> > doesn't really follow these expectations from the core point of view.
> > And mmc_flush_cache() is not the only case, as we have erase, bkops,
> > sanitize, etc. Are all these working or not really well tested?
>
> I don't believe that they are well tested. We have a simple test to
> mount an eMMC partition, create a file, check the contents, remove the
> file and unmount. The timeouts always occur during unmounting.
>
> > Earlier, before my three patches, if the provided timeout_ms parameter
> > to __mmc_switch() was zero, which was the case for
> > mmc_mmc_flush_cache(), this led to __mmc_switch() simply skipping the
> > validation against host->max_busy_timeout, which was wrong. In any
> > case, this also meant that an R1B response was always used for
> > mmc_flush_cache(), as you also indicated above. Perhaps this is the
> > critical part where things can go wrong.
> >
> > BTW, have you tried erase commands for the sdhci tegra driver? If those
> > are working fine, do you have any special treatments for these?
>
> That I am not sure, but I will check.

Great, thanks. Looking forward to your report.

From my side, Anders Roxell and I have been collaborating on testing the
behaviour on a TI Beagleboard x15 (remotely, with limited debug options),
which uses the sdhci-omap variant. I am trying to get hold of an Nvidia
Jetson TX2, but have not found one yet.

These are the conclusions from the observed behaviour on the Beagleboard
for the CMD6 cache flush command. First, the reported
host->max_busy_timeout is 2581 ms for the sdhci-omap driver in this
configuration.

1. As we all know by now, the cache flush command (CMD6) currently fails
with -110. This is when MMC_CACHE_FLUSH_TIMEOUT_MS is set to 30 * 1000
(30s), which means __mmc_switch() drops the MMC_RSP_BUSY flag from the
command.

2. Changing MMC_CACHE_FLUSH_TIMEOUT_MS to 2000 (2s) means that the
MMC_RSP_BUSY flag gets set by __mmc_switch(), because the timeout_ms
parameter is less than max_busy_timeout (2000 < 2581). Then everything
works fine.

3. Keeping 30s as the MMC_CACHE_FLUSH_TIMEOUT_MS, but instead forcing
MMC_RSP_BUSY to be set even when timeout_ms is greater than
max_busy_timeout, also works fine.

Clearly this indicates a problem that I think needs to be addressed in the
sdhci driver. Of course, I could revert the three patches under discussion
to fix the problem, but that would only hide the issue, and I am sure we
would get back to it sooner or later.

To fix the problem in the sdhci driver, I would appreciate it if someone
from TI and Nvidia could step in to help, as I don't have the HW on my
desk.

Comments or other ideas of how to move forward?

Kind regards
Uffe
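P.S. For those not deep into the mmc core, below is a small stand-alone
sketch of the decision I am describing in 1) to 3). It is only a model of
the logic as I understand it, not the actual __mmc_switch() code, and the
helper name is made up for illustration:

#include <stdbool.h>
#include <stdio.h>

/* Current value used for the CMD6 cache flush after the discussed patches. */
#define MMC_CACHE_FLUSH_TIMEOUT_MS	(30 * 1000)	/* 30s */

/*
 * Model of how the core picks the response type for CMD6: if the requested
 * busy timeout does not fit within the host's max_busy_timeout, it drops
 * R1B (HW busy detection) and falls back to R1 plus CMD13 polling.
 */
static bool uses_r1b(unsigned int timeout_ms, unsigned int max_busy_timeout)
{
	if (max_busy_timeout && timeout_ms > max_busy_timeout)
		return false;
	return true;
}

int main(void)
{
	unsigned int max_busy_timeout = 2581;	/* sdhci-omap on the x15 */

	/* Case 1: 30000 > 2581, so R1B is dropped -> polling path, fails. */
	printf("30s timeout -> R1B: %d\n",
	       uses_r1b(MMC_CACHE_FLUSH_TIMEOUT_MS, max_busy_timeout));
	/* Case 2: 2000 < 2581, so R1B is kept -> works fine. */
	printf("2s timeout  -> R1B: %d\n", uses_r1b(2000, max_busy_timeout));
	return 0;
}

Case 3 above simply corresponds to making uses_r1b() return true
unconditionally, which also works on the Beagleboard. That is why I believe
the failure is tied to the R1 + polling path rather than to the new timeout
value itself.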