Date: Mon, 7 Mar 2016 13:51:40 -0800
From: Brian Norris
To: linux-mtd@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, Boris Brezillon, Richard Weinberger, Harvey Hunt, Alex Smith, Niklas Cassel
Subject: Re: [PATCH] mtd: nand: check status before reporting timeout
Message-ID: <20160307215140.GB55664@google.com>
References: <1457140763-67571-1-git-send-email-computersforpeace@gmail.com>
In-Reply-To: <1457140763-67571-1-git-send-email-computersforpeace@gmail.com>

On Fri, Mar 04, 2016 at 05:19:23PM -0800, Brian Norris wrote:
> In commit b70af9bef49b ("mtd: nand: increase ready wait timeout and
> report timeouts"), we increased the likelihood of scheduling during
> nand_wait(). This makes us more likely to hit the time_before(...)
> condition, since a lot of time may pass before we get scheduled again.
>
> Now, the loop was already buggy, since we don't check if the NAND is
> ready after exiting the loop; we simply print out a timeout warning. Fix
> this by doing a final status check before printing a timeout message.
>
> This isn't actually a critical bug, since the only effect is a false
> warning print. But too many prints never hurt anyone, did they? :)
>
> Side note: perhaps I'm not smart enough, but I'm not sure what the best
> policy is for this kind of loop; do we busy loop (i.e., no
> cond_resched()) to keep the lowest I/O latency (it's not great if the
> resched is delaying Richard's system ~400ms)? Or do we allow
> rescheduling, to play nice with the rest of the system (since some
> operations can take quite a while)?
>
> Reported-by: Richard Weinberger
> Signed-off-by: Brian Norris
> Reviewed-by: Boris Brezillon
> Reviewed-by: Richard Weinberger
> Reviewed-by: Harvey Hunt

Applied