Date: Sat, 24 Jun 2017 21:49:39 -0500
From: Scott Wood <oss@buserror.net>
To: Karim Eshapa <karim.eshapa@gmail.com>
Cc: roy.pledge@nxp.com, linux-kernel@vger.kernel.org, claudiu.manoil@nxp.com,
    colin.king@canonical.com, linuxppc-dev@lists.ozlabs.org,
    linux-arm-kernel@lists.infradead.org
Subject: Re: drivers:soc:fsl:qbman:qman.c: Change a comment for an entry check
    inside drain_mr_fqrni function
Message-ID: <20170625024939.3adaysgmblwhyeyf@home.buserror.net>
In-Reply-To: <1493971556-14918-1-git-send-email-karim.eshapa@gmail.com>

On Fri, May 05, 2017 at 10:05:56AM +0200, Karim Eshapa wrote:
> Change the comment for an entry check inside function
> drain_mr_fqrni() with sleep for sufficient period
> of time instead of long time proccessor cycles.
>
> Signed-off-by: Karim Eshapa <karim.eshapa@gmail.com>
> ---
>  drivers/soc/fsl/qbman/qman.c | 25 +++++++++++++------------
>  1 file changed, 13 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
> index 18d391e..636a7d7 100644
> --- a/drivers/soc/fsl/qbman/qman.c
> +++ b/drivers/soc/fsl/qbman/qman.c
> @@ -1071,18 +1071,19 @@ static int drain_mr_fqrni(struct qm_portal *p)
>  	msg = qm_mr_current(p);
>  	if (!msg) {
>  		/*
> -		 * if MR was full and h/w had other FQRNI entries to produce, we
> -		 * need to allow it time to produce those entries once the
> -		 * existing entries are consumed. A worst-case situation
> -		 * (fully-loaded system) means h/w sequencers may have to do 3-4
> -		 * other things before servicing the portal's MR pump, each of
> -		 * which (if slow) may take ~50 qman cycles (which is ~200
> -		 * processor cycles). So rounding up and then multiplying this
> -		 * worst-case estimate by a factor of 10, just to be
> -		 * ultra-paranoid, goes as high as 10,000 cycles. NB, we consume
> -		 * one entry at a time, so h/w has an opportunity to produce new
> -		 * entries well before the ring has been fully consumed, so
> -		 * we're being *really* paranoid here.
> +		 * if MR was full and h/w had other FQRNI entries to
> +		 * produce, we need to allow it time to produce those
> +		 * entries once the existing entries are consumed.
> +		 * A worst-case situation (fully-loaded system) means
> +		 * h/w sequencers may have to do 3-4 other things
> +		 * before servicing the portal's MR pump, each of
> +		 * which (if slow) may take ~50 qman cycles
> +		 * (which is ~200 processor cycles). So sleep with
> +		 * 1 ms would be very efficient, after this period
> +		 * we can check if there is something produced.
> +		 * NB, we consume one entry at a time, so h/w has
> +		 * an opportunity to produce new entries well before
> +		 * the ring has been fully consumed.

Do you mean "sufficient" here rather than "efficient"?  It's far less
inefficient than what the code was previously doing, but still...

Otherwise, looks good.

-Scott
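[For context, a minimal sketch of the "sleep, then re-check" pattern the
quoted hunk describes, assuming the in-tree msleep() helper and the
qm_mr_current()/struct qm_portal names that appear in the quoted code.
The wait_for_mr_entry() wrapper, the qm_mr_entry type spelling, and the
retry bound are illustrative assumptions, not code from the patch.]

#include <linux/delay.h>	/* msleep() */

/*
 * Illustrative helper only: poll the portal's message ring for a
 * produced entry, sleeping 1 ms between checks instead of burning
 * processor cycles in a busy-wait.
 */
static const union qm_mr_entry *wait_for_mr_entry(struct qm_portal *p,
						  unsigned int max_tries)
{
	const union qm_mr_entry *msg = qm_mr_current(p);

	while (!msg && max_tries--) {
		/* give h/w time to produce MR entries before re-reading */
		msleep(1);
		msg = qm_mr_current(p);
	}

	return msg;	/* NULL if nothing was produced within the bound */
}

[A caller in the drain path would treat a NULL return as "nothing left
to drain" and stop, which is the behaviour the revised comment is
trying to justify.]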