Subject: Re: Performance drop on SCSI hard disk
From: "Alex,Shi"
To: "Li, Shaohua"
Cc: Jens Axboe, "James.Bottomley@hansenpartnership.com",
	"linux-kernel@vger.kernel.org"
Date: Mon, 16 May 2011 16:37:51 +0800
Message-ID: <1305535071.21534.2122.camel@debian>
In-Reply-To: <1305533054.2375.45.camel@sli10-conroe>
References: <1305009600.21534.587.camel@debian>
	 <4DCC4340.6000407@fusionio.com>
	 <1305247704.2373.32.camel@sli10-conroe>
	 <1305255717.2373.38.camel@sli10-conroe>
	 <1305533054.2375.45.camel@sli10-conroe>

> > What I mean is that the current sdev (and other devices too) can
> > still be added to the starved list, so doing an async execute only
> > for the current q isn't enough; we'd better put the whole
> > __scsi_run_queue into a workqueue. Something like below on top of
> > yours, untested. Not sure if there are other recursive cases.
>
> I verified that the regression is fully fixed by your patch (with my
> suggested fix to avoid the race). Can we send a formal patch upstream?

Yes, we tested Jens' patch alone and then with Shaohua's patch on top;
both recovered the SAS disk performance as well. I am now testing them
on an SSD with the kbuild and fio cases. In theory, both will work.
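For anyone following along, the workqueue deferral being discussed would
look roughly like the sketch below. This is only an illustration of the
idea, not the patch under test; the requeue_work field and the helper
names are assumptions made up for the example.

/*
 * Sketch only: defer the whole queue run to process context so the
 * completion path cannot recurse through the starved list.  The
 * requeue_work field and helper names are illustrative assumptions,
 * not the actual patch.
 */
#include <linux/workqueue.h>
#include <scsi/scsi_device.h>

static void scsi_run_queue(struct request_queue *q); /* existing in scsi_lib.c */

/*
 * Assumes struct scsi_device gains a work item, e.g.
 *	struct work_struct requeue_work;
 * initialized at device allocation time with
 *	INIT_WORK(&sdev->requeue_work, scsi_requeue_run_queue);
 */
static void scsi_requeue_run_queue(struct work_struct *work)
{
	struct scsi_device *sdev =
		container_of(work, struct scsi_device, requeue_work);

	/* Process context now; running the queue here cannot grow
	 * the completion path's stack. */
	scsi_run_queue(sdev->request_queue);
}

/* Completion/requeue paths would call this instead of running the
 * queue directly. */
static void scsi_kick_requeue(struct scsi_device *sdev)
{
	schedule_work(&sdev->requeue_work);
}

If I read the proposal right, with something like this in place the
async-run special case for only the current queue becomes unnecessary,
since every deferred run goes through the same process-context path.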