Subject: Re: [PATCH v3 1/7] libsas: Use static sas event pool to appease sas event lost
From: John Garry
To: wangyijing
CC: Johannes Thumshirn, Linuxarm
Date: Wed, 12 Jul 2017 11:13:38 +0100

On 12/07/2017 09:47, wangyijing wrote:
>
> On 2017/7/12 16:17, John Garry wrote:
>> On 12/07/2017 03:06, wangyijing wrote:
>>>>> -        unsigned long port_events_pending;
>>>>> -        unsigned long phy_events_pending;
>>>>> +        struct asd_sas_event port_events[PORT_POOL_SIZE];
>>>>> +        struct asd_sas_event phy_events[PHY_POOL_SIZE];
>>>>>
>>>>>          int error;
>>>>
>>>> Hi Yijing,
>>>>
>>>> So now we are creating a static pool of events per PHY/port, instead of having one static work struct per event per PHY/port. So, for sure, this avoids the dynamic-event issue of system memory exhaustion which we discussed in the v1+v2 series. And it seems it may also remove the issue of losing SAS events.
>>>>
>>>> But how did you determine the pool size for a PHY/port? It would seem to be 5 * #phy events or #port events (which is also 5, I figure by coincidence). How does this deal with a flutter of more than 25 events?
>>>
>>> There is no special meaning to the pool size; if there is a flutter of more than 25 events, the sas event notify call will return an error, and the further handling depends on the LLDD. I hope libsas could do more work in this case, but for now that seems a little difficult, so this patch may be an interim fix until we find a perfect solution.
>>
>> The principle of having a fixed-size pool is ok, even though the pool size needs more consideration.
>>
>> However my issue is how to handle pool exhaustion. For a start, relaying info to the LLDD that the event notification failed is probably not the way to go. I only now noticed that "scsi: sas: scsi_queue_work can fail, so make callers aware" made it into the kernel; as I mentioned in response to that patch, the LLDD does not know how to handle this (and no LLDD actually does handle it).
>>
>> I would say it is better to shut down the PHY from libsas (as Dan mentioned in the v1 series) when the pool exhausts, under the assumption that the PHY has gone into some erroneous state. The user can later re-enable the PHY from sysfs, if required.
> I considered this suggestion, and what I am worried about is, first, that if we disable the phy once the sas event pool is exhausted, it may hurt the processing of the sas events which have already been queued,

I don't see how it affects currently queued events - they should just be processed normally. As for events the LLDD reports while the pool is exhausted, those are simply lost.

> and second, that if the phy is disabled and no one triggers the re-enable via sysfs, the LLDD has no way to post new sas phy events.

For the extreme scenario of the pool becoming exhausted and the PHY being disabled, it should remain disabled until the user takes some action to fix the originating problem.

> Thanks!
> Yijing.

>>
>> Much appreciated,
>> John
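To make the mechanism being debated concrete, below is a minimal, self-contained C sketch of a fixed-size per-PHY event pool with both exhaustion behaviours discussed above: returning an error to the notifier (what the patch does) and latching the PHY disabled (John's suggestion). Every name in it (sas_phy_sketch, pool_get_event, pool_put_event, the "disabled" flag) is a hypothetical stand-in, not a libsas symbol; the real series embeds struct asd_sas_event arrays in the libsas structures and relies on kernel locking and workqueues rather than pthreads.

    /*
     * Illustrative sketch only -- NOT the actual libsas patch. Each PHY
     * owns a fixed pool of event slots: notifying an event claims a free
     * slot, and processing the event releases it. When every slot is
     * busy, we hit the exhaustion case debated in this thread.
     */
    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define PHY_POOL_SIZE 25        /* 5 slots x 5 PHY event types, as in the patch */

    struct sas_event_slot {
            int event;              /* which event this slot carries */
            bool in_use;            /* queued or still being processed */
    };

    struct sas_phy_sketch {
            struct sas_event_slot pool[PHY_POOL_SIZE];
            bool disabled;          /* models the shut-down-the-PHY idea */
            pthread_mutex_t lock;   /* stands in for the libsas spinlock */
    };

    /* Claim a free slot; -ENOMEM models "pool exhausted, event lost". */
    static int pool_get_event(struct sas_phy_sketch *phy, int event)
    {
            int i, ret = -ENOMEM;

            pthread_mutex_lock(&phy->lock);
            for (i = 0; i < PHY_POOL_SIZE; i++) {
                    if (!phy->pool[i].in_use) {
                            phy->pool[i].in_use = true;
                            phy->pool[i].event = event;
                            ret = i;
                            break;
                    }
            }
            pthread_mutex_unlock(&phy->lock);
            return ret;
    }

    /* Release a slot once the queued work for it has run. */
    static void pool_put_event(struct sas_phy_sketch *phy, int slot)
    {
            pthread_mutex_lock(&phy->lock);
            phy->pool[slot].in_use = false;
            pthread_mutex_unlock(&phy->lock);
    }

    /*
     * The alternative John suggests: instead of handing the error back
     * to the LLDD, disable the PHY on exhaustion and leave it down until
     * the user re-enables it (via sysfs in the real discussion).
     */
    static int pool_get_event_or_disable(struct sas_phy_sketch *phy, int event)
    {
            int slot = pool_get_event(phy, event);

            if (slot == -ENOMEM && !phy->disabled) {
                    phy->disabled = true;
                    fprintf(stderr, "pool exhausted: disabling PHY\n");
            }
            return slot;
    }

    int main(void)
    {
            struct sas_phy_sketch phy = { .lock = PTHREAD_MUTEX_INITIALIZER };
            int i;

            /* A flutter of more than PHY_POOL_SIZE events exhausts the pool. */
            for (i = 0; i < PHY_POOL_SIZE + 2; i++)
                    if (pool_get_event_or_disable(&phy, i) < 0)
                            printf("event %d lost\n", i);

            pool_put_event(&phy, 0);        /* processing frees a slot again */
            return 0;
    }

Built with "cc -pthread", this reports events 25 and 26 as lost and latches the PHY disabled on the first failure, which is exactly the flutter scenario raised above: queued events are unaffected, but anything reported past exhaustion is dropped until processing (or a user action) frees the pool again.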