Subject: Re: [PATCH v3 7/7] libsas: release disco mutex during waiting in sas_ex_discover_end_dev
From: wangyijing
To: John Garry
CC: Johannes Thumshirn, Linuxarm
Date: Fri, 14 Jul 2017 09:44:08 +0800
Message-ID: <596821E8.2000200@huawei.com>
In-Reply-To: <5dc7927b-27ac-ed09-9718-6cbe3ab53e19@huawei.com>
References: <1499670369-44143-1-git-send-email-wangyijing@huawei.com> <1499670369-44143-8-git-send-email-wangyijing@huawei.com> <5dc7927b-27ac-ed09-9718-6cbe3ab53e19@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2017/7/14 0:10, John Garry wrote:
> On 10/07/2017 08:06, Yijing Wang wrote:
>> The disco mutex was introduced to prevent domain rediscovery from
>> competing with ATA error handling (commit 87c8331). If we already hold
>> the lock in sas_revalidate_domain and synchronously execute the probe,
>> a deadlock results, because sas_probe_sata() also needs to hold the
>> disco mutex.
>> Since the disco mutex is used to prevent domain revalidation from
>> happening during the ATA error handler, it should be safe to release
>> it across the synchronous probe: no new revalidate domain event will
>> be processed until the synchronous probe returns and the current
>> domain revalidation finishes.
>>
>
> So with your changes we have a chain of synchronised events, running in separate queues. In theory it sounds ok.
>
> Then, as you said in the commit message, sas_revalidate_domain() holds the disco mutex while *all* these chained events occur; so we will continue to hold the mutex until we have revalidated the domain, meaning until we have finished destroying or probing new devices.
>
> But in the domain revalidation, when you discover a new ATA device, sas_probe_sata() wants to grab the disco mutex and you just temporarily release it, even though adding a new ATA device kicks in EH. This defeats the principle of using a mutex at all, which is (according to the 87c8331 commit message) to mutually exclude the domain re-discovery (which has not actually finished) and the ATA EH (and device destruction).
>
> Anyway, since we are synchronising this series of events (broadcast event, domain rediscovery, and device destruction), surely it should be possible to include the ATA EH as well, so we can actually get rid of the disco mutex finally, right?

Yes, the disco mutex makes this issue complex. I checked the commit history: Dan introduced the disco mutex and the probe/destruct discovery events, so it seems a big rework of the libsas processing logic is needed. I am sorry that I have no more time to deal with it; I will leave today. If you like, you could rework my patchset or add additional changes based on this patchset.

> Note: I think that there is a problem which you have not seen.
> Consider removing an ATA disk with I/O active, connected to an expander:
> - LLDD sends a broadcast event
> - sas_revalidate_domain() runs, which grabs the disco mutex
> - revalidation finds the dev is gone
> - destruct device, which calls sas_rphy_delete
> - this waits on the command queue to drain
> - commands time out and the EH thread kicks in
> - sas_ata_strategy_handler() is called
> - domain revalidation disable is attempted
> - it tries to grab the disco mutex = deadlock

Yes, it's an issue I hadn't found. Thanks!
Yijing.

Hi John, I also agree to rework the disco mutex.

> Thanks,
> John
>
>> Signed-off-by: Yijing Wang
>> CC: John Garry
>> CC: Johannes Thumshirn
>> CC: Ewan Milne
>> CC: Christoph Hellwig
>> CC: Tomas Henzl
>> CC: Dan Williams
>> ---
>>  drivers/scsi/libsas/sas_expander.c | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
>> index 9d26c28..077024e 100644
>> --- a/drivers/scsi/libsas/sas_expander.c
>> +++ b/drivers/scsi/libsas/sas_expander.c
>> @@ -776,6 +776,7 @@ static struct domain_device *sas_ex_discover_end_dev(
>>  	struct ex_phy *phy = &parent_ex->ex_phy[phy_id];
>>  	struct domain_device *child = NULL;
>>  	struct sas_rphy *rphy;
>> +	bool prev_lock;
>>  	int res;
>>
>>  	if (phy->attached_sata_host || phy->attached_sata_ps)
>> @@ -803,6 +804,7 @@
>>  	sas_ex_get_linkrate(parent, child, phy);
>>  	sas_device_set_phy(child, phy->port);
>>
>> +	prev_lock = mutex_is_locked(&child->port->ha->disco_mutex);
>>  #ifdef CONFIG_SCSI_SAS_ATA
>>  	if ((phy->attached_tproto & SAS_PROTOCOL_STP) || phy->attached_sata_dev) {
>>  		res = sas_get_ata_info(child, phy);
>> @@ -832,7 +834,11 @@
>>  				   SAS_ADDR(parent->sas_addr), phy_id, res);
>>  			goto out_list_del;
>>  		}
>> +		if (prev_lock)
>> +			mutex_unlock(&child->port->ha->disco_mutex);
>>  		sas_disc_wait_completion(child->port, DISCE_PROBE);
>> +		if (prev_lock)
>> +			mutex_lock(&child->port->ha->disco_mutex);
>>
>>  	} else
>>  #endif
>> @@ -861,7 +867,11 @@
>>  				   SAS_ADDR(parent->sas_addr), phy_id, res);
>>  			goto out_list_del;
>>  		}
>> +		if (prev_lock)
>> +			mutex_unlock(&child->port->ha->disco_mutex);
>>  		sas_disc_wait_completion(child->port, DISCE_PROBE);
>> +		if (prev_lock)
>> +			mutex_lock(&child->port->ha->disco_mutex);
>>  	} else {
>>  		SAS_DPRINTK("target proto 0x%x at %016llx:0x%x not handled\n",
>>  			    phy->attached_tproto, SAS_ADDR(parent->sas_addr),