From: Ben Hutchings <ben@decadent.org.uk>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: akpm@linux-foundation.org, Denis Kirjanov, "Martin K. Petersen",
 "Benjamin Block", "Steffen Maier"
Date: Wed, 02 Oct 2019 20:06:51 +0100
Subject: [PATCH 3.16 38/87] scsi: zfcp: fix to prevent port_remove with pure
 auto scan LUNs (only sdevs)

3.16.75-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Steffen Maier

commit ef4021fe5fd77ced0323cede27979d80a56211ca upstream.

When the user tries to remove a zfcp port via sysfs, we only rejected it
if there are zfcp unit children under the port. With purely automatically
scanned LUNs there are no zfcp units but only SCSI devices. In such cases,
the port_remove erroneously continued. We close the port and this
implicitly closes all LUNs under the port. The SCSI devices survive with
their private zfcp_scsi_dev still holding a reference to the "removed"
zfcp_port (still allocated but invisible in sysfs)
[zfcp_get_port_by_wwpn in zfcp_scsi_slave_alloc]. This is not a problem
as long as the fc_rport stays blocked. Once (auto) port scan brings back
the removed port, we unblock its fc_rport again by design.
However, there is no mechanism that would recover (open) the LUNs under
the port (no "ersfs_3" without zfcp_unit
[zfcp_erp_strategy_followup_success]).
Any pending or new I/O to such LUN leads to repeated:

  Done: NEEDS_RETRY Result: hostbyte=DID_IMM_RETRY driverbyte=DRIVER_OK

See also v4.10 commit 6f2ce1c6af37 ("scsi: zfcp: fix rport unblock race
with LUN recovery").

Even a manual LUN recovery
(echo 0 > /sys/bus/scsi/devices/H:C:T:L/zfcp_failed)
does not help, as the LUN links to the old "removed" port which remains
to lack ZFCP_STATUS_COMMON_RUNNING [zfcp_erp_required_act].

The only workaround is to first ensure that the fc_rport is blocked
(e.g. port_remove again in case it was re-discovered by (auto) port scan),
then delete the SCSI devices, and finally re-discover by (auto) port scan.
The port scan includes an fc_rport unblock, which in turn triggers a new
scan on the scsi target to freshly get new pure auto scan LUNs.

Fix this by rejecting port_remove also if there are SCSI devices (even
without any zfcp_unit) under this port. Re-use mechanics from v3.7 commit
d99b601b6338 ("[SCSI] zfcp: restore refcount check on port_remove").
However, we have to give up zfcp_sysfs_port_units_mutex earlier in
unit_add to prevent a deadlock with scsi_host scan taking
shost->scan_mutex first and then zfcp_sysfs_port_units_mutex now in our
zfcp_scsi_slave_alloc().

Signed-off-by: Steffen Maier
Fixes: b62a8d9b45b9 ("[SCSI] zfcp: Use SCSI device data zfcp scsi dev instead of zfcp unit")
Fixes: f8210e34887e ("[SCSI] zfcp: Allow midlayer to scan for LUNs when running in NPIV mode")
Reviewed-by: Benjamin Block
Signed-off-by: Martin K. Petersen
Signed-off-by: Ben Hutchings
---
 drivers/s390/scsi/zfcp_ext.h   |  1 +
 drivers/s390/scsi/zfcp_scsi.c  |  9 ++++++
 drivers/s390/scsi/zfcp_sysfs.c | 54 ++++++++++++++++++++++++++++++----
 drivers/s390/scsi/zfcp_unit.c  |  8 ++++-
 4 files changed, 65 insertions(+), 7 deletions(-)

--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -160,6 +160,7 @@ extern const struct attribute_group *zfc
 extern struct mutex zfcp_sysfs_port_units_mutex;
 extern struct device_attribute *zfcp_sysfs_sdev_attrs[];
 extern struct device_attribute *zfcp_sysfs_shost_attrs[];
+bool zfcp_sysfs_port_is_removing(const struct zfcp_port *const port);
 
 /* zfcp_unit.c */
 extern int zfcp_unit_add(struct zfcp_port *, u64);
--- a/drivers/s390/scsi/zfcp_scsi.c
+++ b/drivers/s390/scsi/zfcp_scsi.c
@@ -145,6 +145,15 @@ static int zfcp_scsi_slave_alloc(struct 
 
 	zfcp_sdev->erp_action.port = port;
 
+	mutex_lock(&zfcp_sysfs_port_units_mutex);
+	if (zfcp_sysfs_port_is_removing(port)) {
+		/* port is already gone */
+		mutex_unlock(&zfcp_sysfs_port_units_mutex);
+		put_device(&port->dev); /* undo zfcp_get_port_by_wwpn() */
+		return -ENXIO;
+	}
+	mutex_unlock(&zfcp_sysfs_port_units_mutex);
+
 	unit = zfcp_unit_find(port, zfcp_scsi_dev_lun(sdev));
 	if (unit)
 		put_device(&unit->dev);
--- a/drivers/s390/scsi/zfcp_sysfs.c
+++ b/drivers/s390/scsi/zfcp_sysfs.c
@@ -235,6 +235,53 @@ static ZFCP_DEV_ATTR(adapter, port_resca
 
 DEFINE_MUTEX(zfcp_sysfs_port_units_mutex);
 
+static void zfcp_sysfs_port_set_removing(struct zfcp_port *const port)
+{
+	lockdep_assert_held(&zfcp_sysfs_port_units_mutex);
+	atomic_set(&port->units, -1);
+}
+
+bool zfcp_sysfs_port_is_removing(const struct zfcp_port *const port)
+{
+	lockdep_assert_held(&zfcp_sysfs_port_units_mutex);
+	return atomic_read(&port->units) == -1;
+}
+
+static bool zfcp_sysfs_port_in_use(struct zfcp_port *const port)
+{
+	struct zfcp_adapter *const adapter = port->adapter;
+	unsigned long flags;
+	struct scsi_device *sdev;
+	bool in_use = true;
+
+	mutex_lock(&zfcp_sysfs_port_units_mutex);
+	if (atomic_read(&port->units) > 0)
+		goto unlock_port_units_mutex; /* zfcp_unit(s) under port */
+
+	spin_lock_irqsave(adapter->scsi_host->host_lock, flags);
+	__shost_for_each_device(sdev, adapter->scsi_host) {
+		const struct zfcp_scsi_dev *zsdev = sdev_to_zfcp(sdev);
+
+		if (sdev->sdev_state == SDEV_DEL ||
+		    sdev->sdev_state == SDEV_CANCEL)
+			continue;
+		if (zsdev->port != port)
+			continue;
+		/* alive scsi_device under port of interest */
+		goto unlock_host_lock;
+	}
+
+	/* port is about to be removed, so no more unit_add or slave_alloc */
+	zfcp_sysfs_port_set_removing(port);
+	in_use = false;
+
+unlock_host_lock:
+	spin_unlock_irqrestore(adapter->scsi_host->host_lock, flags);
+unlock_port_units_mutex:
+	mutex_unlock(&zfcp_sysfs_port_units_mutex);
+	return in_use;
+}
+
 static ssize_t zfcp_sysfs_port_remove_store(struct device *dev,
 					    struct device_attribute *attr,
 					    const char *buf, size_t count)
@@ -257,16 +304,11 @@ static ssize_t zfcp_sysfs_port_remove_st
 	else
 		retval = 0;
 
-	mutex_lock(&zfcp_sysfs_port_units_mutex);
-	if (atomic_read(&port->units) > 0) {
+	if (zfcp_sysfs_port_in_use(port)) {
 		retval = -EBUSY;
-		mutex_unlock(&zfcp_sysfs_port_units_mutex);
 		put_device(&port->dev); /* undo zfcp_get_port_by_wwpn() */
 		goto out;
 	}
-	/* port is about to be removed, so no more unit_add */
-	atomic_set(&port->units, -1);
-	mutex_unlock(&zfcp_sysfs_port_units_mutex);
 
 	write_lock_irq(&adapter->port_list_lock);
 	list_del(&port->list);
--- a/drivers/s390/scsi/zfcp_unit.c
+++ b/drivers/s390/scsi/zfcp_unit.c
@@ -122,7 +122,7 @@ int zfcp_unit_add(struct zfcp_port *port
 	int retval = 0;
 
 	mutex_lock(&zfcp_sysfs_port_units_mutex);
-	if (atomic_read(&port->units) == -1) {
+	if (zfcp_sysfs_port_is_removing(port)) {
 		/* port is already gone */
 		retval = -ENODEV;
 		goto out;
@@ -166,8 +166,14 @@ int zfcp_unit_add(struct zfcp_port *port
 	write_lock_irq(&port->unit_list_lock);
 	list_add_tail(&unit->list, &port->unit_list);
 	write_unlock_irq(&port->unit_list_lock);
+	/*
+	 * lock order: shost->scan_mutex before zfcp_sysfs_port_units_mutex
+	 * due to zfcp_unit_scsi_scan() => zfcp_scsi_slave_alloc()
+	 */
+	mutex_unlock(&zfcp_sysfs_port_units_mutex);
 
 	zfcp_unit_scsi_scan(unit);
+	return retval;
 
 out:
 	mutex_unlock(&zfcp_sysfs_port_units_mutex);