From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Adrian Suhov,
 Chris Valean, Dexuan Cui, Lorenzo Pieralisi, Michael Kelley,
 Haiyang Zhang, Vitaly Kuznetsov, Jack Morgenstein,
 Stephen Hemminger, "K. Y. Srinivasan"
Subject: [PATCH 4.15 18/53] PCI: hv: Serialize the present and eject work items
Date: Tue, 17 Apr 2018 17:58:43 +0200
Message-Id: <20180417155723.991592752@linuxfoundation.org>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180417155723.091120060@linuxfoundation.org>
References: <20180417155723.091120060@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.15-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Dexuan Cui

commit 021ad274d7dc31611d4f47f7dd4ac7a224526f30 upstream.

When we hot-remove the device, we first receive a PCI_EJECT message and
then receive a PCI_BUS_RELATIONS message with bus_rel->device_count == 0.

The first message is offloaded to hv_eject_device_work(), and the second
is offloaded to pci_devices_present_work(). Both the paths can be running
list_del(&hpdev->list_entry), causing general protection fault, because
system_wq can run them concurrently.

The patch eliminates the race condition.

Since access to present/eject work items is serialized, we do not need
the hbus->enum_sem anymore, so remove it.

Fixes: 4daace0d8ce8 ("PCI: hv: Add paravirtual PCI front-end for Microsoft Hyper-V VMs")
Link: https://lkml.kernel.org/r/KL1P15301MB00064DA6B4D221123B5241CFBFD70@KL1P15301MB0006.APCP153.PROD.OUTLOOK.COM
Tested-by: Adrian Suhov
Tested-by: Chris Valean
Signed-off-by: Dexuan Cui
[lorenzo.pieralisi@arm.com: squashed semaphore removal patch]
Signed-off-by: Lorenzo Pieralisi
Reviewed-by: Michael Kelley
Acked-by: Haiyang Zhang
Cc: <stable@vger.kernel.org> # v4.6+
Cc: Vitaly Kuznetsov
Cc: Jack Morgenstein
Cc: Stephen Hemminger
Cc: K. Y. Srinivasan
Signed-off-by: Greg Kroah-Hartman

---
 drivers/pci/host/pci-hyperv.c |   34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -457,7 +457,6 @@ struct hv_pcibus_device {
 	spinlock_t device_list_lock;	/* Protect lists below */
 	void __iomem *cfg_addr;
 
-	struct semaphore enum_sem;
 
 	struct list_head resources_for_children;
 	struct list_head children;
@@ -471,6 +470,8 @@ struct hv_pcibus_device {
 	struct retarget_msi_interrupt retarget_msi_interrupt_params;
 
 	spinlock_t retarget_msi_interrupt_lock;
+
+	struct workqueue_struct *wq;
 };
 
 /*
@@ -1600,12 +1601,8 @@ static struct hv_pci_dev *get_pcichild_w
  * It must also treat the omission of a previously observed device as
  * notification that the device no longer exists.
  *
- * Note that this function is a work item, and it may not be
- * invoked in the order that it was queued. Back to back
- * updates of the list of present devices may involve queuing
- * multiple work items, and this one may run before ones that
- * were sent later. As such, this function only does something
- * if is the last one in the queue.
+ * Note that this function is serialized with hv_eject_device_work(),
+ * because both are pushed to the ordered workqueue hbus->wq.
  */
 static void pci_devices_present_work(struct work_struct *work)
 {
@@ -1626,11 +1623,6 @@ static void pci_devices_present_work(str
 
 	INIT_LIST_HEAD(&removed);
 
-	if (down_interruptible(&hbus->enum_sem)) {
-		put_hvpcibus(hbus);
-		return;
-	}
-
 	/* Pull this off the queue and process it if it was the last one. */
 	spin_lock_irqsave(&hbus->device_list_lock, flags);
 	while (!list_empty(&hbus->dr_list)) {
@@ -1647,7 +1639,6 @@ static void pci_devices_present_work(str
 	spin_unlock_irqrestore(&hbus->device_list_lock, flags);
 
 	if (!dr) {
-		up(&hbus->enum_sem);
 		put_hvpcibus(hbus);
 		return;
 	}
@@ -1734,7 +1725,6 @@ static void pci_devices_present_work(str
 		break;
 	}
 
-	up(&hbus->enum_sem);
 	put_hvpcibus(hbus);
 	kfree(dr);
 }
@@ -1780,7 +1770,7 @@ static void hv_pci_devices_present(struc
 	spin_unlock_irqrestore(&hbus->device_list_lock, flags);
 
 	get_hvpcibus(hbus);
-	schedule_work(&dr_wrk->wrk);
+	queue_work(hbus->wq, &dr_wrk->wrk);
 }
 
 /**
@@ -1858,7 +1848,7 @@ static void hv_pci_eject_device(struct h
 	get_pcichild(hpdev, hv_pcidev_ref_pnp);
 	INIT_WORK(&hpdev->wrk, hv_eject_device_work);
 	get_hvpcibus(hpdev->hbus);
-	schedule_work(&hpdev->wrk);
+	queue_work(hpdev->hbus->wq, &hpdev->wrk);
 }
 
 /**
@@ -2471,13 +2461,18 @@ static int hv_pci_probe(struct hv_device
 	spin_lock_init(&hbus->config_lock);
 	spin_lock_init(&hbus->device_list_lock);
 	spin_lock_init(&hbus->retarget_msi_interrupt_lock);
-	sema_init(&hbus->enum_sem, 1);
 	init_completion(&hbus->remove_event);
+	hbus->wq = alloc_ordered_workqueue("hv_pci_%x", 0,
+					   hbus->sysdata.domain);
+	if (!hbus->wq) {
+		ret = -ENOMEM;
+		goto free_bus;
+	}
 
 	ret = vmbus_open(hdev->channel, pci_ring_size, pci_ring_size, NULL, 0,
 			 hv_pci_onchannelcallback, hbus);
 	if (ret)
-		goto free_bus;
+		goto destroy_wq;
 
 	hv_set_drvdata(hdev, hbus);
 
@@ -2546,6 +2541,8 @@ free_config:
 	hv_free_config_window(hbus);
 close:
 	vmbus_close(hdev->channel);
+destroy_wq:
+	destroy_workqueue(hbus->wq);
 free_bus:
 	free_page((unsigned long)hbus);
 	return ret;
@@ -2625,6 +2622,7 @@ static int hv_pci_remove(struct hv_devic
 	irq_domain_free_fwnode(hbus->sysdata.fwnode);
 	put_hvpcibus(hbus);
 	wait_for_completion(&hbus->remove_event);
+	destroy_workqueue(hbus->wq);
 	free_page((unsigned long)hbus);
 	return 0;
 }
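
A note for readers following the fix: the race disappears because an
ordered workqueue (alloc_ordered_workqueue()) executes at most one work
item at a time, so the present and eject paths can no longer both be
inside list_del() on the same list. The minimal module sketch below
illustrates that property; it is not part of the patch, and every
demo_* name in it is hypothetical.

#include <linux/init.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct demo_entry {
	struct list_head list_entry;
};

static LIST_HEAD(demo_list);
static struct workqueue_struct *demo_wq;

/* Plays the role of pci_devices_present_work(): empties the list. */
static void demo_present_fn(struct work_struct *work)
{
	struct demo_entry *e, *tmp;

	list_for_each_entry_safe(e, tmp, &demo_list, list_entry) {
		list_del(&e->list_entry);
		kfree(e);
	}
}

/* Plays the role of hv_eject_device_work(): removes one entry. */
static void demo_eject_fn(struct work_struct *work)
{
	struct demo_entry *e;

	e = list_first_entry_or_null(&demo_list, struct demo_entry,
				     list_entry);
	if (e) {
		list_del(&e->list_entry);
		kfree(e);
	}
}

static DECLARE_WORK(demo_present_work, demo_present_fn);
static DECLARE_WORK(demo_eject_work, demo_eject_fn);

static int __init demo_init(void)
{
	struct demo_entry *e;

	/*
	 * An ordered workqueue has max_active == 1, so the two work
	 * items queued below run strictly one after the other; on
	 * system_wq they could run concurrently and race on list_del().
	 */
	demo_wq = alloc_ordered_workqueue("demo_wq", 0);
	if (!demo_wq)
		return -ENOMEM;

	e = kzalloc(sizeof(*e), GFP_KERNEL);
	if (e)
		list_add_tail(&e->list_entry, &demo_list);

	queue_work(demo_wq, &demo_eject_work);
	queue_work(demo_wq, &demo_present_work);
	return 0;
}

static void __exit demo_exit(void)
{
	destroy_workqueue(demo_wq);	/* flushes pending work first */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");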