From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Neil Horman, Shrikrishna Khare, "VMware, Inc.", "David S. Miller", Sasha Levin
Subject: [PATCH 4.9 037/310] vmxnet3: ensure that adapter is in proper state during force_close
Date: Wed, 11 Apr 2018 20:32:56 +0200
Message-Id: <20180411183623.869665011@linuxfoundation.org>
In-Reply-To: <20180411183622.305902791@linuxfoundation.org>
References: <20180411183622.305902791@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.9-stable review patch.
If anyone has any objections, please let me know.

------------------

From: Neil Horman

[ Upstream commit 1c4d5f51a812a82de97beee24f48ed05c65ebda5 ]

There are several paths in vmxnet3 where settings changes cause the adapter
to be brought down and back up (vmxnet3_set_ringparam among them). Should
part of the reset operation fail, these paths call vmxnet3_force_close,
which enables all napi instances prior to calling dev_close (with the
expectation that vmxnet3_close will then properly disable them again).
However, vmxnet3_force_close neglects to clear VMXNET3_STATE_BIT_QUIESCED
prior to calling dev_close. As a result vmxnet3_quiesce_dev (called from
vmxnet3_close) returns early and leaves all the napi instances in an
enabled state while the device itself is closed. If a device in this state
is activated again, napi_enable will be called on already-enabled napi
instances, leading to a BUG halt.

The fix is to simply ensure that the QUIESCED bit is cleared in
vmxnet3_force_close to allow quiescence to be completed properly on close.

Signed-off-by: Neil Horman
CC: Shrikrishna Khare
CC: "VMware, Inc."
CC: "David S. Miller"
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 drivers/net/vmxnet3/vmxnet3_drv.c | 5 +++++
 1 file changed, 5 insertions(+)

--- a/drivers/net/vmxnet3/vmxnet3_drv.c
+++ b/drivers/net/vmxnet3/vmxnet3_drv.c
@@ -2962,6 +2962,11 @@ vmxnet3_force_close(struct vmxnet3_adapt
 	/* we need to enable NAPI, otherwise dev_close will deadlock */
 	for (i = 0; i < adapter->num_rx_queues; i++)
 		napi_enable(&adapter->rx_queue[i].napi);
+	/*
+	 * Need to clear the quiesce bit to ensure that vmxnet3_close
+	 * can quiesce the device properly
+	 */
+	clear_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state);
 	dev_close(adapter->netdev);
 }
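
For reference, the early return that a stale QUIESCED bit triggers looks
roughly like the sketch below. This is an abridged illustration of the
quiesce path described in the changelog, not the exact code in
vmxnet3_drv.c; the remaining teardown steps are elided.

	static void
	vmxnet3_quiesce_dev(struct vmxnet3_adapter *adapter)
	{
		int i;

		/*
		 * If QUIESCED is already set, quiescing is skipped entirely.
		 * With the stale bit left behind by a failed reset, the
		 * napi_disable() calls below are never reached, so every rx
		 * queue's NAPI stays enabled even though the device is closed.
		 */
		if (test_and_set_bit(VMXNET3_STATE_BIT_QUIESCED,
				     &adapter->state))
			return;

		for (i = 0; i < adapter->num_rx_queues; i++)
			napi_disable(&adapter->rx_queue[i].napi);

		/* ... remaining teardown elided ... */
	}

Clearing the bit in vmxnet3_force_close means the subsequent dev_close can
take the full quiesce path and disable NAPI as expected.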