From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Juergen Gross, Boris Ostrovsky, Stefano Stabellini, James Dingwall,
    stable@vger.kernel.org
Subject: [PATCH] xen/xenbus: fix self-deadlock after killing user process
Date: Tue, 1 Oct 2019 17:03:55 +0200
Message-Id: <20191001150355.25365-1-jgross@suse.com>
X-Mailer: git-send-email 2.16.4

In case a user process using xenbus has open transactions and is killed,
e.g. via ctrl-C, the following cleanup of the allocated resources might
result in a deadlock due to trying to end a transaction in the xenbus
worker thread:

[ 2551.474706] INFO: task xenbus:37 blocked for more than 120 seconds.
[ 2551.492215] Tainted: P OE 5.0.0-29-generic #5
[ 2551.510263] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2551.528585] xenbus          D    0    37      2 0x80000080
[ 2551.528590] Call Trace:
[ 2551.528603]  __schedule+0x2c0/0x870
[ 2551.528606]  ? _cond_resched+0x19/0x40
[ 2551.528632]  schedule+0x2c/0x70
[ 2551.528637]  xs_talkv+0x1ec/0x2b0
[ 2551.528642]  ? wait_woken+0x80/0x80
[ 2551.528645]  xs_single+0x53/0x80
[ 2551.528648]  xenbus_transaction_end+0x3b/0x70
[ 2551.528651]  xenbus_file_free+0x5a/0x160
[ 2551.528654]  xenbus_dev_queue_reply+0xc4/0x220
[ 2551.528657]  xenbus_thread+0x7de/0x880
[ 2551.528660]  ? wait_woken+0x80/0x80
[ 2551.528665]  kthread+0x121/0x140
[ 2551.528667]  ? xb_read+0x1d0/0x1d0
[ 2551.528670]  ? kthread_park+0x90/0x90
[ 2551.528673]  ret_from_fork+0x35/0x40

Fix this by doing the cleanup via a workqueue instead.

Reported-by: James Dingwall
Fixes: fd8aa9095a95c ("xen: optimize xenbus driver for multiple concurrent xenstore accesses")
Cc: <stable@vger.kernel.org> # 4.11
Signed-off-by: Juergen Gross
---
 drivers/xen/xenbus/xenbus_dev_frontend.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_dev_frontend.c b/drivers/xen/xenbus/xenbus_dev_frontend.c
index 08adc590f631..597af455a522 100644
--- a/drivers/xen/xenbus/xenbus_dev_frontend.c
+++ b/drivers/xen/xenbus/xenbus_dev_frontend.c
@@ -55,6 +55,7 @@
 #include <linux/string.h>
 #include <linux/slab.h>
 #include <linux/miscdevice.h>
+#include <linux/workqueue.h>
 
 #include <xen/xenbus.h>
 #include <xen/xen.h>
@@ -116,6 +117,8 @@ struct xenbus_file_priv {
 	wait_queue_head_t read_waitq;
 
 	struct kref kref;
+
+	struct work_struct wq;
 };
 
 /* Read out any raw xenbus messages queued up. */
@@ -300,14 +303,14 @@ static void watch_fired(struct xenbus_watch *watch,
 	mutex_unlock(&adap->dev_data->reply_mutex);
 }
 
-static void xenbus_file_free(struct kref *kref)
+static void xenbus_worker(struct work_struct *wq)
 {
 	struct xenbus_file_priv *u;
 	struct xenbus_transaction_holder *trans, *tmp;
 	struct watch_adapter *watch, *tmp_watch;
 	struct read_buffer *rb, *tmp_rb;
 
-	u = container_of(kref, struct xenbus_file_priv, kref);
+	u = container_of(wq, struct xenbus_file_priv, wq);
 
 	/*
 	 * No need for locking here because there are no other users,
@@ -333,6 +336,18 @@ static void xenbus_file_free(struct kref *kref)
 	kfree(u);
 }
 
+static void xenbus_file_free(struct kref *kref)
+{
+	struct xenbus_file_priv *u;
+
+	/*
+	 * We might be called in xenbus_thread().
+	 * Use workqueue to avoid deadlock.
+	 */
+	u = container_of(kref, struct xenbus_file_priv, kref);
+	schedule_work(&u->wq);
+}
+
 static struct xenbus_transaction_holder *xenbus_get_transaction(
 	struct xenbus_file_priv *u, uint32_t tx_id)
 {
@@ -650,6 +665,7 @@ static int xenbus_file_open(struct inode *inode, struct file *filp)
 	INIT_LIST_HEAD(&u->watches);
 	INIT_LIST_HEAD(&u->read_buffers);
 	init_waitqueue_head(&u->read_waitq);
+	INIT_WORK(&u->wq, xenbus_worker);
 
 	mutex_init(&u->reply_mutex);
 	mutex_init(&u->msgbuffer_mutex);
-- 
2.16.4
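
The patch uses a common kernel pattern: when the final kref_put() may run
in a context that must not block on the resources being torn down (here
the xenbus worker thread itself, which in xenbus_dev_queue_reply() would
end up waiting for a xenstore reply that only it can process), the kref
release callback does not free anything inline but merely schedules the
real cleanup onto a workqueue. Below is a minimal stand-alone sketch of
that pattern; it is illustrative only and not part of the patch, and the
foo_* names are hypothetical.

#include <linux/kref.h>
#include <linux/workqueue.h>
#include <linux/slab.h>

struct foo_priv {
	struct kref kref;
	struct work_struct work;
	/* ... resources whose teardown may block ... */
};

/* Runs in process context on the system workqueue; may sleep safely. */
static void foo_cleanup_worker(struct work_struct *work)
{
	struct foo_priv *p = container_of(work, struct foo_priv, work);

	/* end transactions, unregister watches, etc. -- may block */
	kfree(p);
}

/*
 * Release callback for kref_put(). It may be invoked from a context
 * that must not block on the teardown (e.g. a worker thread that the
 * teardown itself would have to wait for), so it only queues the work.
 */
static void foo_release(struct kref *kref)
{
	struct foo_priv *p = container_of(kref, struct foo_priv, kref);

	schedule_work(&p->work);
}

static struct foo_priv *foo_alloc(void)
{
	struct foo_priv *p = kzalloc(sizeof(*p), GFP_KERNEL);

	if (p) {
		kref_init(&p->kref);
		INIT_WORK(&p->work, foo_cleanup_worker);
	}
	return p;
}

/* Dropping the last reference triggers the deferred cleanup: */
/*	kref_put(&p->kref, foo_release); */

Because schedule_work() only enqueues the item, the thread dropping the
last reference never blocks; the potentially sleeping cleanup (such as
xenbus_transaction_end() in this patch) runs later in workqueue context,
which breaks the wait-on-self cycle shown in the trace above.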