Date: Thu, 26 Sep 2019 09:47:49 +0200
From: Stefano Garzarella
To: Dexuan Cui
Cc: "davem@davemloft.net", KY Srinivasan, Haiyang Zhang, Stephen Hemminger,
    "sashal@kernel.org", "stefanha@redhat.com", "gregkh@linuxfoundation.org",
    "arnd@arndb.de", "deepa.kernel@gmail.com", "ytht.net@gmail.com",
    "tglx@linutronix.de", "netdev@vger.kernel.org",
    "linux-kernel@vger.kernel.org", "linux-hyperv@vger.kernel.org",
    "kvm@vger.kernel.org", "virtualization@lists.linux-foundation.org",
    Michael Kelley, "jhansen@vmware.com"
Subject: Re: [PATCH net v2] vsock: Fix a lockdep warning in __vsock_release()
Message-ID: <20190926074749.sltehhkcgfduu7n2@steredhat.homenet.telecomitalia.it>
References: <1569460241-57800-1-git-send-email-decui@microsoft.com>
In-Reply-To: <1569460241-57800-1-git-send-email-decui@microsoft.com>

Hi Dexuan,

On Thu, Sep 26, 2019 at 01:11:27AM +0000, Dexuan Cui wrote:
> Lockdep is unhappy if two locks from the same class are held.
>
> Fix the below warning for hyperv and virtio sockets (vmci socket code
> doesn't have the issue) by using lock_sock_nested() when __vsock_release()
> is called recursively:
>
> ============================================
> WARNING: possible recursive locking detected
> 5.3.0+ #1 Not tainted
> --------------------------------------------
> server/1795 is trying to acquire lock:
> ffff8880c5158990 (sk_lock-AF_VSOCK){+.+.}, at: hvs_release+0x10/0x120 [hv_sock]
>
> but task is already holding lock:
> ffff8880c5158150 (sk_lock-AF_VSOCK){+.+.}, at: __vsock_release+0x2e/0xf0 [vsock]
>
> other info that might help us debug this:
>  Possible unsafe locking scenario:
>
>        CPU0
>        ----
>   lock(sk_lock-AF_VSOCK);
>   lock(sk_lock-AF_VSOCK);
>
>  *** DEADLOCK ***
>
>  May be due to missing lock nesting notation
>
> 2 locks held by server/1795:
>  #0: ffff8880c5d05ff8 (&sb->s_type->i_mutex_key#10){+.+.}, at: __sock_release+0x2d/0xa0
>  #1: ffff8880c5158150 (sk_lock-AF_VSOCK){+.+.}, at: __vsock_release+0x2e/0xf0 [vsock]
>
> stack backtrace:
> CPU: 5 PID: 1795 Comm: server Not tainted 5.3.0+ #1
> Call Trace:
>  dump_stack+0x67/0x90
>  __lock_acquire.cold.67+0xd2/0x20b
>  lock_acquire+0xb5/0x1c0
>  lock_sock_nested+0x6d/0x90
>  hvs_release+0x10/0x120 [hv_sock]
>  __vsock_release+0x24/0xf0 [vsock]
>  __vsock_release+0xa0/0xf0 [vsock]
>  vsock_release+0x12/0x30 [vsock]
>  __sock_release+0x37/0xa0
>  sock_close+0x14/0x20
>  __fput+0xc1/0x250
>  task_work_run+0x98/0xc0
>  do_exit+0x344/0xc60
>  do_group_exit+0x47/0xb0
>  get_signal+0x15c/0xc50
>  do_signal+0x30/0x720
>  exit_to_usermode_loop+0x50/0xa0
>  do_syscall_64+0x24e/0x270
>  entry_SYSCALL_64_after_hwframe+0x49/0xbe
> RIP: 0033:0x7f4184e85f31
>
> Signed-off-by: Dexuan Cui
> ---
>
> NOTE: I only tested the code on Hyper-V. I can not test the code for
> virtio socket, as I don't have a KVM host. :-( Sorry.
>
> @Stefan, @Stefano: please review & test the patch for virtio socket,
> and let me know if the patch breaks anything. Thanks!
Comment below, I'll test it ASAP!

>
> Changes in v2:
>   Avoid the duplication of code in v1: https://lkml.org/lkml/2019/8/19/1361
>   Also fix virtio socket code.
>
>  net/vmw_vsock/af_vsock.c                | 19 +++++++++++++++----
>  net/vmw_vsock/hyperv_transport.c        |  2 +-
>  net/vmw_vsock/virtio_transport_common.c |  2 +-
>  3 files changed, 17 insertions(+), 6 deletions(-)
>
> diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
> index ab47bf3ab66e..dbae4373cbab 100644
> --- a/net/vmw_vsock/af_vsock.c
> +++ b/net/vmw_vsock/af_vsock.c
> @@ -638,8 +638,10 @@ struct sock *__vsock_create(struct net *net,
>  }
>  EXPORT_SYMBOL_GPL(__vsock_create);
>  
> -static void __vsock_release(struct sock *sk)
> +static void __vsock_release(struct sock *sk, int level)
>  {
> +	WARN_ON(level != 1 && level != 2);
> +
>  	if (sk) {
>  		struct sk_buff *skb;
>  		struct sock *pending;
> @@ -648,9 +650,18 @@ static void __vsock_release(struct sock *sk)
>  		vsk = vsock_sk(sk);
>  		pending = NULL;	/* Compiler warning. */
>  
> +		/* The release call is supposed to use lock_sock_nested()
> +		 * rather than lock_sock(), if a sock lock should be acquired.
> +		 */
>  		transport->release(vsk);
>  
> -		lock_sock(sk);
> +		/* When "level" is 2, use the nested version to avoid the
> +		 * warning "possible recursive locking detected".
> +		 */
> +		if (level == 1)
> +			lock_sock(sk);

Since lock_sock() calls lock_sock_nested(sk, 0), could we use directly
lock_sock_nested(sk, level) with level = 0 in vsock_release() and
level = SINGLE_DEPTH_NESTING here in the while loop?

> +		else
> +			lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
>  		sock_orphan(sk);
>  		sk->sk_shutdown = SHUTDOWN_MASK;
>  
> @@ -659,7 +670,7 @@ static void __vsock_release(struct sock *sk)
>  
>  		/* Clean up any sockets that never were accepted.
>  		 */
>  		while ((pending = vsock_dequeue_accept(sk)) != NULL) {
> -			__vsock_release(pending);
> +			__vsock_release(pending, 2);
>  			sock_put(pending);
>  		}
>  
> @@ -708,7 +719,7 @@ EXPORT_SYMBOL_GPL(vsock_stream_has_space);
>  
>  static int vsock_release(struct socket *sock)
>  {
> -	__vsock_release(sock->sk);
> +	__vsock_release(sock->sk, 1);
>  	sock->sk = NULL;
>  	sock->state = SS_FREE;
>  
> diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
> index 261521d286d6..c443db7af8d4 100644
> --- a/net/vmw_vsock/hyperv_transport.c
> +++ b/net/vmw_vsock/hyperv_transport.c
> @@ -559,7 +559,7 @@ static void hvs_release(struct vsock_sock *vsk)
>  	struct sock *sk = sk_vsock(vsk);
>  	bool remove_sock;
>  
> -	lock_sock(sk);
> +	lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
>  	remove_sock = hvs_close_lock_held(vsk);
>  	release_sock(sk);
>  	if (remove_sock)
> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
> index 5bb70c692b1e..a666ef8fc54e 100644
> --- a/net/vmw_vsock/virtio_transport_common.c
> +++ b/net/vmw_vsock/virtio_transport_common.c
> @@ -820,7 +820,7 @@ void virtio_transport_release(struct vsock_sock *vsk)
> 	struct sock *sk = &vsk->sk;
> 	bool remove_sock = true;
>  
> -	lock_sock(sk);
> +	lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
>  	if (sk->sk_type == SOCK_STREAM)
>  		remove_sock = virtio_transport_close(vsk);
> 

Thanks,
Stefano