From: Arseniy Krasnov
To: Stefano Garzarella
Cc: Bobby Eshleman, "David S. Miller", Paolo Abeni, Eric Dumazet, Jakub Kicinski
Subject: Re: [PATCH net] vsock/loopback: use only sk_buff_head.lock to protect the packet queue
Date: Fri, 24 Mar 2023 15:54:33 +0300
Message-ID: <94b58c20-9111-8ada-79fd-eced6a1ba2cc@sberdevices.ru>
References: <20230324115450.11268-1-sgarzare@redhat.com>
In-Reply-To: <20230324115450.11268-1-sgarzare@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 24.03.2023 14:54, Stefano Garzarella wrote:
> pkt_list_lock was used before commit 71dc9ec9ac7d ("virtio/vsock:
> replace virtio_vsock_pkt with sk_buff") to protect the packet queue.
> After that commit we switched to sk_buff and we are using
> sk_buff_head.lock in almost every place to protect the packet queue,
> except in vsock_loopback_work() when we call skb_queue_splice_init().
>
> As reported by syzbot, this caused unlocked concurrent access to the
> packet queue between vsock_loopback_work() and
> vsock_loopback_cancel_pkt(), since the latter is not holding
> pkt_list_lock.
>
> With the introduction of sk_buff_head, pkt_list_lock is redundant and
> can cause confusion, so let's remove it and use sk_buff_head.lock
> everywhere to protect the packet queue access.
>
> Fixes: 71dc9ec9ac7d ("virtio/vsock: replace virtio_vsock_pkt with sk_buff")
> Cc: bobby.eshleman@bytedance.com
> Reported-and-tested-by: syzbot+befff0a9536049e7902e@syzkaller.appspotmail.com
> Signed-off-by: Stefano Garzarella
> ---
>  net/vmw_vsock/vsock_loopback.c | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)

Reviewed-by: Arseniy Krasnov

>
> diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
> index 671e03240fc5..89905c092645 100644
> --- a/net/vmw_vsock/vsock_loopback.c
> +++ b/net/vmw_vsock/vsock_loopback.c
> @@ -15,7 +15,6 @@
>  struct vsock_loopback {
>  	struct workqueue_struct *workqueue;
>
> -	spinlock_t pkt_list_lock; /* protects pkt_list */
>  	struct sk_buff_head pkt_queue;
>  	struct work_struct pkt_work;
>  };
> @@ -32,9 +31,7 @@ static int vsock_loopback_send_pkt(struct sk_buff *skb)
>  	struct vsock_loopback *vsock = &the_vsock_loopback;
>  	int len = skb->len;
>
> -	spin_lock_bh(&vsock->pkt_list_lock);
>  	skb_queue_tail(&vsock->pkt_queue, skb);
> -	spin_unlock_bh(&vsock->pkt_list_lock);
>
>  	queue_work(vsock->workqueue, &vsock->pkt_work);
>
> @@ -113,9 +110,9 @@ static void vsock_loopback_work(struct work_struct *work)
>
>  	skb_queue_head_init(&pkts);
>
> -	spin_lock_bh(&vsock->pkt_list_lock);
> +	spin_lock_bh(&vsock->pkt_queue.lock);
>  	skb_queue_splice_init(&vsock->pkt_queue, &pkts);
> -	spin_unlock_bh(&vsock->pkt_list_lock);
> +	spin_unlock_bh(&vsock->pkt_queue.lock);
>
>  	while ((skb = __skb_dequeue(&pkts))) {
>  		virtio_transport_deliver_tap_pkt(skb);
> @@ -132,7 +129,6 @@ static int __init vsock_loopback_init(void)
>  	if (!vsock->workqueue)
>  		return -ENOMEM;
>
> -	spin_lock_init(&vsock->pkt_list_lock);
>  	skb_queue_head_init(&vsock->pkt_queue);
>  	INIT_WORK(&vsock->pkt_work, vsock_loopback_work);
>
> @@ -156,9 +152,7 @@ static void __exit vsock_loopback_exit(void)
>
>  	flush_work(&vsock->pkt_work);
>
> -	spin_lock_bh(&vsock->pkt_list_lock);
>  	virtio_vsock_skb_queue_purge(&vsock->pkt_queue);
> -	spin_unlock_bh(&vsock->pkt_list_lock);
>
>  	destroy_workqueue(vsock->workqueue);
>  }