From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jason Wang,
	"Michael S. Tsirkin", Xuan Zhuo, "David S. Miller", Sasha Levin
Subject: [PATCH 5.10 43/65] virtio-net: fix the race between refill work and close
Date: Mon, 1 Aug 2022 13:47:00 +0200
Message-Id: <20220801114135.522121844@linuxfoundation.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220801114133.641770326@linuxfoundation.org>
References: <20220801114133.641770326@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Jason Wang

[ Upstream commit 5a159128faff151b7fe5f4eb0f310b1e0a2d56bf ]

We try using cancel_delayed_work_sync() to prevent the work from
enabling NAPI. This is insufficient since we don't disable the source
of the refill work scheduling. This means a NAPI poll callback that
runs after cancel_delayed_work_sync() can schedule the refill work,
which can then re-enable NAPI and lead to a use-after-free [1].

Since the work can enable NAPI, we can't simply disable NAPI before
calling cancel_delayed_work_sync(). So fix this by introducing a
dedicated boolean that controls whether or not the work may be
scheduled from NAPI.
[1]
==================================================================
BUG: KASAN: use-after-free in refill_work+0x43/0xd4
Read of size 2 at addr ffff88810562c92e by task kworker/2:1/42

CPU: 2 PID: 42 Comm: kworker/2:1 Not tainted 5.19.0-rc1+ #480
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
Workqueue: events refill_work
Call Trace:
 dump_stack_lvl+0x34/0x44
 print_report.cold+0xbb/0x6ac
 ? _printk+0xad/0xde
 ? refill_work+0x43/0xd4
 kasan_report+0xa8/0x130
 ? refill_work+0x43/0xd4
 refill_work+0x43/0xd4
 process_one_work+0x43d/0x780
 worker_thread+0x2a0/0x6f0
 ? process_one_work+0x780/0x780
 kthread+0x167/0x1a0
 ? kthread_exit+0x50/0x50
 ret_from_fork+0x22/0x30
...

Fixes: b2baed69e605c ("virtio_net: set/cancel work on ndo_open/ndo_stop")
Signed-off-by: Jason Wang
Acked-by: Michael S. Tsirkin
Reviewed-by: Xuan Zhuo
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
---
 drivers/net/virtio_net.c | 37 ++++++++++++++++++++++++++++++++++---
 1 file changed, 34 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 37178b078ee3..0a07c05a610d 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -213,9 +213,15 @@ struct virtnet_info {
 	/* Packet virtio header size */
 	u8 hdr_len;
 
-	/* Work struct for refilling if we run low on memory. */
+	/* Work struct for delayed refilling if we run low on memory. */
 	struct delayed_work refill;
 
+	/* Is delayed refill enabled? */
+	bool refill_enabled;
+
+	/* The lock to synchronize the access to refill_enabled */
+	spinlock_t refill_lock;
+
 	/* Work struct for config space updates */
 	struct work_struct config_work;
 
@@ -319,6 +325,20 @@ static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask)
 	return p;
 }
 
+static void enable_delayed_refill(struct virtnet_info *vi)
+{
+	spin_lock_bh(&vi->refill_lock);
+	vi->refill_enabled = true;
+	spin_unlock_bh(&vi->refill_lock);
+}
+
+static void disable_delayed_refill(struct virtnet_info *vi)
+{
+	spin_lock_bh(&vi->refill_lock);
+	vi->refill_enabled = false;
+	spin_unlock_bh(&vi->refill_lock);
+}
+
 static void virtqueue_napi_schedule(struct napi_struct *napi,
 				    struct virtqueue *vq)
 {
@@ -1403,8 +1423,12 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
 	}
 
 	if (rq->vq->num_free > min((unsigned int)budget,
				   virtqueue_get_vring_size(rq->vq)) / 2) {
-		if (!try_fill_recv(vi, rq, GFP_ATOMIC))
-			schedule_delayed_work(&vi->refill, 0);
+		if (!try_fill_recv(vi, rq, GFP_ATOMIC)) {
+			spin_lock(&vi->refill_lock);
+			if (vi->refill_enabled)
+				schedule_delayed_work(&vi->refill, 0);
+			spin_unlock(&vi->refill_lock);
+		}
 	}
 
 	u64_stats_update_begin(&rq->stats.syncp);
@@ -1523,6 +1547,8 @@ static int virtnet_open(struct net_device *dev)
 	struct virtnet_info *vi = netdev_priv(dev);
 	int i, err;
 
+	enable_delayed_refill(vi);
+
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		if (i < vi->curr_queue_pairs)
 			/* Make sure we have some buffers: if oom use wq. */
@@ -1893,6 +1919,8 @@ static int virtnet_close(struct net_device *dev)
 	struct virtnet_info *vi = netdev_priv(dev);
 	int i;
 
+	/* Make sure NAPI doesn't schedule refill work */
+	disable_delayed_refill(vi);
 	/* Make sure refill_work doesn't re-enable napi! */
 	cancel_delayed_work_sync(&vi->refill);
 
@@ -2390,6 +2418,8 @@ static int virtnet_restore_up(struct virtio_device *vdev)
 
 	virtio_device_ready(vdev);
 
+	enable_delayed_refill(vi);
+
 	if (netif_running(vi->dev)) {
 		err = virtnet_open(vi->dev);
 		if (err)
@@ -3092,6 +3122,7 @@ static int virtnet_probe(struct virtio_device *vdev)
 	vdev->priv = vi;
 
 	INIT_WORK(&vi->config_work, virtnet_config_changed_work);
+	spin_lock_init(&vi->refill_lock);
 
 	/* If we can receive ANY GSO packets, we must allocate large ones. */
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) ||
-- 
2.35.1