Date: Mon, 30 Jan 2023 00:43:31 -0500
From: "Michael S. Tsirkin"
Tsirkin" To: Jason Wang Cc: davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, maxime.coquelin@redhat.com, alvaro.karsz@solid-run.com, eperezma@redhat.com Subject: Re: [PATCH 3/4] virtio_ring: introduce a per virtqueue waitqueue Message-ID: <20230130003334-mutt-send-email-mst@kernel.org> References: <0d9f1b89-9374-747b-3fb0-b4b28ad0ace1@redhat.com> <20221229020553-mutt-send-email-mst@kernel.org> <20221229030633-mutt-send-email-mst@kernel.org> <20230127053112-mutt-send-email-mst@kernel.org> <20230129022809-mutt-send-email-mst@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Mon, Jan 30, 2023 at 10:53:54AM +0800, Jason Wang wrote: > On Sun, Jan 29, 2023 at 3:30 PM Michael S. Tsirkin wrote: > > > > On Sun, Jan 29, 2023 at 01:48:49PM +0800, Jason Wang wrote: > > > On Fri, Jan 27, 2023 at 6:35 PM Michael S. Tsirkin wrote: > > > > > > > > On Fri, Dec 30, 2022 at 11:43:08AM +0800, Jason Wang wrote: > > > > > On Thu, Dec 29, 2022 at 4:10 PM Michael S. Tsirkin wrote: > > > > > > > > > > > > On Thu, Dec 29, 2022 at 04:04:13PM +0800, Jason Wang wrote: > > > > > > > On Thu, Dec 29, 2022 at 3:07 PM Michael S. Tsirkin wrote: > > > > > > > > > > > > > > > > On Wed, Dec 28, 2022 at 07:53:08PM +0800, Jason Wang wrote: > > > > > > > > > On Wed, Dec 28, 2022 at 2:34 PM Jason Wang wrote: > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 在 2022/12/27 17:38, Michael S. Tsirkin 写道: > > > > > > > > > > > On Tue, Dec 27, 2022 at 05:12:58PM +0800, Jason Wang wrote: > > > > > > > > > > >> 在 2022/12/27 15:33, Michael S. Tsirkin 写道: > > > > > > > > > > >>> On Tue, Dec 27, 2022 at 12:30:35PM +0800, Jason Wang wrote: > > > > > > > > > > >>>>> But device is still going and will later use the buffers. > > > > > > > > > > >>>>> > > > > > > > > > > >>>>> Same for timeout really. > > > > > > > > > > >>>> Avoiding infinite wait/poll is one of the goals, another is to sleep. > > > > > > > > > > >>>> If we think the timeout is hard, we can start from the wait. > > > > > > > > > > >>>> > > > > > > > > > > >>>> Thanks > > > > > > > > > > >>> If the goal is to avoid disrupting traffic while CVQ is in use, > > > > > > > > > > >>> that sounds more reasonable. E.g. someone is turning on promisc, > > > > > > > > > > >>> a spike in CPU usage might be unwelcome. > > > > > > > > > > >> > > > > > > > > > > >> Yes, this would be more obvious is UP is used. > > > > > > > > > > >> > > > > > > > > > > >> > > > > > > > > > > >>> things we should be careful to address then: > > > > > > > > > > >>> 1- debugging. Currently it's easy to see a warning if CPU is stuck > > > > > > > > > > >>> in a loop for a while, and we also get a backtrace. > > > > > > > > > > >>> E.g. with this - how do we know who has the RTNL? > > > > > > > > > > >>> We need to integrate with kernel/watchdog.c for good results > > > > > > > > > > >>> and to make sure policy is consistent. > > > > > > > > > > >> > > > > > > > > > > >> That's fine, will consider this. > > > > > > > > > > > > > > > > > > So after some investigation, it seems the watchdog.c doesn't help. The > > > > > > > > > only export helper is touch_softlockup_watchdog() which tries to avoid > > > > > > > > > triggering the lockups warning for the known slow path. 
> > > > > > > >
> > > > > > > > I never said you can just use the existing exported APIs. You'll have to
> > > > > > > > write new ones :)
> > > > > > >
> > > > > > > Ok, I thought you wanted to trigger warnings similar to the watchdog's.
> > > > > > >
> > > > > > > Btw, I wonder what kind of logic you want here. If we switch to using
> > > > > > > sleep, there won't be a soft lockup anymore. A simple wait + timeout +
> > > > > > > warning seems sufficient?
> > > > > > >
> > > > > > > Thanks
> > > > > >
> > > > > > I'd like to avoid the need to teach users new APIs. So the watchdog setup
> > > > > > should apply to this driver too. The warning can be different.
> > > > > Right, so it looks to me the only possible setup is the
> > > > > watchdog_thresh. I plan to trigger the warning every watchdog_thresh * 2
> > > > > seconds (as softlockup does).
> > > > >
> > > > > And I think it would still make sense to fail: we can start with a
> > > > > very long timeout like 1 minute and break the device. Does this make
> > > > > sense?
> > > > >
> > > > > Thanks
> > > > I'd say we need to make this manageable then.
> > > Did you mean something like sysfs or module parameters?
> > No, I'd say pass it with an ioctl.
> > > > Can't we do it normally,
> > > > e.g. react to an interrupt to return to userspace?
> > > I didn't get the meaning of this. Sorry.
> > > Thanks
> > The standard way to handle things that can time out, where userspace
> > did not supply the time, is to block until an interrupt
> > and then return EINTR.
> Well, this seems to be a huge change; ioctl(2) doesn't say it can
> return EINTR now.

The man page on Fedora 37 does not list it, but it says:

	No single standard. Arguments, returns, and semantics of ioctl()
	vary according to the device driver in question (the call is used
	as a catch-all for operations that don't cleanly fit the UNIX
	stream I/O model).

So it depends on the device; e.g. for a STREAMS device,
https://pubs.opengroup.org/onlinepubs/9699919799/functions/ioctl.html
does list EINTR.

> Actually, a driver timeout is used by other drivers when using a
> controlq/adminq (e.g. i40e). Starting from a sane value (e.g. 1 minute,
> to avoid false negatives) seems to be a good first step.

Well, that's because it's specific hardware, so the timeout matches what
that hardware can promise. The virtio spec does not give such guarantees.
One issue is software implementations: at the moment I can set a
breakpoint in qemu or a vhost-user backend and nothing bad happens, it
just continues.

> > Userspace controls the timeout by
> > using e.g. alarm(2).
>
> Not used in iproute2 after a git grep.
>
> Thanks

No need for iproute2 to do it, the user can just do it from the shell
(rough sketch below). Or the user can just press CTRL-C.

> > > > > > > > >
> > > > > > > > >
> > > > > > > > > And before the patch, we end up with a real infinite loop, which could
> > > > > > > > > be caught by the RCU stall detector, which is not the case for the sleep.
> > > > > > > > > What we can do is probably a periodic netdev_err().
> > > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > Only with a bad device.
> > > > > > > >
> > > > > > > > > > >>
> > > > > > > > > > >>
> > > > > > > > > > >>> 2- overhead. In a very common scenario when the device is in a hypervisor,
> > > > > > > > > > >>> programming timers etc. has a very high overhead; at bootup
> > > > > > > > > > >>> lots of CVQ commands are run, and slowing boot down is not nice.
> > > > > > > > > > >>> let's poll for a bit before waiting?
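To make the alarm(2)/EINTR point concrete, here is a rough, untested userspace
sketch. It assumes the driver sleeps interruptibly (wait_event_interruptible()
or similar) so that a signal makes the ioctl fail with EINTR; the device path
and request number are placeholders, not a real interface:

#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static void on_alarm(int sig)
{
	(void)sig;	/* nothing to do: the point is interrupting the syscall */
}

int main(void)
{
	struct sigaction sa = { .sa_handler = on_alarm };	/* note: no SA_RESTART */
	int fd;

	sigaction(SIGALRM, &sa, NULL);

	fd = open("/dev/placeholder-dev", O_RDWR);	/* placeholder device node */
	if (fd < 0)
		return 1;

	alarm(60);	/* user-chosen timeout, in seconds */
	if (ioctl(fd, 0 /* placeholder request */, NULL) < 0 && errno == EINTR)
		fprintf(stderr, "command interrupted / timed out\n");
	alarm(0);

	close(fd);
	return 0;
}

CTRL-C works the same way: SIGINT interrupts the sleep in the kernel and the
command returns EINTR instead of hanging forever.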
> > > > > > > > > > >>
> > > > > > > > > > >> Then we go back to the question of choosing a good timeout for the poll. And
> > > > > > > > > > >> polling seems problematic in the UP case: the scheduler might not get the
> > > > > > > > > > >> chance to run.
> > > > > > > > > > > Poll just a bit :) Seriously, I don't know, but at least check once
> > > > > > > > > > > after the kick.
> > > > > > > > > >
> > > > > > > > > > I think that is what the current code does, where the condition is
> > > > > > > > > > checked before trying to sleep in the wait_event().
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > >>> 3- surprise removal. need to wake up the thread in some way. what about
> > > > > > > > > > >>> other cases of device breakage - is there a chance this
> > > > > > > > > > >>> introduces new bugs around that? at least enumerate them please.
> > > > > > > > > > >>
> > > > > > > > > > >> The current code does:
> > > > > > > > > > >>
> > > > > > > > > > >> 1) check for vq->broken
> > > > > > > > > > >> 2) wake up during BAD_RING()
> > > > > > > > > > >>
> > > > > > > > > > >> So we won't end up with a process that never wakes up, which should be fine.
> > > > > > > > > > >>
> > > > > > > > > > >> Thanks
> > > > > > > > > > > BTW, BAD_RING on removal will trigger dev_err. Not sure that is a good
> > > > > > > > > > > idea - it can cause crashes if the kernel panics on error.
> > > > > > > > > >
> > > > > > > > > > Yes, it's better to use __virtqueue_break() instead.
> > > > > > > > > >
> > > > > > > > > > But considering we will start with a wait first, I will limit the changes
> > > > > > > > > > to virtio-net without bothering the virtio core.
> > > > > > > > > >
> > > > > > > > > > Thanks
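For the record, the wait-instead-of-poll direction being discussed could look
roughly like the sketch below, kept entirely inside virtio_net.c (where struct
virtnet_info lives): check once right after the kick, then sleep, wake on
completion or breakage, and nag periodically instead of soft-locking. The
cvq_wait waitqueue, the helper names and the 30-second interval are all
illustrative, not the actual patch:

#include <linux/jiffies.h>
#include <linux/netdevice.h>
#include <linux/virtio.h>
#include <linux/wait.h>

/* Arbitrary nag interval, standing in for a watchdog_thresh-style period. */
#define VIRTNET_CVQ_WARN_INTERVAL	(30 * HZ)

/* Assumes a wait_queue_head_t cvq_wait was added to struct virtnet_info and
 * is woken from the CVQ callback and when the vq is marked broken. */
static bool virtnet_cvq_done(struct virtnet_info *vi, void **buf)
{
	unsigned int len;

	/* Cheap check first; this also covers "poll once right after the kick". */
	*buf = virtqueue_get_buf(vi->cvq, &len);
	if (*buf)
		return true;

	/* Stop waiting if the device broke or was surprise-removed. */
	return virtqueue_is_broken(vi->cvq);
}

static void *virtnet_cvq_wait(struct virtnet_info *vi)
{
	void *buf = NULL;

	/* Sleep instead of busy-polling; warn periodically rather than lock up. */
	while (!wait_event_timeout(vi->cvq_wait, virtnet_cvq_done(vi, &buf),
				   VIRTNET_CVQ_WARN_INTERVAL))
		netdev_err(vi->dev, "control virtqueue command still pending\n");

	return buf;
}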