Subject: Re: [RFC] virtio_scsi: to poll and kick the virtqueue in timeout handler
To: Stefan Hajnoczi, Dongli Zhang
Cc: virtualization@lists.linux-foundation.org, linux-scsi@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
    mst@redhat.com, jasowang@redhat.com, pbonzini@redhat.com,
    jejb@linux.ibm.com, martin.petersen@oracle.com, joe.jin@oracle.com,
    junxiao.bi@oracle.com, srinivas.eeda@oracle.com
References: <20210523063843.1177-1-dongli.zhang@oracle.com>
From: Hannes Reinecke
Message-ID: <1184a5ac-bbb4-a89d-b5e2-ee0bf58cd1b8@suse.de>
Date: Tue, 25 May 2021 19:24:21 +0200
X-Mailing-List: linux-kernel@vger.kernel.org

On 5/25/21 6:47 PM, Stefan Hajnoczi wrote:
> On Mon, May 24, 2021 at 11:33:33PM -0700, Dongli Zhang wrote:
>> On 5/24/21 6:24 AM, Stefan Hajnoczi wrote:
>>> On Sun, May 23, 2021 at 09:39:51AM +0200, Hannes Reinecke wrote:
>>>> On 5/23/21 8:38 AM, Dongli Zhang wrote:
>>>>> This RFC is to trigger a discussion about polling and kicking the
>>>>> virtqueue on purpose in the virtio-scsi timeout handler.
>>>>>
>>>>> Virtio-scsi relies on the virtio vring shared between the VM and the host.
>>>>> The VM side produces requests to the vring and kicks the virtqueue,
>>>>> while the host side produces responses to the vring and interrupts
>>>>> the VM side.
>>>>>
>>>>> By default the virtio-scsi timeout handler defers to the host by
>>>>> returning BLK_EH_RESET_TIMER, to give the host a chance to perform EH.
>>>>>
>>>>> However, this is not helpful for the case where the responses are
>>>>> available on the vring but the notification from host to VM is lost.
>>>>>
>>>> How can this happen?
>>>> If responses are lost, the communication between VM and host is broken,
>>>> and we should rather reset the virtio rings themselves.
>>>
>>> I agree. In principle it's fine to poll the virtqueue at any time, but I
>>> don't understand the failure scenario here. It's not clear to me why the
>>> device-to-driver vq notification could be lost.
>>>
>>
>> One example is the CPU hotplug issue before commit bf0beec0607d ("blk-mq:
>> drain I/O when all CPUs in a hctx are offline") was available. The issue
>> is equivalent to the loss of an interrupt. Without the CPU hotplug fix,
>> while the NVMe driver relies on the timeout handler to complete in-flight
>> IO requests, the PV virtio-scsi may hang permanently.
>>
>> In addition, as virtio/vhost/QEMU are complex software, we are not able
>> to guarantee there will be no further lost interrupt/kick issues in the
>> future. It is really painful if we encounter such an issue in a
>> production environment.
>
> Any number of hardware or software bugs might exist that we don't know
> about, yet we don't pre-emptively add workarounds for them, because where
> do you draw the line?
>
> I checked other SCSI/block drivers and found it's rare to poll in the
> timeout function, so there does not seem to be a consensus that it's
> useful to do this.
>
Not only this; it's downright dangerous to attempt that in SCSI.
In SCSI we don't have the fixed lifetime guarantees that NVMe has, so
there will be a race condition between timeout and command completion.
Plus there is no interface in SCSI that allows polling for completions
in a meaningful manner.

> That said, it's technically fine to do it, the virtqueue APIs are there
> and can be used like this. So if you and others think this is necessary,
> then it's a pretty small change and I'm not against merging a patch like
> this.
>
I would rather _not_ put more functionality into the virtio_scsi timeout
handler; this only serves to suggest that the timeout handler has some
functionality in virtio. Which it patently hasn't, as the prime reason
for a timeout handler is to _abort_ a command, which we can't do on
virtio. Well, we can on virtio, but qemu as the main user will re-route
the I/O from virtio into async I/O, and there is no way we can abort
outstanding asynchronous I/O. Or any other ioctl, for that matter.

Cheers,

Hannes
--
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
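[For reference, a minimal sketch of the kind of timeout-handler polling being
discussed above. This is not the RFC patch itself: struct my_virtqueue,
my_vq_for_cmd() and my_complete_req() are hypothetical placeholders for
driver-private plumbing; only virtqueue_kick() and virtqueue_get_buf() are
the existing virtio core API. Note that completing commands from here races
with the midlayer's own timeout handling, which is exactly the hazard Hannes
raises, so treat this purely as an illustration of the mechanics.]

#include <linux/blk-mq.h>
#include <linux/spinlock.h>
#include <linux/virtio.h>
#include <scsi/scsi_cmnd.h>

struct my_virtqueue {
	spinlock_t lock;
	struct virtqueue *vq;
};

/* Hypothetical: map a timed-out command to its request virtqueue. */
struct my_virtqueue *my_vq_for_cmd(struct scsi_cmnd *sc);
/* Hypothetical: complete one response pulled off the vring. */
void my_complete_req(void *resp);

static enum blk_eh_timer_return my_eh_timed_out(struct scsi_cmnd *sc)
{
	struct my_virtqueue *mvq = my_vq_for_cmd(sc);
	unsigned long flags;
	unsigned int len;
	void *resp;

	/* Re-kick in case a driver-to-device notification was lost. */
	virtqueue_kick(mvq->vq);

	/*
	 * Poll for responses the device may have posted without the
	 * device-to-driver notification ever arriving.
	 */
	spin_lock_irqsave(&mvq->lock, flags);
	while ((resp = virtqueue_get_buf(mvq->vq, &len)) != NULL)
		my_complete_req(resp);
	spin_unlock_irqrestore(&mvq->lock, flags);

	/* Otherwise fall back to the midlayer's normal timeout/EH escalation. */
	return BLK_EH_RESET_TIMER;
}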