Received: by 2002:a05:6a10:af89:0:0:0:0 with SMTP id iu9csp6113182pxb; Thu, 27 Jan 2022 06:49:53 -0800 (PST) X-Google-Smtp-Source: ABdhPJwF0rNUue+pFtKk2sLaUHAVG1kZEKzoDA/ULV2HJfJemdmUVcwfVlxLUYNdaZxs6y19mqsn X-Received: by 2002:a17:902:c409:: with SMTP id k9mr3486584plk.125.1643294993614; Thu, 27 Jan 2022 06:49:53 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1643294993; cv=none; d=google.com; s=arc-20160816; b=VFcYwTUfnsEWH3VAIjBXAhUMXnmouplq/jzLI2KWbKS2wnWqNp7Ny33cLPY+b8VuTy mF/rzv8D/P5C5tnYK0qK1U/c3wD3kjE4tALnxpWZGAhems+2ZOfHFgWW4h4RJiaE2doN N5IO/E44880z9ZcE+YHGbam0c6tK0Ii8BFyijWpbU1YLqN76tul/M6IP17soALfhV0sO jRIyiDdC7jyXzWKefMJMFwjmFuCT221QNKX10CGDRM/RnDA77wWpByLJ+hFf2j/XT3vn 75UCnB8OAUAcAj5EVtbmtItSeyEuMgcNK3UA3K7NVUdA8xiYopv02A3T3ope0MyrHfId Qm7w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:user-agent:in-reply-to:content-disposition :mime-version:references:message-id:subject:cc:to:from:date; bh=8JOJ5Iniu2+nJQhrOMLoOG4HCI4FV2WXTuZZBARU96k=; b=VKfhLchwjowo811hePsahrv4DBqnEeINeWAIMlyxLGK8D7zT1rSHnNGqivkliSWSH6 MNBi5kp+Br3ZkRIllKqaLVZeO0e1RL8pMsGXVL0oZazAolgH4/pHa0XF8il5ydepw41k 78OOWZqo3FyXl9FYyYSug0isTR1uSmC9NNlIeDuEMbVe58mynVTiBBa1vNl9DNg+wGHj CjWJocaA2MLY6nfX5RbI/iHJze/WaeXwyXrRVUtv7BTD9PJkOpbu8NlIIuGaTMIQbJF4 Tp+tPg4OHbFft4YqJfdQHQ3JRqlLBdv8WjHIApdGHiU1IqO/6PlqokJJMLANpPcumcc0 +b4A== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Return-Path: Received: from vger.kernel.org (vger.kernel.org. [23.128.96.18]) by mx.google.com with ESMTP id l10si2306369pfu.149.2022.01.27.06.49.40; Thu, 27 Jan 2022 06:49:53 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) client-ip=23.128.96.18; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238211AbiA0JIK (ORCPT + 99 others); Thu, 27 Jan 2022 04:08:10 -0500 Received: from foss.arm.com ([217.140.110.172]:48898 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234522AbiA0JIJ (ORCPT ); Thu, 27 Jan 2022 04:08:09 -0500 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 91F781FB; Thu, 27 Jan 2022 01:08:07 -0800 (PST) Received: from e120937-lin (unknown [172.31.20.19]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D56AB3F7D8; Thu, 27 Jan 2022 01:08:05 -0800 (PST) Date: Thu, 27 Jan 2022 09:07:59 +0000 From: Cristian Marussi To: Peter Hilber Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, sudeep.holla@arm.com, james.quinlan@broadcom.com, Jonathan.Cameron@Huawei.com, f.fainelli@gmail.com, etienne.carriere@linaro.org, vincent.guittot@linaro.org, souvik.chakravarty@arm.com, igor.skalkin@opensynergy.com, "Michael S. Tsirkin" , virtualization@lists.linux-foundation.org Subject: Re: [PATCH 1/6] firmware: arm_scmi: Add atomic mode support to virtio transport Message-ID: <20220127090759.GA5776@e120937-lin> References: <20220124100341.41191-1-cristian.marussi@arm.com> <20220124100341.41191-2-cristian.marussi@arm.com> <425e9a2b-a03f-a038-2598-33f28cd5f4e9@opensynergy.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <425e9a2b-a03f-a038-2598-33f28cd5f4e9@opensynergy.com> User-Agent: Mutt/1.9.4 (2018-02-28) Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Wed, Jan 26, 2022 at 03:28:52PM +0100, Peter Hilber wrote: > On 24.01.22 11:03, Cristian Marussi wrote: > > Add support for .mark_txdone and .poll_done transport operations to SCMI > > VirtIO transport as pre-requisites to enable atomic operations. > > > > Add a Kernel configuration option to enable SCMI VirtIO transport polling > > and atomic mode for selected SCMI transactions while leaving it default > > disabled. > > > > Hi Cristian, > Hi Peter, > please see one remark below. > > Best regards, > > Peter > > > Cc: "Michael S. Tsirkin" > > Cc: Igor Skalkin > > Cc: Peter Hilber > > Cc: virtualization@lists.linux-foundation.org > > Signed-off-by: Cristian Marussi > > --- [snip] > > +/** > > + * virtio_poll_done - Provide polling support for VirtIO transport > > + * > > + * @cinfo: SCMI channel info > > + * @xfer: Reference to the transfer being poll for. > > + * > > + * VirtIO core provides a polling mechanism based only on last used indexes: > > + * this means that it is possible to poll the virtqueues waiting for something > > + * new to arrive from the host side but the only way to check if the freshly > > + * arrived buffer was what we were waiting for is to compare the newly arrived > > + * message descriptors with the one we are polling on. > > + * > > + * As a consequence it can happen to dequeue something different from the buffer > > + * we were poll-waiting for: if that is the case such early fetched buffers are > > + * then added to a the @pending_cmds_list list for later processing by a > > + * dedicated deferred worker. > > + * > > + * So, basically, once something new is spotted we proceed to de-queue all the > > + * freshly received used buffers until we found the one we were polling on, or, > > + * we have 'seemingly' emptied the virtqueue; if some buffers are still pending > > + * in the vqueue at the end of the polling loop (possible due to inherent races > > + * in virtqueues handling mechanisms), we similarly kick the deferred worker > > + * and let it process those, to avoid indefinitely looping in the .poll_done > > + * helper. > > + * > > + * Note that, since we do NOT have per-message suppress notification mechanism, > > + * the message we are polling for could be delivered via usual IRQs callbacks > > + * on another core which happened to have IRQs enabled: in such case it will be > > + * handled as such by scmi_rx_callback() and the polling loop in the > > + * SCMI Core TX path will be transparently terminated anyway. > > + * > > + * Return: True once polling has successfully completed. > > + */ > > +static bool virtio_poll_done(struct scmi_chan_info *cinfo, > > + struct scmi_xfer *xfer) > > +{ > > + bool pending, ret = false; > > + unsigned int length, any_prefetched = 0; > > + unsigned long flags; > > + struct scmi_vio_msg *next_msg, *msg = xfer->priv; > > + struct scmi_vio_channel *vioch = cinfo->transport_info; > > + > > + if (!msg) > > + return true; > > + > > + spin_lock_irqsave(&vioch->lock, flags); > > If now acquiring vioch->lock here, I see no need to virtqueue_poll() any more. > After checking msg->poll_status, we could just directly try virtqueue_get_buf(). > > On the other hand, always taking the vioch->lock in a busy loop might better be > avoided (I assumed before that taking it was omitted on purpose), since it > might hamper tx channel progress in other cores (but I'm not sure about the > actual impact). > > Also, I don't yet understand why the vioch->lock would need to be taken here. There was a race I could reproduce between the below check against VIO_MSG_POLL_DONE and the poll_idx later update near the end of the poll loop where another thread could have set VIO_MSG_POLL_DONE after this thread had checked it and then this same thread would have cleared it rewriting the new poll_idx; so at first I needlessly enlanrged the spinlocked section (even though I knew was subtopimal given virtqueue_poll does not need serialization) and then forget to properly review this thing. BUT now that, following your suggestion, I introduced a dedicated poll_status that race is gone, so I shrinked back the spinlocked section as before and works fine (even poll_idx itself does not need to be protected really given it can be accessed only here) I'll post the fix in -rc2 together with the core change in the virtio-core I proposed last week to Michael (if not too costly as perfs) Thanks, Cristian