Date: Mon, 19 Jul 2021 12:36:57 +0100
From: Cristian Marussi
To: Peter Hilber
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	virtualization@lists.linux-foundation.org, virtio-dev@lists.oasis-open.org,
	sudeep.holla@arm.com, james.quinlan@broadcom.com,
	Jonathan.Cameron@Huawei.com, f.fainelli@gmail.com,
	etienne.carriere@linaro.org, vincent.guittot@linaro.org,
	souvik.chakravarty@arm.com, igor.skalkin@opensynergy.com,
	alex.bennee@linaro.org, jean-philippe@linaro.org,
	mikhail.golubev@opensynergy.com, anton.yakovlev@opensynergy.com,
	Vasyl.Vavrychuk@opensynergy.com, Andriy.Tryshnivskyy@opensynergy.com
Subject: Re: [PATCH v6 00/17] Introduce SCMI transport based on VirtIO
Message-ID: <20210719113657.GI49078@e120937-lin>
References: <20210712141833.6628-1-cristian.marussi@arm.com>
 <314d75d6-ae5c-6aaf-b796-a424c195aee4@opensynergy.com>
In-Reply-To: <314d75d6-ae5c-6aaf-b796-a424c195aee4@opensynergy.com>

On Thu, Jul 15, 2021 at 06:35:38PM +0200, Peter Hilber wrote:
> On 12.07.21 16:18, Cristian Marussi wrote:
> > Hi all,
> >
> 
> Hi Cristian,
> 
> thanks for your update. Please find some additional comments in this
> reply and the following.
> 
> Best regards,
> 
> Peter

Hi Peter,

thanks for the feedback.

> > While reworking this series starting from the work done up to V3 by
> > OpenSynergy, I am keeping the original authorship and list distribution
> > unchanged.
> >
> > The main aim of this rework, as said, is to simplify where possible the
> > SCMI VirtIO support added in V3 by adding at first some new general
> > mechanisms in the SCMI transport layer.
> >
> > Indeed, after some initial small fixes, patches 05/06/07/08 add such
> > new mechanisms to the SCMI core to ease the implementation of more
> > complex transports like virtio, while also addressing a few general
> > issues already potentially affecting existing transports.
> >
> > In terms of rework I dropped the original V3 patches 05/06/07/08/12 as
> > no longer needed, and modified where needed the remaining original
> > patches to take advantage of the above mentioned new SCMI transport
> > features.
> >
> > The DT bindings patch has been ported on top of the freshly
> > YAML-converted arm,scmi bindings.
> >
> > Moreover, since V5 I dropped support for polling mode from the
> > virtio-scmi transport, since it is an optional general mechanism
> > provided by the core to allow transports lacking a completion IRQ to
> > work, and it seemed a needless addition/complication in the context of
> > the virtio transport.
> >
> 
> Just for correctness, in my understanding polling is not completely
> optional ATM. Polling would be required by scmi_cpufreq_fast_switch().
> But that requirement might be irrelevant for now.
> 

The cpufreq core can use the .fast_switch (scmi_cpufreq_fast_switch) op
only if policy->fast_switch_enabled is true, which in turn is reported as
true by the SCMI cpufreq driver iff SCMI FastChannels are supported by the
Perf implementation server side, but the SCMI device VirtIO spec (5.17)
explicitly does NOT support SCMI FastChannels as of now.

Anyway, even if we were to support SCMI FastChannels on the VirtIO SCMI
transport in the future, FastChannels are by definition
per-protocol/per-command/per-domain-id specific, based on shared memory or
MMIO, unidirectional, and do not even allow for a response from the
platform (SCMI v3.0 4.1.1, 5.3), so polling won't be a thing there anyway,
unless I'm missing something.

BUT you made a good point anyway, because the generic perf->freq_set/get
API CAN indeed be invoked in polling mode, and, even though we do not use
them in polling as of now (if not in the FastChannel scenario above), this
could be a potential problem in general if, when the underlying transport
does not support polling, the core just drops any poll_completion=true
messages.
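The kind of core behaviour I'm referring to is roughly the following check
early in the do_xfer() path (a rough sketch from memory, NOT a verbatim
copy of the code in this series, so take names and exact placement as
approximate):

	/*
	 * Sketch of the guard in the SCMI core transport layer: a polled
	 * xfer is simply refused when the underlying transport does not
	 * provide a .poll_done callback.
	 */
	if (xfer->hdr.poll_completion && !info->desc->ops->poll_done) {
		dev_warn_once(dev,
			      "Polling mode is not supported by transport.\n");
		return -EINVAL;
	}

where 'info' is the usual struct scmi_info for the platform instance and
'desc' its transport descriptor.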
So, while I still think it is not sensible to enable poll mode in SCMI
VirtIO, because it would be a sort of faked polling and would increase
complexity, I'm now considering that maybe the right behaviour of the SCMI
core in such a scenario would be to warn the user, as it does now, AND then
fall back to non-polling; probably better if such a behaviour is made
conditional on some transport config desc flag that allows such a fallback.

Any thoughts?

> > Additionally, in V5 I could also simplify a bit the virtio transport
> > probing sequence starting from the observation that, by the VirtIO
> > spec, in fact, only one single SCMI VirtIO device can possibly exist
> > on a system.
> >
> 
> I wouldn't say that the virtio spec restricts the # of virtio-scmi
> devices to one. But I do think the one device limitation in the kernel
> is acceptable.
> 

Indeed it is not that the VirtIO spec explicitly forbids multiple devices,
but the fact that SCMI devices are not identifiable from the VirtIO layer
(if not by vendor maybe) nor from the DT config (as of now) means that only
one single device can possibly be used; if you define multiple MMIO devices
(or multiple PCI devices are discovered) there is no way to tell which is
which and, say, assign them to distinct channels for different protocols.

Having said that, this upcoming series by Viresh may be useful in the
future:

https://lore.kernel.org/lkml/aa4bf68fdd13b885a6dc1b98f88834916d51d97d.1626173013.git.viresh.kumar@linaro.org/

since it would add the capability to 'link' an SCMI device user (an SCMI
virtio transport instance or a specific channel) to a specific virtio-mmio
node. Once this is in, I could try to add multi SCMI VirtIO device support,
but probably not within this series.

Thanks,

Cristian