Date: Wed, 30 Sep 2020 09:03:45 +0200
From: Guennadi Liakhovetski
To: Mathieu Poirier
Cc: ohad@wizery.com, bjorn.andersson@linaro.org, loic.pallardy@st.com,
	linux-remoteproc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 05/10] rpmsg: virtio: Move virtio RPMSG structures to private header
Message-ID: <20200930070345.GD20683@ubuntu>
References: <20200922001000.899956-1-mathieu.poirier@linaro.org>
	<20200922001000.899956-6-mathieu.poirier@linaro.org>
In-Reply-To: <20200922001000.899956-6-mathieu.poirier@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Sep 21, 2020 at 06:09:55PM -0600, Mathieu Poirier wrote:
> Move structures virtproc_info and virtio_rpmsg_channel to rpmsg_internal.h
> so that they can be used by rpmsg_ns.c
>
> Signed-off-by: Mathieu Poirier
> ---
>  drivers/rpmsg/rpmsg_internal.h   | 62 ++++++++++++++++++++++++++++++++
>  drivers/rpmsg/virtio_rpmsg_bus.c | 57 -----------------------------
>  2 files changed, 62 insertions(+), 57 deletions(-)
>
> diff --git a/drivers/rpmsg/rpmsg_internal.h b/drivers/rpmsg/rpmsg_internal.h
> index 587f723757d4..3ea9cec26fc0 100644
> --- a/drivers/rpmsg/rpmsg_internal.h
> +++ b/drivers/rpmsg/rpmsg_internal.h
> @@ -12,12 +12,74 @@
>  #ifndef __RPMSG_INTERNAL_H__
>  #define __RPMSG_INTERNAL_H__
>
> +#include
> +#include
>  #include
> +#include
> +#include

I think it would be better not to add any VirtIO dependencies here, even
temporarily.

> +#include
>  #include
>
>  #define to_rpmsg_device(d) container_of(d, struct rpmsg_device, dev)
>  #define to_rpmsg_driver(d) container_of(d, struct rpmsg_driver, drv)
>
> +/**
> + * struct virtproc_info - virtual remote processor state

This struct shouldn't be here; let's first prepare the NS callback by
removing any VirtIO dependencies, and only then move it to the generic
driver.
Thanks
Guennadi

> + * @vdev: the virtio device
> + * @rvq: rx virtqueue
> + * @svq: tx virtqueue
> + * @rbufs: kernel address of rx buffers
> + * @sbufs: kernel address of tx buffers
> + * @num_bufs: total number of buffers for rx and tx
> + * @buf_size: size of one rx or tx buffer
> + * @last_sbuf: index of last tx buffer used
> + * @bufs_dma: dma base addr of the buffers
> + * @tx_lock: protects svq, sbufs and sleepers, to allow concurrent senders.
> + *	sending a message might require waking up a dozing remote
> + *	processor, which involves sleeping, hence the mutex.
> + * @endpoints: idr of local endpoints, allows fast retrieval
> + * @endpoints_lock: lock of the endpoints set
> + * @sendq: wait queue of sending contexts waiting for a tx buffer
> + * @sleepers: number of senders that are waiting for a tx buffer
> + * @ns_ept: the bus's name service endpoint
> + *
> + * This structure stores the rpmsg state of a given virtio remote processor
> + * device (there might be several virtio proc devices for each physical
> + * remote processor).
> + */
> +struct virtproc_info {
> +	struct virtio_device *vdev;
> +	struct virtqueue *rvq, *svq;
> +	void *rbufs, *sbufs;
> +	unsigned int num_bufs;
> +	unsigned int buf_size;
> +	int last_sbuf;
> +	dma_addr_t bufs_dma;
> +	struct mutex tx_lock;
> +	struct idr endpoints;
> +	struct mutex endpoints_lock;
> +	wait_queue_head_t sendq;
> +	atomic_t sleepers;
> +	struct rpmsg_endpoint *ns_ept;
> +};
> +
> +/**
> + * struct virtio_rpmsg_channel - rpmsg channel descriptor
> + * @rpdev: the rpmsg channel device
> + * @vrp: the virtio remote processor device this channel belongs to
> + *
> + * This structure stores the channel that links the rpmsg device to the virtio
> + * remote processor device.
> + */
> +struct virtio_rpmsg_channel {
> +	struct rpmsg_device rpdev;
> +
> +	struct virtproc_info *vrp;
> +};
> +
> +#define to_virtio_rpmsg_channel(_rpdev) \
> +	container_of(_rpdev, struct virtio_rpmsg_channel, rpdev)
> +
>  /**
>   * struct rpmsg_device_ops - indirection table for the rpmsg_device operations
>   * @create_channel: create backend-specific channel, optional
> diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
> index eaf3b2c012c8..0635d86d490f 100644
> --- a/drivers/rpmsg/virtio_rpmsg_bus.c
> +++ b/drivers/rpmsg/virtio_rpmsg_bus.c
> @@ -32,63 +32,6 @@
>
>  #include "rpmsg_internal.h"
>
> -/**
> - * struct virtproc_info - virtual remote processor state
> - * @vdev: the virtio device
> - * @rvq: rx virtqueue
> - * @svq: tx virtqueue
> - * @rbufs: kernel address of rx buffers
> - * @sbufs: kernel address of tx buffers
> - * @num_bufs: total number of buffers for rx and tx
> - * @buf_size: size of one rx or tx buffer
> - * @last_sbuf: index of last tx buffer used
> - * @bufs_dma: dma base addr of the buffers
> - * @tx_lock: protects svq, sbufs and sleepers, to allow concurrent senders.
> - *	sending a message might require waking up a dozing remote
> - *	processor, which involves sleeping, hence the mutex.
> - * @endpoints: idr of local endpoints, allows fast retrieval
> - * @endpoints_lock: lock of the endpoints set
> - * @sendq: wait queue of sending contexts waiting for a tx buffer
> - * @sleepers: number of senders that are waiting for a tx buffer
> - * @ns_ept: the bus's name service endpoint
> - *
> - * This structure stores the rpmsg state of a given virtio remote processor
> - * device (there might be several virtio proc devices for each physical
> - * remote processor).
> - */
> -struct virtproc_info {
> -	struct virtio_device *vdev;
> -	struct virtqueue *rvq, *svq;
> -	void *rbufs, *sbufs;
> -	unsigned int num_bufs;
> -	unsigned int buf_size;
> -	int last_sbuf;
> -	dma_addr_t bufs_dma;
> -	struct mutex tx_lock;
> -	struct idr endpoints;
> -	struct mutex endpoints_lock;
> -	wait_queue_head_t sendq;
> -	atomic_t sleepers;
> -	struct rpmsg_endpoint *ns_ept;
> -};
> -
> -/**
> - * struct virtio_rpmsg_channel - rpmsg channel descriptor
> - * @rpdev: the rpmsg channel device
> - * @vrp: the virtio remote processor device this channel belongs to
> - *
> - * This structure stores the channel that links the rpmsg device to the virtio
> - * remote processor device.
> - */
> -struct virtio_rpmsg_channel {
> -	struct rpmsg_device rpdev;
> -
> -	struct virtproc_info *vrp;
> -};
> -
> -#define to_virtio_rpmsg_channel(_rpdev) \
> -	container_of(_rpdev, struct virtio_rpmsg_channel, rpdev)
> -
>  /*
>   * Local addresses are dynamically allocated on-demand.
>   * We do not dynamically assign addresses from the low 1024 range,
> --
> 2.25.1
>