Date: Thu, 27 Oct 2022 19:25:10 +0530
From: Pavan Kondeti
To: Elliot Berman
CC: Bjorn Andersson, Jassi Brar, Murali Nalajala, Trilok Soni,
	Srivatsa Vaddagiri, Carl van Schaik, Prakruthi Deepak Heragu,
	Andy Gross, Dmitry Baryshkov, Mark Rutland, Lorenzo Pieralisi,
	Sudeep Holla, Marc Zyngier, Rob Herring, Krzysztof Kozlowski,
	Jonathan Corbet, Will Deacon, Catalin Marinas, Arnd Bergmann,
	Greg Kroah-Hartman, Srinivas Kandagatla, Amol Maheshwari, Kalle Valo
Subject: Re: [PATCH v6 09/21] mailbox: Add Gunyah message queue mailbox
Message-ID: <20221027135510.GA29032@hu-pkondeti-hyd.qualcomm.com>
References: <20221026185846.3983888-1-quic_eberman@quicinc.com> <20221026185846.3983888-10-quic_eberman@quicinc.com>
In-Reply-To: <20221026185846.3983888-10-quic_eberman@quicinc.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Elliot,

On Wed, Oct 26, 2022 at 11:58:34AM -0700, Elliot Berman wrote:
> Gunyah message queues are a unidirectional inter-VM pipe for messages up
> to 1024 bytes. This driver supports pairing a receiver message queue and
> a transmitter message queue to expose a single mailbox channel.
>
> Signed-off-by: Elliot Berman
> +static irqreturn_t gh_msgq_tx_irq_handler(int irq, void *data)
> +{
> +	struct gh_msgq *msgq = data;
> +
> +	mbox_chan_txdone(gh_msgq_chan(msgq), 0);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static void gh_msgq_txdone_tasklet(unsigned long data)
> +{
> +	struct gh_msgq *msgq = (struct gh_msgq *)data;
> +
> +	mbox_chan_txdone(gh_msgq_chan(msgq), msgq->last_status);
> +}
> +
> +static int gh_msgq_send_data(struct mbox_chan *chan, void *data)
> +{
> +	struct gh_msgq *msgq = mbox_chan_to_msgq(chan);
> +	struct gh_msgq_tx_data *msgq_data = data;
> +	u64 tx_flags = 0;
> +	unsigned long ret;
> +	bool ready;
> +
> +	if (msgq_data->push)
> +		tx_flags |= GH_HYPERCALL_MSGQ_TX_FLAGS_PUSH;
> +
> +	ret = gh_hypercall_msgq_send(msgq->tx_ghrsc->capid, msgq_data->length,
> +				     (uintptr_t)msgq_data->data, tx_flags, &ready);
> +
> +	/**
> +	 * unlikely because Linux tracks state of msgq and should not try to
> +	 * send message when msgq is full.
> +	 */
> +	if (unlikely(ret == GH_ERROR_MSGQUEUE_FULL))
> +		return -EAGAIN;
> +
> +	/**
> +	 * Propagate all other errors to client. If we return error to mailbox
> +	 * framework, then no other messages can be sent and nobody will know
> +	 * to retry this message.
> +	 */
> +	msgq->last_status = gh_remap_error(ret);
> +
> +	/**
> +	 * This message was successfully sent, but message queue isn't ready to
> +	 * receive more messages because it's now full. Mailbox framework
> +	 * requires that we only report that message was transmitted only when
> +	 * we're ready to transmit another message. We'll get that in the form
> +	 * of tx IRQ once the other side starts to drain the msgq.
> +	 */
> +	if (ret == GH_ERROR_OK && !ready)
> +		return 0;
> +
> +	/**
> +	 * We can send more messages. Mailbox framework requires that tx done
> +	 * happens asynchronously to sending the message. Gunyah message queues
> +	 * tell us right away on the hypercall return whether we can send more
> +	 * messages. To work around this, defer the txdone to a tasklet.
> +	 */
> +	tasklet_schedule(&msgq->txdone_tasklet);
> +

Nice comments. irq_work would be a better choice.

> +	return 0;
> +}
> +
> +struct mbox_chan_ops gh_msgq_ops = {
> +	.send_data = gh_msgq_send_data,
> +};
> +
> +/**
> + * gh_msgq_init() - Initialize a Gunyah message queue with an mbox_client
> + * @parent: optional, device parent used for the mailbox controller
> + * @msgq: Pointer to the gh_msgq to initialize
> + * @cl: A mailbox client to bind to the mailbox channel that the message queue creates
> + * @tx_ghrsc: optional, the transmission side of the message queue
> + * @rx_ghrsc: optional, the receiving side of the message queue
> + *
> + * At least one of tx_ghrsc and rx_ghrsc should be not NULL. Most message queue use cases come with
> + * a pair of message queues to facilitiate bidirectional communication. When tx_ghrsc is set,
> + * the client can send messages with mbox_send_message(gh_msgq_chan(msgq), msg). When rx_ghrsc
> + * is set, the mbox_client should register an .rx_callback() and the message queue driver will
> + * push all available messages upon receiving the RX ready interrupt. The messages should be
> + * consumed or copied by the client right away as the gh_msgq_rx_data will be replaced/destroyed
> + * after the callback.
> + *
> + * Returns - 0 on success, negative otherwise
> + */
> +int gh_msgq_init(struct device *parent, struct gh_msgq *msgq, struct mbox_client *cl,
> +		 struct gunyah_resource *tx_ghrsc, struct gunyah_resource *rx_ghrsc)
> +{
> +	int ret;
> +
> +	/* Must have at least a tx_ghrsc or rx_ghrsc and that they are the right device types */
> +	if ((!tx_ghrsc && !rx_ghrsc) ||
> +	    (tx_ghrsc && tx_ghrsc->type != GUNYAH_RESOURCE_TYPE_MSGQ_TX) ||
> +	    (rx_ghrsc && rx_ghrsc->type != GUNYAH_RESOURCE_TYPE_MSGQ_RX))
> +		return -EINVAL;
> +
> +	msgq->tx_ghrsc = tx_ghrsc;
> +	msgq->rx_ghrsc = rx_ghrsc;
> +
> +	msgq->mbox.dev = parent;
> +	msgq->mbox.ops = &gh_msgq_ops;
> +	msgq->mbox.chans = kcalloc(1, sizeof(*msgq->mbox.chans), GFP_KERNEL);

Error handling missing.

Minor nitpick: if you initialize num_chans to 1 before this, then you can use
that as the first argument to kcalloc(), which makes it more readable since
you opted for kcalloc() instead of kzalloc() there.

> +	msgq->mbox.num_chans = 1;
> +	msgq->mbox.txdone_irq = true;
> +
> +	if (gh_msgq_has_tx(msgq)) {
> +		ret = request_irq(msgq->tx_ghrsc->irq, gh_msgq_tx_irq_handler, 0, "gh_msgq_tx",
> +				  msgq);
> +		if (ret)
> +			goto err_chans;
> +	}
> +
> +	if (gh_msgq_has_rx(msgq)) {
> +		ret = request_threaded_irq(msgq->rx_ghrsc->irq, NULL, gh_msgq_rx_irq_handler,
> +					   IRQF_ONESHOT, "gh_msgq_rx", msgq);
> +		if (ret)
> +			goto err_tx_irq;
> +	}
> +
> +	tasklet_init(&msgq->txdone_tasklet, gh_msgq_txdone_tasklet, (unsigned long)msgq);

If you wish to use tasklets, use tasklet_setup().
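The conversion would look roughly like this (untested sketch; the
tasklet_setup() callback takes the tasklet pointer, so the unsigned long
cast is replaced with container_of()). The irq_work variant, assuming a
`struct irq_work txdone_work` member replaces the tasklet, is sketched
alongside:

```c
/* tasklet_setup() style: callback receives the tasklet pointer */
static void gh_msgq_txdone_tasklet(struct tasklet_struct *tasklet)
{
	struct gh_msgq *msgq = container_of(tasklet, struct gh_msgq, txdone_tasklet);

	mbox_chan_txdone(gh_msgq_chan(msgq), msgq->last_status);
}

/* in gh_msgq_init(): */
	tasklet_setup(&msgq->txdone_tasklet, gh_msgq_txdone_tasklet);

/* Alternatively, with irq_work (hypothetical txdone_work member): */
static void gh_msgq_txdone_work(struct irq_work *work)
{
	struct gh_msgq *msgq = container_of(work, struct gh_msgq, txdone_work);

	mbox_chan_txdone(gh_msgq_chan(msgq), msgq->last_status);
}

/* in gh_msgq_init(): */
	init_irq_work(&msgq->txdone_work, gh_msgq_txdone_work);

/* in gh_msgq_send_data(), instead of tasklet_schedule(): */
	irq_work_queue(&msgq->txdone_work);
```

irq_work is safe to queue from the hypercall return path and avoids adding
new tasklet users, which are on their way out.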
> +
> +	ret = mbox_controller_register(&msgq->mbox);
> +	if (ret)
> +		goto err_rx_irq;
> +
> +	ret = mbox_bind_client(gh_msgq_chan(msgq), cl);
> +	if (ret)
> +		goto err_mbox;
> +
> +	return 0;
> +err_mbox:
> +	mbox_controller_unregister(&msgq->mbox);
> +err_rx_irq:
> +	if (gh_msgq_has_rx(msgq))
> +		free_irq(msgq->rx_ghrsc->irq, msgq);
> +err_tx_irq:
> +	if (gh_msgq_has_tx(msgq))
> +		free_irq(msgq->tx_ghrsc->irq, msgq);
> +err_chans:
> +	kfree(msgq->mbox.chans);
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(gh_msgq_init);
> +
> +void gh_msgq_remove(struct gh_msgq *msgq)
> +{
> +	if (gh_msgq_has_rx(msgq))
> +		free_irq(msgq->rx_ghrsc->irq, msgq);
> +
> +	if (gh_msgq_has_tx(msgq))
> +		free_irq(msgq->tx_ghrsc->irq, msgq);
> +
> +	kfree(msgq->mbox.chans);
> +}
> +EXPORT_SYMBOL_GPL(gh_msgq_remove);
> +

Is gh_msgq_remove() supposed to undo everything done in gh_msgq_init()?
e.g. the mbox controller and channel are not unregistered.

Thanks,
Pavan
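P.S. To illustrate, a teardown that mirrors gh_msgq_init() in reverse order
might look like this (completely untested sketch):

```c
void gh_msgq_remove(struct gh_msgq *msgq)
{
	/* undo mbox_bind_client() and mbox_controller_register() */
	mbox_free_channel(gh_msgq_chan(msgq));
	mbox_controller_unregister(&msgq->mbox);

	if (gh_msgq_has_rx(msgq))
		free_irq(msgq->rx_ghrsc->irq, msgq);

	if (gh_msgq_has_tx(msgq))
		free_irq(msgq->tx_ghrsc->irq, msgq);

	/* make sure no txdone tasklet is still pending/running */
	tasklet_kill(&msgq->txdone_tasklet);

	kfree(msgq->mbox.chans);
}
```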