From: Markus Schneider-Pargmann
To: Marc Kleine-Budde, Chandrasekar Ramakrishnan, Wolfgang Grandegger
Cc: Vincent MAILHOL, linux-can@vger.kernel.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, Markus Schneider-Pargmann
Subject: [PATCH 14/18] can: m_can: Use the workqueue as queue
Date: Wed, 21 Dec 2022 16:25:33 +0100
Message-Id: <20221221152537.751564-15-msp@baylibre.com>
In-Reply-To: <20221221152537.751564-1-msp@baylibre.com>
References: <20221221152537.751564-1-msp@baylibre.com>

The current implementation uses the workqueue for peripheral chips to
submit work. Only a single work item is queued and used at any time.

To be able to keep more than one transmit in flight at a time, prepare
the workqueue to support multiple transmits at the same time.

Each work item now has separate storage for an skb and a pointer to
cdev. This ensures that each work item can be processed individually.

The workqueue is replaced by an ordered workqueue, which makes sure
that only a single worker processes the items queued on the workqueue.
Items are also processed in the order in which they were enqueued. This
removes most of the concurrency the workqueue normally offers, but that
concurrency is not necessary for this driver.

The cleanup functions have to be adapted a bit to handle this new
mechanism.

Signed-off-by: Markus Schneider-Pargmann
---
 drivers/net/can/m_can/m_can.c | 109 ++++++++++++++++++++--------------
 drivers/net/can/m_can/m_can.h |  12 +++-
 2 files changed, 74 insertions(+), 47 deletions(-)
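[Note for reviewers, not part of the commit: below is a minimal,
self-contained user-space sketch of the queueing scheme described
above, under the assumption that a plain array and a drain loop may
stand in for cdev->tx_ops and the single worker of the ordered
workqueue. All names and sizes here are illustrative, not driver code.]

#include <stdio.h>

#define NR_TX_OPS 4 /* the driver sizes this as min(TXB, TXE elements) */

struct tx_op {
	const char *skb; /* stands in for struct sk_buff * */
};

static struct tx_op tx_ops[NR_TX_OPS];
static int next_tx_op;

/* Mirrors m_can_tx_queue_skb(): claim the next slot round-robin and
 * wrap the index; queue_work() on the slot's work item would go here.
 */
static void tx_queue_skb(const char *skb)
{
	tx_ops[next_tx_op].skb = skb;
	++next_tx_op;
	if (next_tx_op >= NR_TX_OPS)
		next_tx_op = 0;
}

int main(void)
{
	tx_queue_skb("skb0");
	tx_queue_skb("skb1");
	tx_queue_skb("skb2");

	/* An ordered workqueue has exactly one worker, so queued items
	 * run one at a time in enqueue order; this loop models that
	 * single consumer.
	 */
	for (int i = 0; i != NR_TX_OPS; ++i) {
		if (!tx_ops[i].skb)
			continue;
		printf("m_can_tx_handler(%s)\n", tx_ops[i].skb);
		tx_ops[i].skb = NULL; /* mirrors op->skb = NULL in the worker */
	}
	return 0;
}

In the driver itself the producer cannot overrun the ring because
netif_stop_queue() is called for every queued skb and the queue is only
woken again from the TX completion path.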
diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
index 5b882c2fec52..42cde31fc0a8 100644
--- a/drivers/net/can/m_can/m_can.c
+++ b/drivers/net/can/m_can/m_can.c
@@ -442,17 +442,16 @@ static void m_can_clean(struct net_device *net)
 {
 	struct m_can_classdev *cdev = netdev_priv(net);
 
-	if (cdev->tx_skb) {
-		int putidx = 0;
+	for (int i = 0; i != cdev->nr_tx_ops; ++i) {
+		if (!cdev->tx_ops[i].skb)
+			continue;
 
 		net->stats.tx_errors++;
-		if (cdev->version > 30)
-			putidx = FIELD_GET(TXFQS_TFQPI_MASK,
-					   m_can_read(cdev, M_CAN_TXFQS));
-
-		can_free_echo_skb(cdev->net, putidx, NULL);
-		cdev->tx_skb = NULL;
+		cdev->tx_ops[i].skb = NULL;
 	}
+
+	for (int i = 0; i != cdev->can.echo_skb_max; ++i)
+		can_free_echo_skb(cdev->net, i, NULL);
 }
 
 /* For peripherals, pass skb to rx-offload, which will push skb from
@@ -1632,8 +1631,9 @@ static int m_can_close(struct net_device *dev)
 	m_can_clk_stop(cdev);
 	free_irq(dev->irq, dev);
 
+	m_can_clean(dev);
+
 	if (cdev->is_peripheral) {
-		cdev->tx_skb = NULL;
 		destroy_workqueue(cdev->tx_wq);
 		cdev->tx_wq = NULL;
 		can_rx_offload_disable(&cdev->offload);
@@ -1660,19 +1660,17 @@ static int m_can_next_echo_skb_occupied(struct net_device *dev, int putidx)
 	return !!cdev->can.echo_skb[next_idx];
 }
 
-static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
+static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev,
+				    struct sk_buff *skb)
 {
-	struct canfd_frame *cf = (struct canfd_frame *)cdev->tx_skb->data;
+	struct canfd_frame *cf = (struct canfd_frame *)skb->data;
 	struct net_device *dev = cdev->net;
-	struct sk_buff *skb = cdev->tx_skb;
 	struct id_and_dlc fifo_header;
 	u32 cccr, fdflags;
 	u32 txfqs;
 	int err;
 	int putidx;
 
-	cdev->tx_skb = NULL;
-
 	/* Generate ID field for TX buffer Element */
 	/* Common to all supported M_CAN versions */
 	if (cf->can_id & CAN_EFF_FLAG) {
@@ -1796,10 +1794,36 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev)
 
 static void m_can_tx_work_queue(struct work_struct *ws)
 {
-	struct m_can_classdev *cdev = container_of(ws, struct m_can_classdev,
-						   tx_work);
+	struct m_can_tx_op *op = container_of(ws, struct m_can_tx_op, work);
+	struct m_can_classdev *cdev = op->cdev;
+	struct sk_buff *skb = op->skb;
 
-	m_can_tx_handler(cdev);
+	op->skb = NULL;
+	m_can_tx_handler(cdev, skb);
+}
+
+static void m_can_tx_queue_skb(struct m_can_classdev *cdev, struct sk_buff *skb)
+{
+	cdev->tx_ops[cdev->next_tx_op].skb = skb;
+	queue_work(cdev->tx_wq, &cdev->tx_ops[cdev->next_tx_op].work);
+
+	++cdev->next_tx_op;
+	if (cdev->next_tx_op >= cdev->nr_tx_ops)
+		cdev->next_tx_op = 0;
+}
+
+static netdev_tx_t m_can_start_peripheral_xmit(struct m_can_classdev *cdev,
+					       struct sk_buff *skb)
+{
+	if (cdev->can.state == CAN_STATE_BUS_OFF) {
+		m_can_clean(cdev->net);
+		return NETDEV_TX_OK;
+	}
+
+	netif_stop_queue(cdev->net);
+	m_can_tx_queue_skb(cdev, skb);
+
+	return NETDEV_TX_OK;
 }
 
 static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
@@ -1810,30 +1834,10 @@ static netdev_tx_t m_can_start_xmit(struct sk_buff *skb,
 	if (can_dev_dropped_skb(dev, skb))
 		return NETDEV_TX_OK;
 
-	if (cdev->is_peripheral) {
-		if (cdev->tx_skb) {
-			netdev_err(dev, "hard_xmit called while tx busy\n");
-			return NETDEV_TX_BUSY;
-		}
-
-		if (cdev->can.state == CAN_STATE_BUS_OFF) {
-			m_can_clean(dev);
-		} else {
-			/* Need to stop the queue to avoid numerous requests
-			 * from being sent. Suggested improvement is to create
-			 * a queueing mechanism that will queue the skbs and
-			 * process them in order.
-			 */
-			cdev->tx_skb = skb;
-			netif_stop_queue(cdev->net);
-			queue_work(cdev->tx_wq, &cdev->tx_work);
-		}
-	} else {
-		cdev->tx_skb = skb;
-		return m_can_tx_handler(cdev);
-	}
-
-	return NETDEV_TX_OK;
+	if (cdev->is_peripheral)
+		return m_can_start_peripheral_xmit(cdev, skb);
+	else
+		return m_can_tx_handler(cdev, skb);
 }
 
 static int m_can_open(struct net_device *dev)
@@ -1861,15 +1865,17 @@ static int m_can_open(struct net_device *dev)
 
 	/* register interrupt handler */
 	if (cdev->is_peripheral) {
-		cdev->tx_skb = NULL;
-		cdev->tx_wq = alloc_workqueue("mcan_wq",
-					      WQ_FREEZABLE | WQ_MEM_RECLAIM, 0);
+		cdev->tx_wq = alloc_ordered_workqueue("mcan_wq",
+						      WQ_FREEZABLE | WQ_MEM_RECLAIM);
 		if (!cdev->tx_wq) {
 			err = -ENOMEM;
 			goto out_wq_fail;
 		}
 
-		INIT_WORK(&cdev->tx_work, m_can_tx_work_queue);
+		for (int i = 0; i != cdev->nr_tx_ops; ++i) {
+			cdev->tx_ops[i].cdev = cdev;
+			INIT_WORK(&cdev->tx_ops[i].work, m_can_tx_work_queue);
+		}
 
 		err = request_threaded_irq(dev->irq, NULL, m_can_isr,
 					   IRQF_ONESHOT,
@@ -2153,6 +2159,19 @@ int m_can_class_register(struct m_can_classdev *cdev)
 {
 	int ret;
 
+	if (cdev->is_peripheral) {
+		cdev->nr_tx_ops = min(cdev->mcfg[MRAM_TXB].num,
+				      cdev->mcfg[MRAM_TXE].num);
+		cdev->tx_ops =
+			devm_kzalloc(cdev->dev,
+				     cdev->nr_tx_ops * sizeof(*cdev->tx_ops),
+				     GFP_KERNEL);
+		if (!cdev->tx_ops) {
+			dev_err(cdev->dev, "Failed to allocate tx_ops for workqueue\n");
+			return -ENOMEM;
+		}
+	}
+
 	if (cdev->pm_clock_support) {
 		ret = m_can_clk_start(cdev);
 		if (ret)
diff --git a/drivers/net/can/m_can/m_can.h b/drivers/net/can/m_can/m_can.h
index 185289a3719c..bf2d710c982f 100644
--- a/drivers/net/can/m_can/m_can.h
+++ b/drivers/net/can/m_can/m_can.h
@@ -70,6 +70,12 @@ struct m_can_ops {
 	int (*init)(struct m_can_classdev *cdev);
 };
 
+struct m_can_tx_op {
+	struct m_can_classdev *cdev;
+	struct work_struct work;
+	struct sk_buff *skb;
+};
+
 struct m_can_classdev {
 	struct can_priv can;
 	struct can_rx_offload offload;
@@ -80,8 +86,6 @@ struct m_can_classdev {
 	struct clk *cclk;
 
 	struct workqueue_struct *tx_wq;
-	struct work_struct tx_work;
-	struct sk_buff *tx_skb;
 	struct phy *transceiver;
 
 	struct hrtimer irq_timer;
@@ -105,6 +109,10 @@ struct m_can_classdev {
 	// Store this internally to avoid fetch delays on peripheral chips
 	int tx_fifo_putidx;
 
+	struct m_can_tx_op *tx_ops;
+	int nr_tx_ops;
+	int next_tx_op;
+
 	struct mram_cfg mcfg[MRAM_CFG_NUM];
 };
-- 
2.38.1