Date: Thu, 22 Feb 2024 17:29:05 +0200
From: Andy Shevchenko
To: Herve Codina
Cc: Vadim Fedorenko, "David S.
    Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Yury Norov,
    Rasmus Villemoes, linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, Andrew Lunn, Mark Brown,
    Christophe Leroy, Thomas Petazzoni
Subject: Re: [PATCH v4 1/5] net: wan: Add support for QMC HDLC
References: <20240222142219.441767-1-herve.codina@bootlin.com>
 <20240222142219.441767-2-herve.codina@bootlin.com>
In-Reply-To: <20240222142219.441767-2-herve.codina@bootlin.com>
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo

On Thu, Feb 22, 2024 at 03:22:14PM +0100, Herve Codina wrote:
> The QMC HDLC driver provides support for HDLC using the QMC (QUICC
> Multichannel Controller) to transfer the HDLC data.

..

> +struct qmc_hdlc {
> +	struct device *dev;
> +	struct qmc_chan *qmc_chan;
> +	struct net_device *netdev;
> +	bool is_crc32;
> +	spinlock_t tx_lock; /* Protect tx descriptors */

Just wondering if the above tx/rx descriptors should be aligned on a
cacheline for DMA?

> +	struct qmc_hdlc_desc tx_descs[8];
> +	unsigned int tx_out;
> +	struct qmc_hdlc_desc rx_descs[4];
> +};

..

> +#define QMC_HDLC_RX_ERROR_FLAGS (QMC_RX_FLAG_HDLC_OVF | \
> +				 QMC_RX_FLAG_HDLC_UNA | \
> +				 QMC_RX_FLAG_HDLC_ABORT | \
> +				 QMC_RX_FLAG_HDLC_CRC)

Wouldn't it be slightly better to have it as

#define QMC_HDLC_RX_ERROR_FLAGS				\
	(QMC_RX_FLAG_HDLC_OVF | QMC_RX_FLAG_HDLC_UNA |	\
	 QMC_RX_FLAG_HDLC_CRC | QMC_RX_FLAG_HDLC_ABORT)

?

..

> +	ret = qmc_chan_write_submit(qmc_hdlc->qmc_chan, desc->dma_addr, desc->dma_size,
> +				    qmc_hdlc_xmit_complete, desc);
> +	if (ret) {
> +		dev_err(qmc_hdlc->dev, "qmc chan write returns %d\n", ret);
> +		dma_unmap_single(qmc_hdlc->dev, desc->dma_addr, desc->dma_size, DMA_TO_DEVICE);
> +		return ret;

I would do it the other way around, i.e.
release the resource first, then print the message. Printing a message is a
slow operation and may prevent the (soon to be freed) resources from being
reused earlier.

> +	}

..

> +	spin_lock_irqsave(&qmc_hdlc->tx_lock, flags);

Why not use cleanup.h from day 1?

> +end:

This label, in particular, will not be needed with the above in place.

> +	spin_unlock_irqrestore(&qmc_hdlc->tx_lock, flags);
> +	return ret;
> +}

..

> +	/* Queue as many recv descriptors as possible */
> +	for (i = 0; i < ARRAY_SIZE(qmc_hdlc->rx_descs); i++) {
> +		desc = &qmc_hdlc->rx_descs[i];
> +
> +		desc->netdev = netdev;
> +		ret = qmc_hdlc_recv_queue(qmc_hdlc, desc, chan_param.hdlc.max_rx_buf_size);
> +		if (ret) {
> +			if (ret == -EBUSY && i != 0)
> +				break; /* We use all the QMC chan capability */
> +			goto free_desc;
> +		}

Can be unfolded to

		if (ret == -EBUSY && i)
			break; /* We use all the QMC chan capability */
		if (ret)
			goto free_desc;

Easier to read and understand.

> +	}

..

> +static int qmc_hdlc_probe(struct platform_device *pdev)
> +{

With

	struct device *dev = &pdev->dev;

the below code will be neater (see other comments for examples).

> +	struct device_node *np = pdev->dev.of_node;

It is used only once, drop it (see below).
> +	struct qmc_hdlc *qmc_hdlc;
> +	struct qmc_chan_info info;
> +	hdlc_device *hdlc;
> +	int ret;
> +
> +	qmc_hdlc = devm_kzalloc(&pdev->dev, sizeof(*qmc_hdlc), GFP_KERNEL);
> +	if (!qmc_hdlc)
> +		return -ENOMEM;
> +
> +	qmc_hdlc->dev = &pdev->dev;
> +	spin_lock_init(&qmc_hdlc->tx_lock);
> +
> +	qmc_hdlc->qmc_chan = devm_qmc_chan_get_bychild(qmc_hdlc->dev, np);

	qmc_hdlc->qmc_chan = devm_qmc_chan_get_bychild(dev, dev->of_node);

> +	if (IS_ERR(qmc_hdlc->qmc_chan)) {
> +		ret = PTR_ERR(qmc_hdlc->qmc_chan);
> +		return dev_err_probe(qmc_hdlc->dev, ret, "get QMC channel failed\n");

		return dev_err_probe(dev, PTR_ERR(qmc_hdlc->qmc_chan),
				     "get QMC channel failed\n");

> +	}
> +
> +	ret = qmc_chan_get_info(qmc_hdlc->qmc_chan, &info);
> +	if (ret) {
> +		dev_err(qmc_hdlc->dev, "get QMC channel info failed %d\n", ret);
> +		return ret;

Why not use the same message pattern everywhere, i.e. dev_err_probe()?

		return dev_err_probe(dev, ret, "get QMC channel info failed\n");

(and so on...)

> +	}
> +
> +	if (info.mode != QMC_HDLC) {
> +		dev_err(qmc_hdlc->dev, "QMC chan mode %d is not QMC_HDLC\n",
> +			info.mode);
> +		return -EINVAL;
> +	}
> +
> +	qmc_hdlc->netdev = alloc_hdlcdev(qmc_hdlc);
> +	if (!qmc_hdlc->netdev) {
> +		dev_err(qmc_hdlc->dev, "failed to alloc hdlc dev\n");
> +		return -ENOMEM;

We do not issue a message for -ENOMEM.

> +	}
> +
> +	hdlc = dev_to_hdlc(qmc_hdlc->netdev);
> +	hdlc->attach = qmc_hdlc_attach;
> +	hdlc->xmit = qmc_hdlc_xmit;
> +	SET_NETDEV_DEV(qmc_hdlc->netdev, qmc_hdlc->dev);
> +	qmc_hdlc->netdev->tx_queue_len = ARRAY_SIZE(qmc_hdlc->tx_descs);
> +	qmc_hdlc->netdev->netdev_ops = &qmc_hdlc_netdev_ops;
> +	ret = register_hdlc_device(qmc_hdlc->netdev);
> +	if (ret) {
> +		dev_err(qmc_hdlc->dev, "failed to register hdlc device (%d)\n", ret);
> +		goto free_netdev;
> +	}
> +
> +	platform_set_drvdata(pdev, qmc_hdlc);
> +
> +	return 0;
> +
> +free_netdev:
> +	free_netdev(qmc_hdlc->netdev);
> +	return ret;
> +}

-- 
With Best Regards,
Andy Shevchenko