Subject: Re: [PATCH v4] dmaengine: tegra-apb: Support per-burst residue granularity
To: Dmitry Osipenko, Laxman Dewangan, Vinod Koul, Thierry Reding, Ben Dooks
References: <20190703012836.16568-1-digetx@gmail.com>
From: Jon Hunter
Message-ID: <55d402ad-6cb9-9e91-a8a4-b89d37674f4d@nvidia.com>
Date: Thu, 4 Jul 2019 08:10:08 +0100
On 03/07/2019 18:00, Dmitry Osipenko wrote:
> 03.07.2019 19:37, Jon Hunter wrote:
>>
>> On 03/07/2019 02:28, Dmitry Osipenko wrote:
>>> Tegra's APB DMA engine updates the words counter after each transferred
>>> burst of data, hence it can report the transfer's residual with more
>>> fidelity, which may be required in cases like audio playback. In
>>> particular, this fixes audio stuttering during playback in the Chromium
>>> web browser. The patch is based on the original work made by Ben Dooks
>>> and a patch from the downstream kernel. It was tested on Tegra20 and
>>> Tegra30 devices.
>>>
>>> Link: https://lore.kernel.org/lkml/20190424162348.23692-1-ben.dooks@codethink.co.uk/
>>> Link: https://nv-tegra.nvidia.com/gitweb/?p=linux-4.4.git;a=commit;h=c7bba40c6846fbf3eaad35c4472dcc7d8bbc02e5
>>> Inspired-by: Ben Dooks
>>> Reviewed-by: Jon Hunter
>>> Signed-off-by: Dmitry Osipenko
>>> ---
>>>
>>> Changelog:
>>>
>>> v4: The words_xferred is now also reset on a new iteration of a cyclic
>>>     transfer by the ISR, so that dmaengine_tx_status() won't produce a
>>>     misleading warning splat on TX status re-checking after a cycle
>>>     completion when the cyclic transfer consists of a single SG.
>>>
>>> v3: Added a workaround for a hardware design shortcoming that results
>>>     in a words counter wraparound before the end-of-transfer bit is set
>>>     in cyclic mode.
>>>
>>> v2: Addressed review comments made by Jon Hunter to v1. We won't try
>>>     to get the words count if dma_desc is on the free list, as that
>>>     would result in a NULL dereference because this case wasn't handled
>>>     properly.
>>>
>>>     The residual value is now updated properly, avoiding potential
>>>     integer overflow by adding "bytes" to "bytes_transferred" instead
>>>     of subtracting.
>>>
>>>  drivers/dma/tegra20-apb-dma.c | 72 +++++++++++++++++++++++++++++++----
>>>  1 file changed, 65 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
>>> index 79e9593815f1..148d136191d7 100644
>>> --- a/drivers/dma/tegra20-apb-dma.c
>>> +++ b/drivers/dma/tegra20-apb-dma.c
>>> @@ -152,6 +152,7 @@ struct tegra_dma_sg_req {
>>>  	bool last_sg;
>>>  	struct list_head node;
>>>  	struct tegra_dma_desc *dma_desc;
>>> +	unsigned int words_xferred;
>>>  };
>>>  
>>>  /*
>>> @@ -496,6 +497,7 @@ static void tegra_dma_configure_for_next(struct tegra_dma_channel *tdc,
>>>  	tdc_write(tdc, TEGRA_APBDMA_CHAN_CSR,
>>>  		  nsg_req->ch_regs.csr | TEGRA_APBDMA_CSR_ENB);
>>>  	nsg_req->configured = true;
>>> +	nsg_req->words_xferred = 0;
>>>  
>>>  	tegra_dma_resume(tdc);
>>>  }
>>> @@ -511,6 +513,7 @@ static void tdc_start_head_req(struct tegra_dma_channel *tdc)
>>>  				typeof(*sg_req), node);
>>>  	tegra_dma_start(tdc, sg_req);
>>>  	sg_req->configured = true;
>>> +	sg_req->words_xferred = 0;
>>>  	tdc->busy = true;
>>>  }
>>>  
>>> @@ -638,6 +641,8 @@ static void handle_cont_sngl_cycle_dma_done(struct tegra_dma_channel *tdc,
>>>  	list_add_tail(&dma_desc->cb_node, &tdc->cb_desc);
>>>  	dma_desc->cb_count++;
>>>  
>>> +	sgreq->words_xferred = 0;
>>> +
>>>  	/* If not last req then put at end of pending list */
>>>  	if (!list_is_last(&sgreq->node, &tdc->pending_sg_req)) {
>>>  		list_move_tail(&sgreq->node, &tdc->pending_sg_req);
>>> @@ -797,6 +802,62 @@ static int tegra_dma_terminate_all(struct dma_chan *dc)
>>>  	return 0;
>>>  }
>>>  
>>> +static unsigned int tegra_dma_sg_bytes_xferred(struct tegra_dma_channel *tdc,
>>> +					       struct tegra_dma_sg_req *sg_req)
>>> +{
>>> +	unsigned long status, wcount = 0;
>>> +
>>> +	if (!list_is_first(&sg_req->node, &tdc->pending_sg_req))
>>> +		return 0;
>>> +
>>> +	if (tdc->tdma->chip_data->support_separate_wcount_reg)
>>> +		wcount = tdc_read(tdc, TEGRA_APBDMA_CHAN_WORD_TRANSFER);
>>> +
>>> +	status = tdc_read(tdc, TEGRA_APBDMA_CHAN_STATUS);
>>> +
>>> +	if (!tdc->tdma->chip_data->support_separate_wcount_reg)
>>> +		wcount = status;
>>> +
>>> +	if (status & TEGRA_APBDMA_STATUS_ISE_EOC)
>>> +		return sg_req->req_len;
>>> +
>>> +	wcount = get_current_xferred_count(tdc, sg_req, wcount);
>>> +
>>> +	if (!wcount) {
>>> +		/*
>>> +		 * If wcount wasn't ever polled for this SG before, then
>>> +		 * simply assume that transfer hasn't started yet.
>>> +		 *
>>> +		 * Otherwise it's the end of the transfer.
>>> +		 *
>>> +		 * The alternative would be to poll the status register
>>> +		 * until EOC bit is set or wcount goes UP. That's so
>>> +		 * because EOC bit is getting set only after the last
>>> +		 * burst's completion and counter is less than the actual
>>> +		 * transfer size by 4 bytes. The counter value wraps around
>>> +		 * in a cyclic mode before EOC is set(!), so we can't easily
>>> +		 * distinguish start of transfer from its end.
>>> +		 */
>>> +		if (sg_req->words_xferred)
>>> +			wcount = sg_req->req_len - 4;
>>> +
>>> +	} else if (wcount < sg_req->words_xferred) {
>>> +		/*
>>> +		 * This case shall not ever happen because EOC bit
>>> +		 * must be set once next cyclic transfer is started.
>>
>> Should this still be cyclic here?
>
> Do you mean the "comment" by "here"?
>
> It will be absolutely terrible if this case happens for a oneshot
> transfer; assume the kernel/hardware is on fire. Or, more likely, a SW
> bug :-)

Yes, it should never happen for either SG or cyclic transfers, but there
is no mention of SG transfers. Maybe the SG case is more obvious, but in
general this case should never happen for any transfer.

Jon
-- 
nvpublic
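For what it's worth, the wcount special-casing being discussed can be modeled outside the kernel. The sketch below is a hypothetical userspace reduction of the decision logic in tegra_dma_sg_bytes_xferred(), not the driver code itself: the function name, parameters, and the 4096-byte req_len are illustrative assumptions; only the branching (EOC set, counter wrapped to zero, normal per-burst progress) mirrors the patch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical model of the residue decision in the patch above.
 * All values are in bytes; the hardware counter advances per burst
 * and is short of the full length by one 4-byte word until EOC.
 */
static unsigned int sg_bytes_xferred(bool eoc, unsigned int wcount,
				     unsigned int words_xferred,
				     unsigned int req_len)
{
	/* EOC set: the whole SG entry has completed. */
	if (eoc)
		return req_len;

	if (wcount == 0) {
		/*
		 * Counter reads zero: either the transfer hasn't started
		 * yet (nothing was ever polled for this SG), or -- in
		 * cyclic mode -- the counter wrapped before EOC was set,
		 * i.e. we're at the end minus the last 4-byte word.
		 */
		return words_xferred ? req_len - 4 : 0;
	}

	/* Normal case: the counter reports per-burst progress. */
	return wcount;
}
```

Running the four cases against it shows why resetting words_xferred at each cycle boundary matters: a stale non-zero words_xferred would turn a genuine "not started yet" reading into a bogus "almost done" answer.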