From: Toke Høiland-Jørgensen
To: Robin Murphy, mbizon@freebox.fr, Linus Torvalds
Cc: Christoph Hellwig, Oleksandr Natalenko, Halil Pasic, Marek Szyprowski,
    Kalle Valo, "David S. Miller", Jakub Kicinski, Paolo Abeni,
    Olha Cherevyk, iommu, linux-wireless, Netdev,
    Linux Kernel Mailing List, Greg Kroah-Hartman, stable
Miller" , Jakub Kicinski , Paolo Abeni , Olha Cherevyk , iommu , linux-wireless , Netdev , Linux Kernel Mailing List , Greg Kroah-Hartman , stable Subject: Re: [REGRESSION] Recent swiotlb DMA_FROM_DEVICE fixes break ath9k-based AP In-Reply-To: References: <1812355.tdWV9SEqCh@natalenko.name> <20220324055732.GB12078@lst.de> <4386660.LvFx2qVVIh@natalenko.name> <81ffc753-72aa-6327-b87b-3f11915f2549@arm.com> <878rsza0ih.fsf@toke.dk> <4be26f5d8725cdb016c6fdd9d05cfeb69cdd9e09.camel@freebox.fr> <20220324163132.GB26098@lst.de> <871qyr9t4e.fsf@toke.dk> <31434708dcad126a8334c99ee056dcce93e507f1.camel@freebox.fr> <87a6de80em.fsf@toke.dk> Date: Fri, 25 Mar 2022 19:13:49 +0100 X-Clacks-Overhead: GNU Terry Pratchett Message-ID: <871qyp99ya.fsf@toke.dk> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable X-Spam-Status: No, score=-2.0 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, MAILING_LIST_MULTI,RDNS_NONE,SPF_HELO_NONE,T_SCC_BODY_TEXT_LINE autolearn=no autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-wireless@vger.kernel.org Robin Murphy writes: > On 2022-03-25 16:25, Toke H=C3=B8iland-J=C3=B8rgensen wrote: >> Maxime Bizon writes: >>=20 >>> On Thu, 2022-03-24 at 12:26 -0700, Linus Torvalds wrote: >>> >>>> >>>> It's actually very natural in that situation to flush the caches from >>>> the CPU side again. And so dma_sync_single_for_device() is a fairly >>>> reasonable thing to do in that situation. >>>> >>> >>> In the non-cache-coherent scenario, and assuming dma_map() did an >>> initial cache invalidation, you can write this: >>> >>> rx_buffer_complete_1(buf) >>> { >>> invalidate_cache(buf, size) >>> if (!is_ready(buf)) >>> return; >>> >>> } >>> >>> or >>> >>> rx_buffer_complete_2(buf) >>> { >>> if (!is_ready(buf)) { >>> invalidate_cache(buf, size) >>> return; >>> } >>> >>> } >>> >>> The latter is preferred for performance because dma_map() did the >>> initial invalidate. >>> >>> Of course you could write: >>> >>> rx_buffer_complete_3(buf) >>> { >>> invalidate_cache(buf, size) >>> if >>> (!is_ready(buf)) { >>> invalidate_cache(buf, size) >>> return; >>> } >>>=20=09 >>> >>> } >>> >>> >>> but it's a waste of CPU cycles >>> >>> So I'd be very cautious assuming sync_for_cpu() and sync_for_device() >>> are both doing invalidation in existing implementation of arch DMA ops, >>> implementers may have taken some liberty around DMA-API to avoid >>> unnecessary cache operation (not to blame them). >>=20 >> I sense an implicit "and the driver can't (or shouldn't) influence >> this" here, right? > > Right, drivers don't get a choice of how a given DMA API implementation=20 > works. > >>> For example looking at arch/arm/mm/dma-mapping.c, for DMA_FROM_DEVICE >>> >>> sync_single_for_device() >>> =3D> __dma_page_cpu_to_dev() >>> =3D> dma_cache_maint_page(op=3Ddmac_map_area) >>> =3D> cpu_cache.dma_map_area() >>> >>> sync_single_for_cpu() >>> =3D> __dma_page_dev_to_cpu() >>> =3D> >>> __dma_page_cpu_to_dev(op=3Ddmac_unmap_area) >>> =3D> >>> cpu_cache.dma_unmap_area() >>> >>> dma_map_area() always does cache invalidate. >>> >>> But for a couple of CPU variant, dma_unmap_area() is a noop, so >>> sync_for_cpu() does nothing. >>> >>> Toke's patch will break ath9k on those platforms (mostly silent >>> breakage, rx corruption leading to bad performance) >>=20 >> Okay, so that would be bad obviously. 
>> Okay, so that would be bad obviously. So if I'm reading you correctly
>> (cf my question above), we can't fix this properly from the driver side,
>> and we should go with the partial SWIOTLB revert instead?
>
> Do you have any other way of telling if DMA is idle, or temporarily
> pausing it before the sync_for_cpu, such that you could honour the
> notion of ownership transfer properly?

I'll go check with someone who has a better grasp of how the hardware
works, but I don't think so...

> As mentioned elsewhere I suspect the only "real" fix if you really do
> need to allow concurrent access is to use the coherent DMA API for
> buffers rather than streaming mappings, but that's obviously some far
> more significant surgery.

That would imply copying the packet data out of that (persistent)
coherent mapping each time we do a recv operation, though, right? That
would be quite a performance hit...

If all we need is a way to make dma_sync_single_for_cpu() guarantee a
cache invalidation, why can't we just add a separate version that does
that (dma_sync_single_for_cpu_peek() or something)? Using that with the
patch I posted earlier should be enough to resolve the issue, AFAICT?

-Toke
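To make that last idea concrete, here is a rough sketch of how an RX
poll path might use such a helper. The function name is just the
placeholder from the paragraph above and does not exist in the kernel;
descriptor_done() is likewise a hypothetical stand-in for the driver's
completion check.

#include <linux/dma-mapping.h>

/*
 * Hypothetical: a sync_for_cpu variant guaranteed to invalidate the CPU
 * cache even on implementations where dma_sync_single_for_cpu() is a
 * no-op, so a driver can safely peek at a buffer the device may still
 * be writing to. Not an existing kernel API.
 */
void dma_sync_single_for_cpu_peek(struct device *dev, dma_addr_t addr,
                                  size_t size, enum dma_data_direction dir);

/* Hypothetical driver-specific completion check. */
static bool descriptor_done(const void *desc);

static bool rx_desc_ready(struct device *dev, dma_addr_t addr,
                          void *desc, size_t size)
{
        /* Force the CPU's view of the descriptor to be refreshed... */
        dma_sync_single_for_cpu_peek(dev, addr, size, DMA_FROM_DEVICE);

        /* ...then peek at it while the device may still own the buffer. */
        if (!descriptor_done(desc))
                return false;

        /* Completed: ownership has genuinely passed to the CPU. */
        return true;
}

The point of a separate helper would be that, unlike the plain
sync_for_cpu on the ARM variants discussed earlier, the invalidation
could not be optimised away.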