Date: Mon, 15 May 2023 17:28:38 +0100
From: Catalin Marinas
To: Petr Tesařík
Cc: Petr Tesarik, Jonathan Corbet, Greg Kroah-Hartman, "Rafael J. Wysocki",
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Daniel Vetter, Christoph Hellwig, Marek Szyprowski, Robin Murphy,
	"Paul E. McKenney", Borislav Petkov, Randy Dunlap, Damien Le Moal,
	Kim Phillips, "Steven Rostedt (Google)", Andy Shevchenko,
	Hans de Goede, Jason Gunthorpe, Kees Cook, Thomas Gleixner,
	"open list:DOCUMENTATION", open list, "open list:DRM DRIVERS",
	"open list:DMA MAPPING HELPERS", Roberto Sassu, Kefeng Wang
Subject: Re: [PATCH v2 RESEND 7/7] swiotlb: per-device flag if there are dynamically allocated buffers
In-Reply-To: <20230515120054.0115a4eb@meshulam.tesarici.cz>

(some of your replies may have been filtered to various of my mailboxes,
depending on which lists you cc'ed; replying here)

On Mon, May 15, 2023 at 12:00:54PM +0200, Petr Tesařík wrote:
> On Mon, 15 May 2023 10:48:47 +0200
> Petr Tesařík wrote:
> > On Sun, 14 May 2023 19:54:27 +0100
> > Catalin Marinas wrote:
> > > Now, thinking about the list_head access and the flag ordering, since it
> > > doesn't matter, you might as well not bother with the flag at all and
> > > rely on list_add() and list_empty() ordering vs the hypothetical 'blah'
> > > access. Both of these use READ/WRITE_ONCE() for setting
> > > dma_io_tlb_dyn_slots.next. You only need an smp_wmb() after the
> > > list_add() and an smp_rmb() before a list_empty() check in
>                        ^^^^^^^^^
> Got it, finally. Well, that's exactly something I don't want to do.
> For example, on arm64 (seeing your email address), smp_rmb() translates
> to a "dsb ld" instruction. I would expect that this is more expensive
> than a "ldar", generated by smp_load_acquire().

It translates to a dmb ishld, which is on par with ldar (dsb is indeed a
lot more expensive, but that's not what is generated here).

> > > is_swiotlb_buffer(), no dma_iotlb_have_dyn variable.
> >
> > Wait, let me check that I understand you right. Do you suggest that I
> > convert dma_io_tlb_dyn_slots to a lockless list and get rid of the
> > spinlock?
> >
> > I'm sure it can be done for list_add() and list_del(). I'll have
> > to think about list_move().
>
> Hm, even the documentation of llist_empty() says that it is "not
> guaranteed to be accurate or up to date". If it could be, I'm quite
> sure the authors would have gladly implemented it as such.

It doesn't, but neither does your flag. If you want a guarantee, you'd
need locks, because an llist_empty() on its own can race with other
llist_add/del_*() calls that may not yet be visible to a CPU at exactly
that moment.

BTW, the llist implementation cannot delete a random element, so I'm not
sure it is suitable for your implementation (it can only delete the
first element or the whole list).

I do think you need to change your invariants and not rely on an
absolute list_empty() or some flag:

P0:
	list_add(paddr);
	WRITE_ONCE(blah, paddr);

P1:
	paddr = READ_ONCE(blah);
	list_empty();

Your invariant (on P1) should be: blah == paddr => !list_empty(). If
there is another P2 removing paddr from the list, this wouldn't work
(nor would your flag), but the assumption is that a correctly written
driver that still has a reference to paddr doesn't use it after removing
it from the list (i.e. it doesn't do a dma_unmap(paddr) and still
continue to use this paddr for e.g. dma_sync()).

For such an invariant, you'd need ordering between the list_add() and
the write of paddr (an smp_wmb() would do).
On P1, you need an smp_rmb() before the list_empty() check, since the
list_empty() implementation only does a READ_ONCE. You still need the
locks for list modifications and list traversal, as I don't see how you
can use the llist implementation with random element removal.

There is another scenario to take into account on the list_del() side.
Let's assume that there are other elements on the list, so
list_empty() == false:

P0:
	list_del(paddr);
	/* the memory gets freed, added to some slab or page free list */
	WRITE_ONCE(slab_free_list, __va(paddr));

P1:
	paddr = __pa(READ_ONCE(slab_free_list));	/* re-allocating paddr freed on P0 */
	if (!list_empty()) {		/* assuming other elements on the list */
		/* searching the list */
		list_for_each() {
			if (pos->paddr == __pa(vaddr))
				/* match */
		}
	}

On P0, you want the list update to be visible before the memory is freed
(and potentially reallocated on P1). An smp_wmb() on P0 would do. For
P1, we don't care about list_empty() as there can be other elements
already. But we do want any list element read during the search to be
ordered after the read of slab_free_list. The smp_rmb() you'd add for
the case above would suffice.

-- 
Catalin