Date: Wed, 1 Jun 2022 23:27:30 +0900
From: Sergey Senozhatsky
To: Christian König
Cc: Sergey Senozhatsky, Christian König, Sumit Semwal, Gustavo Padovan,
    Tomasz Figa, Ricardo Ribalda, Christoph Hellwig,
    linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
    linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
Subject: Re: [Linaro-mm-sig] Re: [PATCH] dma-fence: allow dma fence to have their own lock

On (22/06/01 14:45), Christian König wrote:
> On 31.05.22 at 04:51, Sergey Senozhatsky wrote:
> > On (22/05/30 16:55), Christian König wrote:
> > > On 30.05.22 at 16:22, Sergey Senozhatsky wrote:
> > > > [SNIP]
> > > > So the `lock` should have at least the same lifespan as the DMA fence
> > > > that borrows it, which is impossible to guarantee in our case.
> > >
> > > Nope, that's not correct. The lock should have at least the same
> > > lifespan as the context of the DMA fence.
> >
> > How does one know when it's safe to release the context? DMA fence
> > objects are still transparently refcounted and "live their own lives",
> > so how does one synchronize the lifespans?
>
> Well, you don't.
>
> If you have a dynamic context structure you need to reference count that
> as well. In other words, every time you create a fence in your context
> you need to increment the reference count, and every time a fence is
> released you decrement it.

OK, then fence release should be able to point back to its "context"
structure. Either we keep "private" data in the dma fence, or we embed the
fence into another (refcounted) object that owns the lock and provide a dma
fence ops->release callback, which can container_of() back to the object
that the dma fence is embedded into. I think you are suggesting the latter.
Thanks for the clarifications.

The limiting factor of this approach is that our ops->release() now runs
under the same "pressure" as dma_fence_put()->dma_fence_release(). As far
as I understand, dma_fence_put() and dma_fence_release() can be called from
any context, e.g. IRQ, whereas our normal object ->release can schedule: we
do things like synchronize_rcu() and so on. Nothing is impossible, I'm just
saying that even this approach is not 100% perfect and may need additional
workarounds.

> If you have a static context structure, like most drivers have, then you
> must make sure that all fences at least signal before you unload your
> driver. We still somewhat have a race when you try to unload a driver and
> the fence_ops structure suddenly disappears, but we currently live with
> that.

Hmm, indeed... I didn't consider the fence_ops case.

> Apart from that you are right, fences can live forever and we need to
> deal with that.

OK. I see.
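
Just to double-check that I understood the embedding idea correctly, here is
a rough sketch (purely illustrative; the my_ctx/my_ctx_fence_ops names are
made up, not taken from any real driver): the fence and the spinlock it
borrows live in the same object, and ->release uses container_of() to point
back to that object.

#include <linux/dma-fence.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Fence embedded into the object that also owns the lock it borrows. */
struct my_ctx {
	spinlock_t		lock;
	struct dma_fence	fence;
};

static const char *my_ctx_get_driver_name(struct dma_fence *f)
{
	return "my_driver";
}

static const char *my_ctx_get_timeline_name(struct dma_fence *f)
{
	return "my_timeline";
}

static void my_ctx_fence_release(struct dma_fence *f)
{
	/* Point back from the embedded fence to its owning object. */
	struct my_ctx *ctx = container_of(f, struct my_ctx, fence);

	/*
	 * dma_fence_put()->dma_fence_release() may run in atomic context,
	 * so nothing here may sleep; free via RCU, the same way the
	 * default dma_fence_free() does.
	 */
	kfree_rcu(ctx, fence.rcu);
}

static const struct dma_fence_ops my_ctx_fence_ops = {
	.get_driver_name	= my_ctx_get_driver_name,
	.get_timeline_name	= my_ctx_get_timeline_name,
	.release		= my_ctx_fence_release,
};

static struct my_ctx *my_ctx_create(void)
{
	struct my_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)
		return NULL;

	spin_lock_init(&ctx->lock);
	/* The borrowed lock now has exactly the fence's lifespan. */
	dma_fence_init(&ctx->fence, &my_ctx_fence_ops, &ctx->lock,
		       dma_fence_context_alloc(1), 0);
	return ctx;
}

With something like this the lock cannot outlive the fence that borrows it,
but it also makes the caveat above concrete: whatever cleanup the object
needs must be safe for whatever context dma_fence_release() runs in, so no
synchronize_rcu() and friends in ->release().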