Subject: Re: [PATCH v4 2/3] io_uring: avoid ring quiesce while registering/unregistering eventfd
To: Usama Arif, io-uring@vger.kernel.org, asml.silence@gmail.com, linux-kernel@vger.kernel.org
Cc: fam.zheng@bytedance.com
References: <20220203182441.692354-1-usama.arif@bytedance.com> <20220203182441.692354-3-usama.arif@bytedance.com>
From: Jens Axboe
Message-ID: <8369e0be-f922-ba6b-ceed-24886ebcdb78@kernel.dk>
Date: Thu, 3 Feb 2022 11:49:12 -0700
In-Reply-To: <20220203182441.692354-3-usama.arif@bytedance.com>

On 2/3/22 11:24 AM, Usama Arif wrote:
> -static inline bool io_should_trigger_evfd(struct io_ring_ctx *ctx)
> +static void io_eventfd_signal(struct io_ring_ctx *ctx)
>  {
> -	if (likely(!ctx->cq_ev_fd))
> -		return false;
> +	struct io_ev_fd *ev_fd;
> +
> +	rcu_read_lock();
> +	/* rcu_dereference ctx->io_ev_fd once and use it for both for checking and eventfd_signal */
> +	ev_fd = rcu_dereference(ctx->io_ev_fd);
> +
> +	if (likely(!ev_fd))
> +		goto out;
>  	if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
> -		return false;
> -	return !ctx->eventfd_async || io_wq_current_is_worker();
> +		goto out;
> +
> +	if (!ctx->eventfd_async || io_wq_current_is_worker())
> +		eventfd_signal(ev_fd->cq_ev_fd, 1);
> +
> +out:
> +	rcu_read_unlock();
>  }

This still needs what we discussed in v3, something ala:

	/*
	 * This will potentially race with eventfd registration, but that's
	 * always going to be the case if there is IO inflight while an eventfd
	 * descriptor is being registered.
	 */
	if (!rcu_dereference_raw(ctx->io_ev_fd))
		return;

	rcu_read_lock();
	...

which I think is cheap enough and won't hit sparse complaints.

> @@ -9353,35 +9370,70 @@ static int __io_sqe_buffers_update(struct io_ring_ctx *ctx,
>
>  static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg)
>  {
> +	struct io_ev_fd *ev_fd;
>  	__s32 __user *fds = arg;
> -	int fd;
> +	int fd, ret;
>
> -	if (ctx->cq_ev_fd)
> -		return -EBUSY;
> +	mutex_lock(&ctx->ev_fd_lock);
> +	ret = -EBUSY;
> +	if (rcu_dereference_protected(ctx->io_ev_fd, lockdep_is_held(&ctx->ev_fd_lock))) {
> +		rcu_barrier();
> +		if(rcu_dereference_protected(ctx->io_ev_fd, lockdep_is_held(&ctx->ev_fd_lock)))
> +			goto out;
> +	}

I wonder if we can get away with assigning ctx->io_ev_fd to NULL when we
do the call_rcu(). The struct itself will remain valid as long as we're
under rcu_read_lock() protection, so I think we'd be fine? If we do that,
then we don't need any rcu_barrier() or synchronize_rcu() calls, as we
can register a new one while the previous one is still being killed.
Hmm?

> static int io_eventfd_unregister(struct io_ring_ctx *ctx)
> {
> -	if (ctx->cq_ev_fd) {
> -		eventfd_ctx_put(ctx->cq_ev_fd);
> -		ctx->cq_ev_fd = NULL;
> -		return 0;
> +	struct io_ev_fd *ev_fd;
> +	int ret;
> +
> +	mutex_lock(&ctx->ev_fd_lock);
> +	ev_fd = rcu_dereference_protected(ctx->io_ev_fd, lockdep_is_held(&ctx->ev_fd_lock));
> +	if (ev_fd) {
> +		call_rcu(&ev_fd->rcu, io_eventfd_put);
> +		ret = 0;
> +		goto out;
>  	}
> +	ret = -ENXIO;
>
> -	return -ENXIO;
> +out:
> +	mutex_unlock(&ctx->ev_fd_lock);
> +	return ret;
> }

I also think that'd be cleaner without the goto:

{
	struct io_ev_fd *ev_fd;
	int ret;

	mutex_lock(&ctx->ev_fd_lock);
	ev_fd = rcu_dereference_protected(ctx->io_ev_fd,
					  lockdep_is_held(&ctx->ev_fd_lock));
	if (ev_fd) {
		call_rcu(&ev_fd->rcu, io_eventfd_put);
		mutex_unlock(&ctx->ev_fd_lock);
		return 0;
	}

	mutex_unlock(&ctx->ev_fd_lock);
	return -ENXIO;
}

--
Jens Axboe
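
For readers following the thread, here is a minimal sketch of how the signal
path could look with the cheap unlocked pre-check from the review folded into
the patch's io_eventfd_signal(). It only combines the two snippets quoted
above and is not the actual patch code; the names (ctx->io_ev_fd,
ev_fd->cq_ev_fd, io_wq_current_is_worker()) come from the quoted diff, and the
likely() hint is flipped to unlikely() since the early return now covers the
common no-eventfd case:

static void io_eventfd_signal(struct io_ring_ctx *ctx)
{
	struct io_ev_fd *ev_fd;

	/*
	 * Cheap unlocked peek: this can race with eventfd registration,
	 * but that race exists anyway whenever IO is inflight while a
	 * descriptor is being registered. It keeps the common
	 * no-eventfd case free of any RCU overhead.
	 */
	if (!rcu_dereference_raw(ctx->io_ev_fd))
		return;

	rcu_read_lock();
	/* dereference once and reuse for both the check and the signal */
	ev_fd = rcu_dereference(ctx->io_ev_fd);
	if (unlikely(!ev_fd))
		goto out;
	if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
		goto out;
	if (!ctx->eventfd_async || io_wq_current_is_worker())
		eventfd_signal(ev_fd->cq_ev_fd, 1);
out:
	rcu_read_unlock();
}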
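
Likewise, a rough sketch of the unregister path if ctx->io_ev_fd is cleared
before queueing the free, as floated in the reply above. This is illustrative
only, reusing the patch's ev_fd_lock mutex and io_eventfd_put() callback; with
the pointer cleared here, io_eventfd_register() could drop its rcu_barrier():

static int io_eventfd_unregister(struct io_ring_ctx *ctx)
{
	struct io_ev_fd *ev_fd;

	mutex_lock(&ctx->ev_fd_lock);
	ev_fd = rcu_dereference_protected(ctx->io_ev_fd,
					  lockdep_is_held(&ctx->ev_fd_lock));
	if (!ev_fd) {
		mutex_unlock(&ctx->ev_fd_lock);
		return -ENXIO;
	}

	/*
	 * Clear the pointer before queueing the free: readers that already
	 * dereferenced it under rcu_read_lock() keep a valid struct until
	 * the grace period ends, and a new registration no longer has to
	 * wait for the callback via rcu_barrier()/synchronize_rcu().
	 */
	rcu_assign_pointer(ctx->io_ev_fd, NULL);
	call_rcu(&ev_fd->rcu, io_eventfd_put);
	mutex_unlock(&ctx->ev_fd_lock);
	return 0;
}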