Date: Fri, 26 Jun 2020 18:10:52 +0300
From: Dan Aloni
To: Chuck Lever
Cc: linux-rdma@vger.kernel.org, Linux NFS Mailing List
Subject: Re: [PATCH] xprtrdma: fix EP destruction logic
Message-ID: <20200626151052.6cckaquyu7k3nd6b@gmail.com>
References: <0E2AA9D9-2503-462C-952D-FC0DD5111BD1@oracle.com> <20200626071034.34805-1-dan@kernelim.com>
X-Mailing-List: linux-nfs@vger.kernel.org

On Fri, Jun 26, 2020 at 08:56:41AM -0400, Chuck Lever wrote:
> > On Jun 26, 2020, at 3:10 AM, Dan Aloni wrote:
[..]
> > - Add a mutex in `rpcrdma_ep_destroy` to guard against concurrent calls
> >   to `rpcrdma_xprt_disconnect` coming from either `rpcrdma_xprt_connect`
> >   or `xprt_rdma_close`.
> 
> NAK. The RPC client provides appropriate exclusion, please let's not
> add more serialization that can introduce further deadlocks.

It appears to me that this exclusion does not work well. If I am not
mistaken, based on the crashes I have analyzed, the following two call
chains can run concurrently:

  -> xprt_autoclose (running on xprtiod)
    -> xprt->ops->close
      -> xprt_rdma_close
        -> rpcrdma_xprt_disconnect

and:

  -> xprt_rdma_connect_worker (running on xprtiod)
    -> rpcrdma_xprt_connect
      -> rpcrdma_xprt_disconnect

I understand the rationale, or at least the aim, that the `close` and
`connect` ops should not run concurrently on the same `xprt`. However:

* `xprt_force_disconnect`, which is called from various places, queues
  `xprt_autoclose` as a background `xprtiod` workqueue item, conditioned
  on `!XPRT_LOCKED`, which is the case when the connect has gone to the
  background.

* `xprt_rdma_connect` also queues `xprt_rdma_connect_worker` as an
  `xprtiod` workqueue item, unconditionally.

So we have two work items that can run in parallel, and I don't see any
clear gating of this in the code.

Maybe there's a simpler fix for this. Perhaps a
`cancel_delayed_work_sync(&r_xprt->rx_connect_worker);` would be
appropriate in `xprt_rdma_close`? A rough sketch of what I mean is
included below, after my signature.

-- 
Dan Aloni
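
To make the suggestion concrete, here is an untested sketch of the kind
of change I have in mind. Placing the cancel at the very top of
`xprt_rdma_close`, and assuming nothing else in the close path needs to
run before it, are my assumptions; the rest of the existing function
body is elided.

	/* net/sunrpc/xprtrdma/transport.c -- sketch only, not tested */
	static void xprt_rdma_close(struct rpc_xprt *xprt)
	{
		struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(xprt);

		/* Make sure a connect worker queued by xprt_rdma_connect
		 * is not running (or about to run) while this close path
		 * tears down the endpoint; both paths end up calling
		 * rpcrdma_xprt_disconnect on the same r_xprt.
		 */
		cancel_delayed_work_sync(&r_xprt->rx_connect_worker);

		rpcrdma_xprt_disconnect(r_xprt);

		/* ... rest of the existing close path unchanged ... */
	}

If the exclusion you mention is supposed to prevent the two work items
from overlapping in the first place, then this would only be papering
over the real problem, so treat it as a question rather than a proposal.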