Subject: Re: [PATCH] svc_run: make sure only one svc_run loop runs in one process
To: libtirpc-devel@lists.sourceforge.net
Cc: linux-nfs@vger.kernel.org
References: <20190409113713.30595-1-xiubli@redhat.com>
From: Xiubo Li
Message-ID: <6a152a89-6de6-f5f2-9c16-5e32fef8cc64@redhat.com>
Date: Thu, 16 May 2019 10:55:40 +0800
In-Reply-To: <20190409113713.30595-1-xiubli@redhat.com>

Hi,

Ping. What is the status of this patch, and does the approach make sense here?

Thanks,
BRs

On 2019/4/9 19:37, xiubli@redhat.com wrote:
> From: Xiubo Li
>
> In the gluster-block project there are two separate threads, both of
> which run the svc_run loop. This works fine with the glibc version,
> but with libtirpc we are hitting random crash and hang issues.
>
> For more detail, please see:
> https://github.com/gluster/gluster-block/pull/182
>
> Signed-off-by: Xiubo Li
> ---
>  src/svc_run.c | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
>
> diff --git a/src/svc_run.c b/src/svc_run.c
> index f40314b..b295755 100644
> --- a/src/svc_run.c
> +++ b/src/svc_run.c
> @@ -38,12 +38,17 @@
>  #include
>  #include
>  #include
> +#include
> +#include
>
>
>  #include
>  #include "rpc_com.h"
>  #include
>
> +static bool svc_loop_running = false;
> +static pthread_mutex_t svc_run_lock = PTHREAD_MUTEX_INITIALIZER;
> +
>  void
>  svc_run()
>  {
> @@ -51,6 +56,16 @@ svc_run()
>    struct pollfd *my_pollfd = NULL;
>    int last_max_pollfd = 0;
>
> +  pthread_mutex_lock(&svc_run_lock);
> +  if (svc_loop_running) {
> +    pthread_mutex_unlock(&svc_run_lock);
> +    syslog (LOG_ERR, "svc_run: svc loop is already running in current process %d", getpid());
> +    return;
> +  }
> +
> +  svc_loop_running = true;
> +  pthread_mutex_unlock(&svc_run_lock);
> +
>    for (;;) {
>      int max_pollfd = svc_max_pollfd;
>      if (max_pollfd == 0 && svc_pollfd == NULL)
> @@ -111,4 +126,8 @@ svc_exit()
>    svc_pollfd = NULL;
>    svc_max_pollfd = 0;
>    rwlock_unlock(&svc_fd_lock);
> +
> +  pthread_mutex_lock(&svc_run_lock);
> +  svc_loop_running = false;
> +  pthread_mutex_unlock(&svc_run_lock);
>  }
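
[Editor's sketch, not part of the patch.] A minimal illustration of the two-thread situation the cover letter describes, assuming the patch above is applied: whichever thread reaches svc_run() second finds svc_loop_running already set, logs "svc loop is already running in current process ..." via syslog and returns, instead of polling the shared svc_pollfd set concurrently. The helper name run_loop and the build command are assumptions made for the example; no services are registered here, so the surviving loop also returns right away.

/*
 * Illustration only: two threads in one process both calling svc_run().
 * Assumes a libtirpc build with this patch applied.
 * Build (assumption): cc demo.c -I/usr/include/tirpc -ltirpc -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <rpc/rpc.h>   /* declares svc_run() / svc_exit() */

static void *run_loop(void *arg)
{
    (void)arg;
    svc_run();   /* with the patch, only the first caller keeps the dispatch loop */
    fprintf(stderr, "svc_run() returned in thread %lu\n",
            (unsigned long)pthread_self());
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* A real server would register services (e.g. svc_create()/svc_reg())
     * before starting the loop; omitted to keep the sketch short. */
    pthread_create(&t1, NULL, run_loop, NULL);
    pthread_create(&t2, NULL, run_loop, NULL);

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}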