Date: Thu, 23 Jun 2022 17:41:17 -0600
From: Tycho Andersen
To: Vivek Goyal
Cc: Eric Biederman, Christian Brauner, Miklos Szeredi,
    fuse-devel@lists.sourceforge.net, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: Re: strange interaction between fuse + pidns

On Thu, Jun 23, 2022 at 05:55:20PM -0400, Vivek Goyal wrote:
> So in this case single process is client as well as server. IOW, one
> thread is fuse server servicing fuse requests and other thread is fuse
> client accessing fuse filesystem?

Yes. Probably an abuse of the API and something people Should Not Do,
but as you say the kernel still shouldn't lock up like this.

> > since the thread has a copy of
> > the fd table with an fd pointing to the same fuse device, the reference
> > count isn't decremented to zero in fuse_dev_release(), and the task hangs
> > forever.
>
> So why did the fuse server thread stop responding to fuse messages? Why
> did it not complete the flush?

In this particular case I think it's because the application crashed for
unrelated reasons and tried to exit the pidns, hitting this problem.

> BTW, the unkillable wait happens only if fc->no_interrupt = 1. And this
> seems to be set only if the server returned -ENOSYS to some previous
> interrupt request.
>
> fuse_dev_do_write() {
>         else if (oh.error == -ENOSYS)
>                 fc->no_interrupt = 1;
> }
>
> So a simple workaround might be for the server to implement support for
> interrupting requests.

Yes, but that is the libfuse default IIUC.
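To make the failure mode Vivek describes concrete: it takes a server that
answers the kernel's FUSE_INTERRUPT request with -ENOSYS. Below is a rough,
hypothetical sketch of such a raw /dev/fuse server loop (not code from this
thread; the function and variable names are made up, devfd is assumed to be
an already-mounted fuse device fd, and error handling is trimmed):

#include <errno.h>
#include <stdint.h>
#include <unistd.h>
#include <linux/fuse.h>

/* sketch only: read one request and refuse interrupts with -ENOSYS */
static void serve_one_request(int devfd)
{
	/* buffer must be large enough for any request; keep it 8-byte aligned */
	char buf[FUSE_MIN_READ_BUFFER] __attribute__((aligned(8)));
	ssize_t n = read(devfd, buf, sizeof(buf));

	if (n < (ssize_t)sizeof(struct fuse_in_header))
		return;

	struct fuse_in_header *in = (struct fuse_in_header *)buf;
	struct fuse_out_header out = {
		.len = sizeof(out),
		.unique = in->unique,
	};

	if (in->opcode == FUSE_INTERRUPT) {
		/*
		 * Telling the kernel "I don't do interrupts": this is the
		 * reply that makes fuse_dev_do_write() set fc->no_interrupt.
		 */
		out.error = -ENOSYS;
		write(devfd, &out, sizeof(out));
		return;
	}

	/* ... real request handling would go here ... */
}

(IIRC an -EAGAIN reply is treated differently, the kernel just re-queues the
interrupt, so it is specifically -ENOSYS that latches no_interrupt.)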
> Having said that, this does sound like a problem and probably should
> be fixed at kernel level.
>
> >
> > diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
> > index 0e537e580dc1..c604dfcaec26 100644
> > --- a/fs/fuse/dev.c
> > +++ b/fs/fuse/dev.c
> > @@ -297,7 +297,6 @@ void fuse_request_end(struct fuse_req *req)
> >  		spin_unlock(&fiq->lock);
> >  	}
> >  	WARN_ON(test_bit(FR_PENDING, &req->flags));
> > -	WARN_ON(test_bit(FR_SENT, &req->flags));
> >  	if (test_bit(FR_BACKGROUND, &req->flags)) {
> >  		spin_lock(&fc->bg_lock);
> >  		clear_bit(FR_BACKGROUND, &req->flags);
> > @@ -381,30 +380,33 @@ static void request_wait_answer(struct fuse_req *req)
> >  			queue_interrupt(req);
> >  	}
> >
> > -	if (!test_bit(FR_FORCE, &req->flags)) {
> > -		/* Only fatal signals may interrupt this */
> > -		err = wait_event_killable(req->waitq,
> > -					test_bit(FR_FINISHED, &req->flags));
> > -		if (!err)
> > -			return;
> > +	/* Only fatal signals may interrupt this */
> > +	err = wait_event_killable(req->waitq,
> > +				  test_bit(FR_FINISHED, &req->flags));
>
> Trying to do a fatal signal killable wait sounds reasonable. But I am
> not sure about the history.
>
> - Why FORCE requests can't do killable wait.
> - Why flush needs to have FORCE flag set.

args->force implies a few other things besides this killable wait in
fuse_simple_request(), most notably:

    req = fuse_request_alloc(fm, GFP_KERNEL | __GFP_NOFAIL);

and

    __set_bit(FR_WAITING, &req->flags);

so it seems like it can probably be invoked from some non-user/atomic
context somehow?

> > +	if (!err)
> > +		return;
> >
> > -		spin_lock(&fiq->lock);
> > -		/* Request is not yet in userspace, bail out */
> > -		if (test_bit(FR_PENDING, &req->flags)) {
> > -			list_del(&req->list);
> > -			spin_unlock(&fiq->lock);
> > -			__fuse_put_request(req);
> > -			req->out.h.error = -EINTR;
> > -			return;
> > -		}
> > +	spin_lock(&fiq->lock);
> > +	/* Request is not yet in userspace, bail out */
> > +	if (test_bit(FR_PENDING, &req->flags)) {
> > +		list_del(&req->list);
> >  		spin_unlock(&fiq->lock);
> > +		__fuse_put_request(req);
> > +		req->out.h.error = -EINTR;
> > +		return;
> >  	}
> > +	spin_unlock(&fiq->lock);
> >
> >  	/*
> > -	 * Either request is already in userspace, or it was forced.
> > -	 * Wait it out.
> > +	 * Womp womp. We sent a request to userspace and now we're getting
> > +	 * killed.
> >  	 */
> > -	wait_event(req->waitq, test_bit(FR_FINISHED, &req->flags));
> > +	set_bit(FR_INTERRUPTED, &req->flags);
> > +	/* matches barrier in fuse_dev_do_read() */
> > +	smp_mb__after_atomic();
> > +	/* request *must* be FR_SENT here, because we ignored FR_PENDING before */
> > +	WARN_ON(!test_bit(FR_SENT, &req->flags));
> > +	queue_interrupt(req);
> >  }
> >
> >  static void __fuse_request_send(struct fuse_req *req)
> >
> > available as a full patch here:
> > https://github.com/tych0/linux/commit/81b9ff4c8c1af24f6544945da808dbf69a1293f7
> >
> > but now things are even weirder. Tasks are stuck at the killable wait, but with
> > a SIGKILL pending for the thread group.
>
> That's strange. No idea what's going on.

Thanks for taking a look. This is where it falls apart for me. In
principle the patch seems simple, but this sleeping behavior is beyond
my understanding.

Tycho
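For reference, the unpatched request_wait_answer() flow that produces the
original hang, paraphrased from the hunks quoted above rather than copied
verbatim from the kernel (the wrapper name and some surrounding details are
reconstructed, so treat this as a sketch):

/* outline of the unpatched wait logic in fs/fuse/dev.c */
static void request_wait_answer_outline(struct fuse_req *req)
{
	struct fuse_conn *fc = req->fm->fc;
	int err;

	if (!fc->no_interrupt) {
		/* 1) any signal interrupts this; skipped once no_interrupt is set */
		err = wait_event_interruptible(req->waitq,
				test_bit(FR_FINISHED, &req->flags));
		if (!err)
			return;

		set_bit(FR_INTERRUPTED, &req->flags);
		/* matches barrier in fuse_dev_do_read() */
		smp_mb__after_atomic();
		if (test_bit(FR_SENT, &req->flags))
			queue_interrupt(req);
	}

	if (!test_bit(FR_FORCE, &req->flags)) {
		/* 2) only fatal signals; FORCE requests (flush among them) skip this */
		err = wait_event_killable(req->waitq,
				test_bit(FR_FINISHED, &req->flags));
		if (!err)
			return;
		/* ... bail out with -EINTR if the request is still FR_PENDING ... */
	}

	/*
	 * 3) Either the request is already in userspace, or it was forced:
	 *    wait it out, with no way to be killed.
	 */
	wait_event(req->waitq, test_bit(FR_FINISHED, &req->flags));
}

A flush request carries FR_FORCE, so once it misses the first two exits it
sits in that last wait_event() until the server answers or the connection is
aborted; with the copied fd table keeping the device open, neither happens,
which is the hang described at the top of the thread.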