Date: Sun, 19 Sep 2021 22:40:38 -0400
From: "Theodore Ts'o"
To: "Richard W.M. Jones"
Cc: Eric Blake, linux-ext4@vger.kernel.org, libguestfs@redhat.com
Subject: Re: e2fsprogs concurrency questions
In-Reply-To: <20210919123523.GA15475@redhat.com>
References: <20210917210655.sjrqvd3r73gwclti@redhat.com> <20210919123523.GA15475@redhat.com>
X-Mailing-List: linux-ext4@vger.kernel.org

On Sun, Sep 19, 2021 at 01:35:23PM +0100, Richard W.M. Jones wrote:
> Are there structures shared between ext2_fs handles?

Sadly, no.
> I mean, if we had two concurrent threads using different ext2_fs
> handles, but open on the same file, is that going to be a problem?
> (It sounds like it would be, with conflicting access to the block
> allocation bitmap and so on.)

Yes, there's going to be a problem. If you have two separate
ext2_filsys handles (each opened via a separate call to ext2fs_open),
they will not share any structures, nor is there any locking for any
of the data structures.

So that means you could use a single ext2_filsys handle and share it
across threads --- but you need to make sure that only one thread is
using the handle at a time, and you can't have two in-memory copies of
the same inode: if you read an inode twice, modify one copy, and write
it back, the other copy won't magically be updated. Fundamentally,
libext2fs was not designed for concurrent operation.

I suppose you could use fuse2fs, and then have the clients access the
file system via the FUSE interface. That might be more efficient.

					- Ted