Date: Mon, 6 Apr 2020 19:29:17 +0200
From: Lennart Poettering
To: Miklos Szeredi
Cc: Ian Kent, David Howells, Christian Brauner, Linus Torvalds, Al Viro,
    dray@redhat.com, Karel Zak, Miklos Szeredi, Steven Whitehouse,
    Jeff Layton, andres@anarazel.de, keyrings@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    Aleksa Sarai
Subject: Re: Upcoming: Notifications, FS notifications and fsinfo()
Message-ID: <20200406172917.GA37692@gardel-login>
References: <20200402143623.GB31529@gardel-login>
    <20200402152831.GA31612@gardel-login>
    <20200402155020.GA31715@gardel-login>
    <20200403110842.GA34663@gardel-login>
    <20200403150143.GA34800@gardel-login>
In-Reply-To:
X-Mailing-List: linux-kernel@vger.kernel.org

On Mo, 06.04.20 11:22, Miklos Szeredi (miklos@szeredi.hu) wrote:

> > Nah. What I wrote above is drastically simplified. It's IRL more
> > complex. Specific services need to be killed before certain mounts
> > are unmounted, since they are a backend for another mount. NFS, or
> > FUSE or stuff like that usually has some processes backing them,
> > and we need to stop the mounts they provide before these
> > services, and then the mounts these services reside on after that,
> > and so on. It's a complex dependency tree of stuff that needs to be
> > done in order, so that we can deal with arbitrarily nested mounts,
> > storage subsystems, and backing services.
>
> That still doesn't explain why you need to keep track of all mounts
> in the system.
>
> If you are aware of the dependency, then you need to keep track of
> that particular mount. If not, then why?

It works the other way round in systemd: something happens, i.e. a
device pops up or a mount is established, and systemd figures out
whether there's something to do, i.e. whether services shall be pulled
in or so.

It's that way for a reason: there are plenty of services that want to
be instantiated once for each object of a certain kind that pops up
(this happens very often for devices, but could also happen for any
other kind of "unit" systemd manages, and one of those kinds are mount
units). For those we don't know the unit to pull in yet (because it's
not going to be a well-named singleton, but an instance incorporating
some identifier from the source unit) before the unit that pops up
does so; thus we can only wait for the latter to appear to determine
what to pull in.

> What I'm starting to see is that there's a fundamental conflict
> between how systemd people want to deal with new mounts and how some
> other people want to use mounts (i.e. tens of thousands of mounts in
> an automount map).
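[Editorial note: the per-object instantiation described above — a template
unit stamped out once per device or mount that appears — can be sketched
roughly as follows. This is a simplified illustration, not systemd's actual
matching logic; the template names are made up, and `escape` only mimics
the systemd-escape path convention.]

```python
# Hypothetical sketch: when an object (device, mount) appears, derive an
# instance name from it and pull in units templated on that kind of object.

def escape(path: str) -> str:
    """Roughly mimic systemd-escape for paths: '-' -> '\\x2d', '/' -> '-'."""
    if path == "/":
        return "-"
    return path.strip("/").replace("-", "\\x2d").replace("/", "-")

def units_to_pull_in(event_kind: str, object_path: str) -> list[str]:
    # Templates that want one instance per object of a given kind.
    # These unit names are invented for illustration.
    templates = {
        "mount": ["mount-watch@%i.service"],
        "device": ["backup@%i.service"],
    }
    instance = escape(object_path)
    return [t.replace("%i", instance) for t in templates.get(event_kind, [])]

print(units_to_pull_in("mount", "/srv/data"))
# -> ['mount-watch@srv-data.service']
# The instance name cannot be known until the mount actually appears,
# which is why the manager must wait for the event before resolving
# what to pull in.
```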
Well, I am not sure what automount has to do with anything. You can
have 10K mounts with or without automount, it's orthogonal to that. In
fact, I assumed the point of automount was to pretend there are 10K
mounts but not actually have them most of the time, no?

I mean, whether there's room to optimize D-Bus IPC or not is entirely
orthogonal to anything discussed here regarding fsinfo(). Don't make
this about systemd sending messages over D-Bus, that's a very
different story, and a non-issue if you ask me:

Right now, when you have n mounts, and any mount changes, or one is
added or removed, then we have to parse the whole mount table again,
asynchronously, processing all n entries again, every frickin' time.
This means the work to process n mounts popping up at boot is O(n²).
That sucks, it should be obvious to anyone. Now if we get that fixed,
by some mount API that can send us minimal notifications about what
happened and where, then this becomes O(n), which is totally OK.

You keep talking about filtering, which will just lower the "n" a bit
in particular cases to some value "m" maybe (with m < n); it does not
address the fact that O(m²) is still a big problem. Hence: filtering
is great, no problem, add it if you want it. I personally don't care
about filtering though, and I doubt we'd use it in systemd; I just
care about the O(n²) issue.

If you ask me if D-Bus can handle 10K messages sent over the bus
during boot, then yes, it totally can handle that. Can systemd nicely
process O(n²) mounts internally equally well? No, obviously not, if n
grows too large. Any computer scientist should understand that.

Anyway, I have the suspicion this discussion has stopped being
useful. I think you are trying to fix problems that userspace
actually doesn't have. I can just tell you what we understand the
problems to be, but if you are out trying to fix other perceived
ones, then great, but I have mostly lost interest.

Lennart

--
Lennart Poettering, Berlin
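[Editorial note: the quadratic re-parse arithmetic in the message above is
easy to verify with a short sketch. The two cost functions below are
illustrative stand-ins for "entries processed", not systemd code.]

```python
# Compare total mount-table entries processed when n mounts appear one at
# a time: a full re-parse of the table on every change vs. a minimal
# notification carrying only the changed entry.

def full_reparse_cost(n: int) -> int:
    # The k-th new mount triggers a re-parse of all k entries so far:
    # 1 + 2 + ... + n = n(n+1)/2, i.e. O(n^2) total work.
    return sum(k for k in range(1, n + 1))

def notification_cost(n: int) -> int:
    # Each new mount delivers exactly one notification to process: O(n).
    return n

print(full_reparse_cost(10_000))   # -> 50005000
print(notification_cost(10_000))   # -> 10000
```

At 10K mounts the full re-parse approach processes ~50 million entries
where incremental notifications process 10 thousand, which is the gap the
message is pointing at; filtering only shrinks n to some m without
changing the quadratic shape.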