2019-03-18 11:25:00

by Paul Menzel

Subject: New service e2scrub_reap

Dear Ted, dear Darrick,


On Debian Sid/unstable, I noticed the new service `scrub/e2scrub_reap.service`
installed in the default target [1][2].

> E2fsprogs now has an e2scrub script which will allow e2fsck to be run on
> volumes that are mounted on an LVM device. The e2scrub_all will find
> all ext* file systems and run them using e2scrub (if possible).

```
$ nl -ba scrub/e2scrub_reap.service.in
1 [Unit]
2 Description=Remove Stale Online ext4 Metadata Check Snapshots
3 Documentation=man:e2scrub_all(8)
4
5 [Service]
6 Type=oneshot
7 WorkingDirectory=/
8 PrivateNetwork=true
9 ProtectSystem=true
10 ProtectHome=read-only
11 PrivateTmp=yes
12 AmbientCapabilities=CAP_SYS_ADMIN CAP_SYS_RAWIO
13 NoNewPrivileges=yes
14 User=root
15 IOSchedulingClass=idle
16 CPUSchedulingPolicy=idle
17 ExecStart=@root_sbindir@/e2scrub_all -A -r
18 SyslogIdentifier=%N
19 RemainAfterExit=no
20
21 [Install]
22 WantedBy=default.target
```

As this service is installed in the default target, it increases the boot
time of my target system, which does not have any LVM volumes at all. This
is especially noticeable because a shell script is started, and resources
on my system are scarce during boot-up.

```
$ systemctl status -o short-precise e2scrub_reap.service
● e2scrub_reap.service - Remove Stale Online ext4 Metadata Check Snapshots
Loaded: loaded (/lib/systemd/system/e2scrub_reap.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2019-03-18 12:17:13 CET; 1min 1s ago
Docs: man:e2scrub_all(8)
Process: 447 ExecStart=/sbin/e2scrub_all -A -r (code=exited, status=0/SUCCESS)
Main PID: 447 (code=exited, status=0/SUCCESS)

Mar 18 12:17:08.223560 plumpsklo systemd[1]: Starting Remove Stale Online ext4 Metadata Check Snapshots...
Mar 18 12:17:13.996465 plumpsklo systemd[1]: e2scrub_reap.service: Succeeded.
Mar 18 12:17:13.996808 plumpsklo systemd[1]: Started Remove Stale Online ext4 Metadata Check Snapshots.
```
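
(As a side note, systemd-analyze can quantify how much the unit
contributes to boot-up; I am leaving out the numbers here, as they
obviously differ from system to system.)

```
$ systemd-analyze blame | grep e2scrub
$ systemd-analyze critical-chain e2scrub_reap.service
```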

According to the manual, the switch `-r` removes e2scrub snapshots but does
not check anything.

Does this have to be done during boot-up, or could it be done after the
default target has been reached, or even during shutdown?


Kind regards,

Paul


[1]: https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/tree/scrub
[2]: https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/tree/doc/RelNotes/v1.45.0.txt



2019-03-18 21:47:41

by Theodore Ts'o

Subject: Re: New service e2scrub_reap

On Mon, Mar 18, 2019 at 12:24:55PM +0100, Paul Menzel wrote:
> Dear Ted, dear Darrick,
>
> On Debian Sid/unstable, I noticed the new service `scrub/e2scrub_reap.service`
> installed in the default target [1][2].
>
> $ systemctl status -o short-precise e2scrub_reap.service
> ● e2scrub_reap.service - Remove Stale Online ext4 Metadata Check Snapshots
> Loaded: loaded (/lib/systemd/system/e2scrub_reap.service; enabled; vendor preset: enabled)
> Active: inactive (dead) since Mon 2019-03-18 12:17:13 CET; 1min 1s ago
> Docs: man:e2scrub_all(8)
> Process: 447 ExecStart=/sbin/e2scrub_all -A -r (code=exited, status=0/SUCCESS)
> Main PID: 447 (code=exited, status=0/SUCCESS)
>
> Mar 18 12:17:08.223560 plumpsklo systemd[1]: Starting Remove Stale Online ext4 Metadata Check Snapshots...
> Mar 18 12:17:13.996465 plumpsklo systemd[1]: e2scrub_reap.service: Succeeded.
> Mar 18 12:17:13.996808 plumpsklo systemd[1]: Started Remove Stale Online ext4 Metadata Check Snapshots.

Yeah, that's unfortunate. I'm seeing a similar time on my (fairly
high-end) laptop:

# time e2scrub_all -A -r

real 0m4.356s
user 0m0.677s
sys 0m1.285s


We should be able to fix this in general by not using lsblk at all,
and in the case of e2scrub -r, simply iterating over the
output of:

lvs --name-prefixes -o vg_name,lv_name,lv_path,origin -S lv_role=snapshot

(which takes about a fifth of a second on my laptop and it should be
even faster if there are no LVM volumes on the system)

And without the -r option, we should just be able to do this:

lvs --name-prefixes -o vg_name,lv_name,lv_path -S lv_active=active,lv_role=public

Right now we're calling lvs for every single block device emitted by
lsblk, and from what I can tell, we can do a much better job
optimizing e2scrub_all.
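
Roughly, I am picturing something like this for the -r case (a completely
untested sketch; the ".e2scrub" suffix test is only a stand-in for however
the real script recognizes its own snapshots):

# untested sketch: ask lvs once for all snapshot LVs, instead of running
# lvs for every block device that lsblk reports
lvs --noheadings -o lv_path -S lv_role=snapshot |
while read -r lv_path; do
	# reap only snapshots that look like they were created by e2scrub
	case "$lv_path" in
	*.e2scrub) lvremove -f "$lv_path" ;;
	esac
done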

> Reading the manual, the switch `-r` “removes e2scrub snapshots but do not
> check anything”.
>
> Does this have to be done during boot-up, or could it be done after the
> default target was reached, or even during shutting down?

This shouldn't be blocking any other targets; I think there should be
a way to configure the unit file so that it runs in parallel with the
other systemd units. My systemd-fu is not super strong, so I'll have
to do some investigating to see how we can fix this.
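
For what it's worth, a first step is probably just to look at what the
unit is currently ordered against and pulled in by, e.g.:

systemctl show -p After,Before,WantedBy e2scrub_reap.service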

Regards,

- Ted

2019-03-18 22:04:02

by Paul Menzel

Subject: Re: New service e2scrub_reap

Dear Ted,


On 18.03.19 22:47, Theodore Ts'o wrote:
> On Mon, Mar 18, 2019 at 12:24:55PM +0100, Paul Menzel wrote:

>> On Debian Sid/unstable, I noticed the new service `scrub/e2scrub_reap.service`
>> installed in the default target [1][2].
>>
>> $ systemctl status -o short-precise e2scrub_reap.service
>> ● e2scrub_reap.service - Remove Stale Online ext4 Metadata Check Snapshots
>> Loaded: loaded (/lib/systemd/system/e2scrub_reap.service; enabled; vendor preset: enabled)
>> Active: inactive (dead) since Mon 2019-03-18 12:17:13 CET; 1min 1s ago
>> Docs: man:e2scrub_all(8)
>> Process: 447 ExecStart=/sbin/e2scrub_all -A -r (code=exited, status=0/SUCCESS)
>> Main PID: 447 (code=exited, status=0/SUCCESS)
>>
>> Mar 18 12:17:08.223560 plumpsklo systemd[1]: Starting Remove Stale Online ext4 Metadata Check Snapshots...
>> Mar 18 12:17:13.996465 plumpsklo systemd[1]: e2scrub_reap.service: Succeeded.
>> Mar 18 12:17:13.996808 plumpsklo systemd[1]: Started Remove Stale Online ext4 Metadata Check Snapshots.
>
> Yeah, that's unfortunate. I'm seeing a similar time on my (fairly
> high-end) laptop:
>
> # time e2scrub_all -A -r
>
> real 0m4.356s
> user 0m0.677s
> sys 0m1.285s

Thank you for your response and tests.

> We should be able to fix this in general by avoiding the use of lsblk
> at all, and in the case of e2scrub -r, just simply iterating over the
> output of:
>
> lvs --name-prefixes -o vg_name,lv_name,lv_path,origin -S lv_role=snapshot
>
> (which takes about a fifth of a second on my laptop and it should be
> even faster if there are no LVM volumes on the system)
>
> And without the -r option, we should just be able to do this:
>
> lvs --name-prefixes -o vg_name,lv_name,lv_path -S lv_active=active,lv_role=public
>
> Right now we're calling lvs for every single block device emitted by
> lsblk, and from what I can tell, we can do a much better job
> optimizing e2scrub_all.

Indeed. That sounds like a way to improve the situation.

>> Reading the manual, the switch `-r` “removes e2scrub snapshots but do not
>> check anything”.
>>
>> Does this have to be done during boot-up, or could it be done after the
>> default target was reached, or even during shutting down?
>
> This shouldn't be blocking any other targets, I think there should be
> a way to configure the unit file so that it runs in parallel with the
> other systemd units. My systemd-fu is not super strong, so I'll have
> to do some investigating to see how we can fix this.

Sorry about my wording. It’s not about blocking targets, but about an
additional program fighting for resources. Until the graphical target
(or the graphical login manager) is reached on my system, a lot of
processes are already waiting for CPU time. That is the bottleneck
during the boot-up of my system.

So it’d be great if services that do not actually have to run during
boot-up were only started after the default target has been reached.
Something like the ordering dependency

After=default.target

which, to my knowledge, does not work, though. I’ll ask the systemd folks
again.
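
For completeness, the kind of override I would experiment with (untested)
is a drop-in:

sudo systemctl edit e2scrub_reap.service
# and in the drop-in:
#   [Unit]
#   After=default.target
# (whether systemd actually honours this for a unit that is itself wanted
# by default.target is exactly what I want to clarify with the systemd folks)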


Kind regards,

Paul

2019-03-18 23:32:56

by Darrick J. Wong

Subject: Re: New service e2scrub_reap

On Mon, Mar 18, 2019 at 11:03:59PM +0100, Paul Menzel wrote:
> Dear Ted,
>
>
> On 18.03.19 22:47, Theodore Ts'o wrote:
> > On Mon, Mar 18, 2019 at 12:24:55PM +0100, Paul Menzel wrote:
>
> > > On Debian Sid/unstable, I noticed the new service `scrub/e2scrub_reap.service`
> > > installed in the default target [1][2].
> > >
> > > $ systemctl status -o short-precise e2scrub_reap.service
> > > ● e2scrub_reap.service - Remove Stale Online ext4 Metadata Check Snapshots
> > > Loaded: loaded (/lib/systemd/system/e2scrub_reap.service; enabled; vendor preset: enabled)
> > > Active: inactive (dead) since Mon 2019-03-18 12:17:13 CET; 1min 1s ago
> > > Docs: man:e2scrub_all(8)
> > > Process: 447 ExecStart=/sbin/e2scrub_all -A -r (code=exited, status=0/SUCCESS)
> > > Main PID: 447 (code=exited, status=0/SUCCESS)
> > >
> > > Mar 18 12:17:08.223560 plumpsklo systemd[1]: Starting Remove Stale Online ext4 Metadata Check Snapshots...
> > > Mar 18 12:17:13.996465 plumpsklo systemd[1]: e2scrub_reap.service: Succeeded.
> > > Mar 18 12:17:13.996808 plumpsklo systemd[1]: Started Remove Stale Online ext4 Metadata Check Snapshots.
> >
> > Yeah, that's unfortunate. I'm seeing a similar time on my (fairly
> > high-end) laptop:
> >
> > # time e2scrub_all -A -r
> >
> > real 0m4.356s
> > user 0m0.677s
> > sys 0m1.285s
>
> Thank you for your response and tests.
>
> > We should be able to fix this in general by avoiding the use of lsblk
> > at all, and in the case of e2scrub -r, just simply iterating over the
> > output of:
> >
> > lvs --name-prefixes -o vg_name,lv_name,lv_path,origin -S lv_role=snapshot
> >
> > (which takes about a fifth of a second on my laptop and it should be
> > even faster if there are no LVM volumes on the system)
> >
> > And without the -r option, we should just be able to do this:
> >
> > lvs --name-prefixes -o vg_name,lv_name,lv_path -S lv_active=active,lv_role=public
> >
> > Right now we're calling lvs for every single block device emitted by
> > lsblk, and from what I can tell, we can do a much better job
> > optimizing e2scrub_all.
>
> Indeed. That sounds like a way to improve the situation.

That's ... interesting. On my developer workstations (Ubuntu 16.04 and
18.04) it generally takes about a tenth of that time to run
e2scrub_all.

Even on my aging ~2010-era server that only has disks, it takes 0.3s:

# time e2scrub_all -A -r

real 0m0.280s
user 0m0.160s
sys 0m0.126s

I wonder what's different between our computers? Do you have the
lvm2-lvmetad service running?
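
(In case it helps to check: the unit names below are the Debian/Ubuntu
ones, and they only exist with the older lvmetad-based lvm2.)

systemctl is-active lvm2-lvmetad.service lvm2-lvmetad.socket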

However, since e2scrub is tied to lvm, Ted is right that calling lvs in
the outer loop would be far more efficient. I'll have a look at
reworking this.

> > > Reading the manual, the switch `-r` “removes e2scrub snapshots but do not
> > > check anything”.
> > >
> > > Does this have to be done during boot-up, or could it be done after the
> > > default target was reached, or even during shutting down?
> >
> > This shouldn't be blocking any other targets, I think there should be
> > a way to configure the unit file so that it runs in parallel with the
> > other systemd units. My systemd-fu is not super strong, so I'll have
> > to do some investigating to see how we can fix this.
>
> Sorry about my wording. It’s not about blocking targets, but an additional
> program which fights for the resources. Until the graphical target (or
> graphical login manager) is reached on my system, a lot of process already
> wait for CPU resources. That is the bottleneck during the boot-up of my
> system.
>
> So it’d be great, if services, which actually do not have to run during
> boot-up would only be started after the default target has been reached.
> Something like the ordering dependency
>
> After=default.target
>
> which does not work though to my knowledge. I’ll ask the systemd folks
> again.

The biggest risk of delaying it is that the system crashes while the
root fs is being scrubbed and the snapshot then runs out of space while
the rest of the system comes back up. However, this service can run in
parallel with the other startup tasks; there's no need for it to run
solo.

--D

>
>
> Kind regards,
>
> Paul

2019-03-19 15:03:28

by Theodore Ts'o

Subject: Re: New service e2scrub_reap

On Mon, Mar 18, 2019 at 04:32:38PM -0700, Darrick J. Wong wrote:
> That's ... interesting. On my developer workstations (Ubuntu 16.04 and
> 18.04) it generally takes 1/10th the amount of time to run
> e2scrub_all.
>
> Even on my aging ~2010 era server that only has disks it takes 0.3s:
>
> # time e2scrub_all -A -r
>
> real 0m0.280s
> user 0m0.160s
> sys 0m0.126s
>
> I wonder what's different between our computers? Do you have a
> lvm2-lvmetad service running?

No, I don't. I do have the lvm2-lvmpolld service active, but I don't
think that's used by lvs.

What I can see from running the script under bash -vx is that
e2scrub_all is calling:

lvs --nameprefixes -o vg_name,lv_name,lv_role --noheadings <dev>

for each device returned by lsblk (whether or not it is an LVM device).
At least on my system, each such invocation takes around a seventh of a
second to run:

# sudo time lvs --nameprefixes -o vg_name,lv_name,lv_role --noheadings /dev/lambda/root
LVM2_VG_NAME='lambda' LVM2_LV_NAME='root' LVM2_LV_ROLE='public'
0.01user 0.01system 0:00.14elapsed 16%CPU (0avgtext+0avgdata 13528maxresident)k
8704inputs+0outputs (0major+1056minor)pagefaults 0swaps

# sudo time lvs --nameprefixes -o vg_name,lv_name,lv_role --noheadings /dev/nvmen0
Volume group "nvmen0" not found
Cannot process volume group nvmen0
Command exited with non-zero status 5
0.02user 0.01system 0:00.18elapsed 25%CPU (0avgtext+0avgdata 13648maxresident)k
8704inputs+0outputs (0major+1554minor)pagefaults 0swaps

"e2scrub -A -r" is running the lvs command ten times. So that's
around 1.5 to 2 seconds of the five second run.

Looking at the strace output of the lvs command, I don't see it doing
anything that would take a long time, but it *is* doing a huge number of
open/fstat/close calls on sysfs files. (Essentially, it's searching all
block devices looking for a match for the given LVM volume.) So the
e2scrub_all -A script ends up opening O(N**2) sysfs files, where N is the
number of block devices in the system.
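
For anyone who wants to reproduce the counting, a per-syscall summary of a
single lvs invocation is enough (the device path is of course specific to
my box):

strace -c -f lvs --nameprefixes -o vg_name,lv_name,lv_role --noheadings /dev/lambda/root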

> However, since e2scrub is tied to lvm, Ted is right that calling lvs in
> the outer loop would be far more efficient. I'll have a look at
> reworking this.

We can also use lvm's selection criteria so that we don't have to call
eval on the output of lvs unnecessarily.

Something else that I noticed --- I don't think lvs --nameprefixes
escapes shell metacharacters. Fortunately, lvm only allows names to
contain characters from the set [a-zA-Z0-9+_.-], and I don't *think*
there is a way for userspace to trick lvm into returning a device
pathname that might include a string like "/dev/bobby/$(rm -rf /)".
But we might want to take a second, more paranoid look at whether
we are sure it's safe.
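
For example, for the non-reap case above, reading the fields positionally
would avoid eval entirely (sketch only; the field order simply follows the
-o list):

lvs --noheadings -o vg_name,lv_name,lv_path -S lv_active=active,lv_role=public |
while read -r vg_name lv_name lv_path; do
	# a hostile name could at worst end up inside a variable here,
	# never be executed as shell code
	printf 'would scrub %s (%s/%s)\n' "$lv_path" "$vg_name" "$lv_name"
done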

- Ted

P.S. Obligatory xkcd reference: https://xkcd.com/327/

P.P.S. Just for yucks, it might also be worth testing whether or not
the automounters that some desktops use do any shell-escape sanitization
of the volume label on a USB stick....