Date: Thu, 20 May 2021 14:55:08 +0200
From: David Sterba <dsterba@suse.cz>
To: Geert Uytterhoeven
Cc: David Sterba <dsterba@suse.com>, linux-btrfs@vger.kernel.org,
	linux-kernel@vger.kernel.org, Arnd Bergmann
Subject: Re: [PATCH] btrfs: scrub: per-device bandwidth control
Message-ID: <20210520125508.GA7604@twin.jikos.cz>
Reply-To: dsterba@suse.cz
References: <20210518144935.15835-1-dsterba@suse.com>

On Thu, May 20, 2021 at 09:43:10AM +0200, Geert
Uytterhoeven wrote:
> > - values written to the file accept suffixes like K, M
> > - file is in the per-device directory
> >   /sys/fs/btrfs/FSID/devinfo/DEVID/scrub_speed_max
> > - 0 means use default priority of IO
> >
> > The scheduler is a simple deadline one and the accuracy is up to
> > nearest 128K.
> >
> > Signed-off-by: David Sterba <dsterba@suse.com>
>
> Thanks for your patch, which is now commit b4a9f4bee31449bc ("btrfs:
> scrub: per-device bandwidth control") in linux-next.
>
> noreply@ellerman.id.au reported the following failures for e.g.
> m68k/defconfig:
>
> ERROR: modpost: "__udivdi3" [fs/btrfs/btrfs.ko] undefined!
> ERROR: modpost: "__divdi3" [fs/btrfs/btrfs.ko] undefined!

I'll fix it, thanks for the report.

> > +static void scrub_throttle(struct scrub_ctx *sctx)
> > +{
> > +	const int time_slice = 1000;
> > +	struct scrub_bio *sbio;
> > +	struct btrfs_device *device;
> > +	s64 delta;
> > +	ktime_t now;
> > +	u32 div;
> > +	u64 bwlimit;
> > +
> > +	sbio = sctx->bios[sctx->curr];
> > +	device = sbio->dev;
> > +	bwlimit = READ_ONCE(device->scrub_speed_max);
> > +	if (bwlimit == 0)
> > +		return;
> > +
> > +	/*
> > +	 * Slice is divided into intervals when the IO is submitted, adjust by
> > +	 * bwlimit and maximum of 64 intervals.
> > +	 */
> > +	div = max_t(u32, 1, (u32)(bwlimit / (16 * 1024 * 1024)));
> > +	div = min_t(u32, 64, div);
> > +
> > +	/* Start new epoch, set deadline */
> > +	now = ktime_get();
> > +	if (sctx->throttle_deadline == 0) {
> > +		sctx->throttle_deadline = ktime_add_ms(now, time_slice / div);
>
> ERROR: modpost: "__udivdi3" [fs/btrfs/btrfs.ko] undefined!
>
> div_u64(bwlimit, div)
>
> > +		sctx->throttle_sent = 0;
> > +	}
> > +
> > +	/* Still in the time to send? */
> > +	if (ktime_before(now, sctx->throttle_deadline)) {
> > +		/* If current bio is within the limit, send it */
> > +		sctx->throttle_sent += sbio->bio->bi_iter.bi_size;
> > +		if (sctx->throttle_sent <= bwlimit / div)
> > +			return;
> > +
> > +		/* We're over the limit, sleep until the rest of the slice */
> > +		delta = ktime_ms_delta(sctx->throttle_deadline, now);
> > +	} else {
> > +		/* New request after deadline, start new epoch */
> > +		delta = 0;
> > +	}
> > +
> > +	if (delta)
> > +		schedule_timeout_interruptible(delta * HZ / 1000);
>
> ERROR: modpost: "__divdi3" [fs/btrfs/btrfs.ko] undefined!
>
> I'm a bit surprised gcc doesn't emit code for the division by the
> constant 1000, but emits a call to __divdi3(). So this has to become
> div_u64(), too.
>
> > +	/* Next call will start the deadline period */
> > +	sctx->throttle_deadline = 0;
> > +}
>
> BTW, any chance you can start adding lore Link: tags to your commits,
> to make it easier to find the email thread to reply to when reporting
> a regression?

Well, no I'm not going to do that, sorry. It should be easy enough to
paste the patch subject to the search field on lore.k.org and click the
link leading to the mail, I do that all the time. Making sure that
patches have all the tags and information takes time already so I'm not
too keen to spend time on adding links.
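
For readers who hit the same modpost errors: on 32-bit targets gcc
lowers an open-coded 64-bit division to the libgcc helpers
__udivdi3/__divdi3, which the kernel does not link against, so such
divisions have to go through the helpers in <linux/math64.h>. Below is
only a sketch of how the two spots Geert flagged might be reworked, not
the actual follow-up commit; msecs_to_jiffies() is used here as one way
to avoid the open-coded division by 1000.

	/* Sketch: 32-bit-safe variants of the two flagged divisions. */

	/* bwlimit is u64 and div is u32, so a plain '/' becomes a call
	 * to __udivdi3 on 32-bit; div_u64() avoids the libgcc helper.
	 */
	if (sctx->throttle_sent <= div_u64(bwlimit, div))
		return;

	/* 'delta * HZ / 1000' divides an s64 by a constant, which gcc
	 * emits as __divdi3. msecs_to_jiffies() does the same ms-to-
	 * jiffies conversion without an open-coded 64-bit division;
	 * delta is bounded by the one-second slice, so it fits in the
	 * unsigned int argument.
	 */
	if (delta)
		schedule_timeout_interruptible(msecs_to_jiffies(delta));

Once the sysfs file exists, the limit is set by writing to the
per-device attribute, e.g. "echo 100M >
/sys/fs/btrfs/FSID/devinfo/DEVID/scrub_speed_max" (FSID/DEVID as in the
changelog above), and writing 0 restores the default IO priority.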