Date: Wed, 10 Feb 2021 12:29:25 +0000
From: Michal Rostecki
To: Michał Mirosław
Cc: Chris Mason, Josef Bacik, David Sterba,
	"open list:BTRFS FILE SYSTEM", open list, Michal Rostecki
Subject: Re: [PATCH RFC 6/6] btrfs: Add roundrobin raid1 read policy
Message-ID: <20210210122925.GB23499@wotan.suse.de>
References: <20210209203041.21493-1-mrostecki@suse.de>
	<20210209203041.21493-7-mrostecki@suse.de>
	<20210210042428.GC12086@qmqm.qmqm.pl>
In-Reply-To: <20210210042428.GC12086@qmqm.qmqm.pl>

On Wed, Feb 10, 2021 at 05:24:28AM +0100, Michał Mirosław wrote:
> On Tue, Feb 09, 2021 at 09:30:40PM +0100, Michal Rostecki wrote:
> [...]
> > For the array with 3 HDDs, not adding any penalty resulted in 409MiB/s
> > (429MB/s) performance. Adding the penalty value 1 resulted in a
> > performance drop to 404MiB/s (424MB/s). Increasing the value towards 10
> > was making the performance even worse.
> >
> > For the array with 2 HDDs and 1 SSD, adding penalty value 1 to
> > rotational disks resulted in the best performance - 541MiB/s (567MB/s).
> > Not adding any value and increasing the value was making the performance
> > worse.
> >
> > Adding penalty value to non-rotational disks was always decreasing the
> > performance, which motivated setting it as 0 by default. For the purpose
> > of testing, it's still configurable.
> [...]
> > +	bdev = map->stripes[mirror_index].dev->bdev;
> > +	inflight = mirror_load(fs_info, map, mirror_index, stripe_offset,
> > +			       stripe_nr);
> > +	queue_depth = blk_queue_depth(bdev->bd_disk->queue);
> > +
> > +	return inflight < queue_depth;
> [...]
> > +	last_mirror = this_cpu_read(*fs_info->last_mirror);
> [...]
> > +	for (i = last_mirror; i < first + num_stripes; i++) {
> > +		if (mirror_queue_not_filled(fs_info, map, i, stripe_offset,
> > +					    stripe_nr)) {
> > +			preferred_mirror = i;
> > +			goto out;
> > +		}
> > +	}
> > +
> > +	for (i = first; i < last_mirror; i++) {
> > +		if (mirror_queue_not_filled(fs_info, map, i, stripe_offset,
> > +					    stripe_nr)) {
> > +			preferred_mirror = i;
> > +			goto out;
> > +		}
> > +	}
> > +
> > +	preferred_mirror = last_mirror;
> > +
> > +out:
> > +	this_cpu_write(*fs_info->last_mirror, preferred_mirror);
>
> This looks like it effectively decreases queue depth for non-last
> device. After all devices are filled to queue_depth-penalty, only
> a single mirror will be selected for next reads (until a read on
> some other one completes).

Good point. And if all devices are going to be filled for a longer time,
this function will keep selecting the last one. Maybe I should select
last+1 in that case (a rough sketch is at the end of this mail). Would
that address your concern, or did you have any other solution in mind?

Thanks for pointing that out.

> Have you tried testing with much more jobs / non-sequential accesses?

I didn't try with non-sequential accesses. I will do that before
respinning v2.

> Best Regards,
> Michał Mirosław

Regards,
Michal
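
P.S. To make the last+1 idea concrete, here is a rough, untested sketch
of how the fallback at the end of the function could look. It reuses the
names from the RFC patch (first, num_stripes, last_mirror,
preferred_mirror); the exact wrap-around bounds are my assumption, not
verified code:

	/*
	 * All mirrors have their queues filled: instead of reusing
	 * last_mirror (which would pin every subsequent read to a single
	 * device until a request completes), advance to the next mirror,
	 * wrapping around within [first, first + num_stripes).
	 */
	preferred_mirror = last_mirror + 1;
	if (preferred_mirror >= first + num_stripes)
		preferred_mirror = first;

out:
	this_cpu_write(*fs_info->last_mirror, preferred_mirror);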