Date: Tue, 24 Mar 2020 15:11:49 -0400
From: Mike Snitzer
To: Lukas Straub
Cc: linux-kernel, Alasdair Kergon, dm-devel, Mikulas Patocka
Subject: Re: [PATCH v2] dm-integrity: Prevent RMW for full metadata buffer writes
Message-ID: <20200324191148.GA2921@redhat.com>
References: <20200227142601.61f3cd54@luklap> <20200324171821.GA2444@redhat.com> <20200324195912.518dc87c@luklap>
In-Reply-To: <20200324195912.518dc87c@luklap>

On Tue, Mar 24 2020 at 2:59pm -0400,
Lukas Straub wrote:

> On Tue, 24 Mar 2020 13:18:22 -0400
> Mike Snitzer wrote:
>
> > On Thu, Feb 27 2020 at 8:26am -0500,
> > Lukas Straub wrote:
> >
> > > If a full metadata buffer is being written, don't read it first. This
> > > prevents a read-modify-write cycle and increases performance on HDDs
> > > considerably.
> > >
> > > To do this, we now calculate the checksums for all sectors in the bio
> > > in one go in integrity_metadata and then pass the result to
> > > dm_integrity_rw_tag, which now checks if we overwrite the whole buffer.
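A minimal sketch of the overwrite check described above. The names and
types are hypothetical, not the actual dm-integrity code (which operates
on dm-bufio buffers); it only illustrates when the read can be skipped:

  #include <stdbool.h>
  #include <stddef.h>
  #include <string.h>

  struct meta_buffer {
          unsigned char *data;    /* cached copy of one metadata block */
          size_t size;            /* block size in bytes */
          bool uptodate;          /* cache already populated from disk */
          bool dirty;             /* needs writeback */
  };

  /* Stand-in for the device read; details elided. */
  static void read_block(struct meta_buffer *buf)
  {
          /* ... read buf->size bytes from disk into buf->data ... */
          buf->uptodate = true;
  }

  /*
   * Store len bytes of tag data at offset. When the write covers the
   * whole buffer, every old byte gets replaced anyway, so the prior
   * read (the "R" in read-modify-write) can be skipped.
   */
  static void write_tags(struct meta_buffer *buf, const unsigned char *tags,
                         size_t offset, size_t len)
  {
          bool full_overwrite = (offset == 0 && len == buf->size);

          if (!full_overwrite && !buf->uptodate)
                  read_block(buf);        /* partial update: must merge */

          memcpy(buf->data + offset, tags, len);
          buf->uptodate = true;
          buf->dirty = true;              /* written back to disk later */
  }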
> > >
> > > Benchmarking with a 5400 RPM HDD in bitmap mode:
> > >
> > > integritysetup format --no-wipe --batch-mode --interleave-sectors $((64*1024)) -t 4 -s 512 -I crc32c -B /dev/sdc
> > > integritysetup open --buffer-sectors=1 -I crc32c -B /dev/sdc hdda_integ
> > > dd if=/dev/zero of=/dev/mapper/hdda_integ bs=64K count=$((16*1024*4)) conv=fsync oflag=direct status=progress
> > >
> > > Without patch:
> > > 4294967296 bytes (4.3 GB, 4.0 GiB) copied, 400.326 s, 10.7 MB/s
> > >
> > > With patch:
> > > 4294967296 bytes (4.3 GB, 4.0 GiB) copied, 41.2057 s, 104 MB/s
> > >
> > > Signed-off-by: Lukas Straub
> > > ---
> > > Hello everyone,
> > > Here is v2, which now checks whether we overwrite a whole metadata
> > > buffer instead of the (buggy) check whether we overwrite a whole tag
> > > area as before. Performance stayed the same (with --buffer-sectors=1).
> > >
> > > The only quirk now is that it advertises a very big optimal I/O size
> > > in the standard configuration (where buffer_sectors=128). Is this a
> > > problem?
> > >
> > > v2:
> > > - Fix dm_integrity_rw_tag to check whether we overwrite a whole
> > >   metadata buffer.
> > > - Fix the advertised optimal I/O size to match a whole metadata buffer.
> > >
> > > Regards,
> > > Lukas Straub
> >
> > Not sure why you didn't cc Mikulas, but I just checked with him and he
> > thinks:
> >
> > You're only seeing a boost because you're using a 512-byte sector and a
> > 512-byte buffer. The patch helps that case but hurts in most other
> > cases (due to small buffers).
>
> Hmm, buffer-sectors is still user-configurable. If the user wants fast
> write performance on slow HDDs, he can set a small buffer-sectors value
> and benefit from this patch. With the default buffer-sectors=128 it
> shouldn't have any impact.

OK, if you'd be willing to conduct tests to prove there is no impact with
larger buffers, that would certainly help reinforce your position and make
it easier for me to take your patch. But if you're only testing against
slow HDD testbeds, then the test coverage isn't wide enough.

Thanks,
Mike
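For reference, the buffer arithmetic behind Mikulas's point, assuming
4-byte crc32c tags (the -t 4 in the format command above):

  tags per 512-byte buffer (--buffer-sectors=1): 512 / 4   = 128
  data covered by one such buffer:               128 * 512 = 64 KiB

Every 64 KiB direct write in the benchmark therefore replaces exactly one
full metadata buffer and takes the no-read path. With the default
buffer-sectors=128, one buffer holds 16384 tags covering 8 MiB of data,
which a 64 KiB write can never fully overwrite.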