Date: Thu, 16 Apr 2020 04:24:20 -0400 (EDT)
From: Mikulas Patocka
To: Dan Williams
cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Peter Anvin" , Peter Zijlstra , X86 ML , Linux Kernel Mailing List , device-mapper development Subject: Re: [PATCH] memcpy_flushcache: use cache flusing for larger lengths In-Reply-To: Message-ID: References: User-Agent: Alpine 2.02 (LRH 1266 2009-07-14) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, 9 Apr 2020, Mikulas Patocka wrote: > With dm-writecache on emulated pmem (with the memmap argument), we get > > With the original kernel: > 8508 - 11378 > real 0m4.960s > user 0m0.638s > sys 0m4.312s > > With dm-writecache hacked to use cached writes + clflushopt: > 8505 - 11378 > real 0m4.151s > user 0m0.560s > sys 0m3.582s I did some multithreaded tests: http://people.redhat.com/~mpatocka/testcases/pmem/microbenchmarks/pmem-multithreaded.txt And it turns out that for singlethreaded access, write+clwb performs better, while for multithreaded access, non-temporal stores perform better. 1 sequential write-nt 8 bytes 1.3 GB/s 2 sequential write-nt 8 bytes 2.5 GB/s 3 sequential write-nt 8 bytes 2.8 GB/s 4 sequential write-nt 8 bytes 2.8 GB/s 5 sequential write-nt 8 bytes 2.5 GB/s 1 sequential write 8 bytes + clwb 1.6 GB/s 2 sequential write 8 bytes + clwb 2.4 GB/s 3 sequential write 8 bytes + clwb 1.7 GB/s 4 sequential write 8 bytes + clwb 1.2 GB/s 5 sequential write 8 bytes + clwb 0.8 GB/s For one thread, we can see that write-nt 8 bytes has 1.3 GB/s and write 8+clwb has 1.6 GB/s, but for multiple threads, write-nt has better throughput. The dm-writecache target is singlethreaded (all the copying is done while holding the writecache lock), so it benefits from clwb. Should memcpy_flushcache be changed to write+clwb? Or are there some multithreaded users of memcpy_flushcache that would be hurt by this change? Mikulas