Date: Wed, 18 Sep 2019 06:23:50 -0400 (EDT)
From: Mikulas Patocka
To: Huaisheng HS1 Ye
cc: "snitzer@redhat.com", "agk@redhat.com", "prarit@redhat.com", Tzu ting Yu1, "dm-devel@redhat.com", "linux-kernel@vger.kernel.org", Huaisheng Ye
Subject: Re: [PATCH] dm writecache: skip writecache_wait for pmem mode

On Thu, 5 Sep 2019, Huaisheng HS1 Ye wrote:

> > -----Original Message-----
> > From: Mikulas Patocka
> > Sent: Wednesday, September 4, 2019 11:36 PM
> >
> > On Wed, 4 Sep 2019, Huaisheng HS1 Ye wrote:
> >
> > > Hi Mikulas,
> > >
> > > Thanks for your reply, I see what you mean, but I can't agree with you.
> > >
> > > For pmem mode, this code path (writecache_flush) is much hotter than
> > > in SSD mode, because AUTOCOMMIT_BLOCKS_PMEM is defined as 64, which
> > > means that once more than 64 blocks have been inserted into the cache
> > > device (i.e. are uncommitted), writecache_flush is called. Otherwise,
> > > a timer callback fires every 1000 milliseconds.
> > >
> > > #define AUTOCOMMIT_BLOCKS_SSD 65536
> > > #define AUTOCOMMIT_BLOCKS_PMEM 64
> > > #define AUTOCOMMIT_MSEC 1000
> > >
> > > So when dm-writecache is in its working mode, with a continuous stream
> > > of WRITE operations mapped to writecache_map, writecache_flush is used
> > > much more often than in SSD mode.
> > >
> > > Cheers,
> > > Huaisheng Ye
> >
> > So, you save one instruction cache line for every 64*4096 bytes written
> > to persistent memory.
> >
> > If you insist on it, I can acknowledge it, but I think it is really an
> > over-optimization.
> >
> > Acked-By: Mikulas Patocka
> >
> > Mikulas
>
> Thanks for your Acked-by, I have learned so much from your code.
>
> And I have another question about the LRU.
>
> The current code only puts the most recently written blocks at the front
> of the wc->lru list; a READ hit does not affect a block's position in
> wc->lru. That is to say, once a block has been written to the cache
> device, even if it is then read many times without another WRITE hit, it
> still drifts to the end of wc->lru and is eventually written back.
>
> I am not sure whether this behavior disobeys the LRU principle or not,
> but in that situation some HOT blocks (read many times, but without a
> WRITE hit) would be written back. Is it worth submitting a patch to
> adjust the position of blocks on a READ hit? Just a discussion; I want
> to know your design idea.
>
> Cheers,
> Huaisheng Ye

The dm-writecache target is supposed to optimize writes, not reads.
Normally, there won't be any reads following a write, because the data
would be stored in the cache in RAM.

Mikulas
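P.S. For reference, the autocommit decision discussed above can be
sketched in plain C. should_autocommit() is a hypothetical illustration
of the threshold check, not the actual dm-writecache code; it also
ignores the separate 1000 ms timer path (AUTOCOMMIT_MSEC):

```c
#include <stdbool.h>

/* Thresholds quoted from the discussion above. */
#define AUTOCOMMIT_BLOCKS_SSD  65536
#define AUTOCOMMIT_BLOCKS_PMEM 64

/* Hypothetical sketch of the per-insert check: once the number of
 * uncommitted blocks reaches the mode's threshold, a flush is due.
 * The pmem threshold (64) is hit 1024x more often than the SSD one. */
static bool should_autocommit(bool pmem_mode, unsigned long uncommitted_blocks)
{
    unsigned long threshold = pmem_mode ? AUTOCOMMIT_BLOCKS_PMEM
                                        : AUTOCOMMIT_BLOCKS_SSD;
    return uncommitted_blocks >= threshold;
}
```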
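The READ-hit promotion Huaisheng asks about would amount to moving the
hit entry back to the head of wc->lru (in the kernel, a list_move()). A
minimal userspace sketch of that operation, with struct node standing in
for the kernel's struct list_head and promote_on_read_hit() being a
hypothetical helper, not existing driver code:

```c
#include <stddef.h>

/* Minimal doubly linked list, standing in for the kernel's list_head. */
struct node { struct node *prev, *next; };

static void list_init(struct node *h) { h->prev = h->next = h; }

static void list_del(struct node *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
}

static void list_add_head(struct node *n, struct node *h)
{
    n->next = h->next;
    n->prev = h;
    h->next->prev = n;
    h->next = n;
}

/* Hypothetical read-hit promotion: unlink the entry and relink it at
 * the front of the LRU so a frequently read block is evicted last. */
static void promote_on_read_hit(struct node *entry, struct node *lru)
{
    list_del(entry);
    list_add_head(entry, lru);
}
```

As the reply above explains, dm-writecache deliberately leaves this out:
the target optimizes writes, not reads, so READ hits do not reorder
wc->lru.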