Date: Wed, 23 Sep 2020 09:11:43 -0400 (EDT)
From: Mikulas Patocka
To: Jan Kara
Cc: Dave Chinner, Dan Williams, Linus Torvalds, Alexander Viro,
    Andrew Morton, Vishal Verma, Dave Jiang, Ira Weiny, Matthew Wilcox,
    Eric Sandeen, Dave Chinner, "Kani, Toshi", "Norton, Scott J",
    "Tadakamadla, Rajesh (DCIG/CDI/HPS Perf)",
    Linux Kernel Mailing List, linux-fsdevel, linux-nvdimm
Subject: Re: NVFS XFS metadata (was: [PATCH] pmem: export the symbols
    __copy_user_flushcache and __copy_from_user_flushcache)
In-Reply-To: <20200923095739.GC6719@quack2.suse.cz>
References: <20200922050314.GB12096@dread.disaster.area>
    <20200923095739.GC6719@quack2.suse.cz>

On Wed, 23 Sep 2020, Jan Kara wrote:

> On Tue 22-09-20 12:46:05, Mikulas Patocka wrote:
> > > mapping 2^21 blocks requires a 5 level indirect tree. Which one is
> > > going to be faster to truncate away - a single record or 2 million
> > > individual blocks?
> > >
> > > IOWs, we can afford to take an extra cacheline miss or two on a
> > > tree block search, because we're accessing and managing orders of
> > > magnitude fewer records in the mapping tree than an indirect block
> > > tree.
> > >
> > > PMEM doesn't change this: extents are more time and space efficient
> > > at scale for mapping trees than indirect block trees regardless
> > > of the storage medium in use.
> >
> > PMEM doesn't have to be read linearly, so the attempts to allocate
> > large linear space are not needed. They won't harm, but they won't
> > help either.
> >
> > That's why NVFS has a very simple block allocation algorithm - it
> > uses a per-cpu pointer and tries to allocate by a bit scan from this
> > pointer. If the group is full, it tries a random group with an
> > above-average number of free blocks.
>
> I agree with Dave here. People are interested in 2MB or 1GB contiguous
> allocations for DAX so that files can be mapped at PMD or even PUD
> levels, thus saving a lot of CPU time on page faults and TLB misses.

NVFS has an upper limit of 1MB on block size. So, should I raise it to 
2MB? Will 2MB blocks be useful to someone?

Is there some API that userspace can use to ask the kernel for an 
aligned allocation? fallocate() doesn't seem to offer an option for 
alignment.

> > EXT4 uses bit scan for allocations and people haven't complained
> > that it's inefficient, so it is probably OK.
>
> Yes, it is more or less OK, but once you get to 1TB filesystem size
> and larger, the number of block groups grows enough that it isn't that
> great anymore. We are actually considering new allocation schemes for
> ext4 for these large filesystems...

NVFS can run with a block size larger than the page size, so you can 
reduce the number of block groups by increasing the block size. (ext4 
also has the bigalloc feature that will do this.)

> > If you think that the lack of journaling is a show-stopper, I can
> > implement it. But then, I'll have something that has the complexity
> > of EXT4 and the performance of EXT4.
> > So there will no longer be any reason to use NVFS over EXT4. Without
> > journaling, it will be faster than EXT4 and it may attract some
> > users who want good performance and who don't care about GID and UID
> > being updated atomically, etc.
>
> I'd hope that your filesystem offers more performance benefits than
> just what you can get from a lack of journalling :). ext4 can be
> configured to

I also don't know how to implement journaling on persistent memory :)

On EXT4 or XFS you can pin dirty buffers in memory until the journal is 
flushed. This is obviously impossible on persistent memory. So, I'm 
considering implementing only some lightweight journaling that will 
guarantee atomicity between just a few writes.

> run without a journal as well - mkfs.ext4 -O ^has_journal. And yes, it
> does significantly improve performance for some workloads, but you
> have to have some way to recover from crashes, so it's mostly used for
> scratch filesystems (e.g. in build systems; Google uses this feature a
> lot for some of their infrastructure as well).
>
> 								Honza
> --
> Jan Kara
> SUSE Labs, CR

I've run "dir-test /mnt/test/ 8000000 8000000" and the results are:

EXT4 with journal    - 5m54,019s
EXT4 without journal - 4m4,444s
NVFS                 - 2m9,482s

Mikulas