Date: Thu, 25 May 2023 00:59:39 -0700
From: Christoph Hellwig
To: Dave Chinner
Cc: Uladzislau Rezki, Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
	Andrew Morton, LKML, Baoquan He, Lorenzo Stoakes, Christoph Hellwig,
	Matthew Wilcox, "Liam R. Howlett", "Paul E. McKenney", Joel Fernandes,
	Oleksiy Avramchenko, linux-xfs@vger.kernel.org
Subject: Re: [PATCH 0/9] Mitigate a vmap lock contention
Message-ID:
References: <20230522110849.2921-1-urezki@gmail.com>
In-Reply-To:

On Thu, May 25, 2023 at 07:56:56AM +1000, Dave Chinner wrote:
> > It is up to community to decide. As i see XFS needs it also. Maybe in
> > the future it can be removed(who knows). If the vmalloc code itself can
> > deliver such performance as vm_map* APIs.
>
> vm_map* APIs cannot be replaced with vmalloc, they cover a very
> different use case. i.e. vmalloc allocates mapped memory,
> vm_map_ram() maps allocated memory....
> > > vm_map_ram() and friends interface was added because of vmalloc
> > > drawbacks.
>
> No. vm_map*() were scalability improvements added in 2009 to replace
> on vmap() and vunmap() to avoid global lock contention in the vmap
> allocator that XFS had been working around for years with it's own
> internal vmap cache....

All of that is true.  At the same time XFS could very much switch to
vmalloc for !XBF_UNMAPPED && size > PAGE_SIZE buffers IFF that provided
an advantage.  The need for vmap and then vm_map_* initially came from
the fact that we were using the page cache to back the xfs_buf (or
page_buf back then).  With your work on getting rid of the pagecache
usage we could just use vmalloc now if we wanted to and it improves
performance.  Or at some point we could even look into just using large
folios for that, with the infrastructure for that improving a lot (but I
suspect we're not quite there yet).

But there are other uses of vm_map_* that can't do that, and they will
benefit from the additional scalability as well, even IFF just using
vmalloc was fine for XFS now (I don't know, I haven't actually looked
into it).
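
[A minimal sketch to make the distinction quoted above concrete.  The
demo_* helpers are illustrative names, not kernel APIs; only vmalloc(),
vfree(), vm_map_ram() and vm_unmap_ram() are real calls.]

#include <linux/mm.h>
#include <linux/numa.h>
#include <linux/vmalloc.h>

/* vmalloc(): allocates the backing pages *and* maps them contiguously. */
static void *demo_alloc_mapped(size_t size)
{
	return vmalloc(size);	/* released with vfree() */
}

/*
 * vm_map_ram(): the caller already owns the pages (e.g. from the page
 * allocator); only the contiguous virtual mapping is created here.
 */
static void *demo_map_existing_pages(struct page **pages, unsigned int count)
{
	return vm_map_ram(pages, count, NUMA_NO_NODE);
	/*
	 * Paired with vm_unmap_ram(addr, count); the pages themselves
	 * are freed by the caller, not by the unmap.
	 */
}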
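
[And a rough, hedged sketch of what "switch to vmalloc for !XBF_UNMAPPED
&& size > PAGE_SIZE buffers" could look like.  The struct, flag and
helper below are simplified stand-ins, not the real xfs_buf code in
fs/xfs/xfs_buf.c, which differs in detail.]

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

#define DEMO_XBF_UNMAPPED	(1u << 0)	/* stand-in for XBF_UNMAPPED */

struct demo_buf {			/* simplified stand-in for struct xfs_buf */
	unsigned int	b_flags;
	size_t		b_length;
	void		*b_addr;
};

static int demo_buf_alloc_backing(struct demo_buf *bp)
{
	/* small buffers: plain heap allocation, no mapping needed */
	if (bp->b_length <= PAGE_SIZE) {
		bp->b_addr = kmalloc(bp->b_length, GFP_KERNEL);
		return bp->b_addr ? 0 : -ENOMEM;
	}

	/* unmapped buffers would keep today's page-array-only path */
	if (bp->b_flags & DEMO_XBF_UNMAPPED) {
		bp->b_addr = NULL;
		return 0;
	}

	/*
	 * Mapped multi-page buffer: vmalloc() allocates and maps in one
	 * step, replacing the alloc_pages() + vm_map_ram() pair.
	 */
	bp->b_addr = vmalloc(bp->b_length);
	return bp->b_addr ? 0 : -ENOMEM;
}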