Date: Fri, 29 Jan 2021 09:03:55 +0200
From: Mike Rapoport
To: James Bottomley
Cc: Michal Hocko, David Hildenbrand, Andrew Morton, Alexander Viro,
    Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Catalin Marinas,
    Christopher Lameter, Dan Williams, Dave Hansen, Elena Reshetova,
    "H. Peter Anvin", Ingo Molnar, "Kirill A. Shutemov", Matthew Wilcox,
    Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
    Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
    Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
    Will Deacon, linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linux-kselftest@vger.kernel.org, linux-nvdimm@lists.01.org,
    linux-riscv@lists.infradead.org, x86@kernel.org,
    Hagen Paul Pfeifer, Palmer Dabbelt
Subject: Re: [PATCH v16 07/11] secretmem: use PMD-size pages to amortize direct map fragmentation
Message-ID: <20210129070355.GC242749@kernel.org>
References: <20210121122723.3446-1-rppt@kernel.org>
 <20210121122723.3446-8-rppt@kernel.org>
 <20210126114657.GL827@dhcp22.suse.cz>
 <303f348d-e494-e386-d1f5-14505b5da254@redhat.com>
 <20210126120823.GM827@dhcp22.suse.cz>
 <20210128092259.GB242749@kernel.org>
 <2b6a5f22f0b062432186b89eeef58e2ba45e09c1.camel@linux.ibm.com>
In-Reply-To: <2b6a5f22f0b062432186b89eeef58e2ba45e09c1.camel@linux.ibm.com>

On Thu, Jan 28, 2021 at 07:28:57AM -0800, James Bottomley wrote:
> On Thu, 2021-01-28 at 14:01 +0100, Michal Hocko wrote:
> > On Thu 28-01-21 11:22:59, Mike Rapoport wrote:
> [...]
> > > One of the major pushbacks on the first RFC [1] of the concept was
> > > about the direct map fragmentation. I tried really hard to find
> > > data that shows what is the performance difference with different
> > > page sizes in the direct map and I didn't find anything.
> > >
> > > So presuming that large pages do provide advantage the first
> > > implementation of secretmem used PMD_ORDER allocations to amortise
> > > the effect of the direct map fragmentation and then handed out 4k
> > > pages at each fault.
> > > In addition there was an option to reserve a
> > > finite pool at boot time and limit secretmem allocations only to
> > > that pool.
> > >
> > > At some point David suggested to use CMA to improve overall
> > > flexibility [3], so I switched secretmem to use CMA.
> > >
> > > Now, with the data we have at hand (my benchmarks and Intel's
> > > report David mentioned) I'm not even sure this whole pooling is
> > > even required.
> >
> > I would still like to understand whether that data is actually
> > representative. With some underlying reasoning rather than "I have
> > run these XYZ benchmarks and numbers do not look terrible".
>
> My theory, and the reason I made Mike run the benchmarks, is that our
> fear of TLB miss has been alleviated by CPU speculation advances over
> the years. You can appreciate this if you think that both Intel and
> AMD have increased the number of levels in the page table to
> accommodate larger virtual memory size: 5 instead of 3. That increases
> the length of the page walk nearly 2x in a physical system and even
> more in a virtual system. Unless this were massively optimized,
> systems would have slowed down significantly. Using 2M pages only
> eliminates one level and 2G pages eliminates 2, so I theorized that
> fragmentation actually wouldn't be the significant problem we once
> thought it was and asked Mike to benchmark it.
>
> The benchmarks show that indeed, it isn't a huge change in the data
> TLB miss time, I suspect because data is nicely contiguous nowadays
> and the prediction that goes into the CPU optimizations is quite easy.
> ITLB fragmentation actually seems to be quite a bit worse, likely
> because we still don't have branch prediction down to an exact
> science.

Another thing is that normally the useful work is done by userspace, so
data accesses are dominated by userspace and any change in dTLB miss
rate for kernel data accesses is only a small fraction of all misses.

> James

-- 
Sincerely yours,
Mike.
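P.S. The page-walk arithmetic James describes can be sketched roughly as
follows. This is a hedged back-of-the-envelope model, not a measurement
from this thread: it assumes one memory access per remaining page-table
level on a bare-metal walk, and uses the textbook (g+1)*(h+1)-1
approximation for the worst-case two-dimensional walk under nested
paging.

```python
# Rough cost model for a TLB miss, under the assumptions stated above.

def native_walk_accesses(levels, levels_skipped=0):
    """Memory accesses to resolve a TLB miss on bare metal.

    levels_skipped is 1 for 2M pages (the walk stops at the PMD)
    and 2 for 1G pages (the walk stops at the PUD)."""
    return levels - levels_skipped

def nested_walk_accesses(guest_levels, host_levels):
    """Worst-case accesses under nested paging: each of the guest's
    (g+1) guest-physical references costs a host walk of h accesses
    plus the access itself, minus the final data access."""
    return (guest_levels + 1) * (host_levels + 1) - 1

if __name__ == "__main__":
    for lv in (4, 5):
        print(f"{lv}-level paging, 4K pages: "
              f"native={native_walk_accesses(lv)}, "
              f"nested={nested_walk_accesses(lv, lv)}")
    # Larger pages shorten the walk: 2M drops one level, 1G drops two.
    print("5-level, 2M pages, native:", native_walk_accesses(5, 1))
    print("5-level, 1G pages, native:", native_walk_accesses(5, 2))
```

The jump from 24 to 35 worst-case accesses when both guest and host move
to 5-level paging is the "even more in a virtual system" effect; the
savings from 2M/1G pages are one or two accesses per walk, which is
consistent with the benchmarks showing only a modest dTLB-side change.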