Date: Mon, 9 Mar 2020 16:57:49 +0000
From: Catalin Marinas
To: Russell King - ARM Linux admin
Cc: Arnd Bergmann, Nishanth Menon, Santosh Shilimkar, Tero Kristo,
	Linux ARM, Michal Hocko, Rik van Riel, Santosh Shilimkar,
	Dave Chinner, Linux Kernel Mailing List, Linux-MM, Yafang Shao,
	Al Viro, Johannes Weiner, linux-fsdevel, kernel-team@fb.com,
	Kishon Vijay Abraham I, Linus Torvalds, Andrew Morton,
	Roman Gushchin
Subject: Re: [PATCH] vfs: keep inodes with page cache off the inode shrinker LRU
Message-ID: <20200309165749.GB4124965@arrakis.emea.arm.com>
References: <20200212085004.GL25745@shell.armlinux.org.uk>
 <671b05bc-7237-7422-3ece-f1a4a3652c92@oracle.com>
 <7c4c1459-60d5-24c8-6eb9-da299ead99ea@oracle.com>
 <20200306203439.peytghdqragjfhdx@kahuna>
 <20200309155945.GA4124965@arrakis.emea.arm.com>
 <20200309160919.GM25745@shell.armlinux.org.uk>
In-Reply-To: <20200309160919.GM25745@shell.armlinux.org.uk>

On Mon, Mar 09, 2020 at 04:09:19PM +0000, Russell King wrote:
> On Mon, Mar 09, 2020 at 03:59:45PM +0000, Catalin Marinas wrote:
> > On Sun, Mar 08, 2020 at 11:58:52AM +0100, Arnd Bergmann wrote:
> > > - revisit CONFIG_VMSPLIT_4G_4G for arm32 (and maybe mips32)
> > >   to see if it can be done, and what the overhead is. This is probably
> > >   more work than the others combined, but also the most promising
> > >   as it allows the most user address space and physical ram to be used.
> >
> > A rough outline of such support (and likely to miss some corner cases):
> >
> > 1. Kernel runs with its own ASID and non-global page tables.
> >
> > 2. Trampoline code on exception entry/exit to handle the TTBR0 switching
> >    between user and kernel.
> >
> > 3. uaccess routines need to be reworked to pin the user pages in memory
> >    (get_user_pages()) and access them via the kernel address space.
> >
> > Point 3 is probably the ugliest and it would introduce a noticeable
> > slowdown in certain syscalls.
>
> We also need to consider that it has implications for the single-kernel
> support; a kernel doing this kind of switching would likely be horrid
> for a kernel supporting v6+ with VIPT aliasing caches.

Good point. I think with a VIPT aliasing cache, uaccess would have to
flush the cache before/after the access, depending on the direction.

> Would we be adding a new red line between kernels supporting
> VIPT-aliasing caches (present in earlier v6 implementations) and
> kernels using this system?

get_user_pages() should handle the flush_dcache_page() call, and the
latter would deal with the aliases. But this adds heavily to the cost
of the uaccess.

Maybe some trick with temporarily locking the user page table and
copying the user pmd into a dedicated kernel pmd, then accessing the
user via this location. The fault handler would need to figure out the
real user address, and I'm not sure how we deal with the page table
lock (or mmap_sem).

An alternative to the above would be to have all uaccess routines in a
trampoline which restores the user pgd but with only a couple of pmds
mapping the kernel address space temporarily. This would avoid the
issue of concurrent modification of the user page tables.

Anyway, I don't think any of the above looks better than highmem.

-- 
Catalin
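A minimal sketch of the get_user_pages()-based uaccess rework described
in point 3 above, not taken from the thread: the helper name
sketch_copy_to_user_pinned() is hypothetical, and a real implementation
would also need the fault handling and locking discussed above. It pins
the destination user pages and writes to them through the kernel
mapping instead of dereferencing the user address under TTBR0:

	#include <linux/highmem.h>
	#include <linux/mm.h>
	#include <linux/string.h>
	#include <linux/uaccess.h>

	/*
	 * Hypothetical copy-to-user helper: pin each destination user page,
	 * then access it via its kernel mapping. Returns the number of bytes
	 * not copied, mirroring copy_to_user() semantics.
	 */
	static unsigned long sketch_copy_to_user_pinned(void __user *to,
							const void *from,
							unsigned long n)
	{
		unsigned long copied = 0;

		while (copied < n) {
			unsigned long uaddr = (unsigned long)to + copied;
			unsigned long offset = uaddr & ~PAGE_MASK;
			unsigned long chunk = min_t(unsigned long, n - copied,
						    PAGE_SIZE - offset);
			struct page *page;
			void *kaddr;

			/* Pin the user page for writing (may fault and sleep). */
			if (get_user_pages_fast(uaddr, 1, FOLL_WRITE, &page) != 1)
				return n - copied;

			/* Write through the kernel address space mapping. */
			kaddr = kmap_atomic(page);
			memcpy(kaddr + offset, from + copied, chunk);
			kunmap_atomic(kaddr);

			/* Keep user-visible aliases coherent on VIPT caches. */
			flush_dcache_page(page);
			set_page_dirty_lock(page);
			put_page(page);

			copied += chunk;
		}

		return 0;
	}

The per-page pin, map, flush and unpin is exactly the overhead the
thread is worried about, which is why the pmd-copy and trampoline
alternatives are floated as ways to avoid it.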