Date: Tue, 14 Apr 2020 07:20:14 -0700
From: Matthew Wilcox
To: Christophe Leroy
Cc: Nicholas Piggin, linux-arch@vger.kernel.org, "H. Peter Anvin", Will Deacon,
    x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Ingo Molnar, Borislav Petkov, Catalin Marinas, Thomas Gleixner,
    linuxppc-dev@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v2 4/4] mm/vmalloc: Hugepage vmalloc mappings
Message-ID: <20200414142014.GO21484@bombadil.infradead.org>
References: <20200413125303.423864-1-npiggin@gmail.com>
    <20200413125303.423864-5-npiggin@gmail.com>
    <20200413134106.GN21484@bombadil.infradead.org>
    <36616218-1d3a-b18a-8fb8-4fc9eff22780@c-s.fr>
In-Reply-To: <36616218-1d3a-b18a-8fb8-4fc9eff22780@c-s.fr>

On Tue, Apr 14, 2020 at 02:28:35PM +0200, Christophe Leroy wrote:
> On 13/04/2020 at 15:41, Matthew Wilcox wrote:
> > On Mon, Apr 13, 2020 at 10:53:03PM +1000, Nicholas Piggin wrote:
> > > +static int vmap_pages_range_noflush(unsigned long start, unsigned long end,
> > > +			pgprot_t prot, struct page **pages,
> > > +			unsigned int page_shift)
> > > +{
> > > +	if (page_shift == PAGE_SIZE) {
> > 
> > ... I think you meant 'page_shift == PAGE_SHIFT'
> > 
> > Overall I like this series, although it's a bit biased towards CPUs
> > which have page sizes which match PMD/PUD sizes.  It doesn't offer the
> > possibility of using 64kB page sizes on ARM, for example.  But it's a
> > step in the right direction.
> 
> I was going to ask more or less the same question; I would have liked to
> use 512kB hugepages on powerpc 8xx.
> 
> Even the 8M hugepages (still on the 8xx), can they be used as well, taking
> into account that two PGD entries have to point to the same 8M page?
> 
> I sent out a series which tends to make the management of 512k and 8M
> pages closer to what Linux expects, in order to use them inside the
> kernel, for linear mappings and KASAN mappings for the moment.  See
> https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=164620
> It would be nice if we could amplify it and use it for ioremaps and
> vmallocs as well.

I haven't been looking at vmalloc at all; I've been looking at the page
cache.  See:
https://lore.kernel.org/linux-mm/20200212041845.25879-1-willy@infradead.org/

Once we have large pages in the page cache, I want to sort out the API
for asking the CPU to insert a TLB entry.  Right now, we use set_pte_at(),
set_pmd_at() and set_pud_at().  I'm thinking something along the lines of:

	vm_fault_t vmf_set_page_at(struct vm_fault *vmf, struct page *page);

and the architecture can insert whatever PTEs and/or TLB entries it
likes based on compound_order(page) -- if, say, it's a 1MB page, it
might choose to insert 2 * 512kB entries, or just the upper or lower
512kB entry (depending which half of the 1MB page the address sits in).
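
To make that a little more concrete, here is a rough sketch of what a
generic fallback for such a hook could look like.  This is purely
illustrative: vmf_set_page_at() does not exist in the tree, and
arch_set_huge_entry() / arch_set_base_entry() below are made-up stand-ins
for whatever primitives an architecture would actually provide.

	/*
	 * Purely illustrative sketch -- not real kernel code.  The core
	 * passes the compound page plus the fault, and the architecture
	 * decides how large a leaf entry (or entries) to install.
	 */
	vm_fault_t vmf_set_page_at(struct vm_fault *vmf, struct page *page)
	{
		unsigned int order = compound_order(page);

		/*
		 * If the compound page covers at least a PMD, hand it to a
		 * hypothetical arch hook that may install a huge entry, or
		 * several smaller ones (e.g. two 512kB entries for a 1MB
		 * page), or only the half containing vmf->address.
		 */
		if (order >= PMD_SHIFT - PAGE_SHIFT)
			return arch_set_huge_entry(vmf, page, PMD_SHIFT);

		/* Otherwise fall back to a single base-page entry. */
		return arch_set_base_entry(vmf, page);
	}

The point of pushing the decision down into the arch hook is exactly the
flexibility described above: an architecture whose TLB supports 512kB
entries but not 1MB ones can still make use of a 1MB page-cache page
without the core MM having to know anything about its page-size menu.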