Date: Tue, 13 Oct 2020 10:05:57 +0200
From: Joerg Roedel
To: Linus Torvalds
Cc: Ingo Molnar, Linux Kernel Mailing List, Thomas Gleixner,
    Borislav Petkov, Peter Zijlstra, Andrew Morton
Subject: Re: [GIT PULL] x86/mm changes for v5.10
Message-ID: <20201013080557.GF3302@suse.de>
References: <20201012172415.GA2962950@gmail.com>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Oct 12, 2020 at 03:07:45PM -0700, Linus Torvalds wrote:
> On Mon, Oct 12, 2020 at 10:24 AM Ingo Molnar wrote:
> >
> > Do not sync vmalloc/ioremap mappings on x86-64 kernels.
> >
> > Hopefully now without the bugs!
>
> Let's hope so.
>
> If this turns out to work this time, can we do a similar preallocation
> of the page directories on 32-bit? Because I think now x86-32 is the
> only remaining case of doing that arch_sync_kernel_mappings() thing.
>
> Or is there some reason that won't work that I've lost sight of?

There were two reasons which made me decide not to pre-allocate on
x86-32:

	1) The sync level is the same as the huge-page level (PMD) in
	   both paging modes, so with large ioremap mappings the
	   synchronization is always needed. The huge ioremap mappings
	   could possibly be disabled without much performance impact
	   on x86-32.

	2) The vmalloc area has a variable size and grows as the
	   machine has less RAM. And when the vmalloc area gets
	   larger, more pre-allocated pages are needed.

Another factor is the configurable vm-split. With a 1G/3G split on a
machine with 128MB of RAM there would be:

	Vmalloc area size (holes ignored): 3072MB - 128MB = 2944MB
	PTE pages needed (with PAE):       2944MB / 2MB per page = 1472 4k pages
	Memory needed:                     1472 * 4KB = 5888KB

So on such a machine the pre-allocation would need 5.75MB of the 128MB
of RAM. Without PAE it is half of that.

This is an exotic configuration and I am not sure it matters much in
practice. It could also be worked around by setting limits, for
example: don't make the vmalloc area larger than the available memory
in the system.

So pre-allocating has its implications. If we decide to pre-allocate
on x86-32 too, then we should be prepared for the fall-out of the
higher memory usage.

Regards,

	Joerg
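[Editor's note: the back-of-the-envelope arithmetic in the estimate above can be reproduced with a short script. This is an illustrative sketch only, not kernel code; it assumes, as the mail does, that with PAE one 4KB PTE page maps 2MB of virtual address space, and 4MB without PAE.]

```python
# Reproduce the pre-allocation estimate for a 1G/3G vm-split
# on a machine with 128MB of RAM (figures from the mail above).

vmalloc_area_mb = 3072 - 128              # vmalloc area, holes ignored
pte_pages_pae = vmalloc_area_mb // 2      # with PAE, one PTE page maps 2MB
memory_needed_kb = pte_pages_pae * 4      # each PTE page is 4KB
memory_needed_mb = memory_needed_kb / 1024

print(vmalloc_area_mb, "MB vmalloc area")             # 2944
print(pte_pages_pae, "PTE pages (PAE)")               # 1472
print(memory_needed_kb, "KB =", memory_needed_mb, "MB")  # 5888 KB = 5.75 MB

# Without PAE one PTE page maps 4MB, so the cost is half:
pte_pages_nonpae = vmalloc_area_mb // 4
print(pte_pages_nonpae * 4, "KB without PAE")         # 2944 KB
```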