Subject: Re: [PATCH 3/3] mm/vmalloc: Sync unmappings in vunmap_page_range()
From: Andy Lutomirski
Date: Fri, 19 Jul 2019 05:24:03 -0700
To: Joerg Roedel
Cc: Andy Lutomirski, Joerg Roedel, Dave Hansen, Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andrew Morton, LKML, Linux-MM
References: <20190717071439.14261-1-joro@8bytes.org> <20190717071439.14261-4-joro@8bytes.org> <20190718091745.GG13091@suse.de> <20190719122111.GD19068@suse.de>
In-Reply-To: <20190719122111.GD19068@suse.de>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jul 19, 2019 at 5:21 AM Joerg Roedel wrote:
>
> On Thu, Jul 18, 2019 at 12:04:49PM -0700, Andy Lutomirski wrote:
> > I find it problematic that there is no meaningful documentation as to
> > what vmalloc_sync_all() is supposed to do.
>
> Yeah, I found that too; there is no real design around
> vmalloc_sync_all(). It looks like it was just added to fit the purpose
> on x86-32. That also makes it hard to find all necessary call-sites.
>
> > Which is obviously entirely inapplicable. If I'm understanding
> > correctly, the underlying issue here is that the vmalloc fault
> > mechanism can propagate PGD entry *addition*, but nothing (not even
> > flush_tlb_kernel_range()) propagates PGD entry *removal*.
>
> Close, but the underlying issue is not about PGD entries; it is about PMD
> entry addition/removal on x86-32 PAE systems.
>
> > I find it suspicious that only x86 has this. How do other
> > architectures handle this?
>
> The problem on x86-PAE arises from the !SHARED_KERNEL_PMD case, which was
> introduced by the Xen-PV patches and then re-used for the PTI-x32
> enablement to be able to map the LDT into user-space at a fixed address.
>
> Other architectures probably don't have the !SHARED_KERNEL_PMD case (or
> do unsharing of kernel page-tables on any level where a huge-page could
> be mapped).
>
> > At the very least, I think this series needs a comment in
> > vmalloc_sync_all() explaining exactly what the function promises to
> > do.
>
> Okay, as it stands, it promises to sync mappings for the vmalloc area
> between all PGDs in the system. I will add that as a comment.
>
> > But maybe a better fix is to add code to flush_tlb_kernel_range()
> > to sync the vmalloc area if the flushed range overlaps the vmalloc
> > area.
>
> That would also cause needless overhead on x86-64 because the vmalloc
> area doesn't need syncing there. I can make it x86-32 only, but that is
> not a clean solution, imo.

Could you move the vmalloc_sync_all() call to the lazy purge path,
though? If nothing else, it will cause it to be called fewer times under
any given workload, and it looks like it could be rather slow on x86_32.

> > Or, even better, improve x86_32 the way we did x86_64: adjust
> > the memory mapping code such that top-level paging entries are never
> > deleted in the first place.
>
> There is not enough address space on x86-32 to partition it like on
> x86-64. In the default PAE configuration there are _four_ PGD entries,
> usually one for the kernel, and then 512 PMD entries. Partitioning
> happens on the PMD level; for example, there is one entry (2MB of
> address space) reserved for the user-space LDT mapping.

Ugh, fair enough.