Date: Mon, 22 Jul 2019 10:11:15 +0200
From: Joerg Roedel
To: Joerg Roedel
Cc: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Andrew Morton,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 3/3] mm/vmalloc: Sync unmappings in vunmap_page_range()
Message-ID: <20190722081115.GH19068@suse.de>
References: <20190719184652.11391-1-joro@8bytes.org> <20190719184652.11391-4-joro@8bytes.org>
In-Reply-To: <20190719184652.11391-4-joro@8bytes.org>
User-Agent: Mutt/1.10.1 (2018-07-13)

Screwed up the subject :(, it needs to be "mm/vmalloc: Sync
unmappings in __purge_vmap_area_lazy()" of course.

On Fri, Jul 19, 2019 at 08:46:52PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
> 
> On x86-32 with PTI enabled, parts of the kernel page-tables
> are not shared between processes. This can cause mappings in
> the vmalloc/ioremap area to persist in some page-tables
> after the region is unmapped and released.
> 
> When the region is re-used, the processes with the old
> mappings do not fault in the new mappings but still access
> the old ones.
> 
> This causes undefined behavior, in reality often data
> corruption, kernel oopses, panics, and even spontaneous
> reboots.
> 
> Fix this problem by actively syncing unmaps in the
> vmalloc/ioremap area to all page-tables in the system before
> the regions can be re-used.
> 
> References: https://bugzilla.suse.com/show_bug.cgi?id=1118689
> Reviewed-by: Dave Hansen
> Fixes: 5d72b4fba40ef ('x86, mm: support huge I/O mapping capability I/F')
> Signed-off-by: Joerg Roedel
> ---
>  mm/vmalloc.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 4fa8d84599b0..e0fc963acc41 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1258,6 +1258,12 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
>  	if (unlikely(valist == NULL))
>  		return false;
>  
> +	/*
> +	 * First make sure the mappings are removed from all page-tables
> +	 * before they are freed.
> +	 */
> +	vmalloc_sync_all();
> +
>  	/*
>  	 * TODO: to calculate a flush range without looping.
>  	 * The list can be up to lazy_max_pages() elements.
> @@ -3038,6 +3044,9 @@ EXPORT_SYMBOL(remap_vmalloc_range);
>  /*
>   * Implement a stub for vmalloc_sync_all() if the architecture chose not to
>   * have one.
> + *
> + * The purpose of this function is to make sure the vmalloc area
> + * mappings are identical in all page-tables in the system.
>   */
>  void __weak vmalloc_sync_all(void)
>  {
> -- 
> 2.17.1
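
In case the "__weak" annotation on the stub above is unfamiliar: it marks a
weak symbol, i.e. a generic default definition that the linker only uses
when no architecture supplies a strong definition of the same name. Below
is a minimal, self-contained userspace sketch of that mechanism; the
function name example_sync_all() is made up for illustration and is not
the kernel's API:

#include <stdio.h>

/*
 * Weak default: used only if no strong definition of example_sync_all()
 * exists anywhere in the final link, analogous to the generic
 * vmalloc_sync_all() stub in mm/vmalloc.c.
 */
void __attribute__((weak)) example_sync_all(void)
{
	printf("generic stub: nothing to sync on this architecture\n");
}

/*
 * An architecture that actually needs the sync (x86-32 with PTI in the
 * patch above) would ship a strong definition in its own file, e.g.:
 *
 *	void example_sync_all(void)
 *	{
 *		... propagate vmalloc page-table changes to all page-tables ...
 *	}
 *
 * and the linker would pick it over the weak default automatically.
 */

int main(void)
{
	example_sync_all();
	return 0;
}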