Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S935172AbbHJKle (ORCPT );
	Mon, 10 Aug 2015 06:41:34 -0400
Received: from shadbolt.e.decadent.org.uk ([88.96.1.126]:43925 "EHLO
	shadbolt.e.decadent.org.uk" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S933845AbbHJKQG (ORCPT );
	Mon, 10 Aug 2015 06:16:06 -0400
Content-Type: text/plain; charset="UTF-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
From: Ben Hutchings <ben@decadent.org.uk>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
CC: akpm@linux-foundation.org, "Thomas Gleixner", "Jan Beulich",
	"Linus Torvalds", "Borislav Petkov", "Steven Rostedt",
	"Boris Ostrovsky", "Ingo Molnar", "Andy Lutomirski",
	"Sasha Levin", "David Vrabel", "Brian Gerst",
	"Andrew Cooper", "H. Peter Anvin", "Peter Zijlstra",
	"xen-devel", "Denys Vlasenko", "Konrad Rzeszutek Wilk",
	"Andy Lutomirski", "security@kernel.org"
Date: Mon, 10 Aug 2015 12:12:31 +0200
Message-ID:
X-Mailer: LinuxStableQueue (scripts by bwh)
Subject: [PATCH 3.2 109/110] x86/xen: Probe target addresses in set_aliased_prot() before the hypercall
In-Reply-To:
X-SA-Exim-Connect-IP: 2.173.94.72
X-SA-Exim-Mail-From: ben@decadent.org.uk
X-SA-Exim-Scanned: No (on shadbolt.decadent.org.uk);
	SAEximRunCond expanded to false
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 4344
Lines: 119

3.2.71-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Andy Lutomirski <luto@kernel.org>

commit aa1acff356bbedfd03b544051f5b371746735d89 upstream.

The update_va_mapping hypercall can fail if the VA isn't present in the
guest's page tables.  Under certain loads, this can result in an OOPS
when the target address is in unpopulated vmap space.

While we're at it, add comments to help explain what's going on.

This isn't a great long-term fix.  This code should probably be changed
to use something like set_memory_ro.

Signed-off-by: Andy Lutomirski
Cc: Andrew Cooper
Cc: Andy Lutomirski
Cc: Boris Ostrovsky
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: David Vrabel
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Jan Beulich
Cc: Konrad Rzeszutek Wilk
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Sasha Levin
Cc: Steven Rostedt
Cc: Thomas Gleixner
Cc: security@kernel.org
Cc: xen-devel
Link: http://lkml.kernel.org/r/0b0e55b995cda11e7829f140b833ef932fcabe3a.1438291540.git.luto@kernel.org
Signed-off-by: Ingo Molnar
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/xen/enlighten.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -321,6 +321,7 @@ static void set_aliased_prot(void *v, pg
 	pte_t pte;
 	unsigned long pfn;
 	struct page *page;
+	unsigned char dummy;
 
 	ptep = lookup_address((unsigned long)v, &level);
 	BUG_ON(ptep == NULL);
@@ -330,6 +331,32 @@ static void set_aliased_prot(void *v, pg
 
 	pte = pfn_pte(pfn, prot);
 
+	/*
+	 * Careful: update_va_mapping() will fail if the virtual address
+	 * we're poking isn't populated in the page tables.  We don't
+	 * need to worry about the direct map (that's always in the page
+	 * tables), but we need to be careful about vmap space.  In
+	 * particular, the top level page table can lazily propagate
+	 * entries between processes, so if we've switched mms since we
+	 * vmapped the target in the first place, we might not have the
+	 * top-level page table entry populated.
+	 *
+	 * We disable preemption because we want the same mm active when
+	 * we probe the target and when we issue the hypercall.  We'll
+	 * have the same nominal mm, but if we're a kernel thread, lazy
+	 * mm dropping could change our pgd.
+	 *
+	 * Out of an abundance of caution, this uses __get_user() to fault
+	 * in the target address just in case there's some obscure case
+	 * in which the target address isn't readable.
+	 */
+
+	preempt_disable();
+
+	pagefault_disable();	/* Avoid warnings due to being atomic. */
+	__get_user(dummy, (unsigned char __user __force *)v);
+	pagefault_enable();
+
 	if (HYPERVISOR_update_va_mapping((unsigned long)v, pte, 0))
 		BUG();
 
@@ -341,6 +368,8 @@ static void set_aliased_prot(void *v, pg
 			BUG();
 	} else
 		kmap_flush_unused();
+
+	preempt_enable();
 }
 
 static void xen_alloc_ldt(struct desc_struct *ldt, unsigned entries)
@@ -348,6 +377,17 @@ static void xen_alloc_ldt(struct desc_st
 	const unsigned entries_per_page = PAGE_SIZE / LDT_ENTRY_SIZE;
 	int i;
 
+	/*
+	 * We need to mark all aliases of the LDT pages RO.  We
+	 * don't need to call vm_flush_aliases(), though, since that's
+	 * only responsible for flushing aliases out the TLBs, not the
+	 * page tables, and Xen will flush the TLB for us if needed.
+	 *
+	 * To avoid confusing future readers: none of this is necessary
+	 * to load the LDT.  The hypervisor only checks this when the
+	 * LDT is faulted in due to subsequent descriptor access.
+	 */
+
 	for(i = 0; i < entries; i += entries_per_page)
 		set_aliased_prot(ldt + i, PAGE_KERNEL_RO);
 }
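For reference, this is roughly how set_aliased_prot() reads once the hunks
above are applied.  Only the '+' lines and their immediate context come from
the patch itself; the remaining lines (the pte_pfn()/pfn_to_page() lookup and
the !PageHighMem() branch that updates the direct-map alias) are a sketch
reconstructed from the 3.2-era arch/x86/xen/enlighten.c and may differ in
detail from the actual file.

static void set_aliased_prot(void *v, pgprot_t prot)
{
	int level;
	pte_t *ptep;
	pte_t pte;
	unsigned long pfn;
	struct page *page;
	unsigned char dummy;

	ptep = lookup_address((unsigned long)v, &level);
	BUG_ON(ptep == NULL);

	/* Reconstructed context: find the page backing this alias. */
	pfn = pte_pfn(*ptep);
	page = pfn_to_page(pfn);

	pte = pfn_pte(pfn, prot);

	/*
	 * Added by this patch: keep one mm active across the probe and the
	 * hypercall, and fault the vmap alias in before asking Xen to
	 * remap it, so update_va_mapping() cannot hit an unpopulated PGD.
	 */
	preempt_disable();

	pagefault_disable();	/* Avoid warnings due to being atomic. */
	__get_user(dummy, (unsigned char __user __force *)v);
	pagefault_enable();

	if (HYPERVISOR_update_va_mapping((unsigned long)v, pte, 0))
		BUG();

	/*
	 * Reconstructed context: also update the direct-map alias of the
	 * page, or flush kmap aliases if it is a highmem page.
	 */
	if (!PageHighMem(page)) {
		void *av = __va(PFN_PHYS(pfn));

		if (av != v)
			if (HYPERVISOR_update_va_mapping((unsigned long)av,
							 pte, 0))
				BUG();
	} else
		kmap_flush_unused();

	preempt_enable();
}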