From: Ingo Molnar
Subject: Re: [PATCH 1/1] x86: fix text_poke
Date: Fri, 25 Apr 2008 19:02:37 +0200
Message-ID: <20080425170237.GA24472@elte.hu>
References: <20080425151931.GA25510@elte.hu> <20080425152650.GA894@elte.hu>
 <20080425154854.GC3265@one.firstfloor.org> <20080425162215.GA16273@elte.hu>
 <20080425164509.GB19962@elte.hu>
To: Linus Torvalds
Cc: Andi Kleen, Jiri Slaby, David Miller, zdenek.kabelac@gmail.com,
 rjw@sisk.pl, paulmck@linux.vnet.ibm.com, akpm@linux-foundation.org,
 linux-ext4@vger.kernel.org, herbert@gondor.apana.org.au,
 penberg@cs.helsinki.fi, clameter@sgi.com, linux-kernel@vger.kernel.org,
 Mathieu Desnoyers, pageexec@freemail.hu, "H. Peter Anvin",
 Jeremy Fitzhardinge

* Linus Torvalds wrote:

> On Fri, 25 Apr 2008, Ingo Molnar wrote:
> > 
> > ah, on 64-bit. That we better make consistent anyway, via the patch 
> > below. set_pte_phys() needs to become non-init as well.
> 
> Make it return the "pte_t *", and now you don't have to walk the page 
> tables twice to just clear it immediately afterwards. At that point I 
> think my patch will be happy and useful, but I also worry a bit 
> whether it was worth the changes..

performance i don't think we should be too worried about at this moment 
- this code is so rarely used that it should be driven by robustness, i 
think.

one theoretical worry i have is that we've got the pending 
immediate-values changes from Mathieu. Those end up removing the 
original BUG_ON(len > sizeof(long)) restriction (and the alignment 
check) and use a carefully crafted (but scary as hell) sequence of 
text_poke() calls to turn a marker into a single-instruction NOP when 
the marker is inactive. Single-instruction NOP markers are a rather ... 
tempting goal, and that code can (and must be able to) patch 
instructions across page boundaries as well.

i think with the PageReserved WARN_ON() we should be sufficiently 
protected against stray scribbles, so Mathieu's fix might be usable as 
well - see it below.

Note that the BUG_ON()s at the end of the text_poke() version below 
should have caught this bug too, i think - because the bug was due to 
mis-mapping the pages via the incorrect kernel_text_address() 
condition, so we'd have noticed that the expected bits did not end up 
in the right place.
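to make the cross-page case above concrete, here is a minimal 
caller-side sketch. This is illustration only, not Mathieu's actual 
immediate-values code: disable_marker_site() is a made-up helper name, 
and the 5-byte NOP is just one plausible replacement instruction.

#include <asm/alternative.h>	/* text_poke() */

/* Overwrite an (assumed 5-byte) marker call site with a single NOP. */
static void disable_marker_site(void *ip)
{
	/* 5-byte NOP: nopl 0x0(%rax,%rax,1) */
	static const unsigned char nop5[5] = { 0x0f, 0x1f, 0x44, 0x00, 0x00 };

	/*
	 * ip can be core kernel or module text and may straddle a page
	 * boundary; text_poke() maps the affected page(s) writable
	 * through a temporary vmap() alias and copies the bytes with
	 * interrupts disabled.
	 */
	text_poke(ip, nop5, sizeof(nop5));
}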
	Ingo

----------------------->
Subject: Fix sched-devel text_poke
From: Mathieu Desnoyers
Date: Thu, 24 Apr 2008 11:03:33 -0400

Use core_kernel_text() instead of kernel_text_address(). Deal with
modules in the same way as the core kernel: write through a temporary,
writable vmap() alias instead of writing to module text directly.

Signed-off-by: Mathieu Desnoyers
Signed-off-by: Ingo Molnar
---
 arch/x86/kernel/alternative.c |   38 ++++++++++++++++++--------------------
 1 file changed, 18 insertions(+), 20 deletions(-)

Index: linux/arch/x86/kernel/alternative.c
===================================================================
--- linux.orig/arch/x86/kernel/alternative.c
+++ linux/arch/x86/kernel/alternative.c
@@ -511,31 +511,29 @@ void *__kprobes text_poke(void *addr, co
 	unsigned long flags;
 	char *vaddr;
 	int nr_pages = 2;
+	struct page *pages[2];
+	int i;
 
-	BUG_ON(len > sizeof(long));
-	BUG_ON((((long)addr + len - 1) & ~(sizeof(long) - 1)) -
-	       ((long)addr & ~(sizeof(long) - 1)));
-	if (kernel_text_address((unsigned long)addr)) {
-		struct page *pages[2] = { virt_to_page(addr),
-					  virt_to_page(addr + PAGE_SIZE) };
-		if (!pages[1])
-			nr_pages = 1;
-		vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
-		BUG_ON(!vaddr);
-		local_irq_save(flags);
-		memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
-		local_irq_restore(flags);
-		vunmap(vaddr);
+	if (!core_kernel_text((unsigned long)addr)) {
+		pages[0] = vmalloc_to_page(addr);
+		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
 	} else {
-		/*
-		 * modules are in vmalloc'ed memory, always writable.
-		 */
-		local_irq_save(flags);
-		memcpy(addr, opcode, len);
-		local_irq_restore(flags);
+		pages[0] = virt_to_page(addr);
+		pages[1] = virt_to_page(addr + PAGE_SIZE);
 	}
+	BUG_ON(!pages[0]);
+	if (!pages[1])
+		nr_pages = 1;
+	vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
+	BUG_ON(!vaddr);
+	local_irq_save(flags);
+	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
+	local_irq_restore(flags);
+	vunmap(vaddr);
 	sync_core();
 	/* Could also do a CLFLUSH here to speed up CPU recovery; but
 	   that causes hangs on some VIA CPUs. */
+	for (i = 0; i < len; i++)
+		BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
 	return addr;
 }
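btw, the essence of the fix is the per-page lookup split: core kernel 
text sits in the direct mapping, where virt_to_page() is the right 
lookup, while module text lives in vmalloc space, where only 
vmalloc_to_page() works. A stand-alone sketch of just that distinction 
(simplified; text_addr_to_page() is an illustrative helper, not 
something the patch adds):

#include <linux/kernel.h>	/* core_kernel_text() */
#include <linux/mm.h>		/* virt_to_page() */
#include <linux/vmalloc.h>	/* vmalloc_to_page() */

/* resolve a text address to its struct page, as text_poke() now does */
static struct page *text_addr_to_page(void *addr)
{
	if (core_kernel_text((unsigned long)addr))
		return virt_to_page(addr);	/* core text: direct mapping */

	return vmalloc_to_page(addr);		/* module text: vmalloc area */
}

with the old kernel_text_address() condition the module case matched 
too, so module (vmalloc) addresses went through virt_to_page() and the 
wrong pages got mapped - exactly the mis-mapping that the new tail-end 
BUG_ON()s would have flagged.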