Date: Tue, 15 Mar 2016 15:22:26 +0300
Subject: Re: [PATCH v7 5/7] mm, kasan: Stackdepot implementation. Enable stackdepot for SLAB
From: Andrey Ryabinin
To: Alexander Potapenko
Cc: Andrey Konovalov, Christoph Lameter, Dmitry Vyukov, Andrew Morton,
    Steven Rostedt, Joonsoo Kim, JoonSoo Kim, Kostya Serebryany,
    kasan-dev, LKML, linux-mm@kvack.org

2016-03-15 12:27 GMT+03:00 Alexander Potapenko:
> On Mon, Mar 14, 2016 at 5:56 PM, Andrey Ryabinin wrote:
>> 2016-03-14 13:43 GMT+03:00 Alexander Potapenko:
>>
>>> +
>>> +	rec = this_cpu_ptr(&depot_recursion);
>>> +	/* Don't store the stack if we've been called recursively. */
>>> +	if (unlikely(*rec))
>>> +		goto fast_exit;
>>> +	*rec = true;
>>
>> This just can't work. As long as preemption is enabled, the task can
>> migrate to another cpu at any time.
>
> Ah, you're right.
> Do you think disabling preemption around the memory allocation is an
> option here?

It's definitely not an option. A flag on current doesn't have any
disadvantage over the per-cpu approach, and it doesn't require a
preemption-safe context.
However, making the allocation in a separate context would be a better
way to eliminate the recursion, i.e. instead of allocating memory itself,
depot_save_stack() kicks a work which allocates the memory. I'll put a
rough sketch of what I mean at the end of this mail.

>> You could use a per-task flag, although it's possible to miss some
>> in-irq stacktraces:
>>
>> depot_save_stack()
>>	if (current->stackdepot_recursion)
>>		goto fast_exit;
>>	current->stackdepot_recursion++
>>
>>	....
>>	/* nested call, e.g. from irq, bails out: */
>>	depot_save_stack()
>>		if (current->stackdepot_recursion)
>>			goto fast_exit;
>>
>>> +	if (unlikely(!smp_load_acquire(&next_slab_inited))) {
>>> +		/* Zero out zone modifiers, as we don't have specific zone
>>> +		 * requirements. Keep the flags related to allocation in atomic
>>> +		 * contexts and I/O.
>>> +		 */
>>> +		alloc_flags &= ~GFP_ZONEMASK;
>>> +		alloc_flags &= (GFP_ATOMIC | GFP_KERNEL);
>>> +		/* When possible, allocate using vmalloc() to reduce physical
>>> +		 * address space fragmentation. vmalloc() doesn't work if
>>> +		 * kmalloc caches haven't been initialized or if it's being
>>> +		 * called from an interrupt handler.
>>> +		 */
>>> +		if (kmalloc_caches[KMALLOC_SHIFT_HIGH] && !in_interrupt()) {
>>
>> This is clearly a wrong way to check whether slab is available or not.
>
> Well, I don't think either vmalloc() or kmalloc() provides any
> interface to check if they are available.
>
>> Besides, you need to check vmalloc() for availability, not slab.
>
> The problem was in kmalloc caches being unavailable, although I can
> imagine other problems could have arisen.
> Perhaps we can drill a hole to get the value of vmap_initialized?
>
>> Given that STACK_ALLOC_ORDER is 2 now, I think it should be fine to use
>> alloc_pages() all the time.
>> Or fix the condition, up to you.
>
> Ok, I'm going to drop vmalloc() for now, we can always implement this
> later.
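Sounds good. Just to illustrate what I have in mind: with vmalloc() gone
I'd expect that whole branch to collapse to roughly the below. Untested,
and it simply reuses alloc_flags, next_slab_inited, page, prealloc and
STACK_ALLOC_ORDER from your patch (declared as they already are there):

	if (unlikely(!smp_load_acquire(&next_slab_inited))) {
		/*
		 * Zero out zone modifiers, as we don't have specific zone
		 * requirements. Keep the flags related to allocation in
		 * atomic contexts and I/O.
		 */
		alloc_flags &= ~GFP_ZONEMASK;
		alloc_flags &= (GFP_ATOMIC | GFP_KERNEL);
		/*
		 * STACK_ALLOC_ORDER is only 2, so a plain page allocation
		 * works from any context depot_save_stack() runs in, and
		 * there's no need to guess whether slab or vmalloc() are
		 * usable yet.
		 */
		page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER);
		if (page)
			prealloc = page_address(page);
	}

An order-2 allocation is small enough that the physical fragmentation
argument for vmalloc() doesn't really apply anyway.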
> Note that this also removes the necessity to check for recursion.

>>> +			prealloc = __vmalloc(
>>> +				STACK_ALLOC_SIZE, alloc_flags, PAGE_KERNEL);
>>> +		} else {
>>> +			page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER);
>>> +			if (page)
>>> +				prealloc = page_address(page);
>>> +		}
>>> +	}
>>> +
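And here is the rough sketch of the "kick a work" idea I mentioned above.
Completely untested; depot_prealloc, depot_prealloc_work and
depot_prealloc_workfn() are names I just made up, while STACK_ALLOC_ORDER
and next_slab_inited are the ones from your patch:

	static void *depot_prealloc;	/* next slab, consumed under depot_lock */

	static void depot_prealloc_workfn(struct work_struct *work)
	{
		struct page *page;

		/*
		 * Runs in process context, outside of depot_save_stack(),
		 * so any depot_save_stack() calls triggered by the page
		 * allocator can't recurse into a half-finished depot
		 * operation.
		 */
		page = alloc_pages(GFP_KERNEL, STACK_ALLOC_ORDER);
		if (page)
			smp_store_release(&depot_prealloc, page_address(page));
	}
	static DECLARE_WORK(depot_prealloc_work, depot_prealloc_workfn);

	/* In depot_save_stack(): never allocate here, only ask for a refill. */
	if (unlikely(!smp_load_acquire(&next_slab_inited)))
		schedule_work(&depot_prealloc_work);

The handover of depot_prealloc into the slab array (under depot_lock) and
the case of a refill already being in flight are glossed over here, and
stacks arriving before the worker has run would be dropped, so you might
want to keep the inline alloc_pages() as a fallback. Just an idea anyway.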