Date: Mon, 14 Mar 2016 19:56:19 +0300
Subject: Re: [PATCH v7 5/7] mm, kasan: Stackdepot implementation. Enable stackdepot for SLAB
From: Andrey Ryabinin
To: Alexander Potapenko
Cc: Andrey Konovalov, Christoph Lameter, Dmitry Vyukov, Andrew Morton,
 Steven Rostedt, Joonsoo Kim, JoonSoo Kim, Kostya Serebryany, kasan-dev,
 LKML, "linux-mm@kvack.org"
In-Reply-To: <4f6880ee0c1545b3ae9c25cfe86a879d724c4e7b.1457949315.git.glider@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

2016-03-14 13:43 GMT+03:00 Alexander Potapenko :
> +
> +	rec = this_cpu_ptr(&depot_recursion);
> +	/* Don't store the stack if we've been called recursively. */
> +	if (unlikely(*rec))
> +		goto fast_exit;
> +	*rec = true;

This just can't work. As long as preemption is enabled, the task can
migrate to another CPU at any time. You could use a per-task flag
instead, although it would still be possible to miss some in-irq stack
traces:

	depot_save_stack()
		if (current->stackdepot_recursion)
			goto fast_exit;
		current->stackdepot_recursion++;
		....
		/* interrupt arrives here */
		depot_save_stack()
			if (current->stackdepot_recursion)
				goto fast_exit;

> +	if (unlikely(!smp_load_acquire(&next_slab_inited))) {
> +		/* Zero out zone modifiers, as we don't have specific zone
> +		 * requirements. Keep the flags related to allocation in atomic
> +		 * contexts and I/O.
> +		 */
> +		alloc_flags &= ~GFP_ZONEMASK;
> +		alloc_flags &= (GFP_ATOMIC | GFP_KERNEL);
> +		/* When possible, allocate using vmalloc() to reduce physical
> +		 * address space fragmentation. vmalloc() doesn't work if
> +		 * kmalloc caches haven't been initialized or if it's being
> +		 * called from an interrupt handler.
> +		 */
> +		if (kmalloc_caches[KMALLOC_SHIFT_HIGH] && !in_interrupt()) {

This is clearly the wrong way to check whether slab is available.
Besides, you need to check vmalloc() for availability, not slab.
Given that STACK_ALLOC_ORDER is 2 now, I think it should be fine to
use alloc_pages() all the time. Or fix the condition, up to you.

> +			prealloc = __vmalloc(
> +				STACK_ALLOC_SIZE, alloc_flags, PAGE_KERNEL);
> +		} else {
> +			page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER);
> +			if (page)
> +				prealloc = page_address(page);
> +		}
> +	}
> +