From: Sumit Garg
To: kgdb-bugreport@lists.sourceforge.net
Cc: daniel.thompson@linaro.org, jason.wessel@windriver.com,
    dianders@chromium.org, linux-kernel@vger.kernel.org, Sumit Garg
Subject: [PATCH v2] kdb: Get rid of custom debug heap allocator
Date: Fri, 26 Feb 2021 15:23:06 +0530
Message-Id: <20210226095306.1236539-1-sumit.garg@linaro.org>

Currently the only user of the debug heap is kdbnearsym(), which can
instead ask its caller to supply a buffer for the symbol name. So do
that, update the kdbnearsym() callers to pass a statically allocated
symbol name buffer, and then remove the custom debug heap allocator.

This change has been tested using kgdbtest on arm64, which doesn't show
any regressions.

Suggested-by: Daniel Thompson
Signed-off-by: Sumit Garg
---

Changes in v2:
- Allocate namebuf statically instead of on the stack to maintain
  debugger robustness.

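For illustration only (not part of the diff below), a minimal sketch of
the calling convention this patch introduces: the caller now owns the
symbol-name storage. The caller shown here is hypothetical, but it uses
the kdbnearsym() signature, kdb_symtab_t fields and KSYM_NAME_LEN sizing
exactly as the real callers in this patch do:

        /*
         * Hypothetical kdbnearsym() caller after this change: the name
         * buffer lives with the caller, static so that no runtime
         * allocation is needed while the debugger owns the machine.
         */
        static char namebuf[KSYM_NAME_LEN];
        kdb_symtab_t symtab;

        if (kdbnearsym(addr, &symtab, namebuf))
                kdb_printf("%lx is %s+0x%lx\n", addr, symtab.sym_name,
                           addr - symtab.sym_start);
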
 kernel/debug/kdb/kdb_debugger.c |   1 -
 kernel/debug/kdb/kdb_main.c     |   6 +-
 kernel/debug/kdb/kdb_private.h  |   7 +-
 kernel/debug/kdb/kdb_support.c  | 294 +-------------------------------
 4 files changed, 11 insertions(+), 297 deletions(-)

diff --git a/kernel/debug/kdb/kdb_debugger.c b/kernel/debug/kdb/kdb_debugger.c
index 0220afda3200..e91fc3e4edd5 100644
--- a/kernel/debug/kdb/kdb_debugger.c
+++ b/kernel/debug/kdb/kdb_debugger.c
@@ -140,7 +140,6 @@ int kdb_stub(struct kgdb_state *ks)
 	 */
 	kdb_common_deinit_state();
 	KDB_STATE_CLEAR(PAGER);
-	kdbnearsym_cleanup();
 	if (error == KDB_CMD_KGDB) {
 		if (KDB_STATE(DOING_KGDB))
 			KDB_STATE_CLEAR(DOING_KGDB);
diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c
index 9d69169582c6..6efe9ec53906 100644
--- a/kernel/debug/kdb/kdb_main.c
+++ b/kernel/debug/kdb/kdb_main.c
@@ -526,6 +526,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
 	char symbol = '\0';
 	char *cp;
 	kdb_symtab_t symtab;
+	static char namebuf[KSYM_NAME_LEN];
 
 	/*
 	 * If the enable flags prohibit both arbitrary memory access
@@ -585,7 +586,7 @@ int kdbgetaddrarg(int argc, const char **argv, int *nextarg,
 	}
 
 	if (!found)
-		found = kdbnearsym(addr, &symtab);
+		found = kdbnearsym(addr, &symtab, namebuf);
 
 	(*nextarg)++;
 
@@ -1503,6 +1504,7 @@ static void kdb_md_line(const char *fmtstr, unsigned long addr,
 	int i;
 	int j;
 	unsigned long word;
+	static char namebuf[KSYM_NAME_LEN];
 
 	memset(cbuf, '\0', sizeof(cbuf));
 	if (phys)
@@ -1518,7 +1520,7 @@ static void kdb_md_line(const char *fmtstr, unsigned long addr,
 			break;
 		kdb_printf(fmtstr, word);
 		if (symbolic)
-			kdbnearsym(word, &symtab);
+			kdbnearsym(word, &symtab, namebuf);
 		else
 			memset(&symtab, 0, sizeof(symtab));
 		if (symtab.sym_name) {
diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h
index b857a84de3b5..1707eeebc59a 100644
--- a/kernel/debug/kdb/kdb_private.h
+++ b/kernel/debug/kdb/kdb_private.h
@@ -108,8 +108,7 @@ extern char *kdbgetenv(const char *);
 extern int kdbgetaddrarg(int, const char **, int*, unsigned long *,
 			 long *, char **);
 extern int kdbgetsymval(const char *, kdb_symtab_t *);
-extern int kdbnearsym(unsigned long, kdb_symtab_t *);
-extern void kdbnearsym_cleanup(void);
+extern int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf);
 extern char *kdb_strdup(const char *str, gfp_t type);
 extern void kdb_symbol_print(unsigned long, const kdb_symtab_t *,
 			     unsigned int);
@@ -233,10 +232,6 @@ extern struct task_struct *kdb_curr_task(int);
 
 #define GFP_KDB (in_dbg_master() ? GFP_ATOMIC : GFP_KERNEL)
 
-extern void *debug_kmalloc(size_t size, gfp_t flags);
-extern void debug_kfree(void *);
-extern void debug_kusage(void);
-
 extern struct task_struct *kdb_current_task;
 extern struct pt_regs *kdb_current_regs;
 
diff --git a/kernel/debug/kdb/kdb_support.c b/kernel/debug/kdb/kdb_support.c
index b59aad1f0b55..9b907a84f2db 100644
--- a/kernel/debug/kdb/kdb_support.c
+++ b/kernel/debug/kdb/kdb_support.c
@@ -57,8 +57,6 @@ int kdbgetsymval(const char *symname, kdb_symtab_t *symtab)
 }
 EXPORT_SYMBOL(kdbgetsymval);
 
-static char *kdb_name_table[100];	/* arbitrary size */
-
 /*
  * kdbnearsym - Return the name of the symbol with the nearest address
  *	less than 'addr'.
@@ -79,13 +77,11 @@ static char *kdb_name_table[100];	/* arbitrary size */
  *	hold active strings, no kdb caller of kdbnearsym makes more
  *	than ~20 later calls before using a saved value.
  */
-int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
+int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab, char *namebuf)
 {
 	int ret = 0;
 	unsigned long symbolsize = 0;
 	unsigned long offset = 0;
-#define knt1_size 128		/* must be >= kallsyms table size */
-	char *knt1 = NULL;
 
 	if (KDB_DEBUG(AR))
 		kdb_printf("kdbnearsym: addr=0x%lx, symtab=%px\n", addr, symtab);
@@ -93,14 +89,9 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 
 	if (addr < 4096)
 		goto out;
-	knt1 = debug_kmalloc(knt1_size, GFP_ATOMIC);
-	if (!knt1) {
-		kdb_printf("kdbnearsym: addr=0x%lx cannot kmalloc knt1\n",
-			   addr);
-		goto out;
-	}
+
 	symtab->sym_name = kallsyms_lookup(addr, &symbolsize , &offset,
-				(char **)(&symtab->mod_name), knt1);
+				(char **)(&symtab->mod_name), namebuf);
 	if (offset > 8*1024*1024) {
 		symtab->sym_name = NULL;
 		addr = offset = symbolsize = 0;
@@ -109,42 +100,6 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 	symtab->sym_end = symtab->sym_start + symbolsize;
 	ret = symtab->sym_name != NULL && *(symtab->sym_name) != '\0';
 
-	if (ret) {
-		int i;
-		/* Another 2.6 kallsyms "feature". Sometimes the sym_name is
-		 * set but the buffer passed into kallsyms_lookup is not used,
-		 * so it contains garbage. The caller has to work out which
-		 * buffer needs to be saved.
-		 *
-		 * What was Rusty smoking when he wrote that code?
-		 */
-		if (symtab->sym_name != knt1) {
-			strncpy(knt1, symtab->sym_name, knt1_size);
-			knt1[knt1_size-1] = '\0';
-		}
-		for (i = 0; i < ARRAY_SIZE(kdb_name_table); ++i) {
-			if (kdb_name_table[i] &&
-			    strcmp(kdb_name_table[i], knt1) == 0)
-				break;
-		}
-		if (i >= ARRAY_SIZE(kdb_name_table)) {
-			debug_kfree(kdb_name_table[0]);
-			memmove(kdb_name_table, kdb_name_table+1,
-				sizeof(kdb_name_table[0]) *
-				(ARRAY_SIZE(kdb_name_table)-1));
-		} else {
-			debug_kfree(knt1);
-			knt1 = kdb_name_table[i];
-			memmove(kdb_name_table+i, kdb_name_table+i+1,
-				sizeof(kdb_name_table[0]) *
-				(ARRAY_SIZE(kdb_name_table)-i-1));
-		}
-		i = ARRAY_SIZE(kdb_name_table) - 1;
-		kdb_name_table[i] = knt1;
-		symtab->sym_name = kdb_name_table[i];
-		knt1 = NULL;
-	}
-
 	if (symtab->mod_name == NULL)
 		symtab->mod_name = "kernel";
 	if (KDB_DEBUG(AR))
@@ -152,23 +107,10 @@ int kdbnearsym(unsigned long addr, kdb_symtab_t *symtab)
 		   "symtab->mod_name=%px, symtab->sym_name=%px (%s)\n", ret,
 		   symtab->sym_start, symtab->mod_name, symtab->sym_name,
 		   symtab->sym_name);
-
 out:
-	debug_kfree(knt1);
 	return ret;
 }
 
-void kdbnearsym_cleanup(void)
-{
-	int i;
-	for (i = 0; i < ARRAY_SIZE(kdb_name_table); ++i) {
-		if (kdb_name_table[i]) {
-			debug_kfree(kdb_name_table[i]);
-			kdb_name_table[i] = NULL;
-		}
-	}
-}
-
 static char ks_namebuf[KSYM_NAME_LEN+1], ks_namebuf_prev[KSYM_NAME_LEN+1];
 
 /*
@@ -259,11 +201,13 @@ void kdb_symbol_print(unsigned long addr, const kdb_symtab_t *symtab_p,
 		      unsigned int punc)
 {
 	kdb_symtab_t symtab, *symtab_p2;
+	static char namebuf[KSYM_NAME_LEN];
+
 	if (symtab_p) {
 		symtab_p2 = (kdb_symtab_t *)symtab_p;
 	} else {
 		symtab_p2 = &symtab;
-		kdbnearsym(addr, symtab_p2);
+		kdbnearsym(addr, symtab_p2, namebuf);
 	}
 	if (!(symtab_p2->sym_name || (punc & KDB_SP_VALUE)))
 		return;
@@ -665,232 +609,6 @@ unsigned long kdb_task_state(const struct task_struct *p, unsigned long mask)
 	return (mask & kdb_task_state_string(state)) != 0;
 }
 
-/* Last ditch allocator for debugging, so we can still debug even when
- * the GFP_ATOMIC pool has been exhausted. The algorithms are tuned
- * for space usage, not for speed. One smallish memory pool, the free
- * chain is always in ascending address order to allow coalescing,
- * allocations are done in brute force best fit.
- */
-
-struct debug_alloc_header {
-	u32 next;	/* offset of next header from start of pool */
-	u32 size;
-	void *caller;
-};
-
-/* The memory returned by this allocator must be aligned, which means
- * so must the header size. Do not assume that sizeof(struct
- * debug_alloc_header) is a multiple of the alignment, explicitly
- * calculate the overhead of this header, including the alignment.
- * The rest of this code must not use sizeof() on any header or
- * pointer to a header.
- */
-#define dah_align 8
-#define dah_overhead ALIGN(sizeof(struct debug_alloc_header), dah_align)
-
-static u64 debug_alloc_pool_aligned[256*1024/dah_align];	/* 256K pool */
-static char *debug_alloc_pool = (char *)debug_alloc_pool_aligned;
-static u32 dah_first, dah_first_call = 1, dah_used, dah_used_max;
-
-/* Locking is awkward. The debug code is called from all contexts,
- * including non maskable interrupts. A normal spinlock is not safe
- * in NMI context. Try to get the debug allocator lock, if it cannot
- * be obtained after a second then give up. If the lock could not be
- * previously obtained on this cpu then only try once.
- *
- * sparse has no annotation for "this function _sometimes_ acquires a
- * lock", so fudge the acquire/release notation.
- */
-static DEFINE_SPINLOCK(dap_lock);
-static int get_dap_lock(void)
-	__acquires(dap_lock)
-{
-	static int dap_locked = -1;
-	int count;
-	if (dap_locked == smp_processor_id())
-		count = 1;
-	else
-		count = 1000;
-	while (1) {
-		if (spin_trylock(&dap_lock)) {
-			dap_locked = -1;
-			return 1;
-		}
-		if (!count--)
-			break;
-		udelay(1000);
-	}
-	dap_locked = smp_processor_id();
-	__acquire(dap_lock);
-	return 0;
-}
-
-void *debug_kmalloc(size_t size, gfp_t flags)
-{
-	unsigned int rem, h_offset;
-	struct debug_alloc_header *best, *bestprev, *prev, *h;
-	void *p = NULL;
-	if (!get_dap_lock()) {
-		__release(dap_lock);	/* we never actually got it */
-		return NULL;
-	}
-	h = (struct debug_alloc_header *)(debug_alloc_pool + dah_first);
-	if (dah_first_call) {
-		h->size = sizeof(debug_alloc_pool_aligned) - dah_overhead;
-		dah_first_call = 0;
-	}
-	size = ALIGN(size, dah_align);
-	prev = best = bestprev = NULL;
-	while (1) {
-		if (h->size >= size && (!best || h->size < best->size)) {
-			best = h;
-			bestprev = prev;
-			if (h->size == size)
-				break;
-		}
-		if (!h->next)
-			break;
-		prev = h;
-		h = (struct debug_alloc_header *)(debug_alloc_pool + h->next);
-	}
-	if (!best)
-		goto out;
-	rem = best->size - size;
-	/* The pool must always contain at least one header */
-	if (best->next == 0 && bestprev == NULL && rem < dah_overhead)
-		goto out;
-	if (rem >= dah_overhead) {
-		best->size = size;
-		h_offset = ((char *)best - debug_alloc_pool) +
-			   dah_overhead + best->size;
-		h = (struct debug_alloc_header *)(debug_alloc_pool + h_offset);
-		h->size = rem - dah_overhead;
-		h->next = best->next;
-	} else
-		h_offset = best->next;
-	best->caller = __builtin_return_address(0);
-	dah_used += best->size;
-	dah_used_max = max(dah_used, dah_used_max);
-	if (bestprev)
-		bestprev->next = h_offset;
-	else
-		dah_first = h_offset;
-	p = (char *)best + dah_overhead;
-	memset(p, POISON_INUSE, best->size - 1);
-	*((char *)p + best->size - 1) = POISON_END;
-out:
-	spin_unlock(&dap_lock);
-	return p;
-}
-
-void debug_kfree(void *p)
-{
-	struct debug_alloc_header *h;
-	unsigned int h_offset;
-	if (!p)
-		return;
-	if ((char *)p < debug_alloc_pool ||
-	    (char *)p >= debug_alloc_pool + sizeof(debug_alloc_pool_aligned)) {
-		kfree(p);
-		return;
-	}
-	if (!get_dap_lock()) {
-		__release(dap_lock);	/* we never actually got it */
-		return;		/* memory leak, cannot be helped */
-	}
-	h = (struct debug_alloc_header *)((char *)p - dah_overhead);
-	memset(p, POISON_FREE, h->size - 1);
-	*((char *)p + h->size - 1) = POISON_END;
-	h->caller = NULL;
-	dah_used -= h->size;
-	h_offset = (char *)h - debug_alloc_pool;
-	if (h_offset < dah_first) {
-		h->next = dah_first;
-		dah_first = h_offset;
-	} else {
-		struct debug_alloc_header *prev;
-		unsigned int prev_offset;
-		prev = (struct debug_alloc_header *)(debug_alloc_pool +
-						     dah_first);
-		while (1) {
-			if (!prev->next || prev->next > h_offset)
-				break;
-			prev = (struct debug_alloc_header *)
-				(debug_alloc_pool + prev->next);
-		}
-		prev_offset = (char *)prev - debug_alloc_pool;
-		if (prev_offset + dah_overhead + prev->size == h_offset) {
-			prev->size += dah_overhead + h->size;
-			memset(h, POISON_FREE, dah_overhead - 1);
-			*((char *)h + dah_overhead - 1) = POISON_END;
-			h = prev;
-			h_offset = prev_offset;
-		} else {
-			h->next = prev->next;
-			prev->next = h_offset;
-		}
-	}
-	if (h_offset + dah_overhead + h->size == h->next) {
-		struct debug_alloc_header *next;
-		next = (struct debug_alloc_header *)
-			(debug_alloc_pool + h->next);
-		h->size += dah_overhead + next->size;
-		h->next = next->next;
-		memset(next, POISON_FREE, dah_overhead - 1);
-		*((char *)next + dah_overhead - 1) = POISON_END;
-	}
-	spin_unlock(&dap_lock);
-}
-
-void debug_kusage(void)
-{
-	struct debug_alloc_header *h_free, *h_used;
-#ifdef CONFIG_IA64
-	/* FIXME: using dah for ia64 unwind always results in a memory leak.
-	 * Fix that memory leak first, then set debug_kusage_one_time = 1 for
-	 * all architectures.
-	 */
-	static int debug_kusage_one_time;
-#else
-	static int debug_kusage_one_time = 1;
-#endif
-	if (!get_dap_lock()) {
-		__release(dap_lock);	/* we never actually got it */
-		return;
-	}
-	h_free = (struct debug_alloc_header *)(debug_alloc_pool + dah_first);
-	if (dah_first == 0 &&
-	    (h_free->size == sizeof(debug_alloc_pool_aligned) - dah_overhead ||
-	     dah_first_call))
-		goto out;
-	if (!debug_kusage_one_time)
-		goto out;
-	debug_kusage_one_time = 0;
-	kdb_printf("%s: debug_kmalloc memory leak dah_first %d\n",
-		   __func__, dah_first);
-	if (dah_first) {
-		h_used = (struct debug_alloc_header *)debug_alloc_pool;
-		kdb_printf("%s: h_used %px size %d\n", __func__, h_used,
-			   h_used->size);
-	}
-	do {
-		h_used = (struct debug_alloc_header *)
-			  ((char *)h_free + dah_overhead + h_free->size);
-		kdb_printf("%s: h_used %px size %d caller %px\n",
-			   __func__, h_used, h_used->size, h_used->caller);
-		h_free = (struct debug_alloc_header *)
-			  (debug_alloc_pool + h_free->next);
-	} while (h_free->next);
-	h_used = (struct debug_alloc_header *)
-		  ((char *)h_free + dah_overhead + h_free->size);
-	if ((char *)h_used - debug_alloc_pool !=
-	    sizeof(debug_alloc_pool_aligned))
-		kdb_printf("%s: h_used %px size %d caller %px\n",
-			   __func__, h_used, h_used->size, h_used->caller);
-out:
-	spin_unlock(&dap_lock);
-}
-
 /* Maintain a small stack of kdb_flags to allow recursion without disturbing
  * the global kdb state.
  */
-- 
2.25.1