From: Eric Dumazet
Date: Wed, 8 May 2019 12:04:08 -0700
Subject: Re: [PATCH] percpu: remove spurious lock dependency between percpu and sched
To: Dennis Zhou , Alexei Starovoitov
Cc: John Sperbeck , Tejun Heo , Christoph Lameter , linux-mm , LKML
In-Reply-To: <20190508185932.GA68786@dennisz-mbp>
References: <20190508014320.55404-1-jsperbeck@google.com> <20190508185932.GA68786@dennisz-mbp>

On Wed, May 8, 2019 at 11:59 AM Dennis Zhou wrote:
>
> On Tue, May 07, 2019 at 06:43:20PM -0700, John Sperbeck wrote:
> > In free_percpu() we sometimes call pcpu_schedule_balance_work() to
> > queue a work item (which does a wakeup) while holding pcpu_lock.
> > This creates an unnecessary lock dependency between pcpu_lock and
> > the scheduler's pi_lock.
> > There are other places where we call
> > pcpu_schedule_balance_work() without holding pcpu_lock, and this case
> > doesn't need to be different.
> >
> > Moving the call outside the lock prevents the following lockdep splat
> > when running tools/testing/selftests/bpf/{test_maps,test_progs} in
> > sequence with lockdep enabled:
> >
> > ======================================================
> > WARNING: possible circular locking dependency detected
> > 5.1.0-dbg-DEV #1 Not tainted
> > ------------------------------------------------------
> > kworker/23:255/18872 is trying to acquire lock:
> > 000000000bc79290 (&(&pool->lock)->rlock){-.-.}, at: __queue_work+0xb2/0x520
> >
> > but task is already holding lock:
> > 00000000e3e7a6aa (pcpu_lock){..-.}, at: free_percpu+0x36/0x260
> >
> > which lock already depends on the new lock.
> >
> > the existing dependency chain (in reverse order) is:
> >
> > -> #4 (pcpu_lock){..-.}:
> >        lock_acquire+0x9e/0x180
> >        _raw_spin_lock_irqsave+0x3a/0x50
> >        pcpu_alloc+0xfa/0x780
> >        __alloc_percpu_gfp+0x12/0x20
> >        alloc_htab_elem+0x184/0x2b0
> >        __htab_percpu_map_update_elem+0x252/0x290
> >        bpf_percpu_hash_update+0x7c/0x130
> >        __do_sys_bpf+0x1912/0x1be0
> >        __x64_sys_bpf+0x1a/0x20
> >        do_syscall_64+0x59/0x400
> >        entry_SYSCALL_64_after_hwframe+0x49/0xbe
> >
> > -> #3 (&htab->buckets[i].lock){....}:
> >        lock_acquire+0x9e/0x180
> >        _raw_spin_lock_irqsave+0x3a/0x50
> >        htab_map_update_elem+0x1af/0x3a0
> >
> > -> #2 (&rq->lock){-.-.}:
> >        lock_acquire+0x9e/0x180
> >        _raw_spin_lock+0x2f/0x40
> >        task_fork_fair+0x37/0x160
> >        sched_fork+0x211/0x310
> >        copy_process.part.43+0x7b1/0x2160
> >        _do_fork+0xda/0x6b0
> >        kernel_thread+0x29/0x30
> >        rest_init+0x22/0x260
> >        arch_call_rest_init+0xe/0x10
> >        start_kernel+0x4fd/0x520
> >        x86_64_start_reservations+0x24/0x26
> >        x86_64_start_kernel+0x6f/0x72
> >        secondary_startup_64+0xa4/0xb0
> >
> > -> #1 (&p->pi_lock){-.-.}:
> >        lock_acquire+0x9e/0x180
> >        _raw_spin_lock_irqsave+0x3a/0x50
> >        try_to_wake_up+0x41/0x600
> >        wake_up_process+0x15/0x20
> >        create_worker+0x16b/0x1e0
> >        workqueue_init+0x279/0x2ee
> >        kernel_init_freeable+0xf7/0x288
> >        kernel_init+0xf/0x180
> >        ret_from_fork+0x24/0x30
> >
> > -> #0 (&(&pool->lock)->rlock){-.-.}:
> >        __lock_acquire+0x101f/0x12a0
> >        lock_acquire+0x9e/0x180
> >        _raw_spin_lock+0x2f/0x40
> >        __queue_work+0xb2/0x520
> >        queue_work_on+0x38/0x80
> >        free_percpu+0x221/0x260
> >        pcpu_freelist_destroy+0x11/0x20
> >        stack_map_free+0x2a/0x40
> >        bpf_map_free_deferred+0x3c/0x50
> >        process_one_work+0x1f7/0x580
> >        worker_thread+0x54/0x410
> >        kthread+0x10f/0x150
> >        ret_from_fork+0x24/0x30
> >
> > other info that might help us debug this:
> >
> > Chain exists of:
> >   &(&pool->lock)->rlock --> &htab->buckets[i].lock --> pcpu_lock
> >
> > Possible unsafe locking scenario:
> >
> >        CPU0                    CPU1
> >        ----                    ----
> >   lock(pcpu_lock);
> >                                lock(&htab->buckets[i].lock);
> >                                lock(pcpu_lock);
> >   lock(&(&pool->lock)->rlock);
> >
> >  *** DEADLOCK ***
> >
> > 3 locks held by kworker/23:255/18872:
> >  #0: 00000000b36a6e16 ((wq_completion)events){+.+.},
> >      at: process_one_work+0x17a/0x580
> >  #1: 00000000dfd966f0 ((work_completion)(&map->work)){+.+.},
> >      at: process_one_work+0x17a/0x580
> >  #2: 00000000e3e7a6aa (pcpu_lock){..-.},
> >      at: free_percpu+0x36/0x260
> >
> > stack backtrace:
> > CPU: 23 PID: 18872 Comm: kworker/23:255 Not tainted 5.1.0-dbg-DEV #1
> > Hardware name: ...
> > Workqueue: events bpf_map_free_deferred
> > Call Trace:
> >  dump_stack+0x67/0x95
> >  print_circular_bug.isra.38+0x1c6/0x220
> >  check_prev_add.constprop.50+0x9f6/0xd20
> >  __lock_acquire+0x101f/0x12a0
> >  lock_acquire+0x9e/0x180
> >  _raw_spin_lock+0x2f/0x40
> >  __queue_work+0xb2/0x520
> >  queue_work_on+0x38/0x80
> >  free_percpu+0x221/0x260
> >  pcpu_freelist_destroy+0x11/0x20
> >  stack_map_free+0x2a/0x40
> >  bpf_map_free_deferred+0x3c/0x50
> >  process_one_work+0x1f7/0x580
> >  worker_thread+0x54/0x410
> >  kthread+0x10f/0x150
> >  ret_from_fork+0x24/0x30
> >
> > Signed-off-by: John Sperbeck
> > ---
> >  mm/percpu.c | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index 68dd2e7e73b5..d832793bf83a 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -1738,6 +1738,7 @@ void free_percpu(void __percpu *ptr)
> >       struct pcpu_chunk *chunk;
> >       unsigned long flags;
> >       int off;
> > +     bool need_balance = false;
> >
> >       if (!ptr)
> >               return;
> > @@ -1759,7 +1760,7 @@ void free_percpu(void __percpu *ptr)
> >
> >               list_for_each_entry(pos, &pcpu_slot[pcpu_nr_slots - 1], list)
> >                       if (pos != chunk) {
> > -                             pcpu_schedule_balance_work();
> > +                             need_balance = true;
> >                               break;
> >                       }
> >       }
> > @@ -1767,6 +1768,9 @@ void free_percpu(void __percpu *ptr)
> >       trace_percpu_free_percpu(chunk->base_addr, off, ptr);
> >
> >       spin_unlock_irqrestore(&pcpu_lock, flags);
> > +
> > +     if (need_balance)
> > +             pcpu_schedule_balance_work();
> >  }
> >  EXPORT_SYMBOL_GPL(free_percpu);
> >
> > --
> > 2.21.0.1020.gf2820cf01a-goog
> >
>
> Hi John,
>
> The free_percpu() function hasn't changed in a little under 2 years. So,
> either lockdep has gotten smarter or something else has changed. There
> was a workqueue change recently merged: 6d25be5782e4 ("sched/core,
> workqueues: Distangle worker accounting from rq lock"). Would you mind
> reverting this and then seeing if you still encounter deadlock?
>

We have the issue even without 6d25be5782e4 in the picture.

I sent the splat months ago to Alexei, because I thought it was BPF related at first
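
For context, here is a minimal, self-contained sketch of the pattern the patch
applies: record the decision under the spinlock, then schedule the work only
after the lock has been dropped, so the wakeup done by the workqueue path never
nests inside the lock. This is not the actual mm/percpu.c code; the identifiers
my_lock, my_work, my_free() and should_rebalance() are hypothetical and used
only for illustration.

#include <linux/spinlock.h>
#include <linux/workqueue.h>

static DEFINE_SPINLOCK(my_lock);

static void my_balance_fn(struct work_struct *work)
{
	/* heavier rebalancing, done later in process context */
}
static DECLARE_WORK(my_work, my_balance_fn);

/* hypothetical predicate standing in for the "extra free chunk" check */
static bool should_rebalance(void)
{
	return true;
}

void my_free(void *ptr)
{
	unsigned long flags;
	bool need_balance = false;

	spin_lock_irqsave(&my_lock, flags);

	/* ... bookkeeping on ptr that must stay under my_lock ... */

	/* previously: schedule_work(&my_work) was called right here */
	if (should_rebalance())
		need_balance = true;

	spin_unlock_irqrestore(&my_lock, flags);

	/*
	 * schedule_work() may wake a worker (taking pool->lock and the
	 * task's pi_lock); doing it here, with no spinlock held, avoids
	 * creating a my_lock -> pool->lock dependency.
	 */
	if (need_balance)
		schedule_work(&my_work);
}

With this shape, lockdep never records a my_lock -> pool->lock (and hence
pi_lock) ordering, which is the edge removed from the dependency cycle shown
in the splat above.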