Date: Tue, 29 Nov 2022 09:32:25 -0800
From: Boqun Feng
To: Waiman Long
Cc: Hou Tao, Tonghao Zhang, Peter Zijlstra, Ingo Molnar, Will Deacon,
	netdev@vger.kernel.org, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Jiri Olsa, bpf,
	Hao Luo, "houtao1@huawei.com", LKML
Subject: Re: [net-next] bpf: avoid hashtab deadlock with try_lock
References: <41eda0ea-0ed4-1ffb-5520-06fda08e5d38@huawei.com>
	<07a7491e-f391-a9b2-047e-cab5f23decc5@huawei.com>
	<59fc54b7-c276-2918-6741-804634337881@huaweicloud.com>
	<541aa740-dcf3-35f5-9f9b-e411978eaa06@redhat.com>

On Tue, Nov 29, 2022 at 09:23:18AM -0800, Boqun Feng wrote:
> On Tue, Nov 29, 2022 at 11:06:51AM -0500, Waiman Long wrote:
> > On 11/29/22 07:45, Hou Tao wrote:
> > > Hi,
> > >
> > > On 11/29/2022 2:06 PM, Tonghao Zhang wrote:
> > > > On Tue, Nov 29, 2022 at 12:32 PM Hou Tao wrote:
> > > > > Hi,
> > > > >
> > > > > On 11/29/2022 5:55 AM, Hao Luo wrote:
> > > > > > On Sun, Nov 27, 2022 at 7:15 PM Tonghao Zhang wrote:
> > > > > > Hi Tonghao,
> > > > > >
> > > > > > With a quick look at htab_lock_bucket() and your problem
> > > > > > statement, I agree with Hou Tao that using
> > > > > > hash & min(HASHTAB_MAP_LOCK_MASK, n_bucket - 1) to index into
> > > > > > map_locked seems to fix the potential deadlock. Can you actually
> > > > > > send your changes as v2 so we can take a look and better help
> > > > > > you? Also, can you explain your solution in your commit message?
> > > > > > Right now, your commit message has only a problem statement and
> > > > > > is not very clear. Please include more details on what you do to
> > > > > > fix the issue.
> > > > > >
> > > > > > Hao
> > > > > It would be better if the test case below could be rewritten as a
> > > > > bpf selftest. Please see the comments below on how to improve it
> > > > > and reproduce the deadlock.
> > > Hi
> > > > only a warning from lockdep.
> > > > > Thanks for your detailed instructions.
> > > > > I can reproduce the warning by using your setup. I am not a
> > > > > lockdep expert; it seems that fixing such a warning needs to set a
> > > > > different lockdep class for each bucket. Because we use map_locked
> > > > > to protect the acquisition of the bucket lock, I think we can
> > > > > define a lock_class_key array in bpf_htab (e.g.,
> > > > > lockdep_key[HASHTAB_MAP_LOCK_COUNT]) and initialize the bucket
> > > > > locks accordingly.
> > > The proposed lockdep solution doesn't work. Still got a lockdep
> > > warning after that, so cc +locking experts +lkml.org for lockdep help.
> > >
> > > Hi lockdep experts,
> > >
> > > We are trying to fix the following lockdep warning from the bpf
> > > subsystem:
> > >
> > > [   36.092222] ================================
> > > [   36.092230] WARNING: inconsistent lock state
> > > [   36.092234] 6.1.0-rc5+ #81 Tainted: G            E
> > > [   36.092236] --------------------------------
> > > [   36.092237] inconsistent {INITIAL USE} -> {IN-NMI} usage.
> > > [   36.092238] perf/1515 [HC1[1]:SC0[0]:HE0:SE1] takes:
> > > [   36.092242] ffff888341acd1a0 (&htab->lockdep_key){....}-{2:2}, at:
> > > htab_lock_bucket+0x4d/0x58
> > > [   36.092253] {INITIAL USE} state was registered at:
> > > [   36.092255]   mark_usage+0x1d/0x11d
> > > [   36.092262]   __lock_acquire+0x3c9/0x6ed
> > > [   36.092266]   lock_acquire+0x23d/0x29a
> > > [   36.092270]   _raw_spin_lock_irqsave+0x43/0x7f
> > > [   36.092274]   htab_lock_bucket+0x4d/0x58
> > > [   36.092276]   htab_map_delete_elem+0x82/0xfb
> > > [   36.092278]   map_delete_elem+0x156/0x1ac
> > > [   36.092282]   __sys_bpf+0x138/0xb71
> > > [   36.092285]   __do_sys_bpf+0xd/0x15
> > > [   36.092288]   do_syscall_64+0x6d/0x84
> > > [   36.092291]   entry_SYSCALL_64_after_hwframe+0x63/0xcd
> > > [   36.092295] irq event stamp: 120346
> > > [   36.092296] hardirqs last  enabled at (120345): []
> > > _raw_spin_unlock_irq+0x24/0x39
> > > [   36.092299] hardirqs last disabled at (120346): []
> > > generic_exec_single+0x40/0xb9
> > > [   36.092303] softirqs last  enabled at (120268): []
> > > __do_softirq+0x347/0x387
> > > [   36.092307] softirqs last disabled at (120133): []
> > > __irq_exit_rcu+0x67/0xc6
> > > [   36.092311]
> > > [   36.092311] other info that might help us debug this:
> > > [   36.092312]  Possible unsafe locking scenario:
> > > [   36.092312]
> > > [   36.092313]        CPU0
> > > [   36.092313]        ----
> > > [   36.092314]   lock(&htab->lockdep_key);
> > > [   36.092315]   <Interrupt>
> > > [   36.092316]     lock(&htab->lockdep_key);
> > > [   36.092318]
> > > [   36.092318]  *** DEADLOCK ***
> > > [   36.092318]
> > > [   36.092318] 3 locks held by perf/1515:
> > > [   36.092320]  #0: ffff8881b9805cc0 (&cpuctx_mutex){+.+.}-{4:4}, at:
> > > perf_event_ctx_lock_nested+0x8e/0xba
> > > [   36.092327]  #1: ffff8881075ecc20 (&event->child_mutex){+.+.}-{4:4}, at:
> > > perf_event_for_each_child+0x35/0x76
> > > [   36.092332]  #2: ffff8881b9805c20 (&cpuctx_lock){-.-.}-{2:2}, at:
> > > perf_ctx_lock+0x12/0x27
> > > [   36.092339]
> > > [   36.092339] stack backtrace:
> > > [   36.092341] CPU: 0 PID: 1515 Comm: perf Tainted: G            E
> > > 6.1.0-rc5+ #81
> > > [   36.092344] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
> > > rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
> > > [   36.092349] Call Trace:
> > > [   36.092351]  <NMI>
> > > [   36.092354]  dump_stack_lvl+0x57/0x81
> > > [   36.092359]  lock_acquire+0x1f4/0x29a
> > > [   36.092363]  ? handle_pmi_common+0x13f/0x1f0
> > > [   36.092366]  ? htab_lock_bucket+0x4d/0x58
> > > [   36.092371]  _raw_spin_lock_irqsave+0x43/0x7f
> > > [   36.092374]  ? htab_lock_bucket+0x4d/0x58
> > > [   36.092377]  htab_lock_bucket+0x4d/0x58
> > > [   36.092379]  htab_map_update_elem+0x11e/0x220
> > > [   36.092386]  bpf_prog_f3a535ca81a8128a_bpf_prog2+0x3e/0x42
> > > [   36.092392]  trace_call_bpf+0x177/0x215
> > > [   36.092398]  perf_trace_run_bpf_submit+0x52/0xaa
> > > [   36.092403]  ? x86_pmu_stop+0x97/0x97
> > > [   36.092407]  perf_trace_nmi_handler+0xb7/0xe0
> > > [   36.092415]  nmi_handle+0x116/0x254
> > > [   36.092418]  ? x86_pmu_stop+0x97/0x97
> > > [   36.092423]  default_do_nmi+0x3d/0xf6
> > > [   36.092428]  exc_nmi+0xa1/0x109
> > > [   36.092432]  end_repeat_nmi+0x16/0x67
> > > [   36.092436] RIP: 0010:wrmsrl+0xd/0x1b
> >
> > So the lock is really taken in an NMI context. In general, we advise
> > against using locks in an NMI context unless it is a lock that is used
> > only in that context. Otherwise, deadlock is certainly a possibility
> > as there is no way to mask off NMIs.
> >
>
> I think here they use a percpu counter as an "outer lock" to make the
> accesses to the real lock exclusive:
>
> 	preempt_disable();
> 	a = __this_cpu_inc_return(->map_locked);
> 	if (a != 1) {
> 		__this_cpu_dec(->map_locked);
> 		preempt_enable();
> 		return -EBUSY;
> 	}
> 	preempt_enable();
>
> 	raw_spin_lock_irqsave(->raw_lock);
>
> and lockdep is not aware that ->map_locked acts as a lock.
>
> However, I feel this may be just a reinvented try_lock pattern, Hou Tao,
> could you see if this can be refactored with a try_lock? Otherwise, you

Just to be clear, I meant to refactor htab_lock_bucket() into a try-lock
pattern. Also, after a second thought, the suggestion below doesn't work.
I think the proper way is to make htab_lock_bucket() a
raw_spin_trylock_irqsave().

Regards,
Boqun

> may need to introduce a virtual lockclass for ->map_locked.
>
> Regards,
> Boqun
>
> > Cheers,
> > Longman
> >