Subject: Re: [net-next] bpf: avoid hashtab deadlock with try_lock
Date: Tue, 29 Nov 2022 23:07:13 -0500
From: Waiman Long
To: Tonghao Zhang, Hou Tao
Cc: Hou Tao, Hao Luo, Peter Zijlstra, Ingo Molnar, Will Deacon,
    netdev@vger.kernel.org, Alexei Starovoitov, Daniel Borkmann,
    Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
    John Fastabend, KP Singh, Stanislav Fomichev, Jiri Olsa, bpf,
    LKML, Boqun Feng
X-Mailing-List: linux-kernel@vger.kernel.org

On
11/29/22 22:32, Tonghao Zhang wrote:
> On Wed, Nov 30, 2022 at 11:07 AM Waiman Long wrote:
>> On 11/29/22 21:47, Tonghao Zhang wrote:
>>> On Wed, Nov 30, 2022 at 9:50 AM Hou Tao wrote:
>>>> Hi Hao,
>>>>
>>>> On 11/30/2022 3:36 AM, Hao Luo wrote:
>>>>> On Tue, Nov 29, 2022 at 9:32 AM Boqun Feng wrote:
>>>>>> Just to be clear, I meant to refactor htab_lock_bucket() into a
>>>>>> try-lock pattern. Also, after a second thought, the below suggestion
>>>>>> doesn't work. I think the proper way is to make htab_lock_bucket()
>>>>>> use raw_spin_trylock_irqsave().
>>>>>>
>>>>>> Regards,
>>>>>> Boqun
>>>>>>
>>>>> The potential deadlock happens when the lock is contended from the
>>>>> same CPU. When the lock is contended from a remote CPU, we would like
>>>>> the remote CPU to spin and wait instead of giving up immediately, as
>>>>> this gives better throughput. So replacing the current
>>>>> raw_spin_lock_irqsave() with a trylock sacrifices that performance
>>>>> gain.
>>>>>
>>>>> I suspect the source of the problem is the 'hash' that we use in
>>>>> htab_lock_bucket(). The 'hash' is derived from the 'key'; I wonder
>>>>> whether we should use a hash derived from the 'bucket' rather than
>>>>> from the 'key', for example from the memory address of the 'bucket'.
>>>>> Different keys may fall into the same bucket but yield different
>>>>> hashes. If the same bucket can never have two different 'hashes'
>>>>> here, the map_locked check should behave as intended. Also, because
>>>>> ->map_locked is per-CPU, execution flows from two different CPUs can
>>>>> both pass.
>>>> The warning from lockdep is due to the fact that the bucket lock A is
>>>> used in a non-NMI context first, then the same bucket lock is used in
>>>> an NMI context, so
>>> Yes, I tested with lockdep too. We can't use the lock in NMI context
>>> (only trylock works fine there) if we also use it in non-NMI context;
>>> otherwise lockdep prints the warning.
>>>
>>> * For the deadlock case, we can use either:
>>>   1. hash & min(HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1)
>>>   2. or the hash bucket address.
>>>
>>> * For the lockdep warning, we should use an in_nmi() check together
>>>   with map_locked.
>>>
>>> BTW, the patch doesn't work, so we can remove the lock_key:
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c50eb518e262fa06bd334e6eec172eaf5d7a5bd9
>>>
>>> static inline int htab_lock_bucket(const struct bpf_htab *htab,
>>>                                    struct bucket *b, u32 hash,
>>>                                    unsigned long *pflags)
>>> {
>>>         unsigned long flags;
>>>
>>>         hash = hash & min(HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
>>>
>>>         preempt_disable();
>>>         if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
>>>                 __this_cpu_dec(*(htab->map_locked[hash]));
>>>                 preempt_enable();
>>>                 return -EBUSY;
>>>         }
>>>
>>>         if (in_nmi()) {
>>>                 if (!raw_spin_trylock_irqsave(&b->raw_lock, flags))
>>>                         return -EBUSY;
>> That is not right. You have to do the same steps as above, decrementing
>> the per-CPU count and enabling preemption. So you may want to put all
>> these busy_out steps after the "return 0" and use "goto busy_out;" to
>> jump there.
> Yes, thanks Waiman, I should add the busy_out label.
>>>         } else {
>>>                 raw_spin_lock_irqsave(&b->raw_lock, flags);
>>>         }
>>>
>>>         *pflags = flags;
>>>         return 0;
>>> }
>> BTW, with that change, I believe you can actually remove all the
>> per-CPU map_locked count code.
> There are some cases, for example, where we run bpf_prog A and B in task
> context on the same CPU:
>
> bpf_prog A
>   update map X
>     htab_lock_bucket
>       raw_spin_lock_irqsave()
>     lookup_elem_raw()
>       // bpf prog B is attached on lookup_elem_raw()
>       bpf prog B
>         update map X again and update the element
>           htab_lock_bucket()
>             // deadlock
>             raw_spin_lock_irqsave()

I see, so nested locking is possible in this case. Besides using the
per-CPU map_locked count, another way is to have a cpumask associated
with each bucket lock and use each bit in the cpumask to control access,
using test_and_set_bit() for each CPU.
That will allow more concurrency, and you can actually find out how
contended the lock is. Anyway, it is just a thought.

Cheers,
Longman