Subject: Re: [PATCH bpf] bpf: Do not grab the bucket spinlock by default on htab batch ops
To: Yonghong Song, Brian Vazquez, Brian Vazquez, Alexei Starovoitov, David S.
Miller" Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org References: <20200214224302.229920-1-brianvv@google.com> <8ac06749-491f-9a77-3899-641b4f40afe2@fb.com> From: Daniel Borkmann Message-ID: <63fa17bf-a109-65c1-6cc5-581dd84fc93b@iogearbox.net> Date: Tue, 18 Feb 2020 16:56:24 +0100 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.7.2 MIME-Version: 1.0 In-Reply-To: <8ac06749-491f-9a77-3899-641b4f40afe2@fb.com> Content-Type: text/plain; charset=utf-8; format=flowed Content-Language: en-US Content-Transfer-Encoding: 8bit X-Authenticated-Sender: daniel@iogearbox.net X-Virus-Scanned: Clear (ClamAV 0.102.1/25727/Tue Feb 18 15:05:00 2020) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 2/18/20 4:43 PM, Yonghong Song wrote: > On 2/14/20 2:43 PM, Brian Vazquez wrote: >> Grabbing the spinlock for every bucket even if it's empty, was causing >> significant perfomance cost when traversing htab maps that have only a >> few entries. This patch addresses the issue by checking first the >> bucket_cnt, if the bucket has some entries then we go and grab the >> spinlock and proceed with the batching. >> >> Tested with a htab of size 50K and different value of populated entries. >> >> Before: >>    Benchmark             Time(ns)        CPU(ns) >>    --------------------------------------------- >>    BM_DumpHashMap/1       2759655        2752033 >>    BM_DumpHashMap/10      2933722        2930825 >>    BM_DumpHashMap/200     3171680        3170265 >>    BM_DumpHashMap/500     3639607        3635511 >>    BM_DumpHashMap/1000    4369008        4364981 >>    BM_DumpHashMap/5k     11171919       11134028 >>    BM_DumpHashMap/20k    69150080       69033496 >>    BM_DumpHashMap/39k   190501036      190226162 >> >> After: >>    Benchmark             Time(ns)        CPU(ns) >>    --------------------------------------------- >>    BM_DumpHashMap/1        202707         200109 >>    BM_DumpHashMap/10       213441         210569 >>    BM_DumpHashMap/200      478641         472350 >>    BM_DumpHashMap/500      980061         967102 >>    BM_DumpHashMap/1000    1863835        1839575 >>    BM_DumpHashMap/5k      8961836        8902540 >>    BM_DumpHashMap/20k    69761497       69322756 >>    BM_DumpHashMap/39k   187437830      186551111 >> >> Fixes: 057996380a42 ("bpf: Add batch ops to all htab bpf map") >> Cc: Yonghong Song >> Signed-off-by: Brian Vazquez > > Acked-by: Yonghong Song I must probably be missing something, but how is this safe? Presume we traverse in the walk with bucket_cnt = 0. Meanwhile a different CPU added entries to this bucket since not locked. Same reader on the other CPU with bucket_cnt = 0 then starts to traverse the second hlist_nulls_for_each_entry_safe() unlocked e.g. deleting entries?