Date: Tue, 29 Jun 2021 23:08:51 +0200
From: Jiri Olsa
To: Brendan Jackman
Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, KP Singh,
    Florent Revest, John Fastabend, LKML, "Naveen N. Rao", Sandipan Das
Subject: Re: [BUG soft lockup] Re: [PATCH bpf-next v3] bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH
Message-ID:
References: <20210202135002.4024825-1-jackmanb@google.com>
In-Reply-To:
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jun 29, 2021 at 06:41:24PM +0200, Jiri Olsa wrote:
> On Tue, Jun 29, 2021 at 06:25:33PM +0200, Brendan Jackman wrote:
> > On Tue, 29 Jun 2021 at 18:04, Jiri Olsa wrote:
> > >
> > > On Tue, Jun 29, 2021 at 04:10:12PM +0200, Jiri Olsa wrote:
> > > > On Mon, Jun 28, 2021 at 11:21:42AM +0200, Brendan Jackman wrote:
> > > > > On Sun, 27 Jun 2021 at 17:34, Jiri Olsa wrote:
> > > > > >
> > > > > > On Tue, Feb 02, 2021 at 01:50:02PM +0000, Brendan Jackman wrote:
> > [snip]
> > > > > Hmm, is the test prog from atomic_bounds.c getting JITed there (my
> > > > > dumb guess at what '0xc0000000119efb30 (unreliable)' means)? That
> > > > > shouldn't happen - should get 'eBPF filter atomic op code %02x (@%d)
> > > > > unsupported\n' in dmesg instead. I wonder if I missed something in
> > > > > commit 91c960b0056 (bpf: Rename BPF_XADD and prepare to encode other
> > >
> > > I see that for all the other atomics tests:
> > >
> > > [root@ibm-p9z-07-lp1 bpf]# ./test_verifier 21
> > > #21/p BPF_ATOMIC_AND without fetch FAIL
> > > Failed to load prog 'Unknown error 524'!
> > > verification time 32 usec
> > > stack depth 8
> > > processed 10 insns (limit 1000000) max_states_per_insn 0 total_states 1 peak_states 1 mark_read 1
> > > Summary: 0 PASSED, 0 SKIPPED, 2 FAILED
> >
> > Hm that's also not good - failure to JIT shouldn't mean failure to
> > load. Are there other test_verifier failures or is it just the atomics
> > ones?
>
> I have CONFIG_BPF_JIT_ALWAYS_ON=y so I think that's fine
>
> >
> > > console:
> > >
> > > [   51.850952] eBPF filter atomic op code db (@2) unsupported
> > > [   51.851134] eBPF filter atomic op code db (@2) unsupported
> > >
> > > [root@ibm-p9z-07-lp1 bpf]# ./test_verifier 22
> > > #22/u BPF_ATOMIC_AND with fetch FAIL
> > > Failed to load prog 'Unknown error 524'!
> > > verification time 38 usec
> > > stack depth 8
> > > processed 14 insns (limit 1000000) max_states_per_insn 0 total_states 1 peak_states 1 mark_read 1
> > > #22/p BPF_ATOMIC_AND with fetch FAIL
> > > Failed to load prog 'Unknown error 524'!
> > > verification time 26 usec
> > > stack depth 8
> > > processed 14 insns (limit 1000000) max_states_per_insn 0 total_states 1 peak_states 1 mark_read 1
> > >
> > > console:
> > > [  223.231420] eBPF filter atomic op code db (@3) unsupported
> > > [  223.231596] eBPF filter atomic op code db (@3) unsupported
> > >
> > > ...
> > >
> > > but no such console output for:
> > >
> > > [root@ibm-p9z-07-lp1 bpf]# ./test_verifier 24
> > > #24/u BPF_ATOMIC bounds propagation, mem->reg OK
> > >
> > > > > atomics in .imm). Any idea if this test was ever passing on PowerPC?
> > > >
> > > > hum, I guess not.. will check
> > >
> > > nope, it locks up the same:
> >
> > Do you mean it locks up at commit 91c960b0056 too?
>
> I tried this one:
> 37086bfdc737 bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH
>
> I will check also 91c960b0056, but I think it's the new test issue

for 91c960b0056 in HEAD I'm getting just 2 fails:

#1097/p xadd/w check whether src/dst got mangled, 1 FAIL
Failed to load prog 'Unknown error 524'!
verification time 25 usec
stack depth 8
processed 12 insns (limit 1000000) max_states_per_insn 0 total_states 1 peak_states 1 mark_read 0
#1098/p xadd/w check whether src/dst got mangled, 2 FAIL
Failed to load prog 'Unknown error 524'!
verification time 30 usec
stack depth 8
processed 12 insns (limit 1000000) max_states_per_insn 0 total_states 1 peak_states 1 mark_read 0

with console output:

[  289.499341] eBPF filter atomic op code db (@4) unsupported
[  289.499510] eBPF filter atomic op code c3 (@4) unsupported

no lock up

jirka