Date: Thu, 1 Jul 2021 12:16:36 +0200
From: Jiri Olsa
To: Brendan Jackman
Cc: Sandipan Das, "Naveen N. Rao", bpf, Alexei Starovoitov, Daniel Borkmann,
    Andrii Nakryiko, KP Singh, Florent Revest, John Fastabend, LKML
Subject: Re: [BUG soft lockup] Re: [PATCH bpf-next v3] bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH

On Thu, Jul 01, 2021 at 10:18:39AM +0200, Brendan Jackman wrote:
> On Wed, 30 Jun 2021 at 14:42, Jiri Olsa wrote:
> >
> > On Wed, Jun 30, 2021 at 12:34:58PM +0200, Brendan Jackman wrote:
> > > On Tue, 29 Jun 2021 at 23:09, Jiri Olsa wrote:
> > > >
> > > > On Tue, Jun 29, 2021 at 06:41:24PM +0200, Jiri Olsa wrote:
> > > > > On Tue, Jun 29, 2021 at 06:25:33PM +0200, Brendan Jackman wrote:
> > > > > > On Tue, 29 Jun 2021 at 18:04, Jiri Olsa wrote:
> > > > > > > On Tue, Jun 29, 2021 at 04:10:12PM +0200, Jiri Olsa wrote:
> > > > > > > > On Mon, Jun 28, 2021 at 11:21:42AM +0200, Brendan Jackman wrote:
> > > > > > > > > atomics in .imm). Any idea if this test was ever passing on PowerPC?
> > > > > > > >
> > > > > > > > hum, I guess not.. will check
> > > > > > >
> > > > > > > nope, it locks up the same:
> > > > > >
> > > > > > Do you mean it locks up at commit 91c960b0056 too?
> > >
> > > Sorry I was being stupid here - the test didn't exist at this commit
> > > > I tried this one:
> > > > >   37086bfdc737 bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH
> > > > >
> > > > > I will check also 91c960b0056, but I think it's the new test issue
> > >
> > > So yeah hard to say whether this was broken on PowerPC all along. How
> > > hard is it for me to get set up to reproduce the failure? Is there a
> > > rootfs I can download, and some instructions for running a PowerPC
> > > QEMU VM? If so if you can also share your config and I'll take a look.
> > >
> > > If it's not as simple as that, I'll stare at the code for a while and
> > > see if anything jumps out.
> >
> > I have latest fedora ppc server and compile/install latest bpf-next tree
> > I think it will be reproduced also on vm, I attached my config
>
> OK, getting set up to boot a PowerPC QEMU isn't practical here unless
> someone's got commands I can copy-paste (suspect it will need .config
> hacking too). Looks like you need to build a proper bootloader, and
> boot an installer disk.
>
> Looked at the code for a bit but nothing jumped out. It seems like the
> verifier is seeing a BPF_ADD | BPF_FETCH, which means it doesn't
> detect an infinite loop, but then we lose the BPF_FETCH flag somewhere
> between do_check in verifier.c and bpf_jit_build_body in
> bpf_jit_comp64.c. That would explain why we don't get the "eBPF filter
> atomic op code %02x (@%d) unsupported", and would also explain the
> lockup because a normal atomic add without fetch would leave BPF R1
> unchanged.
>
> We should be able to confirm that theory by disassembling the JITted
> code that gets hexdumped by bpf_jit_dump when bpf_jit_enable is set to
> 2... at least for PowerPC 32-bit... maybe you could paste those lines
> into the 64-bit version too? Here's some notes I made for
> disassembling the hexdump on x86, I guess you'd just need to change
> the objdump flags:
>
> --
>
> - Enable console JIT output:
> ```shell
> echo 2 > /proc/sys/net/core/bpf_jit_enable
> ```
> - Load & run the program of interest.
> - Copy the hex code from the kernel console to `/tmp/jit.txt`. Here's what a
>   short program looks like. This includes a line of context - don't paste the
>   `flen=` line.
> ```
> [ 79.381020] flen=8 proglen=54 pass=4 image=000000001af6f390 from=test_verifier pid=258
> [ 79.389568] JIT code: 00000000: 0f 1f 44 00 00 66 90 55 48 89 e5 48 81 ec 08 00
> [ 79.397411] JIT code: 00000010: 00 00 48 c7 45 f8 64 00 00 00 bf 04 00 00 00 48
> [ 79.405965] JIT code: 00000020: f7 df f0 48 29 7d f8 8b 45 f8 48 83 f8 60 74 02
> [ 79.414719] JIT code: 00000030: c9 c3 31 c0 eb fa
> ```
> - This incantation will split out and decode the hex, then disassemble the
>   result:
> ```shell
> cat /tmp/jit.txt | cut -d: -f2- | xxd -r >/tmp/obj && objdump -D -b binary -m i386:x86-64 /tmp/obj
> ```
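A possible PowerPC counterpart to that last objdump step, offered only as a guess: it assumes a 64-bit little-endian guest and an objdump built with PowerPC support, and the powerpc:common64 arch string and the -EL flag are assumptions rather than anything confirmed in this thread.

```shell
# Same split-and-decode as the x86 incantation quoted above, but disassemble
# the raw bytes as 64-bit little-endian PowerPC instead of x86-64.
cat /tmp/jit.txt | cut -d: -f2- | xxd -r >/tmp/obj && \
  objdump -D -b binary -m powerpc:common64 -EL /tmp/obj
```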
that's where I decided to write to list and ask for help
before googling ppc assembly ;-)

I changed the test_verifier to stop before executing the test
so I can dump the program via bpftool:

[root@ibm-p9z-07-lp1 bpf-next]# bpftool prog dump xlated id 48
   0: (b7) r0 = 0
   1: (7b) *(u64 *)(r10 -8) = r0
   2: (b7) r1 = 1
   3: (db) r1 = atomic64_fetch_add((u64 *)(r10 -8), r1)
   4: (55) if r1 != 0x0 goto pc-1
   5: (95) exit

[root@ibm-p9z-07-lp1 bpf-next]# bpftool prog dump jited id 48
bpf_prog_a2eb9104e5e8a5bf:
   0:   nop
   4:   nop
   8:   stdu    r1,-112(r1)
   c:   std     r31,104(r1)
  10:   addi    r31,r1,48
  14:   li      r8,0
  18:   std     r8,-8(r31)
  1c:   li      r3,1
  20:   addi    r9,r31,-8
  24:   ldarx   r10,0,r9
  28:   add     r10,r10,r3
  2c:   stdcx.  r10,0,r9
  30:   bne     0x0000000000000024
  34:   cmpldi  r3,0
  38:   bne     0x0000000000000034
  3c:   nop
  40:   ld      r31,104(r1)
  44:   addi    r1,r1,112
  48:   mr      r3,r8
  4c:   blr

I wanted to also do it through bpf_jit_enable and bpf_jit_dump,
but I need to check the setup, because I can't set bpf_jit_enable
to 2 at the moment.. might take some time

[root@ibm-p9z-07-lp1 bpf-next]# echo 2 > /proc/sys/net/core/bpf_jit_enable
-bash: echo: write error: Invalid argument

jirka

>
> --
>
> Sandipan, Naveen, do you know of anything in the PowerPC code that
> might be leading us to drop the BPF_FETCH flag from the atomic
> instruction in tools/testing/selftests/bpf/verifier/atomic_bounds.c?
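One hedged way to narrow down where the flag goes missing: dump the xlated program with raw opcodes and inspect the imm field of the atomic instruction. In the uapi encoding BPF_FETCH is the 0x01 bit of imm and BPF_ADD is 0x00, and the bytes follow the struct bpf_insn layout (code, regs, off, then a 32-bit imm), so on a little-endian host the last four bytes of insn 3 should read 01 00 00 00 if the fetch flag survives the verifier, which would point the loss at the JIT side rather than at do_check(). The prog id 48 is taken from the dump above; this is a suggestion, not a verified recipe.

```shell
# Dump the verifier-side (xlated) program together with the raw instruction
# bytes; insn 3 is the 0xdb atomic op, and its final four bytes are the imm
# field where BPF_FETCH (0x01) would live.
bpftool prog dump xlated id 48 opcodes
```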