Date: Thu, 1 Jul 2021 16:15:45 +0200
From: Jiri Olsa
To: "Naveen N. Rao"
Rao" Cc: Brendan Jackman , Sandipan Das , Andrii Nakryiko , Alexei Starovoitov , bpf , Daniel Borkmann , John Fastabend , KP Singh , LKML , Florent Revest Subject: Re: [BUG soft lockup] Re: [PATCH bpf-next v3] bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH Message-ID: References: <1625133383.8r6ttp782l.naveen@linux.ibm.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <1625133383.8r6ttp782l.naveen@linux.ibm.com> Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Thu, Jul 01, 2021 at 04:32:03PM +0530, Naveen N. Rao wrote: > Hi Brendan, Hi Jiri, > > > Brendan Jackman wrote: > > On Wed, 30 Jun 2021 at 14:42, Jiri Olsa wrote: > > > > > > On Wed, Jun 30, 2021 at 12:34:58PM +0200, Brendan Jackman wrote: > > > > On Tue, 29 Jun 2021 at 23:09, Jiri Olsa wrote: > > > > > > > > > > On Tue, Jun 29, 2021 at 06:41:24PM +0200, Jiri Olsa wrote: > > > > > > On Tue, Jun 29, 2021 at 06:25:33PM +0200, Brendan Jackman wrote: > > > > > > > On Tue, 29 Jun 2021 at 18:04, Jiri Olsa wrote: > > > > > > > > On Tue, Jun 29, 2021 at 04:10:12PM +0200, Jiri Olsa wrote: > > > > > > > > > On Mon, Jun 28, 2021 at 11:21:42AM +0200, Brendan Jackman wrote: > > > > > > > > > > > > > > atomics in .imm). Any idea if this test was ever passing on PowerPC? > > > > > > > > > > > > > > > > > > > > > > > > > > > > hum, I guess not.. will check > > > > > > > > > > > > > > > > nope, it locks up the same: > > > > > > > > > > > > > > Do you mean it locks up at commit 91c960b0056 too? > > > > > > > > Sorry I was being stupid here - the test didn't exist at this commit > > > > > > > > > > I tried this one: > > > > > > 37086bfdc737 bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH > > > > > > > > > > > > I will check also 91c960b0056, but I think it's the new test issue > > > > > > > > So yeah hard to say whether this was broken on PowerPC all along. How > > > > hard is it for me to get set up to reproduce the failure? Is there a > > > > rootfs I can download, and some instructions for running a PowerPC > > > > QEMU VM? If so if you can also share your config and I'll take a look. > > > > > > > > If it's not as simple as that, I'll stare at the code for a while and > > > > see if anything jumps out. > > > > > > > > > > I have latest fedora ppc server and compile/install latest bpf-next tree > > > I think it will be reproduced also on vm, I attached my config > > > > OK, getting set up to boot a PowerPC QEMU isn't practical here unless > > someone's got commands I can copy-paste (suspect it will need .config > > hacking too). Looks like you need to build a proper bootloader, and > > boot an installer disk. > > There are some notes put up here, though we can do better: > https://github.com/linuxppc/wiki/wiki/Booting-with-Qemu > > If you are familiar with ubuntu/fedora cloud images (and cloud-init), you > should be able to grab one of the ppc64le images and boot it in qemu: > https://cloud-images.ubuntu.com/releases/hirsute/release/ > https://alt.fedoraproject.org/alt/ > > > > > Looked at the code for a bit but nothing jumped out. It seems like the > > verifier is seeing a BPF_ADD | BPF_FETCH, which means it doesn't > > detect an infinite loop, but then we lose the BPF_FETCH flag somewhere > > between do_check in verifier.c and bpf_jit_build_body in > > bpf_jit_comp64.c. 
> >
> > --
> >
> > Sandipan, Naveen, do you know of anything in the PowerPC code that
> > might be leading us to drop the BPF_FETCH flag from the atomic
> > instruction in tools/testing/selftests/bpf/verifier/atomic_bounds.c?
>
> Yes, I think I just found the issue. We aren't looking at the correct BPF
> instruction when checking the IMM value.

great, nice catch! :-) that fixes it for me..

Tested-by: Jiri Olsa

thanks,
jirka

> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -673,7 +673,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>  		 * BPF_STX ATOMIC (atomic ops)
>  		 */
>  		case BPF_STX | BPF_ATOMIC | BPF_W:
> -			if (insn->imm != BPF_ADD) {
> +			if (insn[i].imm != BPF_ADD) {
>  				pr_err_ratelimited(
>  					"eBPF filter atomic op code %02x (@%d) unsupported\n",
>  					code, i);
> @@ -695,7 +695,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>  			PPC_BCC_SHORT(COND_NE, tmp_idx);
>  			break;
>  		case BPF_STX | BPF_ATOMIC | BPF_DW:
> -			if (insn->imm != BPF_ADD) {
> +			if (insn[i].imm != BPF_ADD) {
>  				pr_err_ratelimited(
>  					"eBPF filter atomic op code %02x (@%d) unsupported\n",
>  					code, i);
>
> Thanks,
> Naveen
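To spell out why the old check passed at all: insn points at the first
instruction of the program, so insn->imm always reads instruction 0's imm.
Presumably that imm is 0 in the atomic_bounds test, which equals BPF_ADD,
so the BPF_ADD | BPF_FETCH instruction sneaks through and gets JITed as a
plain add, leaving r1 unchanged and the following branch spinning forever.
A standalone toy sketch of the mixup (made-up struct and program, not the
kernel code):

```c
#include <stdio.h>

/* Toy stand-ins; the values mirror BPF_ADD (0x00) and the
 * BPF_FETCH modifier flag (0x01) from the BPF ISA. */
#define BPF_ADD   0x00
#define BPF_FETCH 0x01

struct insn_sketch { int imm; };

int main(void)
{
	/* imm values roughly as in the atomic_bounds test program:
	 * insn 0 is a mov with imm 0, insn 3 is the fetch-add */
	struct insn_sketch prog[] = {
		{ .imm = 0 }, { .imm = 0 }, { .imm = 1 },
		{ .imm = BPF_ADD | BPF_FETCH },
	};
	const struct insn_sketch *insn = prog; /* like insn = fp->insnsi */
	int i = 3;                             /* currently JITing insn 3 */

	/* buggy check: insn->imm == insn[0].imm == 0 == BPF_ADD,
	 * so the fetch variant is silently treated as a plain add */
	printf("insn->imm   = %#x -> %s\n", insn->imm,
	       insn->imm != BPF_ADD ? "rejected" : "accepted as plain add");

	/* fixed check: indexes the instruction actually being compiled */
	printf("insn[i].imm = %#x -> %s\n", insn[i].imm,
	       insn[i].imm != BPF_ADD ? "rejected" : "accepted as plain add");
	return 0;
}
```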