Date: Tue, 22 Oct 2019 14:10:21 -0400
From: Steven Rostedt
To: Alexei Starovoitov
Cc: Peter Zijlstra, Daniel Bristot de Oliveira, LKML, X86 ML,
 Nadav Amit, Andy Lutomirski, Dave Hansen, Song Liu, Masami Hiramatsu
Subject: Re: [PATCH 3/3] x86/ftrace: Use text_poke()
Message-ID: <20191022141021.2c4496c2@gandalf.local.home>
In-Reply-To: <20191022175052.frjzlnjjfwwfov64@ast-mbp.dhcp.thefacebook.com>
References: <20191004112237.GA19463@hirez.programming.kicks-ass.net>
 <20191004094228.5a5774fe@gandalf.local.home>
 <20191021204310.3c26f730@oasis.local.home>
 <20191021231630.49805757@oasis.local.home>
 <20191021231904.4b968dc1@oasis.local.home>
 <20191022040532.fvpxcs74i4mn4rc6@ast-mbp.dhcp.thefacebook.com>
 <20191022071956.07e21543@gandalf.local.home>
 <20191022094455.6a0a1a27@gandalf.local.home>
 <20191022175052.frjzlnjjfwwfov64@ast-mbp.dhcp.thefacebook.com>

On Tue, 22 Oct 2019 10:50:56 -0700
Alexei Starovoitov wrote:

> > +static void my_hijack_func(unsigned long ip, unsigned long pip,
> > +			    struct ftrace_ops *ops, struct pt_regs *regs)
> 
> 1.
> To pass regs into the callback, ftrace_regs_caller() has a huge amount
> of stores to do. Saving selector registers is not fast. pushf/popf are
> even slower.
> ~200 bytes of stack is being used for save/restore.
> This is pure overhead that bpf execution cannot afford.
> bpf is different from traditional ftrace and other tracing, since
> it's often active 24/7. Every nanosecond counts.

Live patching is the same as what you have, if not even more critical.

Note, it would be easy to also implement a "just give me IP regs" flag,
or have that be the default if IPMODIFY is set and REGS is not.

> So for bpf I'm generating an assembler trampoline tailored to the
> specific kernel function that has its fentry nop replaced with a call
> to that trampoline.
> Instead of a 20+ register save, I save only the arguments of that
> kernel function.
> For example, the trampoline that attaches to kfree_skb() will save
> only two registers (rbp and rdi on x86) and will use 16 bytes of stack.
> 
> 2.
> The common ftrace callback api allows the ftrace infra to use the
> generic ftrace_ops_list_func() that works for all ftracers, but it
> doesn't scale.

That only happens if you have more than one callback to the same
function. Otherwise you get a dedicated trampoline.

> We see different teams writing bpf services that attach to the same
> function.
> In addition there are 30+ kprobes active in other places, so for every
> fentry the ftrace_ops_list_func() walks a long linked list and does a
> hash lookup for each. That's not efficient and we see this slowdown in
> practice.
> Because of the unique trampoline for every kernel function, a single
> generic list caller is not possible.
> Instead the generated unique trampoline handles all attached bpf
> programs for this particular kernel function in a sequence of calls.

Why not have a single ftrace_ops() that calls this utility and does the
multiplexing then?
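
Something like the below is what I mean. A completely untested sketch,
and bpf_dispatch() here is a made-up stand-in for whatever your
per-function multiplexer ends up being, not an existing API:

#include <linux/ftrace.h>

/* Made-up dispatcher: look up whatever is attached to this function's
 * ip (e.g. a generated BPF trampoline) and run it. */
extern void bpf_dispatch(unsigned long ip, struct pt_regs *regs);

static void shared_callback(unsigned long ip, unsigned long parent_ip,
                            struct ftrace_ops *ops, struct pt_regs *regs)
{
        bpf_dispatch(ip, regs);
}

static struct ftrace_ops shared_ops = {
        .func   = shared_callback,
        /* regs are saved so the dispatcher can hand arguments on */
        .flags  = FTRACE_OPS_FL_SAVE_REGS,
};

/* One ftrace_ops for everything: each new attach point just adds its
 * ip to this ops's filter. */
static int shared_attach(unsigned long ip)
{
        return ftrace_set_filter_ip(&shared_ops, ip, 0, 0);
}

static int shared_init(void)
{
        return register_ftrace_function(&shared_ops);
}

That keeps a single registration path, while each function can still
get its own handling behind the dispatcher.
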
> No link lists to walk, no hash tables to lookup.
> All overhead is gone.
> 
> 3.
> The existing kprobe bpf progs are using pt_regs to read arguments. Due to

That was done because kprobes in general work off of int3. And the
saving of pt_regs was to reuse the code and allow kprobes to work both
with or without an ftrace helper.

> that ugliness all of them are written for a single architecture
> (x86-64).
> Porting them to arm64 is not that hard, but porting to a 32-bit arch
> is close to impossible. With a custom generated trampoline we'll have
> bpf progs that work as-is on all archs. raw_tracepoint bpf progs
> already demonstrated that such portability is possible. These new
> kprobe++ progs will be similar.
> 
> 4.
> Due to the uniqueness of the bpf trampoline, sharing a trampoline
> between ftracers and bpf progs is not possible, so users would have to
> choose whether to ftrace that particular kernel function or attach bpf
> to it.
> Attaches are not going to stomp on each other. I'm reusing the
> ftrace_make_call/nop approach that checks that it's a 'nop' being
> replaced.

What about the approach I showed here? Just register an ftrace_ops with
ip modify set, and then call your unique trampoline directly (rough
sketch at the bottom of this mail).

It would keep the modification all in one place instead of having
multiple implementations of it. We can make ftrace call your trampoline
just as if it was called directly, without writing a whole new
infrastructure.

-- Steve

> 
> There probably will be some gotchas and unforeseen issues, since the
> prototype is very rough and not in reviewable form yet. Will share
> when it's ready.
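
P.S. For reference, the registration side of what I'm suggesting above
would look roughly like the below. Untested sketch: my_bpf_trampoline()
is just a placeholder for your generated trampoline, and regs->ip is
the x86 spelling of the instruction pointer:

#include <linux/ftrace.h>

/* Placeholder for the generated, per-function BPF trampoline */
extern void my_bpf_trampoline(void);

static void redirect_callback(unsigned long ip, unsigned long parent_ip,
                              struct ftrace_ops *ops, struct pt_regs *regs)
{
        /* With IPMODIFY set, returning with a modified regs->ip makes
         * ftrace jump to the trampoline as if the fentry site had
         * called it directly. Getting back to the original function
         * afterwards is the trampoline's job. */
        regs->ip = (unsigned long)my_bpf_trampoline;
}

static struct ftrace_ops redirect_ops = {
        .func   = redirect_callback,
        /* IPMODIFY requires SAVE_REGS so the callback can rewrite ip */
        .flags  = FTRACE_OPS_FL_SAVE_REGS | FTRACE_OPS_FL_IPMODIFY,
};

static int attach_bpf_trampoline(unsigned long target_ip)
{
        int ret;

        /* Limit this ops to the one function we want to redirect */
        ret = ftrace_set_filter_ip(&redirect_ops, target_ip, 0, 0);
        if (ret)
                return ret;

        return register_ftrace_function(&redirect_ops);
}

All of the nop patching then stays inside ftrace, and the trampoline
itself never has to touch the fentry site.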