Date: Wed, 28 Jun 2017 14:37:03 -0700
From: Alexei Starovoitov
To: Daniel Borkmann
Cc: Edward Cree, davem@davemloft.net, Alexei Starovoitov, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, iovisor-dev
Subject: Re: [PATCH v3 net-next 00/12] bpf: rewrite value tracking in verifier
Message-ID: <20170628213701.32krfuipzngsmt4k@ast-mbp>
References: <5953B436.6030506@iogearbox.net>
	<788035e1-1974-b48e-3008-d294194a8b05@solarflare.com>
	<595413AA.40502@iogearbox.net>
In-Reply-To: <595413AA.40502@iogearbox.net>

On Wed, Jun 28, 2017 at 10:38:02PM +0200, Daniel Borkmann wrote:
> On 06/28/2017 04:11 PM, Edward Cree wrote:
> > On 28/06/17 14:50, Daniel Borkmann wrote:
> > > Hi Edward,
> > >
> > > Did you also have a chance in the meantime to look at reducing complexity
> > > along with your unification? I did run the cilium test suite with your
> > > latest set from here and the current # of worst case processed insns that
> > > the verifier has to go through for cilium progs increases from the ~53k we
> > > have right now to ~76k. I'm a bit worried that this quickly gets us close
> > > to the upper ~98k max limit and starts to reject programs again. The
> > > alternative is to bump the complexity limit again in the near future once
> > > we run into it, but preferably there's a way to optimize it along with the
> > > rewrite? Do you see any possibilities worth exploring?
> > The trouble, I think, is that as we're now tracking more information about
> > each register value, we're less able to prune branches.  But often that
> > information is not actually being used in reaching the exit state.  So it
>
> Agree.
>
> > seems like the way to tackle this would be to track what information is
> > used — or at least, which registers are read from (including e.g. writing
> > through them or passing them to helper calls) — in reaching a safe state.
> > Then only registers which are used are required to match for pruning.
> > But that tracking would presumably have to propagate backwards through the
> > verifier stack, and I'm not sure how easily that could be done.  Someone
> > (was it you?) was talking about replacing the current DAG walking and
> > pruning with some kind of basic-block thing, which would help with this.
> > Summary: I think it could be done, but I haven't looked into the details
> > of implementation yet; if it's not actually breaking your programs (yet),
> > maybe leave it for a followup patch series?
>
> Could we adapt the limit to 128k perhaps as part of this set,
> given we know that we're tracking more metadata here anyway?

Increasing the limit is a must-have, since pruning suffered so much.
Going from 53k to 76k is pretty substantial.
What is the % increase for the tests in selftests/?
I think we need to pinpoint exactly the reason.
Saying that we just track more data is not enough.
We've tried the v2 set on our load balancer and also saw a ~20% increase.
I don't remember the absolute numbers.
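
Just so we're talking about the same thing, the "only registers which are
used are required to match" idea above would mean roughly the following at
state-pruning time. This is only a sketch with made-up names (reg_state,
regs_prunable, reg_states_equal, the 'read' flag); it's not code from this
set or from today's verifier.c:

struct reg_state {
	/* ... whatever value tracking we keep per register ... */
	bool read;	/* set if some path to a safe exit read this reg */
};

/* assume some per-register equivalence check along the lines of what
 * states_equal() does today; details omitted in this sketch */
static bool reg_states_equal(const struct reg_state *a,
			     const struct reg_state *b);

/* only registers that were actually read on the already-verified path
 * need to match; anything the program never looked at from this point
 * onwards cannot affect safety */
static bool regs_prunable(const struct reg_state *old,
			  const struct reg_state *cur, int nregs)
{
	int i;

	for (i = 0; i < nregs; i++) {
		if (!old[i].read)
			continue;
		if (!reg_states_equal(&old[i], &cur[i]))
			return false;
	}
	return true;
}

If a check like that keeps pruning effective, great, but as Edward says the
read marks would have to be propagated backwards, so that's followup work.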
These jumps don't make me comfortable with this extra tracking.
Can you try to roll back ptr&const and full negative/positive tracking
and see whether it gets back to what we had before?
I agree that long term it's better to do proper basic-block based liveness,
but we need to understand what's causing the increase today.
If tnum is causing it, that would be a reasonable trade-off to make,
but if it's full neg/pos tracking, which has no use today other than making
the whole thing cleaner, then I would rather drop it.
We can always come back to it later once the pruning issues are solved.
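
And to be clear on terms: by tnum I mean the tristate number tracking from
the set, i.e. per-register knowledge of individual bits. From memory it is
roughly the pair below; see the actual patch for the real definition and the
arithmetic on it:

struct tnum {
	u64 value;	/* known bit values; only meaningful where mask is 0 */
	u64 mask;	/* a 1 means the value of that bit is unknown */
};

/* e.g. after r0 &= 0xff on a completely unknown r0 we would track
 * value = 0x0, mask = 0xff: low byte unknown, upper 56 bits known zero */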