From: Eric Dumazet
To: Denys Fedoryshchenko
Cc: Jamal Hadi Salim, "David S. Miller", netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: kernel panic in 4.2.3, rb_erase in sch_fq
Date: Mon, 02 Nov 2015 08:12:02 -0800
Message-ID: <1446480722.23275.14.camel@edumazet-glaptop2.roam.corp.google.com>
In-Reply-To:
References: <1446477897.23275.6.camel@edumazet-glaptop2.roam.corp.google.com>

On Mon, 2015-11-02 at 17:58 +0200, Denys Fedoryshchenko wrote:
> On 2015-11-02 17:24, Eric Dumazet wrote:
> > On Mon, 2015-11-02 at 16:11 +0200, Denys Fedoryshchenko wrote:
> >> Hi!
> >>
> >> It seems I have actually been getting this panic for a while (once per
> >> week) on a loaded pppoe server, but only now was I able to capture the
> >> full panic message. After checking the commit log for sch_fq.c I didn't
> >> see any fixes, so upgrading to a newer kernel probably won't help?
> >
> > I do not think we support sch_fq as an HTB leaf.
> >
> > If you want both HTB and sch_fq, you need to set up a bonding device:
> >
> > HTB on bond0
> >
> > sch_fq on the slaves
> >
> > Sure, the kernel should not crash, but HTB+sch_fq on the same net device
> > is certainly not something that will work anyway.
> Strange, because except for ppp, this scheme works very well on static
> devices. It is the only solution that can reliably throttle incoming
> bandwidth when the link is heavily overbooked, for my use cases such as
> 256k+ flows / 2.5Gbps with several different classes of traffic; using
> DRR would simply not give me enough classes.
>
> On the latest kernels I had to patch tc to expose the orphan mask
> parameter of fq, to increase the number of flows for transit traffic.
> None of the other qdiscs can solve this problem: incoming bandwidth
> simply overshoots the configured rate by 10-20%, but fq does the magic.
> The only device that worked with similar efficiency for such cases was
> the proprietary PacketShaper, but it modifies the TCP window size, cannot
> be called transparent, and also has stability issues above 1Gbps.

Ah, I was thinking you needed more like 10Gb of traffic ;)

With HTB on bonding, we can use MQ+FQ on the slaves in order to use many
CPUs to serve local traffic.

But yes, if you use HTB+FQ for forwarding, I guess the bonding setup is
not really needed.
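
For the local-traffic case, a rough sketch of that bond0 arrangement,
assuming slave names eth0/eth1 and a placeholder rate (illustration only,
not a tested config):

    # HTB shaping only on the aggregate device
    tc qdisc add dev bond0 root handle 1: htb default 10
    tc class add dev bond0 parent 1: classid 1:10 htb rate 950mbit ceil 950mbit

    # per-TX-queue FQ on each slave: mq picks up fq as its default child
    sysctl -w net.core.default_qdisc=fq
    tc qdisc replace dev eth0 root mq
    tc qdisc replace dev eth1 root mq

This way the HTB tree sits only on bond0, while each hardware TX queue on
the slaves gets its own fq instance, so locally generated traffic can be
served by many CPUs in parallel.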