From: Holger Hoffstätte
To: linux-kernel@vger.kernel.org
Cc: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH] btrfs: limit async_work allocation and worker func duration
Date: Mon, 12 Dec 2016 16:33:46 +0000 (UTC)
References: <148072986343.13061.16191252239168261528.stgit@maxim-thinkpad> <20161212145443.GT12522@twin.jikos.cz>

On Mon, 12 Dec 2016 15:54:43 +0100, David Sterba wrote:

> On Fri, Dec 02, 2016 at 05:51:36PM -0800, Maxim Patlasov wrote:
>> Problem statement: unprivileged user who has read-write access to more than
>> one btrfs subvolume may easily consume all kernel memory (eventually
>> triggering oom-killer).
>> [..snip..]
>>
>> +bool btrfs_workqueue_normal_congested(struct btrfs_workqueue *wq)
>> +{
>> +	int thresh = wq->normal->thresh != NO_THRESHOLD ?
>> +		wq->normal->thresh : num_possible_cpus();
>
> Why not num_online_cpus? I vaguely remember we should be checking online
> cpus, but don't have the mails for reference. We use it elsewhere for
> spreading the work over cpus, but it's still not bulletproof regarding
> cpu onlining/offlining.
>
> Otherwise looks good to me, as far as I can imagine the possible
> behaviour of the various async parameters just from reading the code.
If it's any help: I have been running with this for a few days now, with
regular day-to-day work, snapshots, balancing, defrags etc., and no obvious
problems, though I haven't tried to break it with the reproducer either.

Anyway:

Tested-by: Holger Hoffstätte

cheers,
Holger