Date: Mon, 6 Apr 2020 11:36:01 +0200
From: Jan Kara
To: Michal Hocko
Cc: NeilBrown, Trond Myklebust, "Anna.Schumaker@Netapp.com", Andrew Morton,
	Jan Kara, linux-mm@kvack.org, linux-nfs@vger.kernel.org, LKML
Subject: Re: [PATCH 1/2] MM: replace PF_LESS_THROTTLE with PF_LOCAL_THROTTLE
Message-ID: <20200406093601.GA1143@quack2.suse.cz>
References: <87tv2b7q72.fsf@notabene.neil.brown.name>
 <87v9miydai.fsf@notabene.neil.brown.name>
 <87sghmyd8v.fsf@notabene.neil.brown.name>
 <20200403151534.GG22681@dhcp22.suse.cz>
 <878sjcxn7i.fsf@notabene.neil.brown.name>
 <20200406074453.GH19426@dhcp22.suse.cz>
In-Reply-To: <20200406074453.GH19426@dhcp22.suse.cz>

On Mon 06-04-20 09:44:53, Michal Hocko wrote:
> On Sat 04-04-20 08:40:17, Neil Brown wrote:
> > On Fri, Apr 03 2020, Michal Hocko wrote:
> > > On Thu 02-04-20 10:53:20, Neil Brown wrote:
> > >> PF_LESS_THROTTLE exists for loop-back nfsd, and for a similar need in
> > >> the loop block driver, where a daemon needs to write to one bdi in
> > >> order to free up writes queued to another bdi.
> > >>
> > >> The daemon sets PF_LESS_THROTTLE and gets a larger allowance of dirty
> > >> pages, so that it can still dirty pages after other processes have
> > >> been throttled.
> > >>
> > >> This approach was designed when all threads were blocked equally,
> > >> independently of which device they were writing to, or how fast it
> > >> was. Since that time the writeback algorithm has changed substantially,
> > >> with different threads getting different allowances based on
> > >> non-trivial heuristics. This means the simple "add 25%" heuristic is
> > >> no longer reliable.
> > >>
> > >> This patch changes the heuristic to ignore the global limits and
> > >> consider only the limit relevant to the bdi being written to. This
> > >> approach is already available for BDI_CAP_STRICTLIMIT users (fuse) and
> > >> should not introduce surprises. This has the desired result of
> > >> protecting the task from the consequences of large amounts of dirty
> > >> data queued for other devices.
> > >
> > > While I understand that you want to have per-bdi throttling for those
> > > "special" files, I am still missing how this is going to provide the
> > > additional room that the additional 25% gave them previously. I might
> > > misremember, or things have changed (what you mention as non-trivial
> > > heuristics), but PF_LESS_THROTTLE really needed that room to guarantee
> > > forward progress. Care to expand some more on how this is handled now?
> > > Maybe we do not need it anymore, but calling that out explicitly would
> > > be really helpful.
> >
> > The 25% was a means to an end, not an end in itself.
> >
> > The problem is that the NFS server needs to be able to write to the
> > backing filesystem when the dirty memory limits have been reached by
> > being totally consumed by dirty pages on the NFS filesystem.
> >
> > The 25% was just a way of giving an allowance of dirty pages to nfsd
> > that could not be consumed by processes writing to an NFS filesystem.
> > i.e. it doesn't need 25% MORE, it needs 25% PRIVATELY. Actually it only
> > really needs 1 page privately, but a few pages give better throughput
> > and 25% seemed like a good idea at the time.
>
> Yes, this part is clear to me.
>
> > per-bdi throttling focuses on the "PRIVATELY" (the important bit) and
> > de-emphasises the 25% (the irrelevant detail).
>
> It is still not clear to me how this patch is going to behave when the
> global dirty throttling is essentially equal to the per-bdi one - e.g.
> there is only a single bdi and now the PF_LOCAL_THROTTLE process doesn't
> have anything private.
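Just to recap, the PF_LESS_THROTTLE behaviour the patch removes amounts to
roughly this when the dirty thresholds are computed (a simplified sketch
from memory, not the exact mm/page-writeback.c code):

	/*
	 * domain_dirty_limits(), sketch: PF_LESS_THROTTLE (and RT) tasks
	 * got ~25% more headroom on top of the global dirty thresholds.
	 */
	if (tsk->flags & PF_LESS_THROTTLE || rt_task(tsk)) {
		bg_thresh += bg_thresh / 4;
		thresh += thresh / 4;
	}

So such a task was still throttled against the global counters, only with
about 25% more headroom - which, as Neil says, was never guaranteed to be
private to it.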
Let me think out loud to see whether I understand this properly. There are
two BDIs involved in an NFS loop mount - the NFS virtual BDI (let's call it
simply NFS-bdi) and the bdi of the real filesystem that is backing NFS
(let's call this real-bdi). The case we are concerned about is when NFS-bdi
is full of dirty pages, so that the global dirty limit of the machine is
exceeded. Then the flusher thread will take dirty pages from NFS-bdi and
send them over localhost to nfsd. Nfsd, which has PF_LOCAL_THROTTLE set,
will take these pages and write them to real-bdi. Now, because
PF_LOCAL_THROTTLE is set for nfsd, the fact that we are over the global
limit does not take effect, and nfsd is still able to write to real-bdi
until the dirty limit on real-bdi is reached. So things should work as Neil
writes, AFAIU.
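In code terms I'd expect the net effect in balance_dirty_pages() to be
roughly the following (my paraphrase of the intended semantics, not the
actual diff):

	/*
	 * Sketch: a PF_LOCAL_THROTTLE task is exempt from the global
	 * dirty limits; it blocks only once the bdi it is writing to
	 * (real-bdi in the scenario above) crosses its own threshold.
	 */
	if ((current->flags & PF_LOCAL_THROTTLE) &&
	    gdtc->wb_dirty < gdtc->wb_thresh)
		break;	/* over global limit, but under per-bdi: don't wait */

i.e. nfsd writing to real-bdi only ever waits on real-bdi's own limit, even
though the machine as a whole is over the global dirty limit.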
								Honza
-- 
Jan Kara
SUSE Labs, CR