From: KOSAKI Motohiro
To: Wu Fengguang
Cc: kosaki.motohiro@jp.fujitsu.com, Dave Chinner, Andrew Morton, LKML,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Chris Mason,
    Nick Piggin, Rik van Riel, Johannes Weiner, Christoph Hellwig,
    KAMEZAWA Hiroyuki, Andrea Arcangeli, Mel Gorman, Minchan Kim
Subject: Re: [PATCH 0/5] [RFC] transfer ASYNC vmscan writeback IO to the flusher threads
Date: Fri, 30 Jul 2010 18:22:18 +0900 (JST)
Message-Id: <20100730181014.4AEA.A69D9226@jp.fujitsu.com>
In-Reply-To: <20100730075819.GE8811@localhost>
References: <20100729232330.GO655@dastard> <20100730075819.GE8811@localhost>

> On Fri, Jul 30, 2010 at 07:23:30AM +0800, Dave Chinner wrote:
> > On Thu, Jul 29, 2010 at 07:51:42PM +0800, Wu Fengguang wrote:
> > > Andrew,
> > >
> > > It's possible to transfer ASYNC vmscan writeback IOs to the flusher threads.
> > > This simple patchset shows the basic idea. Since it's a big behavior change,
> > > there are inevitably lots of details to sort out. I don't know where it will
> > > go after tests and discussions, so the patches are intentionally kept simple.
> > >
> > > sync livelock avoidance (needs more to be complete, but this is the minimum
> > > required for the last two patches)
> > > [PATCH 1/5] writeback: introduce wbc.for_sync to cover the two sync stages
> > > [PATCH 2/5] writeback: stop periodic/background work on seeing sync works
> > > [PATCH 3/5] writeback: prevent sync livelock with the sync_after timestamp
> > >
> > > let the flusher threads do ASYNC writeback for pageout()
> > > [PATCH 4/5] writeback: introduce bdi_start_inode_writeback()
> > > [PATCH 5/5] vmscan: transfer async file writeback to the flusher
> >
> > I really do not like this - all it does is transfer random page writeback
> > from vmscan to the flusher threads rather than avoiding random page
> > writeback altogether. Random page writeback is nasty - just say no.
>
> There are cases we have to do pageout().
>
> - a stressed memcg with lots of dirty pages
> - a large NUMA system whose nodes have unbalanced vmscan rate and dirty pages

- a 32-bit highmem system, too

Can you please see the following commit? It describes the current design.
(A couple of rough sketches of how this policy looks in today's code are
appended below the commit message.)

commit c4e2d7ddde9693a4c05da7afd485db02c27a7a09
Author: akpm
Date:   Sun Dec 22 01:07:33 2002 +0000

    [PATCH] Give kswapd writeback higher priority than pdflush

    The `low latency page reclaim' design works by preventing page
    allocators from blocking on request queues (and by preventing them
    from blocking against writeback of individual pages, but that is
    immaterial here).

    This has a problem under some situations.  pdflush (or a write(2)
    caller) could be saturating the queue with highmem pages.  This
    prevents anyone from writing back ZONE_NORMAL pages.  We end up
    doing enormous amounts of scanning.
    A test case is to mmap(MAP_SHARED) almost all of a 4G machine's
    memory, then kill the mmapping applications.  The machine instantly
    goes from 0% of memory dirty to 95% or more.  pdflush kicks in and
    starts writing the least-recently-dirtied pages, which are all
    highmem.  The queue is congested so nobody will write back
    ZONE_NORMAL pages.  kswapd chews 50% of the CPU scanning past dirty
    ZONE_NORMAL pages and page reclaim efficiency
    (pages_reclaimed/pages_scanned) falls to 2%.

    So this patch changes the policy for kswapd.  kswapd may use all of
    a request queue, and is prepared to block on request queues.

    What will now happen in the above scenario is:

    1: The page allocator scans some pages, fails to reclaim enough
       memory and takes a nap in blk_congestion_wait().

    2: kswapd() will scan the ZONE_NORMAL LRU and will start writing
       back pages.  (These pages will be rotated to the tail of the
       inactive list at IO-completion interrupt time).

       This writeback will saturate the queue with ZONE_NORMAL pages.
       Conveniently, pdflush will avoid the congested queues.  So we
       end up writing the correct pages.

    In this test, kswapd CPU utilisation falls from 50% to 2%, page
    reclaim efficiency rises from 2% to 40% and things are generally a
    lot happier.

    The downside is that kswapd may now do a lot less page reclaim,
    increasing page allocation latency, causing more direct reclaim,
    increasing lock contention in the VM, etc.  But I have not been
    able to demonstrate that in testing.

    The other problem is that there is only one kswapd, and there are
    lots of disks.  That is a generic problem - without being able to
    co-opt user processes we don't have enough threads to keep lots of
    disks saturated.

    One fix for this would be to add an additional "really congested"
    threshold in the request queues, so kswapd can still perform
    nonblocking writeout.  This gives kswapd priority over pdflush
    while allowing kswapd to feed many disk queues.  I doubt if this
    will be called for.

    BKrev: 3e051055aitHp3bZBPSqmq21KGs5aQ
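
To make the above policy a bit more concrete: vmscan decides whether a
reclaimer may write a dirty page to a possibly congested queue with a
helper along the lines of the sketch below.  This is a simplified version
from memory of may_write_to_queue() in mm/vmscan.c, not a verbatim copy.
kswapd runs with PF_SWAPWRITE set, so it alone is allowed to keep feeding
a congested queue, while direct reclaimers back off and nap in
congestion_wait().

/*
 * Simplified sketch (from memory, not verbatim) of the congestion
 * policy described in the commit above, as it lives in mm/vmscan.c.
 */
static int may_write_to_queue(struct backing_dev_info *bdi)
{
	/* kswapd sets PF_SWAPWRITE: it may block on and saturate the queue */
	if (current->flags & PF_SWAPWRITE)
		return 1;

	/* anyone may write back if the queue is not congested */
	if (!bdi_write_congested(bdi))
		return 1;

	/* the caller is already throttled against this device: let it write */
	if (bdi == current->backing_dev_info)
		return 1;

	/* everyone else skips the write; pageout() keeps the page dirty */
	return 0;
}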
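
The "rotated to the tail of the inactive list at IO-completion interrupt
time" trick mentioned in step 2 is still there today: pageout() tags the
page with PG_reclaim before starting the write, and the writeback
completion path moves the now-clean page to the tail of the inactive LRU
so the next scan can free it without rescanning the whole list.  Again a
rough, simplified sketch rather than the exact source:

/*
 * Rough sketch of the PG_reclaim rotation performed when writeback of
 * a page submitted by pageout() completes (simplified, not verbatim).
 */
void end_page_writeback(struct page *page)
{
	/*
	 * pageout() set PG_reclaim before issuing the IO.  The page is
	 * clean now, so move it to the tail of the inactive list where
	 * the next reclaim pass will find and free it immediately.
	 */
	if (TestClearPageReclaim(page))
		rotate_reclaimable_page(page);

	if (!test_clear_page_writeback(page))
		BUG();

	smp_mb__after_clear_bit();
	wake_up_page(page, PG_writeback);
}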