Date: Thu, 25 Jun 2015 22:45:57 +0100 (BST)
From: Mark Hills
To: Michal Hocko
cc: Vlastimil Babka, linux-mm@kvack.org, Mel Gorman, Johannes Weiner, LKML
Subject: Re: Write throughput impaired by touching dirty_ratio
In-Reply-To: <20150625092056.GB17237@dhcp22.suse.cz>
Message-ID: <1506252136260.2115@stax.localdomain>
References: <1506191513210.2879@stax.localdomain> <558A69F8.2080304@suse.cz> <1506242140070.1867@stax.localdomain> <20150625092056.GB17237@dhcp22.suse.cz>

On Thu, 25 Jun 2015, Michal Hocko wrote:

> On Wed 24-06-15 23:26:49, Mark Hills wrote:
> [...]
> > To test, I flipped the vm_highmem_is_dirtyable (which had no effect
> > until I forced it to re-evaluate ratelimit_pages):
> >
> >   $ echo 1 > /proc/sys/vm/highmem_is_dirtyable
> >   $ echo 21 > /proc/sys/vm/dirty_ratio
> >   $ echo 20 > /proc/sys/vm/dirty_ratio
> >
> >   crash> rd -d ratelimit_pages
> >   c148b618: 2186
> >
> > The value is now healthy, more so than even the value we started
> > with on bootup.
>
> From your /proc/zoneinfo:
> > Node 0, zone HighMem
> >   pages free     2536526
> >         min      128
> >         low      37501
> >         high     74874
> >         scanned  0
> >         spanned  3214338
> >         present  3017668
> >         managed  3017668
>
> You have 11G of highmem. Which is a lot wrt. the lowmem
>
> > Node 0, zone Normal
> >   pages free     37336
> >         min      4789
> >         low      5986
> >         high     7183
> >         scanned  0
> >         spanned  123902
> >         present  123902
> >         managed  96773
>
> which is only 378M! So something had to eat portion of the lowmem.
> I think it is a bad idea to use 32b kernel with that amount of memory in
> general. The lowmem pressure is even worse by the fact that something is
> eating already precious amount of lowmem.

Yup, that's the "vmalloc=512M" kernel parameter. That was a requirement
for my NVidia GPU to work, but now I have an AMD card, so I have been
able to remove it.

That now gives me ~730M of lowmem, and provides some relief to
ratelimit_pages; it is now 63 (when dirty_ratio is set to 20 after boot).
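Incidentally, this also seems to explain why I had to bounce dirty_ratio
through 21 and back to 20 above: the sysctl handler appears to recalculate
ratelimit_pages only when the value actually changes. Roughly, as I read
mm/page-writeback.c (paraphrased and simplified from memory, so the
details may not be exact):

  /* Paraphrased/simplified from mm/page-writeback.c; not the exact
   * source, just the shape of it as I understand it. */
  int dirty_ratio_handler(struct ctl_table *table, int write,
                          void __user *buffer, size_t *lenp, loff_t *ppos)
  {
          int old_ratio = vm_dirty_ratio;
          int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);

          /* Only a *changed* ratio triggers a recalculation */
          if (ret == 0 && write && vm_dirty_ratio != old_ratio) {
                  writeback_set_ratelimit();
                  vm_dirty_bytes = 0;
          }
          return ret;
  }

  void writeback_set_ratelimit(void)
  {
          unsigned long background_thresh;
          unsigned long dirty_thresh;

          /* dirty_thresh is dirty_ratio% of whatever memory is
           * considered dirtyable at this moment */
          global_dirty_limits(&background_thresh, &dirty_thresh);

          ratelimit_pages = dirty_thresh / (num_online_cpus() * 32);
          if (ratelimit_pages < 16)
                  ratelimit_pages = 16;
  }

So with only the small lowmem zone counted as dirtyable, the threshold --
and hence ratelimit_pages -- ends up tiny, which seems to match the 63 I
see here.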
> What is the reason to stick with 32b kernel anyway?

Because it's ideal for finding edge cases and bugs in kernels :-)

The real reason is more practical. I never had a problem with the 32-bit
one, and as my OS is quite home-grown and has evolved over 10+ years, I
haven't wanted to start again or reinstall. This is the first time I've
been aware of any problem or notable performance impact -- the PAE kernel
has worked very well for me.

The only reason I have so much RAM is that RAM is cheap, and it's a great
disk cache. I'd be more likely to remove some of the RAM than reinstall!

Perhaps someone could kindly explain why I don't have the same problem if
I have, say, 1.5G of RAM? Is it because the page table for 12G is large
and sits in lowmem?

> > My questions and observations are:
> >
> >  * What does highmem_is_dirtyable actually mean, and should it really
> >    default to 1?
>
> It says whether highmem should be considered dirtyable. It is not by
> default. See more for motivation in 195cf453d2c3 ("mm/page-writeback:
> highmem_is_dirtyable option").

Thank you, this explanation is useful. I know very little about the
constraints on highmem and lowmem, though I can make an educated guess
(helped by reading http://linux-mm.org/HighMemory).

I do have some questions though, if someone would be happy to explain.

What is the "excessive scanning" mentioned in that patch, and why is it
any more than I would expect a 64-bit kernel to be doing? i.e. what is
the practical downside of me doing:

  $ echo 1073741824 > /proc/sys/vm/dirty_bytes

Also, is VMSPLIT_2G likely to be appropriate here if the kernel is
managing larger amounts of total RAM? I enabled it and it does increase
lowmem. Is this a simple tradeoff I am making between user and kernel
address space?

I'm not trying to sit in the dark ages, but the bad I/O throttling is the
only real problem I have suffered by staying 32-bit, and a small tweak
has restored sanity. So it's reasonable to question the logic that is in
use.

For example, if we're saying that ratelimit_pages truly depends on free
lowmem, then surely it needs to be re-evaluated periodically as the
system is put to use? Setting 'dirty_ratio' implies that it's a ratio of
a fixed, unchanging value. (My reading of the relevant code is in the
P.S. below.)

Many thanks

-- 
Mark
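P.S. For what it's worth, here is the threshold side of the calculation
as I understand it, again paraphrased and simplified from
mm/page-writeback.c (so quite possibly not exact). The threshold itself
is recomputed from the current counters each time it is needed; it is
only ratelimit_pages that stays frozen until the sysctl handler runs
again, and dirty_bytes bypasses the ratio calculation entirely:

  /* Simplified paraphrase; the real code has extra reserves and
   * per-task adjustments. */
  static unsigned long global_dirtyable_memory(void)
  {
          unsigned long x;

          x  = global_page_state(NR_FREE_PAGES);
          x += global_page_state(NR_INACTIVE_FILE);
          x += global_page_state(NR_ACTIVE_FILE);

          /* By default, highmem does not count as dirtyable */
          if (!vm_highmem_is_dirtyable)
                  x -= highmem_dirtyable_memory(x);

          return x + 1;
  }

  void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty)
  {
          unsigned long available_memory = global_dirtyable_memory();
          unsigned long background, dirty;

          if (vm_dirty_bytes)
                  dirty = DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE);
          else
                  dirty = (vm_dirty_ratio * available_memory) / 100;

          if (dirty_background_bytes)
                  background = DIV_ROUND_UP(dirty_background_bytes, PAGE_SIZE);
          else
                  background = (dirty_background_ratio * available_memory) / 100;

          if (background >= dirty)
                  background = dirty / 2;

          *pbackground = background;
          *pdirty = dirty;
  }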