From: Raghavendra K T
To: Andrew Morton, Fengguang Wu, David Cohen, Al Viro, Damien Ramonda,
    Jan Kara, rientjes@google.com, Linus, nacc@linux.vnet.ibm.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Raghavendra K T
Subject: [PATCH V6] mm readahead: Fix readahead fail for memoryless cpu and limit readahead pages
Date: Tue, 18 Feb 2014 12:55:38 +0530
Message-Id: <1392708338-19685-1-git-send-email-raghavendra.kt@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.7.11.7

Currently max_sane_readahead() returns zero on a CPU that has no local
memory node, which leads to readahead failure. Fix the readahead failure
by returning the minimum of (requested pages, 512 pages).

Users running applications that need readahead, such as streaming
applications, on a memoryless CPU see a considerable performance boost.

Result:
fadvise experiment with FADV_WILLNEED on a PPC machine having a
memoryless CPU, with a 1GB testfile (12 iterations), yielded around
46.66% improvement.

fadvise experiment with FADV_WILLNEED on an x240 machine
(32GB* 4G RAM, NUMA) with a 1GB testfile (12 iterations) showed no
impact on the normal NUMA cases with the patch.

 Kernel    Avg      Stddev
 base      7.4975   3.92%
 patched   7.4174   3.26%

Suggested-by: Linus Torvalds
[Andrew: making return value PAGE_SIZE independent]
Signed-off-by: Raghavendra K T
---
I would like to thank Honza and David for their valuable suggestions
and for patiently reviewing the patches.

Changes in V6:
 - Limit the readahead to 2MB on 4k page systems, as suggested by
   Linus, and make it independent of PAGE_SIZE.

Changes in V5:
 - Drop the 4k limit for normal readahead. (Jan Kara)

Changes in V4:
 - Check total node memory to decide whether we have no local memory.
   (Jan Kara)
 - Add a 4k page limit on readahead for normal and remote readahead
   (Linus) (Linus' suggestion was a 16MB limit).

Changes in V3:
 - Drop iterating over NUMA nodes to calculate total free pages.
   (Linus)
   Agreed that we have no control over which NUMA node the readahead
   pages are allocated from, so for remote readahead we cannot further
   sanitize the limit based on the potential free pages of that node;
   we also do not want to iterate through all nodes to find the total
   free pages.

Suggestions and comments welcome.
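For reference, below is a minimal sketch of the kind of fadvise
(FADV_WILLNEED) measurement described above. The file path, buffer size
and timing loop are illustrative placeholders, not the actual harness
used for the numbers quoted in the changelog.

/* Illustrative sketch: time FADV_WILLNEED-triggered readahead plus a
 * sequential read of a (hypothetical) pre-created test file.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/tmp/testfile";	/* placeholder 1GB test file */
	static char buf[64 * 1024];
	struct timespec t0, t1;
	int fd, ret;

	fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);

	/* Ask the kernel to read the file ahead of time; on a memoryless
	 * CPU this is where an unpatched max_sane_readahead() returns 0
	 * and the readahead is effectively skipped. */
	ret = posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
	if (ret)
		fprintf(stderr, "posix_fadvise: %s\n", strerror(ret));

	/* Sequential read; completes faster when readahead has worked. */
	while (read(fd, buf, sizeof(buf)) > 0)
		;

	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("elapsed: %.4f s\n", (t1.tv_sec - t0.tv_sec) +
	       (t1.tv_nsec - t0.tv_nsec) / 1e9);

	close(fd);
	return 0;
}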
 mm/readahead.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 0de2360..1fa0d6f 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -233,14 +233,14 @@ int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
 	return 0;
 }
 
+#define MAX_READAHEAD	((512*4096)/PAGE_CACHE_SIZE)
 /*
  * Given a desired number of PAGE_CACHE_SIZE readahead pages, return a
  * sensible upper limit.
  */
 unsigned long max_sane_readahead(unsigned long nr)
 {
-	return min(nr, (node_page_state(numa_node_id(), NR_INACTIVE_FILE)
-		+ node_page_state(numa_node_id(), NR_FREE_PAGES)) / 2);
+	return min(nr, MAX_READAHEAD);
 }
 
 /*
-- 
1.7.11.7