From: Ram Pai
To: Andrew Morton
Cc: Paolo Ornati, gandalf@wlug.westbo.se, linux-kernel@vger.kernel.org
Subject: Re: Strange IDE performance change in 2.6.1-rc1 (again)
Date: 08 Jan 2004 17:05:58 -0800

Ok, I did some analysis and found that 'hdparm -t' generates reads of
size 1 MB, which means a single read generates 256 page requests.

do_generic_mapping_read() gets the request to read 256 pages. With the
latest change, this function first calls do_pagecache_readahead() to
bring all 256 pages into the page cache, and only then does
do_generic_mapping_read() try to access those 256 pages. By then, under
low-pagecache conditions, some of those pages may already have been
reclaimed, so we end up spending extra time reading them into the page
cache again.

I think the same problem must exist when reading regular files too.
Paolo Ornati used the cat command to read the file; cat generates just
one page request per read, so the problem did not show up. It should
show up with 'dd if=big_file of=/dev/null bs=1M count=256'.

To conclude, I think the bug is in the changes to filemap.c. If those
changes are reverted, the regression seen with block devices should go
away.

Well, this is my theory; somebody should validate it,
RP

On Wed, 2004-01-07 at 15:57, Andrew Morton wrote:
> Paolo Ornati wrote:
> >
> > I haven't done a lot of tests but it seems to me that the changes in
> > mm/filemap.c are the only things that influence the sequential read
> > performance on my disk.
>
> The fact that this only happens when reading a blockdev (true?) is a big
> hint. Maybe it is because regular files implement ->readpages.
>
> If the below patch makes read throughput worse on regular files too then
> that would confirm the idea.
>
> diff -puN mm/readahead.c~a mm/readahead.c
> --- 25/mm/readahead.c~a	Wed Jan 7 15:56:32 2004
> +++ 25-akpm/mm/readahead.c	Wed Jan 7 15:56:36 2004
> @@ -103,11 +103,6 @@ static int read_pages(struct address_spa
>  	struct pagevec lru_pvec;
>  	int ret = 0;
>
> -	if (mapping->a_ops->readpages) {
> -		ret = mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
> -		goto out;
> -	}
> -
>  	pagevec_init(&lru_pvec, 0);
>  	for (page_idx = 0; page_idx < nr_pages; page_idx++) {
>  		struct page *page = list_to_page(pages);
>
> _
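
To make the suspected window concrete, here is a toy user-space model
(not the actual kernel code; all names, the eviction pattern, and the
numbers are illustrative) of the two-pass pattern described above:
readahead populates the cache first, and the later copy-out pass can
find pages that reclaim has already evicted in between.

#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 256	/* one 1 MB read = 256 pages of 4 KB */

static bool page_cached[NR_PAGES];	/* toy stand-in for the page cache */

/* Pass 1: what do_pagecache_readahead() achieves - populate the cache
 * for the whole 256-page request up front. */
static void readahead_all_pages(void)
{
	for (int i = 0; i < NR_PAGES; i++)
		page_cached[i] = true;
}

/* Memory pressure strikes between the two passes: reclaim evicts some
 * of the pages that readahead just brought in (every 8th one here,
 * purely for illustration). */
static void reclaim_under_pressure(void)
{
	for (int i = 0; i < NR_PAGES; i += 8)
		page_cached[i] = false;
}

/* Pass 2: walk the 256 pages and copy them out; every evicted page
 * costs an extra synchronous read into the cache. */
static int copy_pages_to_user(void)
{
	int extra_reads = 0;

	for (int i = 0; i < NR_PAGES; i++) {
		if (!page_cached[i]) {
			extra_reads++;		/* re-read into the cache */
			page_cached[i] = true;
		}
		/* ... copy page i to user space ... */
	}
	return extra_reads;
}

int main(void)
{
	readahead_all_pages();
	reclaim_under_pressure();
	printf("pages read twice: %d of %d\n",
	       copy_pages_to_user(), NR_PAGES);
	return 0;
}

The smaller the request, the smaller the window between the two passes,
which is the theory's explanation for why cat's one-page reads were
unaffected while hdparm's 1 MB reads were.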
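
And for anyone who wants to test the theory without hdparm, a minimal
reproducer along these lines should behave like the dd command above.
This is only a sketch: the /dev/hda default, the 64 MB total, and the
output format are assumptions; adjust them for the machine under test.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

#define CHUNK (1024 * 1024)	/* 1 MB per read = 256 pages, like hdparm -t */
#define TOTAL (64UL * CHUNK)	/* read 64 MB in total */

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/hda";
	char *buf = malloc(CHUNK);
	struct timeval t0, t1;
	ssize_t n;
	size_t done = 0;
	int fd = open(dev, O_RDONLY);

	if (fd < 0 || !buf) {
		perror(dev);
		return 1;
	}

	/* Sequential 1 MB reads, timed end to end. */
	gettimeofday(&t0, NULL);
	while (done < TOTAL && (n = read(fd, buf, CHUNK)) > 0)
		done += n;
	gettimeofday(&t1, NULL);

	double secs = (t1.tv_sec - t0.tv_sec) +
		      (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%zu MB in %.2f s = %.2f MB/s\n", done >> 20, secs,
	       done / secs / 1e6);

	close(fd);
	free(buf);
	return 0;
}

Shrinking CHUNK to 4096 should mimic cat's one-page-per-read behaviour;
if the theory holds, the large-chunk runs slow down under memory
pressure while the small-chunk runs do not.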