Since VM_MAX_READAHEAD is greatly enlarged and the algorithm has become more
complex, it is necessary to insert some cond_resched() calls in the
read-ahead path.
If desktop users still experience audio jitter with the new read-ahead code,
please try one of the following ways to get rid of it:

1) compile the kernel with CONFIG_PREEMPT_VOLUNTARY or CONFIG_PREEMPT
2) reduce the read-ahead request size by running
        blockdev --setra 256 /dev/hda  # or whatever device you are using
Signed-off-by: Wu Fengguang <[email protected]>
---
This patch was recommended by Con Kolivas to improve response time on the
desktop. Thanks!
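In essence, the change applies the same pattern at every pagevec flush. A
minimal sketch (lru_add_batched() is a hypothetical helper, not the exact
kernel code; it assumes the 2.6.16-era pagevec API, where pagevec_add()
returns 0 once the vector is full):

	#include <linux/list.h>
	#include <linux/mm.h>
	#include <linux/pagevec.h>
	#include <linux/sched.h>

	static void lru_add_batched(struct list_head *pages)
	{
		struct pagevec lru_pvec;
		struct page *page, *next;

		pagevec_init(&lru_pvec, 0);
		list_for_each_entry_safe(page, next, pages, lru) {
			list_del(&page->lru);
			if (!pagevec_add(&lru_pvec, page)) {
				/* pagevec full; no locks are held here,
				 * so yield before the batched flush */
				cond_resched();
				__pagevec_lru_add(&lru_pvec);
			}
		}
		pagevec_lru_add(&lru_pvec);	/* flush the remainder */
	}

This bounds the non-preemptible stretch to PAGEVEC_SIZE pages at a time, no
matter how large the read-ahead window grows.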
fs/mpage.c | 4 +++-
mm/readahead.c | 9 +++++++--
2 files changed, 10 insertions(+), 3 deletions(-)
--- linux-2.6.16-rc6-mm2.orig/mm/readahead.c
+++ linux-2.6.16-rc6-mm2/mm/readahead.c
@@ -145,8 +145,10 @@ int read_cache_pages(struct address_spac
continue;
}
ret = filler(data, page);
- if (!pagevec_add(&lru_pvec, page))
+ if (!pagevec_add(&lru_pvec, page)) {
+ cond_resched();
__pagevec_lru_add(&lru_pvec);
+ }
if (ret) {
while (!list_empty(pages)) {
struct page *victim;
@@ -184,8 +186,10 @@ static int read_pages(struct address_spa
page->index, GFP_KERNEL)) {
ret = mapping->a_ops->readpage(filp, page);
if (ret != AOP_TRUNCATED_PAGE) {
- if (!pagevec_add(&lru_pvec, page))
+ if (!pagevec_add(&lru_pvec, page)) {
+ cond_resched();
__pagevec_lru_add(&lru_pvec);
+ }
continue;
} /* else fall through to release */
}
@@ -299,6 +303,7 @@ __do_page_cache_readahead(struct address
continue;
read_unlock_irq(&mapping->tree_lock);
+ cond_resched();
page = page_cache_alloc_cold(mapping);
read_lock_irq(&mapping->tree_lock);
if (!page)
--- linux-2.6.16-rc6-mm2.orig/fs/mpage.c
+++ linux-2.6.16-rc6-mm2/fs/mpage.c
@@ -407,8 +407,10 @@ mpage_readpages(struct address_space *ma
&last_block_in_bio, &map_bh,
&first_logical_block,
get_block);
- if (!pagevec_add(&lru_pvec, page))
+ if (!pagevec_add(&lru_pvec, page)) {
+ cond_resched();
__pagevec_lru_add(&lru_pvec);
+ }
} else {
page_cache_release(page);
}
--
On Sun, 2006-03-19 at 10:34 +0800, Wu Fengguang wrote:
> plain text document attachment
> (readahead-insert-cond_resched-calls.patch)
> Since VM_MAX_READAHEAD is greatly enlarged and the algorithm has become
> more complex, it is necessary to insert some cond_resched() calls in the
> read-ahead path.
>
> If desktop users still experience audio jitter with the new read-ahead code,
> please try one of the following ways to get rid of it:
>
> 1) compile the kernel with CONFIG_PREEMPT_VOLUNTARY or CONFIG_PREEMPT
> 2) reduce the read-ahead request size by running
>         blockdev --setra 256 /dev/hda  # or whatever device you are using
Do you have any numbers on this (say, from Ingo's latency tracer)? Have
you confirmed it's not a latency regression with the default settings?
Lee
On Sat, Mar 18, 2006 at 10:50:42PM -0500, Lee Revell wrote:
> On Sun, 2006-03-19 at 10:34 +0800, Wu Fengguang wrote:
> > plain text document attachment
> > (readahead-insert-cond_resched-calls.patch)
> > Since VM_MAX_READAHEAD is greatly enlarged and the algorithm has become
> > more complex, it is necessary to insert some cond_resched() calls in the
> > read-ahead path.
> >
> > If desktop users still experience audio jitter with the new read-ahead code,
> > please try one of the following ways to get rid of it:
> >
> > 1) compile the kernel with CONFIG_PREEMPT_VOLUNTARY or CONFIG_PREEMPT
> > 2) reduce the read-ahead request size by running
> >         blockdev --setra 256 /dev/hda  # or whatever device you are using
>
> Do you have any numbers on this (say, from Ingo's latency tracer)? Have
> you confirmed it's not a latency regression with the default settings?
Sorry, I did the testing simply by playing mp3s while doing heavy I/O.
The 128KB window size does not need these cond_resched() calls, but the
1024KB window size does. With this patch, I do not feel any latency
issues with CONFIG_PREEMPT_NONE.
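A rough estimate, assuming 4KB pages and PAGEVEC_SIZE == 14: a 128KB window
covers 32 pages, i.e. two or three pagevec flushes per readahead, while a
1024KB window covers 256 pages and about 18 flushes, so without
cond_resched() the non-preemptible work per readahead grows roughly
eightfold.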
But I'm not sure that holds for others, since there have been user
reports of audio jitter (without further information, hence still an
open problem).

Anyway, numbers will be collected soon ...
Cheers,
Wu
On Sat, Mar 18, 2006 at 10:50:42PM -0500, Lee Revell wrote:
> On Sun, 2006-03-19 at 10:34 +0800, Wu Fengguang wrote:
> > Since VM_MAX_READAHEAD is greatly enlarged and the algorithm has become
> > more complex, it is necessary to insert some cond_resched() calls in the
> > read-ahead path.
>
> Do you have any numbers on this (say, from Ingo's latency tracer)? Have
> you confirmed it's not a latency regression with the default settings?
Now I have some numbers from Ingo's latency tracer. When doing 1MB sized
readaheads, the original kernel shows ~5ms latency; this patch reduces that
to ~1ms.

The trace data were collected while running the following command:
cp 3G_sparse_file /dev/null
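(Reading a sparse file returns zero-filled pages without real disk I/O, so
the trace exercises the CPU-bound read-ahead path itself rather than
measuring I/O waits.)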
The resulting maximum latencies are:
wfg ~% head -n5 latency_trace1
preemption latency trace v1.1.5 on 2.6.16
--------------------------------------------------------------------
latency: 1169 us, #319/319, CPU#0 | (M:server VP:0, KP:0, SP:0 HP:0 #P:1)
-----------------
| task: xmms-5077 (uid:1000 nice:0 policy:0 rt_prio:0)
wfg ~% head -n5 latency_trace2
preemption latency trace v1.1.5 on 2.6.16
--------------------------------------------------------------------
latency: 4835 us, #14102/14102, CPU#0 | (M:server VP:0, KP:0, SP:0 HP:0 #P:1)
-----------------
| task: xmms-5065 (uid:1000 nice:0 policy:0 rt_prio:0)
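(latency_trace1 is the ~1ms case with this patch applied; latency_trace2 is
the ~5ms case without it. Both record the worst-case scheduling latency
suffered by the xmms task.)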
The following commands show that the 4835us latency is mainly spent in
	__do_page_cache_readahead(): from  575us to 1723us
	mpage_readpages():           from 1724us to 4795us
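That is 1148us in __do_page_cache_readahead() and 3071us in
mpage_readpages(), together about 87% of the 4835us total.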
wfg ~% grep readahead latency_trace1
wfg ~% grep readpages latency_trace1
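(No output: with the patch applied, read-ahead no longer shows up in the
worst-case trace at all.)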
wfg ~% grep readahead latency_trace2
cp-5068 0dn.. 575us : _read_lock_irq (__do_page_cache_readahead)
cp-5068 0dn.. 576us : radix_tree_lookup_node (__do_page_cache_readahead)
cp-5068 0dn.. 576us : _read_unlock_irq (__do_page_cache_readahead)
cp-5068 0dn.. 577us : __alloc_pages (__do_page_cache_readahead)
cp-5068 0dn.. 580us : _read_lock_irq (__do_page_cache_readahead)
cp-5068 0dn.. 580us : radix_tree_lookup_node (__do_page_cache_readahead)
cp-5068 0dn.. 581us : _read_unlock_irq (__do_page_cache_readahead)
cp-5068 0dn.. 581us : __alloc_pages (__do_page_cache_readahead)
cp-5068 0dn.. 583us : _read_lock_irq (__do_page_cache_readahead)
......[899 LINES SKIPPED]
cp-5068 0dn.. 1712us : radix_tree_lookup_node (__do_page_cache_readahead)
cp-5068 0dn.. 1712us : _read_unlock_irq (__do_page_cache_readahead)
cp-5068 0dn.. 1713us : __alloc_pages (__do_page_cache_readahead)
cp-5068 0dn.. 1715us : _read_lock_irq (__do_page_cache_readahead)
cp-5068 0dn.. 1716us : radix_tree_lookup_node (__do_page_cache_readahead)
cp-5068 0dn.. 1716us : _read_unlock_irq (__do_page_cache_readahead)
cp-5068 0dn.. 1716us : __alloc_pages (__do_page_cache_readahead)
cp-5068 0dn.. 1719us : _read_lock_irq (__do_page_cache_readahead)
cp-5068 0dn.. 1719us : radix_tree_lookup_node (__do_page_cache_readahead)
cp-5068 0dn.. 1719us : _read_unlock_irq (__do_page_cache_readahead)
cp-5068 0dn.. 1720us : __alloc_pages (__do_page_cache_readahead)
cp-5068 0dn.. 1722us : _read_lock_irq (__do_page_cache_readahead)
cp-5068 0dn.. 1723us : _read_unlock_irq (__do_page_cache_readahead)
cp-5068 0dn.. 1723us : read_pages (__do_page_cache_readahead)
cp-5068 0dn.. 4800us : readahead_cache_hit (do_generic_mapping_read)
wfg ~% grep readpages latency_trace2
cp-5068 0dn.. 1724us : ext3_readpages (read_pages)
cp-5068 0dn.. 1724us : mpage_readpages (ext3_readpages)
cp-5068 0dn.. 1725us : add_to_page_cache (mpage_readpages)
cp-5068 0dn.. 1728us : do_mpage_readpage (mpage_readpages)
cp-5068 0dn.. 1742us : add_to_page_cache (mpage_readpages)
cp-5068 0dn.. 1744us : do_mpage_readpage (mpage_readpages)
cp-5068 0dn.. 1753us : add_to_page_cache (mpage_readpages)
cp-5068 0dn.. 1755us : do_mpage_readpage (mpage_readpages)
cp-5068 0dn.. 1764us : add_to_page_cache (mpage_readpages)
......[508 LINES SKIPPED]
cp-5068 0dn.. 4716us : do_mpage_readpage (mpage_readpages)
cp-5068 0dn.. 4726us : add_to_page_cache (mpage_readpages)
cp-5068 0dn.. 4728us : do_mpage_readpage (mpage_readpages)
cp-5068 0dn.. 4737us : add_to_page_cache (mpage_readpages)
cp-5068 0dn.. 4739us : do_mpage_readpage (mpage_readpages)
cp-5068 0dn.. 4748us : __pagevec_lru_add (mpage_readpages)
cp-5068 0dn.. 4750us : add_to_page_cache (mpage_readpages)
cp-5068 0dn.. 4752us : do_mpage_readpage (mpage_readpages)
cp-5068 0dn.. 4761us : add_to_page_cache (mpage_readpages)
cp-5068 0dn.. 4763us : do_mpage_readpage (mpage_readpages)
cp-5068 0dn.. 4772us : add_to_page_cache (mpage_readpages)
cp-5068 0dn.. 4774us : do_mpage_readpage (mpage_readpages)
cp-5068 0dn.. 4784us : add_to_page_cache (mpage_readpages)
cp-5068 0dn.. 4785us : do_mpage_readpage (mpage_readpages)
cp-5068 0dn.. 4795us : __pagevec_lru_add (mpage_readpages)
Cheers,
Wu