2001-12-16 03:42:06

by David Gómez

Subject: Copying to loop device hangs up everything


Hi,

I'm using kernel 2.4.17-rc1 and found what I think is a bug, maybe related
to the loop device. This is the situation:

I've created an ext2 image (around 550MB) and mounted it as loopback.
Then I tried to copy some files from another ext2 image, also mounted on
another loop device, with 'cp -a'. After some data had been copied, I/O
stopped but the system was still usable; the loop and cp processes were in
D state. The loop devices couldn't be unmounted, so I rebooted the computer,
ran e2fsck on the images because of the reboot, and tried again to copy the
data, this time successfully.
Next, I had some more data in my root partition to add to the ext2 images,
so I did another cp -a of a directory (around 200MB of data) to the
ext2 image mounted as loop. This time I got a 'full hang' ;). I couldn't
log in, an alt+sysrq+t showed that cp and loop were again in D state, and
syncing/unmounting with the magic key didn't work at all. I can reproduce
this hang every time by copying the data to the mounted loop device.

All the data is on the same disk (hda1), which is an ext2 partition.
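
(For context on the setup: an image file gets bound to /dev/loopN through the
LOOP_SET_FD ioctl, which is roughly what 'mount -o loop' arranges before
mounting. A minimal sketch of that step; the program name is made up, error
handling is trimmed, and /dev/loop0 is hard-coded:)

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/loop.h>

int main(int argc, char **argv)
{
        /* usage: ./loopbind image.ext2 -- bind the image to /dev/loop0 */
        int file, loop;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <image>\n", argv[0]);
                return 1;
        }
        file = open(argv[1], O_RDWR);
        loop = open("/dev/loop0", O_RDWR);
        if (file < 0 || loop < 0) {
                perror("open");
                return 1;
        }
        if (ioctl(loop, LOOP_SET_FD, file) < 0) {
                perror("LOOP_SET_FD");
                return 1;
        }
        printf("bound %s to /dev/loop0\n", argv[1]);
        return 0;
}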

Any ideas about what is causing this ?



David Gómez

"The question of whether computers can think is just like the question of
whether submarines can swim." -- Edsger W. Dijkstra



2001-12-16 04:01:00

by Dave Jones

Subject: Re: Copying to loop device hangs up everything

On Sun, 16 Dec 2001, David Gomez wrote:

> I'm using kernel 2.4.17-rc1 and found what I think is a bug, maybe related
> to the loop device. This is the situation:

Can you repeat it with this applied ?
ftp://ftp.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.17rc1aa1/00_loop-deadlock-1

regards,
Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-16 11:42:18

by David Gómez

Subject: Re: Copying to loop device hangs up everything


On Sun, 16 Dec 2001, Dave Jones wrote:

> > I'm using kernel 2.4.17-rc1 and found what I think is a bug, maybe related
> > to the loop device. This is the situation:
>
> Can you repeat it with this applied ?
> ftp://ftp.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.17rc1aa1/00_loop-deadlock-1

Thanks ;), this patch solves the problem and copying a lot of data to the
loop device now doesn't hang the computer.

Is this patch going to be applied to the stable kernel ? Marcelo ?



David Gómez

"The question of whether computers can think is just like the question of
whether submarines can swim." -- Edsger W. Dijkstra


2001-12-16 18:47:25

by Momchil Velikov

Subject: Re: Copying to loop device hangs up everything

>>>>> "David" == David Gomez <[email protected]> writes:

David> On Sun, 16 Dec 2001, Dave Jones wrote:

>> > I'm using kernel 2.4.17-rc1 and found what I think is a bug, maybe related
>> > to the loop device. This is the situation:
>>
>> Can you repeat it with this applied ?
>> ftp://ftp.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.17rc1aa1/00_loop-deadlock-1

David> Thanks ;), this patch solves the problem and copying a lot of data to the
David> loop device now doesn't hang the computer.

David> Is this patch going to be applied to the stable kernel ? Marcelo ?

I've had exactly the same hangups with or without the patch.

2001-12-16 19:43:37

by David Gómez

Subject: Re: Copying to loop device hangs up everything


On 16 Dec 2001, Momchil Velikov wrote:

> [...]
>
> David> Thanks ;), this patch solves the problem and copying a lot of data to the
> David> loop device now doesn't hang the computer.
>
> David> Is this patch going to be applied to the stable kernel ? Marcelo ?
>
> I've had exactly the same hangups with or without the patch.

I've tested several times after applying the loop-deadlock patch and the
bug seems to be fixed. No more hangups while copying a lot of data to
loopback devices. Post more info about your hangups; maybe it's a
different loop device deadlock.


David Gómez

"The question of whether computers can think is just like the question of
whether submarines can swim." -- Edsger W. Dijkstra


2001-12-16 19:55:38

by Momchil Velikov

Subject: Re: Copying to loop device hangs up everything

>>>>> "David" == David Gomez <[email protected]> writes:

David> On 16 Dec 2001, Momchil Velikov wrote:

>> [...]
>>
David> Thanks ;), this patch solves the problem and copying a lot of data to the
David> loop device now doesn't hang the computer.
>>
David> Is this patch going to be applied to the stable kernel ? Marcelo ?
>>
>> I've had exactly the same hangups with or without the patch.

David> I've tested several times after applying the loop-deadlock patch and the
David> bug seems to be fixed. No more hangups while copying a lot of data to
David> loopback devices. Post more info about your hangups; maybe it's a
David> different loop device deadlock.

Maybe it's different, I don't know. It looks like I've found a fix, and in
a minute I'll test _without_ Andrea's patch and post whatever
comes out of it.

Regards,
-velco

2001-12-16 22:06:15

by Momchil Velikov

Subject: Re: Copying to loop device hangs up everything

>>>>> "Momchil" == Momchil Velikov <[email protected]> writes:

>>>>> "David" == David Gomez <[email protected]> writes:
David> On 16 Dec 2001, Momchil Velikov wrote:

>>> [...]
>>>
David> Thanks ;), this patch solves the problem and copying a lot of data to the
David> loop device now doesn't hang the computer.
>>>
David> Is this patch going to be applied to the stable kernel ? Marcelo ?
>>>
>>> I've had exactly the same hangups with or without the patch.

David> I've tested several times after applying the loop-deadlock patch and the
David> bug seems to be fixed. No more hangups while copying a lot of data to
David> loopback devices. Post more info about your hangups; maybe it's a
David> different loop device deadlock.

Momchil> Maybe it's different, I don't know. It looks like I've found a fix, and in
Momchil> a minute I'll test _without_ Andrea's patch and post whatever
Momchil> comes out of it.

It turned out that Andrea's patch is needed and it needs to be
augmented slightly. The loop_thread can do the following:

loop_thread
-> do_bh_filebacked
-> lo_send
-> ...
-> kmem_cache_alloc
-> ...
-> shrink_cache
-> try_to_release_page
-> try_to_free_buffers
-> sync_page_buffers
-> __wait_on_buffer

And if the buffer must be flushed to the loopback device we deadlock.

The following patch is Andrea's plus one additional change -- we
don't allow the loop_thread to wait in sync_page_buffers.

Regards,
-velco

diff -Nru a/drivers/block/loop.c b/drivers/block/loop.c
--- a/drivers/block/loop.c Sun Dec 16 23:50:25 2001
+++ b/drivers/block/loop.c Sun Dec 16 23:50:25 2001
@@ -578,6 +578,8 @@
atomic_inc(&lo->lo_pending);
spin_unlock_irq(&lo->lo_lock);

+ current->flags |= PF_NOIO;
+
/*
* up sem, we are running
*/
diff -Nru a/fs/buffer.c b/fs/buffer.c
--- a/fs/buffer.c Sun Dec 16 23:50:25 2001
+++ b/fs/buffer.c Sun Dec 16 23:50:25 2001
@@ -1045,7 +1045,7 @@

/* First, check for the "real" dirty limit. */
if (dirty > soft_dirty_limit) {
- if (dirty > hard_dirty_limit)
+ if (dirty > hard_dirty_limit && !(current->flags & PF_NOIO))
return 1;
return 0;
}
@@ -2448,6 +2448,8 @@
/* Second time through we start actively writing out.. */
if (test_and_set_bit(BH_Lock, &bh->b_state)) {
if (!test_bit(BH_launder, &bh->b_state))
+ continue;
+ if (current->flags & PF_NOIO)
continue;
wait_on_buffer(bh);
tryagain = 1;
diff -Nru a/include/linux/sched.h b/include/linux/sched.h
--- a/include/linux/sched.h Sun Dec 16 23:50:25 2001
+++ b/include/linux/sched.h Sun Dec 16 23:50:25 2001
@@ -426,6 +426,7 @@
#define PF_MEMALLOC 0x00000800 /* Allocating memory */
#define PF_MEMDIE 0x00001000 /* Killed for out-of-memory */
#define PF_FREE_PAGES 0x00002000 /* per process page freeing */
+#define PF_NOIO 0x00004000 /* avoid generating further I/O */

#define PF_USEDFPU 0x00100000 /* task used FPU this quantum (SMP) */
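
(An aside on the fs/buffer.c hunk above: the extra test means a PF_NOIO task
skips a locked buffer instead of sleeping in wait_on_buffer(), since the only
thread able to unlock it may be the caller itself. The toy userspace program
below -- invented struct, names and cases, not kernel code -- just acts out
that decision:)

#include <stdio.h>

#define PF_NOIO 0x00004000      /* same value the sched.h hunk adds */

struct toy_buffer {
        int locked;             /* stands in for BH_Lock */
};

/* What may a task scanning for freeable pages do with this buffer? */
static const char *scan_buffer(const struct toy_buffer *bh,
                               unsigned int task_flags)
{
        if (!bh->locked)
                return "write it out";
        if (task_flags & PF_NOIO)
                return "skip it -- we may be the one who must unlock it";
        return "wait_on_buffer()";
}

int main(void)
{
        struct toy_buffer idle_bh = { 0 }, busy_bh = { 1 };
        unsigned int cp = 0, loop_thread = PF_NOIO;

        printf("cp, unlocked bh:        %s\n", scan_buffer(&idle_bh, cp));
        printf("cp, locked bh:          %s\n", scan_buffer(&busy_bh, cp));
        printf("loop_thread, locked bh: %s\n", scan_buffer(&busy_bh, loop_thread));
        return 0;
}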

2001-12-17 03:30:33

by Dave Jones

Subject: Re: Copying to loop device hangs up everything

On 16 Dec 2001, Momchil Velikov wrote:

> >> Can you repeat it with this applied ?
> >> ftp://ftp.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.17rc1aa1/00_loop-deadlock-1
> I've had exactly the same hangups with or without the patch.

You could be hitting a different bug.. highmem box ?

Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-18 21:01:13

by Marcelo Tosatti

Subject: Re: Copying to loop device hangs up everything


Momchil,

Your fix does not look right. We _have_ to sync pages at
sync_page_buffers(), we cannot "ignore" them.

On 16 Dec 2001, Momchil Velikov wrote:

> >>>>> "Momchil" == Momchil Velikov <[email protected]> writes:
>
> >>>>> "David" == David Gomez <[email protected]> writes:
> David> On 16 Dec 2001, Momchil Velikov wrote:
>
> >>> [...]
> >>>
> David> Thanks ;), this patch solves the problem and copying a lot of data to the
> David> loop device now doesn't hang the computer.
> >>>
> David> Is this patch going to be applied to the stable kernel ? Marcelo ?
> >>>
> >>> I've had exactly the same hangups with or without the patch.
>
> David> I've tested several times after applying the loop-deadlock patch and the
> David> bug seems to be fixed. No more hangups while copying a lot of data to
> David> loopback devices. Post more info about your hangups; maybe it's a
> David> different loop device deadlock.
>
> Momchil> Maybe it's different, I don't know. It looks like I've found a fix, and in
> Momchil> a minute I'll test _without_ Andrea's patch and post whatever
> Momchil> comes out of it.
>
> It turned out that Andrea's patch is needed and it needs to be
> augmented slightly. The loop_thread can do the following:
>
> loop_thread
> -> do_bh_filebacked
> -> lo_send
> -> ...
> -> kmem_cache_alloc
> -> ...
> -> shrink_cache
> -> try_to_release_page
> -> try_to_free_buffers
> -> sync_page_buffers
> -> __wait_on_buffer
>
> And if the buffer must be flushed to the loopback device we deadlock.
>
> The following patch is Andrea's plus one additional change -- we
> don't allow the loop_thread to wait in sync_page_buffers.
>
> Regards,
> -velco
>
> diff -Nru a/drivers/block/loop.c b/drivers/block/loop.c
> --- a/drivers/block/loop.c Sun Dec 16 23:50:25 2001
> +++ b/drivers/block/loop.c Sun Dec 16 23:50:25 2001
> @@ -578,6 +578,8 @@
> atomic_inc(&lo->lo_pending);
> spin_unlock_irq(&lo->lo_lock);
>
> + current->flags |= PF_NOIO;
> +
> /*
> * up sem, we are running
> */
> diff -Nru a/fs/buffer.c b/fs/buffer.c
> --- a/fs/buffer.c Sun Dec 16 23:50:25 2001
> +++ b/fs/buffer.c Sun Dec 16 23:50:25 2001
> @@ -1045,7 +1045,7 @@
>
> /* First, check for the "real" dirty limit. */
> if (dirty > soft_dirty_limit) {
> - if (dirty > hard_dirty_limit)
> + if (dirty > hard_dirty_limit && !(current->flags & PF_NOIO))
> return 1;
> return 0;
> }
> @@ -2448,6 +2448,8 @@
> /* Second time through we start actively writing out.. */
> if (test_and_set_bit(BH_Lock, &bh->b_state)) {
> if (!test_bit(BH_launder, &bh->b_state))
> + continue;
> + if (current->flags & PF_NOIO)
> continue;
> wait_on_buffer(bh);
> tryagain = 1;
> diff -Nru a/include/linux/sched.h b/include/linux/sched.h
> --- a/include/linux/sched.h Sun Dec 16 23:50:25 2001
> +++ b/include/linux/sched.h Sun Dec 16 23:50:25 2001
> @@ -426,6 +426,7 @@
> #define PF_MEMALLOC 0x00000800 /* Allocating memory */
> #define PF_MEMDIE 0x00001000 /* Killed for out-of-memory */
> #define PF_FREE_PAGES 0x00002000 /* per process page freeing */
> +#define PF_NOIO 0x00004000 /* avoid generating further I/O */
>
> #define PF_USEDFPU 0x00100000 /* task used FPU this quantum (SMP) */
>
>

2001-12-18 21:10:43

by Momchil Velikov

Subject: Re: Copying to loop device hangs up everything

>>>>> "Marcelo" == Marcelo Tosatti <[email protected]> writes:

Marcelo> Momchil,

Marcelo> Your fix does not look right. We _have_ to sync pages at
Marcelo> sync_page_buffers(), we cannot "ignore" them.

Sure, we don't ignore them, we just don't _wait_ for them, because
maybe _we_ are the ones who have to write them.

Regards,
-velco

2001-12-18 21:14:36

by Marcelo Tosatti

Subject: Re: Copying to loop device hangs up everything



On 18 Dec 2001, Momchil Velikov wrote:

> >>>>> "Marcelo" == Marcelo Tosatti <[email protected]> writes:
>
> Marcelo> Momchil,
>
> Marcelo> Your fix does not look right. We _have_ to sync pages at
> Marcelo> sync_page_buffers(), we cannot "ignore" them.
>
> Sure, we don't ignore them, we just don't _wait_ for them, because
> maybe _we_ are the ones who have to write them.

What if we are not ?

2001-12-18 21:45:13

by Momchil Velikov

Subject: Re: Copying to loop device hangs up everything

>>>>> "Marcelo" == Marcelo Tosatti <[email protected]> writes:

Marcelo> Momchil,

Marcelo> Your fix does not look right. We _have_ to sync pages at
Marcelo> sync_page_buffers(), we cannot "ignore" them.

>> Sure, we don't ignore them, we just don't _wait_ for them, because
>> maybe _we_ are the ones who have to write them.

Marcelo> What if we are not ?

Hmm, it looks like we pray to find another immediately usable page, to
finish _this_ request first, and then we will ``loop_get_bh'' the
buffer we just avoided waiting on and sync it.

Hmm, _maybe_ it is a good idea for buffers submitted for IO by the
loopback thread itself to go _in front_ of the loopback queue ?

Regards,
-velco

2001-12-19 13:43:06

by Andrea Arcangeli

Subject: Re: Copying to loop device hangs up everything

On Wed, Dec 19, 2001 at 12:14:07AM -0800, Andrew Morton wrote:
> Andrew Morton wrote:
> >
> > Andrew Morton wrote:
> > >
> > > I want to know how the loop thread ever hit sync_page_buffers.
> >
> > __block_prepare_write
> > ->get_unused_buffer_head
> > ->kmem_cache_alloc(SLAB_NOFS)
> >
> > Shouldn't we be using the address space's GFP flags for bufferhead
> > allocation, rather than cooking up a new one?
> >
>
> Um. That won't work. There are in fact many ways in which loopback
> can deadlock, and propagating gfp flags through all the fs code
> paths won't cut it.
>
> Here's one such deadlock:
>
> __wait_on_buffer
> sync_page_buffers
> try_to_free_buffers
> try_to_release_page
> shrink_cache
> shrink_caches
> try_to_free_pages
> balance_classzone
> __alloc_pages
> _alloc_pages
> find_or_create_page
> grow_dev_page
> grow_buffers
> getblk
> bread
> ext2_get_branch
> ext2_get_block
> __block_prepare_write
> block_prepare_write
> ext2_prepare_write
> lo_send
> do_bh_filebacked
> loop_thread
> kernel_thread
>
> I was able to get a multithread deadlock where the loop thread was waiting
> on an ext2 buffer which was sitting in the loop thread's input queue,
> waiting to be written by the loop thread. Ugly.
>
> The thing I don't like about the Andrea+Momchil approach is that it
> exposes the risk of flooding the machine with dirty data. A scheme

It doesn't; balance_dirty() has to work only at the high level.
sync_page_buffers is also no problem; we'll try again later in those
GFP_NOIO allocations.

Furthermore, you don't even address the writepage from the loop thread on the
loop queue.

The final fix should be in rc2aa1, which I will release in a jiffy. It
now takes care of both the VM and balance_dirty().

This is the incremental fix against rc1aa1:

diff -urN loop-ref/fs/buffer.c loop/fs/buffer.c
--- loop-ref/fs/buffer.c Wed Dec 19 04:17:30 2001
+++ loop/fs/buffer.c Wed Dec 19 03:43:24 2001
@@ -2547,6 +2547,7 @@
/* Uhhuh, start writeback so that we don't end up with all dirty pages */
write_unlock(&hash_table_lock);
spin_unlock(&lru_list_lock);
+ gfp_mask = pf_gfp_mask(gfp_mask);
if (gfp_mask & __GFP_IO && !(current->flags & PF_ATOMICALLOC)) {
if ((gfp_mask & __GFP_HIGHIO) || !PageHighMem(page)) {
if (sync_page_buffers(bh)) {
diff -urN loop-ref/include/linux/mm.h loop/include/linux/mm.h
--- loop-ref/include/linux/mm.h Wed Dec 19 04:17:30 2001
+++ loop/include/linux/mm.h Wed Dec 19 04:15:52 2001
@@ -562,6 +562,15 @@

#define GFP_DMA __GFP_DMA

+static inline unsigned int pf_gfp_mask(unsigned int gfp_mask)
+{
+ /* avoid all memory balancing I/O methods if this task cannot block on I/O */
+ if (current->flags & PF_NOIO)
+ gfp_mask &= ~(__GFP_IO | __GFP_HIGHIO | __GFP_FS);
+
+ return gfp_mask;
+}
+
extern int heap_stack_gap;

/*
diff -urN loop-ref/mm/vmscan.c loop/mm/vmscan.c
--- loop-ref/mm/vmscan.c Wed Dec 19 04:17:30 2001
+++ loop/mm/vmscan.c Wed Dec 19 03:43:24 2001
@@ -611,6 +611,8 @@

int try_to_free_pages(zone_t *classzone, unsigned int gfp_mask, unsigned int order)
{
+ gfp_mask = pf_gfp_mask(gfp_mask);
+
for (;;) {
int tries = vm_scan_ratio << 2;
int failed_swapout = !(gfp_mask & __GFP_IO);


try_to_free_pages needs an explicit wrapping because it can be called
not only from the VM.
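
(To see what pf_gfp_mask() actually does to an allocation mask, here is the
same bit-stripping lifted into a standalone userspace program. The PF_NOIO
value matches the patch; the __GFP_* values are placeholders for this demo,
not the real 2.4 definitions:)

#include <stdio.h>

#define PF_NOIO       0x00004000        /* task must not block on I/O    */
#define __GFP_IO      0x40              /* placeholder values; the real  */
#define __GFP_HIGHIO  0x80              /* ones live in linux/mm.h       */
#define __GFP_FS      0x100

static unsigned int current_flags;      /* stands in for current->flags */

static unsigned int pf_gfp_mask(unsigned int gfp_mask)
{
        /* avoid all memory balancing I/O methods if this task
         * cannot block on I/O -- same logic as the patch above */
        if (current_flags & PF_NOIO)
                gfp_mask &= ~(__GFP_IO | __GFP_HIGHIO | __GFP_FS);
        return gfp_mask;
}

int main(void)
{
        unsigned int mask = __GFP_IO | __GFP_HIGHIO | __GFP_FS;

        printf("normal task:  %#x -> %#x\n", mask, pf_gfp_mask(mask));
        current_flags |= PF_NOIO;       /* what loop_thread sets at startup */
        printf("PF_NOIO task: %#x -> %#x\n", mask, pf_gfp_mask(mask));
        return 0;
}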

Andrea

2001-12-20 07:44:00

by Andrew Morton

Subject: Re: Copying to loop device hangs up everything

Andrea Arcangeli wrote:
>
> ...
> > The thing I don't like about the Andrea+Momchil approach is that it
> > exposes the risk of flooding the machine with dirty data. A scheme
>
> It doesn't; balance_dirty() has to work only at the high level.
> sync_page_buffers is also no problem; we'll try again later in those
> GFP_NOIO allocations.

Not so. The loop thread *copies* the data. We must throttle it,
otherwise the loop thread gobbles all memory and the box dies. This
is trivial to demonstrate.

> Furthermore, you don't even address the writepage from the loop thread on the
> loop queue.

How can this deadlock? The only path to those buffers is
via the page, and the page is locked.

> The final fix should be in rc2aa1, which I will release in a jiffy. It
> now takes care of both the VM and balance_dirty().
>
> This is the incremental fix against rc1aa1:
>

No. Your patch removes *all* loop thread throttling; it doesn't even start
IO (thus removing the throttling which request starvation would provide)
and doesn't even wake up bdflush.

If you set nfract to 70%, nfract_sync to 80% and do a big write, the
machine falls into a VM coma within 15 seconds. The same happens
with both my patches :-(

And it's not legitimate to say "don't do that". If we can't survive
those settings, we don't have a solution. We need to throttle writes
*more*, not less.

I'll keep poking at it. If you have any more suggestions/patches,
please toss them over...


2001-12-20 11:44:17

by Andrea Arcangeli

Subject: Re: Copying to loop device hangs up everything

On Wed, Dec 19, 2001 at 11:41:47PM -0800, Andrew Morton wrote:
> Andrea Arcangeli wrote:
> >
> > ...
> > > The thing I don't like about the Andrea+Momchil approach is that it
> > > exposes the risk of flooding the machine with dirty data. A scheme
> >
> > It doesn't; balance_dirty() has to work only at the high level.
> > sync_page_buffers is also no problem; we'll try again later in those
> > GFP_NOIO allocations.
>
> Not so. The loop thread *copies* the data. We must throttle it,
> otherwise the loop thread gobbles all memory and the box dies. This
> is trivial to demonstrate.

The loop thread can generate as many dirty bhs as it wants; as I said, it's
the higher layer that takes care of the write throttling, so we don't need
to stop in the loop thread. While the loop thread writes, the higher layer
will throttle. It's as simple as that.

Also, the write throttling isn't about correctness. Once there are too
many dirty bhs, the VM will take care of flushing them. So whatever you
claim about balance_dirty() is the last of my worries; I've just had
reports that loop write performance is greatly improved with my recent
-aa async flushing changes.

> > Furthermore, you don't even address the writepage from the loop thread on the
> > loop queue.
>
> How can this deadlock? The only path to those buffers is
> via the page, and the page is locked.

loop -> writepage -> get_block -> ll_rw_block -> read metadata and queue
the reads into the loop queue -> wait_on_buffer -> deadlock

In general the loop thread must never do ll_rw_block on unknown
buffers.

> > The final fix should be in rc2aa1, which I will release in a jiffy. It
> > now takes care of both the VM and balance_dirty().
> >
> > This is the incremental fix against rc1aa1:
> >
>
> No. Your patch removes *all* loop thread throttling; it doesn't even start
> IO (thus removing the throttling which request starvation would provide)
> and doesn't even wake up bdflush.
>
> If you set nfract to 70%, nfract_sync to 80% and do a big write, the
> machine falls into a VM coma within 15 seconds. The same happens
> with both my patches :-(
>
> And it's not legitimate to say "don't do that". If we can't survive
> those settings, we don't have a solution. We need to throttle writes
> *more*, not less.

We throttle, always, at every write submitted by the user. Check
mark_buffer_dirty, and the pagecache write-side operations that call
balance_dirty explicitly (blkdev on pagecache included, of course). That
works fine in practice. The next balance_dirty() will throttle what the
loop has written.
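
(A toy model of that claim, nothing more: made-up limits, one thread standing
in for both tasks, and writeback reduced to decrementing a counter. The upper
layer is throttled on the combined dirty count while the PF_NOIO loop thread
never throttles itself, and the total stays bounded. Whether this survives
real VM pressure is exactly what is being disputed in this thread:)

#include <stdio.h>

#define HARD_LIMIT 50

static int dirty;                       /* dirty bhs from both layers */

static void balance_dirty(void)
{
        while (dirty > HARD_LIMIT)      /* writer blocks doing writeback, */
                dirty -= 2;             /* modelled as draining two bhs   */
}

static void user_write(void)            /* upper layer: cp into the fs */
{
        dirty++;                        /* mark_buffer_dirty()           */
        balance_dirty();                /* throttled here, at high level */
}

static void loop_thread_copy(void)      /* lower layer: PF_NOIO, unthrottled */
{
        dirty++;                        /* re-dirties data against backing file */
}

int main(void)
{
        int i;

        for (i = 0; i < 1000; i++) {
                user_write();
                loop_thread_copy();
                if (i % 250 == 0)
                        printf("iteration %4d: dirty=%d\n", i, dirty);
        }
        printf("final: dirty=%d (bounded near HARD_LIMIT=%d)\n",
               dirty, HARD_LIMIT);
        return 0;
}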

>
> I'll keep poking at it. If you have any more suggestions/patches,
> please toss them over...

I don't think other changes are needed, and it works fine on my systems:
all my 100MB email folders are on top of cryptoloop, so if I'm writing
this it means it works. I tried a cp of /dev/zero into the cryptoloop and
it seems fine as well.

The other deadlock report I had may be yet another problem; I believe
the loop is just fine in terms of both balance_dirty() and the VM.

If you really want to balance_dirty() at every loop write, we can do that
by simply tracking which bhs sit on top of the loop thread, but I don't
think it's necessary. It could be a combination of PF_NOIO and a BH_
bitflag that triggers the balance_dirty(); the same fine-grained level
could be used by the VM as well. But again, I don't think it's needed:
the per-process PF_NOIO, which avoids all the balance_dirty() calls and
forbids the loop thread to use I/O methods to release memory, should be
enough to have a stable system performing well.

Andrea

2001-12-20 11:44:27

by Andrea Arcangeli

Subject: Re: Copying to loop device hangs up everything

On Thu, Dec 20, 2001 at 12:27:35PM +0100, Andrea Arcangeli wrote:
> the loop thread to use I/O methods to release memory, should be
> enough to have a stable system performing well.

Forgot to mention: I'd really love to be proved wrong on this one in
practice (if you have a patch that makes the system faster than rc2aa1,
you will certainly change my mind in a jiffy). Just grab rc2aa1 and see
whether the loop is performing well for you or not; I'm satisfied with
it here.

My main worry at the moment is the other deadlock report from yesterday
(still waiting for the SYSRQ+T to see what went wrong for him :)

Andrea

2001-12-20 21:21:35

by Momchil Velikov

Subject: Re: Copying to loop device hangs up everything


Ok, I'm convinced, given that writers are throttled above the loopback
thread. Andrea's patch follows, but against 2.4.17-rc2 and with the
now-unused gfp_mask parameter of sync_page_buffers removed.

Regards,
-velco

--- 1.5/fs/buffer.c Tue Dec 18 15:40:18 2001
+++ edited/fs/buffer.c Thu Dec 20 22:45:36 2001
@@ -2432,7 +2432,7 @@
return 1;
}

-static int sync_page_buffers(struct buffer_head *head, unsigned int gfp_mask)
+static int sync_page_buffers(struct buffer_head *head)
{
struct buffer_head * bh = head;
int tryagain = 0;
@@ -2533,9 +2533,10 @@
/* Uhhuh, start writeback so that we don't end up with all dirty pages */
write_unlock(&hash_table_lock);
spin_unlock(&lru_list_lock);
+ gfp_mask = pf_gfp_mask(gfp_mask);
if (gfp_mask & __GFP_IO) {
if ((gfp_mask & __GFP_HIGHIO) || !PageHighMem(page)) {
- if (sync_page_buffers(bh, gfp_mask)) {
+ if (sync_page_buffers(bh)) {
/* no IO or waiting next time */
gfp_mask = 0;
goto cleaned_buffers_try_again;
--- 1.2/include/linux/mm.h Sat Dec 8 02:36:12 2001
+++ edited/include/linux/mm.h Thu Dec 20 22:49:04 2001
@@ -547,6 +547,14 @@
platforms, used as appropriate on others */

#define GFP_DMA __GFP_DMA
+static inline unsigned int pf_gfp_mask(unsigned int gfp_mask)
+{
+ /* avoid all memory balancing I/O methods if this task cannot block on I/O */
+ if (current->flags & PF_NOIO)
+ gfp_mask &= ~(__GFP_IO | __GFP_HIGHIO | __GFP_FS);
+
+ return gfp_mask;
+}

/* vma is the first one with address < vma->vm_end,
* and even address < vma->vm_start. Have to extend vma. */
--- 1.2/mm/vmscan.c Tue Dec 18 15:40:23 2001
+++ edited/mm/vmscan.c Thu Dec 20 22:49:47 2001
@@ -588,6 +588,8 @@
int priority = DEF_PRIORITY;
int nr_pages = SWAP_CLUSTER_MAX;

+ gfp_mask = pf_gfp_mask(gfp_mask);
+
do {
nr_pages = shrink_caches(classzone, priority, gfp_mask, nr_pages);
if (nr_pages <= 0)