2007-08-20 21:53:44

by Christoph Lameter

Subject: [RFC 3/7] shrink_page_list: Support isolating dirty pages on laundry list

If a laundry list is specified, do not write out pages; instead, put
dirty pages on the laundry list for later processing.

Signed-off-by: Christoph Lameter <[email protected]>

---
mm/vmscan.c | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)

Index: linux-2.6/mm/vmscan.c
===================================================================
--- linux-2.6.orig/mm/vmscan.c 2007-08-19 23:13:28.000000000 -0700
+++ linux-2.6/mm/vmscan.c 2007-08-19 23:27:00.000000000 -0700
@@ -380,16 +380,22 @@ cannot_free:
}

/*
- * shrink_page_list() returns the number of reclaimed pages
+ * shrink_page_list() returns the number of reclaimed pages.
+ *
+ * If laundry is specified then dirty pages are put onto the
+ * laundry list and no writes are triggered.
*/
static unsigned long shrink_page_list(struct list_head *page_list,
- struct scan_control *sc)
+ struct scan_control *sc, struct list_head *laundry)
{
LIST_HEAD(ret_pages);
struct pagevec freed_pvec;
int pgactivate = 0;
unsigned long nr_reclaimed = 0;

+ if (list_empty(page_list))
+ return 0;
+
cond_resched();

pagevec_init(&freed_pvec, 1);
@@ -407,10 +413,11 @@ static unsigned long shrink_page_list(st
if (TestSetPageLocked(page))
goto keep;

- VM_BUG_ON(PageActive(page));
-
sc->nr_scanned++;

+ if (PageActive(page))
+ goto keep_locked;
+
if (!sc->may_swap && page_mapped(page))
goto keep_locked;

@@ -506,6 +513,12 @@ static unsigned long shrink_page_list(st
if (!may_write_to_queue(mapping->backing_dev_info))
goto keep_locked;

+ if (laundry) {
+ list_add(&page->lru, laundry);
+ unlock_page(page);
+ continue;
+ }
+
/* Page is dirty, try to write it out here */
switch(pageout(page, mapping)) {
case PAGE_ACTIVATE:
@@ -817,7 +830,7 @@ static unsigned long shrink_inactive_lis
spin_unlock_irq(&zone->lru_lock);

nr_scanned += nr_scan;
- nr_freed = shrink_page_list(&page_list, sc);
+ nr_freed = shrink_page_list(&page_list, sc, NULL);
nr_reclaimed += nr_freed;
local_irq_disable();
if (current_is_kswapd()) {
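The deferral the patch introduces can be sketched in plain userspace C. The names below (toy_shrink, struct tpage) are illustrative stand-ins, not the kernel's; a singly linked list replaces struct list_head so the sketch is self-contained:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for struct page: only the fields the sketch needs. */
struct tpage {
	int dirty;
	struct tpage *next;
};

/*
 * Walk page_list and reclaim pages.  If laundry is non-NULL, dirty
 * pages are moved onto *laundry instead of being written out,
 * mirroring the new branch in shrink_page_list().
 */
static unsigned long toy_shrink(struct tpage **page_list,
				struct tpage **laundry)
{
	unsigned long reclaimed = 0;
	struct tpage *p = *page_list;

	*page_list = NULL;
	while (p) {
		struct tpage *next = p->next;

		if (p->dirty && laundry) {
			/* Defer: no write is triggered here. */
			p->next = *laundry;
			*laundry = p;
		} else {
			if (p->dirty)
				p->dirty = 0;	/* "write it out here" */
			reclaimed++;		/* page is now reclaimable */
		}
		p = next;
	}
	return reclaimed;
}
```

The intended usage is two passes: one with a laundry list to collect dirty pages without I/O, then the laundry list itself run back through with laundry == NULL to do the actual writes.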

--


2007-08-21 15:04:55

by mel

Subject: Re: [RFC 3/7] shrink_page_list: Support isolating dirty pages on laundry list

On (20/08/07 14:50), Christoph Lameter didst pronounce:
> If a laundry list is specified, do not write out pages; instead, put
> dirty pages on the laundry list for later processing.
>
> Signed-off-by: Christoph Lameter <[email protected]>
>
> ---
> mm/vmscan.c | 23 ++++++++++++++++++-----
> 1 file changed, 18 insertions(+), 5 deletions(-)
>
> Index: linux-2.6/mm/vmscan.c
> ===================================================================
> --- linux-2.6.orig/mm/vmscan.c 2007-08-19 23:13:28.000000000 -0700
> +++ linux-2.6/mm/vmscan.c 2007-08-19 23:27:00.000000000 -0700
> @@ -380,16 +380,22 @@ cannot_free:
> }
>
> /*
> - * shrink_page_list() returns the number of reclaimed pages
> + * shrink_page_list() returns the number of reclaimed pages.
> + *
> + * If laundry is specified then dirty pages are put onto the
> + * laundry list and no writes are triggered.
> */
> static unsigned long shrink_page_list(struct list_head *page_list,
> - struct scan_control *sc)
> + struct scan_control *sc, struct list_head *laundry)
> {
> LIST_HEAD(ret_pages);
> struct pagevec freed_pvec;
> int pgactivate = 0;
> unsigned long nr_reclaimed = 0;
>
> + if (list_empty(page_list))
> + return 0;
> +

This needs a comment to explain why shrink_page_list() would be called
with an empty list.

> cond_resched();
>
> pagevec_init(&freed_pvec, 1);
> @@ -407,10 +413,11 @@ static unsigned long shrink_page_list(st
> if (TestSetPageLocked(page))
> goto keep;
>
> - VM_BUG_ON(PageActive(page));
> -

This needs explanation in the leader. It implies that you later expect
both active and inactive pages to be passed to shrink_page_list(); that
is, we now need to keep an eye out for places where shrink_active_list()
sends pages to shrink_page_list() instead of simply rotating them from
the active list to the inactive one.

> sc->nr_scanned++;
>
> + if (PageActive(page))
> + goto keep_locked;
> +
> if (!sc->may_swap && page_mapped(page))
> goto keep_locked;
>
> @@ -506,6 +513,12 @@ static unsigned long shrink_page_list(st
> if (!may_write_to_queue(mapping->backing_dev_info))
> goto keep_locked;
>
> + if (laundry) {
> + list_add(&page->lru, laundry);
> + unlock_page(page);
> + continue;
> + }

This needs a comment. What you are doing is explained in the leader but
it may not help a future reader of the code.

Also, with laundry specified, there is no longer a check for PagePrivate
to see if the buffers can be freed. According to the comments in the
next code block:

* We do this even if the page is PageDirty().
* try_to_release_page() does not perform I/O, but it is
* possible for a page to have PageDirty set, but it is actually
* clean (all its buffers are clean)

Is this intentional?

> +
> /* Page is dirty, try to write it out here */
> switch(pageout(page, mapping)) {
> case PAGE_ACTIVATE:
> @@ -817,7 +830,7 @@ static unsigned long shrink_inactive_lis
> spin_unlock_irq(&zone->lru_lock);
>
> nr_scanned += nr_scan;
> - nr_freed = shrink_page_list(&page_list, sc);
> + nr_freed = shrink_page_list(&page_list, sc, NULL);
> nr_reclaimed += nr_freed;
> local_irq_disable();
> if (current_is_kswapd()) {
>

--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab

2007-08-21 20:54:07

by Christoph Lameter

Subject: Re: [RFC 3/7] shrink_page_list: Support isolating dirty pages on laundry list

On Tue, 21 Aug 2007, Mel Gorman wrote:

> > + if (list_empty(page_list))
> > + return 0;
> > +
>
> This needs a comment to explain why shrink_page_list() would be called
> with an empty list.

Ok.

> > @@ -407,10 +413,11 @@ static unsigned long shrink_page_list(st
> > if (TestSetPageLocked(page))
> > goto keep;
> >
> > - VM_BUG_ON(PageActive(page));
> > -
>
> This needs explanation in the leader. It implies that later you expect active
> and inactive pages to be passed to shrink_page. i.e. We now need to keep an
> eye out for where shrink_active_list() is sending pages to shrink_page_list()
> instead of simply rotating the active list to the inactive.

Ok.

>
> > sc->nr_scanned++;
> >
> > + if (PageActive(page))
> > + goto keep_locked;
> > +
> > if (!sc->may_swap && page_mapped(page))
> > goto keep_locked;
> >
> > @@ -506,6 +513,12 @@ static unsigned long shrink_page_list(st
> > if (!may_write_to_queue(mapping->backing_dev_info))
> > goto keep_locked;
> >
> > + if (laundry) {
> > + list_add(&page->lru, laundry);
> > + unlock_page(page);
> > + continue;
> > + }
>
> This needs a comment. What you are doing is explained in the leader but
> it may not help a future reader of the code.
>
> Also, with laundry specified there is no longer a check for PagePrivate
> to see if the buffers can be freed and got rid of. According to the
> comments in the next code block;

The check for buffers comes after the writeout. Writeout occurs when
laundry == NULL.

>
> * We do this even if the page is PageDirty().
> * try_to_release_page() does not perform I/O, but it is
> * possible for a page to have PageDirty set, but it is actually
> * clean (all its buffers are clean)
>
> Is this intentional?

Yes, buffers will be removed after writeout; writeout requires the buffers.
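The ordering being described can be sketched the same way: writeout cleans the page through its attached buffers, and only a clean page can have its buffers stripped, so the try_to_release_page()-style step cannot run on a deferred dirty page until after the later writeout pass. All names below are illustrative stand-ins, not the kernel's:

```c
#include <assert.h>

/* Illustrative page state: a dirty flag plus a count of attached
 * buffer heads (standing in for PagePrivate). */
struct tpage {
	int dirty;
	int nr_buffers;
};

/* Writeout goes through the buffers, so it needs them present. */
static int toy_pageout(struct tpage *p)
{
	if (p->nr_buffers == 0)
		return -1;	/* nothing to write through */
	p->dirty = 0;
	return 0;
}

/* try_to_release_page()-style: buffers may only be dropped once the
 * page is clean; returns nonzero on success. */
static int toy_release_buffers(struct tpage *p)
{
	if (p->dirty)
		return 0;	/* still dirty: keep the buffers */
	p->nr_buffers = 0;
	return 1;
}
```

Releasing before writeout fails, releasing after it succeeds, which matches the order of the checks in shrink_page_list().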