2010-07-13 10:17:15

by Xiaotian Feng

Subject: [PATCH -mmotm 00/30] [RFC] swap over nfs -v21

Hi,

Here's the latest version of the swap over NFS series since -v20 last October. We decided to push
this feature as it is useful for NAS and virtualization environments.

The patches are against mmotm-2010-07-01. The patchset can be split into the following parts:

Patches 1 - 12: provide a generic reserve framework. This framework
could also be used to get rid of some of the __GFP_NOFAIL users.

Patches 13 - 15: provide some generic network infrastructure needed later on.

Patches 16 - 21: reserve a small pool to act as a receive buffer; this allows us to
inspect packets before tossing them.

Patches 22 - 23: generic VM infrastructure to handle swapping to a filesystem instead of a block
device.

Patches 24 - 27: convert NFS to make use of the new network and VM infrastructure to
provide swap over NFS.

Patches 28 - 30: minor bug fixes against the latest -mmotm.

[some history]
v19: http://lwn.net/Articles/301915/
v20: http://lwn.net/Articles/355350/

Changes since v20:
- rebased to mmotm-2010-07-01
- dropped the null pointer deref patch, as the root cause was a wrong SWP_FILE enum value
- some minor build fixes
- fixed a null pointer deref with mmotm-2010-07-01
- fixed a bug when swapping with multiple files on the same NFS server

Regards
Xiaotian


2010-07-13 12:53:44

by Cong Wang

Subject: Re: [PATCH -mmotm 00/30] [RFC] swap over nfs -v21

On Tue, Jul 13, 2010 at 6:16 PM, Xiaotian Feng <[email protected]> wrote:
> Hi,
>
> Here's the latest version of swap over NFS series since -v20 last October. We decide to push
> this feature as it is useful for NAS or virt environment.
>
> The patches are against the mmotm-2010-07-01. We can split the patchset into following parts:
>
> Patch 1 - 12: provides a generic reserve framework. This framework
> could also be used to get rid of some of the __GFP_NOFAIL users.
>
> Patch 13 - 15: Provide some generic network infrastructure needed later on.
>
> Patch 16 - 21: reserve a little pool to act as a receive buffer, this allows us to
> inspect packets before tossing them.
>
> Patch 22 - 23: Generic vm infrastructure to handle swapping to a filesystem instead of a block
> device.
>
> Patch 24 - 27: convert NFS to make use of the new network and vm infrastructure to
> provide swap over NFS.
>
> Patch 28 - 30: minor bug fixing with latest -mmotm.
>
> [some history]
> v19: http://lwn.net/Articles/301915/
> v20: http://lwn.net/Articles/355350/
>
> Changes since v20:
>        - rebased to mmotm-2010-07-01
>        - dropped the null pointer deref patch for the root cause is wrong SWP_FILE enum
>        - some minor build fixes
>        - fix a null pointer deref with mmotm-2010-07-01
>        - fix a bug when swap with multi files on the same nfs server

Please use the "From:" line correctly, as stated in
Documentation/SubmittingPatches:

The "from" line must be the very first line in the message body,
and has the form:

From: Original Author <[email protected]>

The "from" line specifies who will be credited as the author of the
patch in the permanent changelog. If the "from" line is missing,
then the "From:" line from the email header will be used to determine
the patch author in the changelog.


I think you are using git format-patch to generate those patches; please supply
--author=<author> to git commit when you commit them to your local tree (or use
git am if the patches you received already have the correct From: line).

Thanks.

2010-07-15 11:51:32

by Xiaotian Feng

Subject: Re: [PATCH -mmotm 12/30] selinux: tag avc cache alloc as non-critical

On 07/13/2010 06:55 PM, Mitchell Erblich wrote:
>
> On Jul 13, 2010, at 3:19 AM, Xiaotian Feng wrote:
>
>> From 6c3a91091b2910c23908a9f9953efcf3df14e522 Mon Sep 17 00:00:00 2001
>> From: Xiaotian Feng<[email protected]>
>> Date: Tue, 13 Jul 2010 11:02:41 +0800
>> Subject: [PATCH 12/30] selinux: tag avc cache alloc as non-critical
>>
>> Failing to allocate a cache entry will only harm performance not correctness.
>> Do not consume valuable reserve pages for something like that.
>>
>> Signed-off-by: Peter Zijlstra<[email protected]>
>> Signed-off-by: Suresh Jayaraman<[email protected]>
>> Signed-off-by: Xiaotian Feng<[email protected]>
>> ---
>> security/selinux/avc.c | 2 +-
>> 1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/security/selinux/avc.c b/security/selinux/avc.c
>> index 3662b0f..9029395 100644
>> --- a/security/selinux/avc.c
>> +++ b/security/selinux/avc.c
>> @@ -284,7 +284,7 @@ static struct avc_node *avc_alloc_node(void)
>> {
>> struct avc_node *node;
>>
>> - node = kmem_cache_zalloc(avc_node_cachep, GFP_ATOMIC);
>> + node = kmem_cache_zalloc(avc_node_cachep, GFP_ATOMIC|__GFP_NOMEMALLOC);
>> if (!node)
>> goto out;
>>
>> --
>> 1.7.1.1
>>
>
> Why not just replace GFP_ATOMIC with GFP_NOWAIT?
>
> This would NOT consume the valuable last pages.

But replacing GFP_ATOMIC with GFP_NOWAIT cannot prevent avc_alloc_node from
consuming reserved pages: in PF_MEMALLOC context the watermarks are ignored
unless __GFP_NOMEMALLOC is set.
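
For reference, the decision made in gfp_to_alloc_flags() (see patch 07 later in
this series) boils down to roughly the following; the helper name here is
illustrative and the in_irq()/TIF_MEMDIE details are omitted:

/*
 * Rough sketch, not the kernel's exact code: only __GFP_NOMEMALLOC opts an
 * allocation out of the reserves; dropping __GFP_HIGH (i.e. GFP_NOWAIT
 * instead of GFP_ATOMIC) makes no difference here.
 */
static int may_use_reserves(gfp_t gfp_mask)
{
	if (gfp_mask & __GFP_NOMEMALLOC)
		return 0;	/* explicit opt-out, as in the avc patch above */
	if (current->flags & PF_MEMALLOC)
		return 1;	/* memory-cleaner context may dip into reserves */
	return 0;
}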

>
> Mitchell Erblich
>
>


2010-07-15 12:37:42

by Xiaotian Feng

Subject: Re: [PATCH -mmotm 05/30] mm: sl[au]b: add knowledge of reserve pages

On 07/14/2010 04:33 AM, Pekka Enberg wrote:
> Hi Xiaotian!
>
> I would actually prefer that the SLAB, SLOB, and SLUB changes were in
> separate patches to make reviewing easier.
>
> Looking at SLUB:
>
> On Tue, Jul 13, 2010 at 1:17 PM, Xiaotian Feng<[email protected]> wrote:
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 7bb7940..7a5d6dc 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -27,6 +27,8 @@
>> #include<linux/memory.h>
>> #include<linux/math64.h>
>> #include<linux/fault-inject.h>
>> +#include "internal.h"
>> +
>>
>> /*
>> * Lock order:
>> @@ -1139,7 +1141,8 @@ static void setup_object(struct kmem_cache *s, struct page *page,
>> s->ctor(object);
>> }
>>
>> -static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>> +static
>> +struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node, int *reserve)
>> {
>> struct page *page;
>> void *start;
>> @@ -1153,6 +1156,8 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>> if (!page)
>> goto out;
>>
>> + *reserve = page->reserve;
>> +
>> inc_slabs_node(s, page_to_nid(page), page->objects);
>> page->slab = s;
>> page->flags |= 1<< PG_slab;
>> @@ -1606,10 +1611,20 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>> {
>> void **object;
>> struct page *new;
>> + int reserve;
>>
>> /* We handle __GFP_ZERO in the caller */
>> gfpflags&= ~__GFP_ZERO;
>>
>> + if (unlikely(c->reserve)) {
>> + /*
>> + * If the current slab is a reserve slab and the current
>> + * allocation context does not allow access to the reserves we
>> + * must force an allocation to test the current levels.
>> + */
>> + if (!(gfp_to_alloc_flags(gfpflags)& ALLOC_NO_WATERMARKS))
>> + goto grow_slab;
>
> OK, so assume that:
>
> (1) c->reserve is set to one
>
> (2) GFP flags don't allow dipping into the reserves
>
> (3) we've managed to free enough pages so normal
> allocations are fine
>
> (4) the page from reserves is not yet empty
>
> we will call flush_slab() and put the "emergency page" on partial list
> and clear c->reserve. This effectively means that now some other
> allocation can fetch the partial page and start to use it. Is this OK?
> Who makes sure the emergency reserves are large enough for the next
> out-of-memory condition where we swap over NFS?
>

Good catch. I'm just wondering if the above check is necessary. For an
"emergency page", we don't set c->freelist. How can we get a reserved slab
if the GFP flags don't allow dipping into the reserves?

>> + }
>> if (!c->page)
>> goto new_slab;
>>
>> @@ -1623,8 +1638,8 @@ load_freelist:
>> object = c->page->freelist;
>> if (unlikely(!object))
>> goto another_slab;
>> - if (unlikely(SLABDEBUG&& PageSlubDebug(c->page)))
>> - goto debug;
>> + if (unlikely(SLABDEBUG&& PageSlubDebug(c->page) || c->reserve))
>> + goto slow_path;
>>
>> c->freelist = get_freepointer(s, object);
>> c->page->inuse = c->page->objects;
>> @@ -1646,16 +1661,18 @@ new_slab:
>> goto load_freelist;
>> }
>>
>> +grow_slab:
>> if (gfpflags& __GFP_WAIT)
>> local_irq_enable();
>>
>> - new = new_slab(s, gfpflags, node);
>> + new = new_slab(s, gfpflags, node,&reserve);
>>
>> if (gfpflags& __GFP_WAIT)
>> local_irq_disable();
>>
>> if (new) {
>> c = __this_cpu_ptr(s->cpu_slab);
>> + c->reserve = reserve;
>> stat(s, ALLOC_SLAB);
>> if (c->page)
>> flush_slab(s, c);
>> @@ -1667,10 +1684,20 @@ new_slab:
>> if (!(gfpflags& __GFP_NOWARN)&& printk_ratelimit())
>> slab_out_of_memory(s, gfpflags, node);
>> return NULL;
>> -debug:
>> - if (!alloc_debug_processing(s, c->page, object, addr))
>> +
>> +slow_path:
>> + if (!c->reserve&& !alloc_debug_processing(s, c->page, object, addr))
>> goto another_slab;
>>
>> + /*
>> + * Avoid the slub fast path in slab_alloc() by not setting
>> + * c->freelist and the fast path in slab_free() by making
>> + * node_match() fail by setting c->node to -1.
>> + *
>> + * We use this for for debug and reserve checks which need
>> + * to be done for each allocation.
>> + */
>> +
>> c->page->inuse++;
>> c->page->freelist = get_freepointer(s, object);
>> c->node = -1;
>> @@ -2095,10 +2122,11 @@ static void early_kmem_cache_node_alloc(gfp_t gfpflags, int node)
>> struct page *page;
>> struct kmem_cache_node *n;
>> unsigned long flags;
>> + int reserve;
>>
>> BUG_ON(kmalloc_caches->size< sizeof(struct kmem_cache_node));
>>
>> - page = new_slab(kmalloc_caches, gfpflags, node);
>> + page = new_slab(kmalloc_caches, gfpflags, node,&reserve);
>>
>> BUG_ON(!page);
>> if (page_to_nid(page) != node) {
>> --
>> 1.7.1.1
>>
>


2010-07-13 10:17:02

by Xiaotian Feng

Subject: [PATCH -mmotm 01/30] mm: serialize access to min_free_kbytes

From bc55bacd6bcc0f8a69c0d7e0d554c78237233e07 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Mon, 12 Jul 2010 17:58:34 +0800
Subject: [PATCH 01/30] mm: serialize access to min_free_kbytes

There is a small race between the procfs caller and the memory hotplug caller
of setup_per_zone_wmarks(). Not a big deal, but the next patch will add yet
another caller. Time to close the gap.

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
mm/page_alloc.c | 16 +++++++++++++---
1 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 50a6d10..ebf0af7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -165,6 +165,7 @@ static char * const zone_names[MAX_NR_ZONES] = {
"Movable",
};

+static DEFINE_SPINLOCK(min_free_lock);
int min_free_kbytes = 1024;

static unsigned long __meminitdata nr_kernel_pages;
@@ -4839,13 +4840,13 @@ static void setup_per_zone_lowmem_reserve(void)
}

/**
- * setup_per_zone_wmarks - called when min_free_kbytes changes
+ * __setup_per_zone_wmarks - called when min_free_kbytes changes
* or when memory is hot-{added|removed}
*
* Ensures that the watermark[min,low,high] values for each zone are set
* correctly with respect to min_free_kbytes.
*/
-void setup_per_zone_wmarks(void)
+static void __setup_per_zone_wmarks(void)
{
unsigned long pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
unsigned long lowmem_pages = 0;
@@ -4943,6 +4944,15 @@ static void __init setup_per_zone_inactive_ratio(void)
calculate_zone_inactive_ratio(zone);
}

+void setup_per_zone_wmarks(void)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&min_free_lock, flags);
+ __setup_per_zone_wmarks();
+ spin_unlock_irqrestore(&min_free_lock, flags);
+}
+
/*
* Initialise min_free_kbytes.
*
@@ -4978,7 +4988,7 @@ static int __init init_per_zone_wmark_min(void)
min_free_kbytes = 128;
if (min_free_kbytes > 65536)
min_free_kbytes = 65536;
- setup_per_zone_wmarks();
+ __setup_per_zone_wmarks();
setup_per_zone_lowmem_reserve();
setup_per_zone_inactive_ratio();
return 0;
--
1.7.1.1
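
For context, the procfs path is one of the racing callers; roughly (simplified
from mm/page_alloc.c of this era, error handling omitted), it now funnels
through the locked wrapper:

int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
	void __user *buffer, size_t *length, loff_t *ppos)
{
	proc_dointvec(table, write, buffer, length, ppos);
	if (write)
		setup_per_zone_wmarks();	/* now serialized by min_free_lock */
	return 0;
}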


2010-07-13 10:17:14

by Xiaotian Feng

Subject: [PATCH -mmotm 02/30] Swap over network documentation

From 8c68e4dc644be32cd82ba9711ba3ef89cb687cdf Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Mon, 12 Jul 2010 17:59:16 +0800
Subject: [PATCH 02/30] Swap over network documentation

Document describing the problem and proposed solution

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
Documentation/network-swap.txt | 268 ++++++++++++++++++++++++++++++++++++++++
1 files changed, 268 insertions(+), 0 deletions(-)
create mode 100644 Documentation/network-swap.txt

diff --git a/Documentation/network-swap.txt b/Documentation/network-swap.txt
new file mode 100644
index 0000000..760af2e
--- /dev/null
+++ b/Documentation/network-swap.txt
@@ -0,0 +1,268 @@
+
+Problem:
+ When Linux needs to allocate memory it may find that there is
+ insufficient free memory so it needs to reclaim space that is in
+ use but not needed at the moment. There are several options:
+
+ 1/ Shrink a kernel cache such as the inode or dentry cache. This
+ is fairly easy but provides limited returns.
+ 2/ Discard 'clean' pages from the page cache. This is easy, and
+ works well as long as there are clean pages in the page cache.
+ Similarly clean 'anonymous' pages can be discarded - if there
+ are any.
+ 3/ Write out some dirty page-cache pages so that they become clean.
+ The VM limits the number of dirty page-cache pages to e.g. 40%
+ of available memory so that (among other reasons) a "sync" will
+ not take excessively long. So there should never be excessive
+ amounts of dirty pagecache.
+ Writing out dirty page-cache pages involves work by the
+ filesystem which may need to allocate memory itself. To avoid
+ deadlock, filesystems use GFP_NOFS when allocating memory on the
+ write-out path. When this is used, cleaning dirty page-cache
+ pages is not an option so if the filesystem finds that memory
+ is tight, another option must be found.
+ 4/ Write out dirty anonymous pages to the "Swap" partition/file.
+ This is the most interesting for a couple of reasons.
+ a/ Unlike dirty page-cache pages, there is no need to write anon
+ pages out unless we are actually short of memory. Thus they
+ tend to be left to last.
+ b/ Anon pages tend to be updated randomly and unpredictably, and
+ flushing them out of memory can have a very significant
+ performance impact on the process using them. This contrasts
+ with page-cache pages which are often written sequentially
+ and often treated as "write-once, read-many".
+ So anon pages tend to be left until last to be cleaned, and may
+ be the only cleanable pages while there are still some dirty
+ page-cache pages (which are waiting on a GFP_NOFS allocation).
+
+[I don't find the above wholly satisfying. There seems to be too much
+ hand-waving. If someone can provide better text explaining why
+ swapout is a special case, that would be great.]
+
+So we need to be able to write to the swap file/partition without
+needing to allocate any memory ... or only a small well controlled
+amount.
+
+The VM reserves a small amount of memory that can only be allocated
+for use as part of the swap-out procedure. It is only available to
+processes with the PF_MEMALLOC flag set, which is typically just the
+memory cleaner.
+
+Traditionally swap-out is performed directly to block devices (swap
+files on block-device filesystems are supported by examining the
+mapping from file offset to device offset in advance, and then using
+the device offsets to write directly to the device). Block devices
+are (required to be) written to pre-allocate any memory that might be
+needed during write-out, and to block when the pre-allocated memory is
+exhausted and no other memory is available. They can be sure not to
+block forever as the pre-allocated memory will be returned as soon as
+the data it is being used for has been written out. The primary
+mechanism for pre-allocating memory is called "mempools".
+
+This approach does not work for writing anonymous pages
+(i.e. swapping) over a network, using e.g. NFS, NBD or iSCSI.
+
+
+The main reason that it does not work is that when data from an anon
+page is written to the network, we must wait for a reply to confirm
+the data is safe. Receiving that reply will consume memory and,
+significantly, we need to allocate memory to an incoming packet before
+we can tell if it is the reply we are waiting for or not.
+
+The secondary reason is that the network code is not written to use
+mempools and in most cases does not need to use them. Changing all
+allocations in the networking layer to use mempools would be quite
+intrusive, and would waste memory, and probably cause a slow-down in
+the common case of not swapping over the network.
+
+These problems are addressed by enhancing the system of memory
+reserves used by PF_MEMALLOC and requiring any in-kernel networking
+client that is used for swap-out to indicate which sockets are used
+for swapout so they can be handled specially in low memory situations.
+
+There are several major parts to this enhancement:
+
+1/ page->reserve, GFP_MEMALLOC
+
+ To handle low memory conditions we need to know when those
+ conditions exist. Having a global "low on memory" flag seems easy,
+ but its implementation is problematic. Instead we make it possible
+ to tell if a recent memory allocation required use of the emergency
+ memory pool.
+ For pages returned by alloc_page, the new page->reserve flag
+ can be tested. If this is set, then a low memory condition was
+ current when the page was allocated, so the memory should be used
+ carefully. (Because low memory conditions are transient, this
+ state is kept in an overloaded member instead of in page flags, which
+ would suggest a more permanent state.)
+
+ For memory allocated using slab/slub: If a page that is added to a
+ kmem_cache is found to have page->reserve set, then a s->reserve
+ flag is set for the whole kmem_cache. Further allocations will only
+ be returned from that page (or any other page in the cache) if they
+ are emergency allocation (i.e. PF_MEMALLOC or GFP_MEMALLOC is set).
+ Non-emergency allocations will block in alloc_page until a
+ non-reserve page is available. Once a non-reserve page has been
+ added to the cache, the s->reserve flag on the cache is removed.
+
+ Because slab objects have no individual state, it is hard to pass
+ reserve state along; the current code relies on a regular alloc
+ failing. There are various allocation wrappers to help here.
+
+ This allows us to
+ a/ request use of the emergency pool when allocating memory
+ (GFP_MEMALLOC), and
+ b/ to find out if the emergency pool was used.
+
+2/ SK_MEMALLOC, sk_buff->emergency.
+
+ When memory from the reserve is used to store incoming network
+ packets, the memory must be freed (and the packet dropped) as soon
+ as we find out that the packet is not for a socket that is used for
+ swap-out.
+ To achieve this we have an ->emergency flag for skbs, and an
+ SK_MEMALLOC flag for sockets.
+ When memory is allocated for an skb, it is allocated with
+ GFP_MEMALLOC (if we are currently swapping over the network at
+ all). If a subsequent test shows that the emergency pool was used,
+ ->emergency is set.
+ When the skb is finally attached to its destination socket, the
+ SK_MEMALLOC flag on the socket is tested. If the skb has
+ ->emergency set, but the socket does not have SK_MEMALLOC set, then
+ the skb is immediately freed and the packet is dropped.
+ This ensures that reserve memory is never queued on a socket that is
+ not used for swapout.
+
+ Similarly, if an skb is ever queued for delivery to user-space for
+ example by netfilter, the ->emergency flag is tested and the skb is
+ released if ->emergency is set. (so obviously the storage route may
+ not pass through a userspace helper, otherwise the packets will never
+ arrive and we'll deadlock)
+
+ This ensures that memory from the emergency reserve can be used to
+ allow swapout to proceed, but will not get caught up in any other
+ network queue.
+
+
+3/ pages_emergency
+
+ The above would be sufficient if the total memory below the lowest
+ memory watermark (i.e the size of the emergency reserve) were known
+ to be enough to hold all transient allocations needed for writeout.
+ I'm a little blurry on how big the current emergency pool is, but it
+ isn't big and certainly hasn't been sized to allow network traffic
+ to consume any.
+
+ We could simply make the size of the reserve bigger. However in the
+ common case that we are not swapping over the network, that would be
+ a waste of memory.
+
+ So a new "watermark" is defined: pages_emergency. This is
+ effectively added to the current low water marks, so that pages from
+ this emergency pool can only be allocated if one of PF_MEMALLOC or
+ GFP_MEMALLOC are set.
+
+ pages_emergency can be changed dynamically based on need. When
+ swapout over the network is required, pages_emergency is increased
+ to cover the maximum expected load. When network swapout is
+ disabled, pages_emergency is decreased.
+
+ To determine how much to increase it by, we introduce reservation
+ groups....
+
+3a/ reservation groups
+
+ The memory used transiently for swapout can be in a number of
+ different places. e.g. the network route cache, the network
+ fragment cache, in transit between network card and socket, or (in
+ the case of NFS) in sunrpc data structures awaiting a reply.
+ We need to ensure each of these is limited in the amount of memory
+ they use, and that the maximum is included in the reserve.
+
+ The memory required by the network layer only needs to be reserved
+ once, even if there are multiple swapout paths using the network
+ (e.g. NFS and NBD and iSCSI, though using all three for swapout at
+ the same time would be unusual).
+
+ So we create a tree of reservation groups. The network might
+ register a collection of reservations, but not mark them as being in
+ use. NFS and sunrpc might similarly register a collection of
+ reservations, and attach it to the network reservations as it
+ depends on them.
+ When swapout over NFS is requested, the NFS/sunrpc reservations are
+ activated which implicitly activates the network reservations.
+
+ The total new reservation is added to pages_emergency.
+
+ Provided each memory usage stays beneath the registered limit (at
+ least when allocating memory from reserves), the system will never
+ run out of emergency memory, and swapout will not deadlock.
+
+ It is worth noting here that it is not critical that each usage
+ stays beneath the limit 100% of the time. Occasional excess is
+ acceptable provided that the memory will be freed again within a
+ short amount of time that does *not* require waiting for any event
+ that itself might require memory.
+ This is because, at all stages of transmit and receive, it is
+ acceptable to discard all transient memory associated with a
+ particular writeout and try again later. On transmit, the page can
+ be re-queued for later transmission. On receive, the packet can be
+ dropped assuming that the peer will resend after a timeout.
+
+ Thus allocations that are truly transient and will be freed without
+ blocking do not strictly need to be reserved for. Doing so might
+ still be a good idea to ensure forward progress doesn't take too
+ long.
+
+4/ low-mem accounting
+
+ Most places that might hold on to emergency memory (e.g. route
+ cache, fragment cache etc) already place a limit on the amount of
+ memory that they can use. This limit can simply be reserved using
+ the above mechanism and no more needs to be done.
+
+ However some memory usage might not be accounted with sufficient
+ firmness to allow an appropriate emergency reservation. The
+ in-flight skbs for incoming packets is one such example.
+
+ To support this, a low-overhead mechanism for accounting memory
+ usage against the reserves is provided. This mechanism uses the
+ same data structure that is used to store the emergency memory
+ reservations through the addition of a 'usage' field.
+
+ Before we attempt allocation from the memory reserves, we must check
+ if the resulting 'usage' is below the reservation. If so, we increase
+ the usage and attempt the allocation (which should succeed). If
+ the projected 'usage' exceeds the reservation we'll either fail the
+ allocation, or wait for 'usage' to decrease enough so that it would
+ succeed, depending on __GFP_WAIT.
+
+ When memory that was allocated for that purpose is freed, the
+ 'usage' field is checked again. If it is non-zero, then the size of
+ the freed memory is subtracted from the usage, making sure the usage
+ never becomes less than zero.
+
+ This provides adequate accounting with minimal overheads when not in
+ a low memory condition. When a low memory condition is encountered
+ it does add the cost of a spin lock necessary to serialise updates
+ to 'usage'.
+
+
+
+5/ swapon/swapoff/swap_out/swap_in
+
+ So that a filesystem (e.g. NFS) can know when to set SK_MEMALLOC on
+ any network socket that it uses, and can know when to account
+ reserve memory carefully, new address_space_operations are
+ available.
+ "swapon" requests that an address space (i.e a file) be make ready
+ for swapout. swap_out and swap_in request the actual IO. They
+ together must ensure that each swap_out request can succeed without
+ allocating more emergency memory than was reserved by swapon. swapoff
+ is used to reverse the state changes caused by swapon when we disable
+ the swap file.
+
+
+Thanks for reading this far. I hope it made sense :-)
+
+Neil Brown (with updates from Peter Zijlstra)
--
1.7.1.1
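
The drop test from section 2 and the accounting from section 4 above reduce to
small checks. A hedged sketch, with field and helper spellings chosen for
illustration rather than taken from the later patches:

/* Section 2: reserve-backed skbs may only be queued on swap-out sockets.
 * (sk_has_memalloc() stands in for testing the socket's SK_MEMALLOC flag.) */
static bool skb_may_queue(struct sock *sk, struct sk_buff *skb)
{
	if (skb->emergency && !sk_has_memalloc(sk))
		return false;	/* drop now, so the reserve memory is freed at once */
	return true;
}

/* Section 4: charge transient usage against a reservation before an emergency
 * allocation; the caller uncharges the same amount when the memory is freed.
 * The mem_reserve structure and its fields are made up for illustration. */
static bool reserve_charge(struct mem_reserve *res, long bytes)
{
	bool ok = false;

	spin_lock(&res->lock);
	if (res->usage + bytes <= res->limit) {
		res->usage += bytes;
		ok = true;
	}
	spin_unlock(&res->lock);

	return ok;
}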


2010-07-13 10:17:25

by Xiaotian Feng

Subject: [PATCH -mmotm 03/30] mm: expose gfp_to_alloc_flags()

From 8a554f86ab2fd4d8a1daecaef4cc5bb3901c4423 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Mon, 12 Jul 2010 17:59:52 +0800
Subject: [PATCH 03/30] mm: expose gfp_to_alloc_flags()

Expose the gfp to alloc_flags mapping, so we can use it in other parts
of the vm.

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
mm/internal.h | 15 +++++++++++++++
mm/page_alloc.c | 16 +---------------
2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 6a697bb..3e2cc3a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -185,6 +185,21 @@ static inline struct page *mem_map_next(struct page *iter,
#define __paginginit __init
#endif

+/* The ALLOC_WMARK bits are used as an index to zone->watermark */
+#define ALLOC_WMARK_MIN WMARK_MIN
+#define ALLOC_WMARK_LOW WMARK_LOW
+#define ALLOC_WMARK_HIGH WMARK_HIGH
+#define ALLOC_NO_WATERMARKS 0x04 /* don't check watermarks at all */
+
+/* Mask to get the watermark bits */
+#define ALLOC_WMARK_MASK (ALLOC_NO_WATERMARKS-1)
+
+#define ALLOC_HARDER 0x10 /* try to alloc harder */
+#define ALLOC_HIGH 0x20 /* __GFP_HIGH set */
+#define ALLOC_CPUSET 0x40 /* check for correct cpuset */
+
+int gfp_to_alloc_flags(gfp_t gfp_mask);
+
/* Memory initialisation debug and verification */
enum mminit_level {
MMINIT_WARNING,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ebf0af7..72a6be5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1345,19 +1345,6 @@ failed:
return NULL;
}

-/* The ALLOC_WMARK bits are used as an index to zone->watermark */
-#define ALLOC_WMARK_MIN WMARK_MIN
-#define ALLOC_WMARK_LOW WMARK_LOW
-#define ALLOC_WMARK_HIGH WMARK_HIGH
-#define ALLOC_NO_WATERMARKS 0x04 /* don't check watermarks at all */
-
-/* Mask to get the watermark bits */
-#define ALLOC_WMARK_MASK (ALLOC_NO_WATERMARKS-1)
-
-#define ALLOC_HARDER 0x10 /* try to alloc harder */
-#define ALLOC_HIGH 0x20 /* __GFP_HIGH set */
-#define ALLOC_CPUSET 0x40 /* check for correct cpuset */
-
#ifdef CONFIG_FAIL_PAGE_ALLOC

static struct fail_page_alloc_attr {
@@ -1911,8 +1898,7 @@ void wake_all_kswapd(unsigned int order, struct zonelist *zonelist,
wakeup_kswapd(zone, order);
}

-static inline int
-gfp_to_alloc_flags(gfp_t gfp_mask)
+int gfp_to_alloc_flags(gfp_t gfp_mask)
{
struct task_struct *p = current;
int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
--
1.7.1.1
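
The immediate consumer is the sl[au]b patch later in this series; the pattern
the export enables is simply asking whether the current context is itself
entitled to ignore the watermarks. A minimal sketch (the wrapper name is
illustrative):

#include "internal.h"	/* gfp_to_alloc_flags(), ALLOC_NO_WATERMARKS */

static inline int context_may_use_reserves(gfp_t flags)
{
	return gfp_to_alloc_flags(flags) & ALLOC_NO_WATERMARKS;
}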


2010-07-13 10:17:36

by Xiaotian Feng

Subject: [PATCH -mmotm 04/30] mm: tag reserve pages

From 14f7823986429b8374d18e3f648cd84575296e03 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Mon, 12 Jul 2010 18:00:37 +0800
Subject: [PATCH 04/30] mm: tag reserve pages

Tag pages allocated from the reserves with a non-zero page->reserve.
This allows us to distinguish and account reserve pages.

Since low-memory situations are transient, and unrelated to the actual
page (any page can be on the freelist when we run low), don't mark the
page in any permanent way - just pass along the information to the
allocatee.

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/linux/mm_types.h | 1 +
mm/page_alloc.c | 4 +++-
2 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index b8bb9a6..a95a202 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -71,6 +71,7 @@ struct page {
union {
pgoff_t index; /* Our offset within mapping. */
void *freelist; /* SLUB: freelist req. slab lock */
+ int reserve; /* page_alloc: page is a reserve page */
};
struct list_head lru; /* Pageout list, eg. active_list
* protected by zone->lru_lock !
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 72a6be5..b9989c5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1656,8 +1656,10 @@ zonelist_scan:
try_this_zone:
page = buffered_rmqueue(preferred_zone, zone, order,
gfp_mask, migratetype);
- if (page)
+ if (page) {
+ page->reserve = !!(alloc_flags & ALLOC_NO_WATERMARKS);
break;
+ }
this_zone_full:
if (NUMA_BUILD)
zlc_mark_zone_full(zonelist, z);
--
1.7.1.1
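
Since the tag is only meaningful right after allocation, callers are expected
to copy it out immediately, as kmem_getpages()/new_slab() do in the next
patch. A small illustrative wrapper:

static void *get_pages_note_reserve(gfp_t flags, unsigned int order,
				    int *reserve)
{
	struct page *page = alloc_pages(flags, order);

	if (!page)
		return NULL;
	*reserve = page->reserve;	/* non-zero: emergency reserves were used */
	return page_address(page);
}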


2010-07-13 10:17:47

by Xiaotian Feng

Subject: [PATCH -mmotm 05/30] mm: sl[au]b: add knowledge of reserve pages

From fba0bdebc34d3db41a2c975eb38e9548ea5c2ed1 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 10:40:05 +0800
Subject: [PATCH 05/30] mm: sl[au]b: add knowledge of reserve pages

Restrict objects from reserve slabs (ALLOC_NO_WATERMARKS) to allocation
contexts that are entitled to it. This is done to ensure reserve pages don't
leak out and get consumed.

The basic pattern used for all allocators is the following: for each active
slab page we store whether it came from an emergency allocation. When we find it
did, make sure the current allocation context would have been able to allocate
a page from the emergency reserves as well. In that case allow the allocation. If
not, force a new slab allocation. When that works, the memory pressure has
lifted enough to allow this context to get an object; otherwise fail the
allocation.

[[email protected]: Fix use of uninitialized variable in cache_grow]
[[email protected]: Minor fix related with SLABDEBUG]
Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Miklos Szeredi <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/linux/slub_def.h | 1 +
mm/slab.c | 62 +++++++++++++++++++++++++++++++++++++++------
mm/slob.c | 16 +++++++++++-
mm/slub.c | 42 ++++++++++++++++++++++++++-----
4 files changed, 104 insertions(+), 17 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 6447a72..9ef61f4 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -39,6 +39,7 @@ struct kmem_cache_cpu {
void **freelist; /* Pointer to first free per cpu object */
struct page *page; /* The slab from which we are allocating */
int node; /* The node of the page (or -1 for debug) */
+ int reserve; /* Did the current page come from the reserve */
#ifdef CONFIG_SLUB_STATS
unsigned stat[NR_SLUB_STAT_ITEMS];
#endif
diff --git a/mm/slab.c b/mm/slab.c
index 4e9c46f..d8cd757 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -120,6 +120,8 @@
#include <asm/tlbflush.h>
#include <asm/page.h>

+#include "internal.h"
+
/*
* DEBUG - 1 for kmem_cache_create() to honour; SLAB_RED_ZONE & SLAB_POISON.
* 0 for faster, smaller code (especially in the critical paths).
@@ -244,7 +246,8 @@ struct array_cache {
unsigned int avail;
unsigned int limit;
unsigned int batchcount;
- unsigned int touched;
+ unsigned int touched:1,
+ reserve:1;
spinlock_t lock;
void *entry[]; /*
* Must have this definition in here for the proper
@@ -680,6 +683,27 @@ static inline struct array_cache *cpu_cache_get(struct kmem_cache *cachep)
return cachep->array[smp_processor_id()];
}

+/*
+ * If the last page came from the reserves, and the current allocation context
+ * does not have access to them, force an allocation to test the watermarks.
+ */
+static inline int slab_force_alloc(struct kmem_cache *cachep, gfp_t flags)
+{
+ if (unlikely(cpu_cache_get(cachep)->reserve) &&
+ !(gfp_to_alloc_flags(flags) & ALLOC_NO_WATERMARKS))
+ return 1;
+
+ return 0;
+}
+
+static inline void slab_set_reserve(struct kmem_cache *cachep, int reserve)
+{
+ struct array_cache *ac = cpu_cache_get(cachep);
+
+ if (unlikely(ac->reserve != reserve))
+ ac->reserve = reserve;
+}
+
static inline struct kmem_cache *__find_general_cachep(size_t size,
gfp_t gfpflags)
{
@@ -886,6 +910,7 @@ static struct array_cache *alloc_arraycache(int node, int entries,
nc->limit = entries;
nc->batchcount = batchcount;
nc->touched = 0;
+ nc->reserve = 0;
spin_lock_init(&nc->lock);
}
return nc;
@@ -1674,7 +1699,8 @@ __initcall(cpucache_init);
* did not request dmaable memory, we might get it, but that
* would be relatively rare and ignorable.
*/
-static void *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, int nodeid)
+static void *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, int nodeid,
+ int *reserve)
{
struct page *page;
int nr_pages;
@@ -1696,6 +1722,7 @@ static void *kmem_getpages(struct kmem_cache *cachep, gfp_t flags, int nodeid)
if (!page)
return NULL;

+ *reserve = page->reserve;
nr_pages = (1 << cachep->gfporder);
if (cachep->flags & SLAB_RECLAIM_ACCOUNT)
add_zone_page_state(page_zone(page),
@@ -2128,6 +2155,7 @@ static int __init_refok setup_cpu_cache(struct kmem_cache *cachep, gfp_t gfp)
cpu_cache_get(cachep)->limit = BOOT_CPUCACHE_ENTRIES;
cpu_cache_get(cachep)->batchcount = 1;
cpu_cache_get(cachep)->touched = 0;
+ cpu_cache_get(cachep)->reserve = 0;
cachep->batchcount = 1;
cachep->limit = BOOT_CPUCACHE_ENTRIES;
return 0;
@@ -2813,6 +2841,7 @@ static int cache_grow(struct kmem_cache *cachep,
size_t offset;
gfp_t local_flags;
struct kmem_list3 *l3;
+ int reserve = -1;

/*
* Be lazy and only check for valid flags here, keeping it out of the
@@ -2851,7 +2880,7 @@ static int cache_grow(struct kmem_cache *cachep,
* 'nodeid'.
*/
if (!objp)
- objp = kmem_getpages(cachep, local_flags, nodeid);
+ objp = kmem_getpages(cachep, local_flags, nodeid, &reserve);
if (!objp)
goto failed;

@@ -2868,6 +2897,8 @@ static int cache_grow(struct kmem_cache *cachep,
if (local_flags & __GFP_WAIT)
local_irq_disable();
check_irq_off();
+ if (reserve != -1)
+ slab_set_reserve(cachep, reserve);
spin_lock(&l3->list_lock);

/* Make slab active. */
@@ -3002,7 +3033,8 @@ bad:
#define check_slabp(x,y) do { } while(0)
#endif

-static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
+static void *cache_alloc_refill(struct kmem_cache *cachep,
+ gfp_t flags, int must_refill)
{
int batchcount;
struct kmem_list3 *l3;
@@ -3012,6 +3044,8 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
retry:
check_irq_off();
node = numa_mem_id();
+ if (unlikely(must_refill))
+ goto force_grow;
ac = cpu_cache_get(cachep);
batchcount = ac->batchcount;
if (!ac->touched && batchcount > BATCHREFILL_LIMIT) {
@@ -3081,11 +3115,14 @@ alloc_done:

if (unlikely(!ac->avail)) {
int x;
+force_grow:
x = cache_grow(cachep, flags | GFP_THISNODE, node, NULL);

/* cache_grow can reenable interrupts, then ac could change. */
ac = cpu_cache_get(cachep);
- if (!x && ac->avail == 0) /* no objects in sight? abort */
+
+ /* no objects in sight? abort */
+ if (!x && (ac->avail == 0 || must_refill))
return NULL;

if (!ac->avail) /* objects refilled by interrupt? */
@@ -3175,17 +3212,18 @@ static inline void *____cache_alloc(struct kmem_cache *cachep, gfp_t flags)
{
void *objp;
struct array_cache *ac;
+ int must_refill = slab_force_alloc(cachep, flags);

check_irq_off();

ac = cpu_cache_get(cachep);
- if (likely(ac->avail)) {
+ if (likely(ac->avail && !must_refill)) {
STATS_INC_ALLOCHIT(cachep);
ac->touched = 1;
objp = ac->entry[--ac->avail];
} else {
STATS_INC_ALLOCMISS(cachep);
- objp = cache_alloc_refill(cachep, flags);
+ objp = cache_alloc_refill(cachep, flags, must_refill);
/*
* the 'ac' may be updated by cache_alloc_refill(),
* and kmemleak_erase() requires its correct value.
@@ -3243,7 +3281,7 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)
struct zone *zone;
enum zone_type high_zoneidx = gfp_zone(flags);
void *obj = NULL;
- int nid;
+ int nid, reserve;

if (flags & __GFP_THISNODE)
return NULL;
@@ -3280,10 +3318,12 @@ retry:
if (local_flags & __GFP_WAIT)
local_irq_enable();
kmem_flagcheck(cache, flags);
- obj = kmem_getpages(cache, local_flags, numa_mem_id());
+ obj = kmem_getpages(cache, local_flags, numa_mem_id(),
+ &reserve);
if (local_flags & __GFP_WAIT)
local_irq_disable();
if (obj) {
+ slab_set_reserve(cache, reserve);
/*
* Insert into the appropriate per node queues
*/
@@ -3323,6 +3363,9 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
l3 = cachep->nodelists[nodeid];
BUG_ON(!l3);

+ if (unlikely(slab_force_alloc(cachep, flags)))
+ goto force_grow;
+
retry:
check_irq_off();
spin_lock(&l3->list_lock);
@@ -3360,6 +3403,7 @@ retry:

must_grow:
spin_unlock(&l3->list_lock);
+force_grow:
x = cache_grow(cachep, flags | GFP_THISNODE, nodeid, NULL);
if (x)
goto retry;
diff --git a/mm/slob.c b/mm/slob.c
index 3f19a34..b84b611 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -71,6 +71,7 @@
#include <trace/events/kmem.h>

#include <asm/atomic.h>
+#include "internal.h"

/*
* slob_block has a field 'units', which indicates size of block if +ve,
@@ -193,6 +194,11 @@ struct slob_rcu {
static DEFINE_SPINLOCK(slob_lock);

/*
+ * tracks the reserve state for the allocator.
+ */
+static int slob_reserve;
+
+/*
* Encode the given size and next info into a free slob block s.
*/
static void set_slob(slob_t *s, slobidx_t size, slob_t *next)
@@ -242,7 +248,7 @@ static int slob_last(slob_t *s)

static void *slob_new_pages(gfp_t gfp, int order, int node)
{
- void *page;
+ struct page *page;

#ifdef CONFIG_NUMA
if (node != -1)
@@ -254,6 +260,8 @@ static void *slob_new_pages(gfp_t gfp, int order, int node)
if (!page)
return NULL;

+ slob_reserve = page->reserve;
+
return page_address(page);
}

@@ -326,6 +334,11 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
slob_t *b = NULL;
unsigned long flags;

+ if (unlikely(slob_reserve)) {
+ if (!(gfp_to_alloc_flags(gfp) & ALLOC_NO_WATERMARKS))
+ goto grow;
+ }
+
if (size < SLOB_BREAK1)
slob_list = &free_slob_small;
else if (size < SLOB_BREAK2)
@@ -364,6 +377,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
}
spin_unlock_irqrestore(&slob_lock, flags);

+grow:
/* Not enough space: must allocate a new page */
if (!b) {
b = slob_new_pages(gfp & ~__GFP_ZERO, 0, node);
diff --git a/mm/slub.c b/mm/slub.c
index 7bb7940..7a5d6dc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -27,6 +27,8 @@
#include <linux/memory.h>
#include <linux/math64.h>
#include <linux/fault-inject.h>
+#include "internal.h"
+

/*
* Lock order:
@@ -1139,7 +1141,8 @@ static void setup_object(struct kmem_cache *s, struct page *page,
s->ctor(object);
}

-static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
+static
+struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node, int *reserve)
{
struct page *page;
void *start;
@@ -1153,6 +1156,8 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
if (!page)
goto out;

+ *reserve = page->reserve;
+
inc_slabs_node(s, page_to_nid(page), page->objects);
page->slab = s;
page->flags |= 1 << PG_slab;
@@ -1606,10 +1611,20 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
{
void **object;
struct page *new;
+ int reserve;

/* We handle __GFP_ZERO in the caller */
gfpflags &= ~__GFP_ZERO;

+ if (unlikely(c->reserve)) {
+ /*
+ * If the current slab is a reserve slab and the current
+ * allocation context does not allow access to the reserves we
+ * must force an allocation to test the current levels.
+ */
+ if (!(gfp_to_alloc_flags(gfpflags) & ALLOC_NO_WATERMARKS))
+ goto grow_slab;
+ }
if (!c->page)
goto new_slab;

@@ -1623,8 +1638,8 @@ load_freelist:
object = c->page->freelist;
if (unlikely(!object))
goto another_slab;
- if (unlikely(SLABDEBUG && PageSlubDebug(c->page)))
- goto debug;
+ if (unlikely(SLABDEBUG && PageSlubDebug(c->page) || c->reserve))
+ goto slow_path;

c->freelist = get_freepointer(s, object);
c->page->inuse = c->page->objects;
@@ -1646,16 +1661,18 @@ new_slab:
goto load_freelist;
}

+grow_slab:
if (gfpflags & __GFP_WAIT)
local_irq_enable();

- new = new_slab(s, gfpflags, node);
+ new = new_slab(s, gfpflags, node, &reserve);

if (gfpflags & __GFP_WAIT)
local_irq_disable();

if (new) {
c = __this_cpu_ptr(s->cpu_slab);
+ c->reserve = reserve;
stat(s, ALLOC_SLAB);
if (c->page)
flush_slab(s, c);
@@ -1667,10 +1684,20 @@ new_slab:
if (!(gfpflags & __GFP_NOWARN) && printk_ratelimit())
slab_out_of_memory(s, gfpflags, node);
return NULL;
-debug:
- if (!alloc_debug_processing(s, c->page, object, addr))
+
+slow_path:
+ if (!c->reserve && !alloc_debug_processing(s, c->page, object, addr))
goto another_slab;

+ /*
+ * Avoid the slub fast path in slab_alloc() by not setting
+ * c->freelist and the fast path in slab_free() by making
+ * node_match() fail by setting c->node to -1.
+ *
+ * We use this for for debug and reserve checks which need
+ * to be done for each allocation.
+ */
+
c->page->inuse++;
c->page->freelist = get_freepointer(s, object);
c->node = -1;
@@ -2095,10 +2122,11 @@ static void early_kmem_cache_node_alloc(gfp_t gfpflags, int node)
struct page *page;
struct kmem_cache_node *n;
unsigned long flags;
+ int reserve;

BUG_ON(kmalloc_caches->size < sizeof(struct kmem_cache_node));

- page = new_slab(kmalloc_caches, gfpflags, node);
+ page = new_slab(kmalloc_caches, gfpflags, node, &reserve);

BUG_ON(!page);
if (page_to_nid(page) != node) {
--
1.7.1.1


2010-07-13 10:17:59

by Xiaotian Feng

Subject: [PATCH -mmotm 06/30] mm: kmem_alloc_estimate()

From 4a2dff5cf02e9d7f6ee9345c337697c4ab66c6dc Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 10:41:22 +0800
Subject: [PATCH 06/30] mm: kmem_alloc_estimate()

Provide a method to get the upper bound on the pages needed to allocate
a given number of objects from a given kmem_cache.

This lays the foundation for a generic reserve framework as presented in
a later patch in this series. This framework needs to convert object demand
(kmalloc() bytes, kmem_cache_alloc() objects) to pages.

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/linux/slab.h | 4 ++
mm/slab.c | 75 +++++++++++++++++++++++++++++++++++++++++++
mm/slob.c | 67 ++++++++++++++++++++++++++++++++++++++
mm/slub.c | 87 ++++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 233 insertions(+), 0 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 49d1247..b57b9ca 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -108,6 +108,8 @@ unsigned int kmem_cache_size(struct kmem_cache *);
const char *kmem_cache_name(struct kmem_cache *);
int kern_ptr_validate(const void *ptr, unsigned long size);
int kmem_ptr_validate(struct kmem_cache *cachep, const void *ptr);
+unsigned kmem_alloc_estimate(struct kmem_cache *cachep,
+ gfp_t flags, int objects);

/*
* Please use this macro to create slab caches. Simply specify the
@@ -144,6 +146,8 @@ void * __must_check krealloc(const void *, size_t, gfp_t);
void kfree(const void *);
void kzfree(const void *);
size_t ksize(const void *);
+unsigned kmalloc_estimate_objs(size_t, gfp_t, int);
+unsigned kmalloc_estimate_bytes(gfp_t, size_t);

/*
* Allocator specific definitions. These are mainly used to establish optimized
diff --git a/mm/slab.c b/mm/slab.c
index d8cd757..2a0dd0d 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3913,6 +3913,81 @@ const char *kmem_cache_name(struct kmem_cache *cachep)
EXPORT_SYMBOL_GPL(kmem_cache_name);

/*
+ * Calculate the upper bound of pages required to sequentially allocate
+ * @objects objects from @cachep.
+ */
+unsigned kmem_alloc_estimate(struct kmem_cache *cachep,
+ gfp_t flags, int objects)
+{
+ /*
+ * (1) memory for objects,
+ */
+ unsigned nr_slabs = DIV_ROUND_UP(objects, cachep->num);
+ unsigned nr_pages = nr_slabs << cachep->gfporder;
+
+ /*
+ * (2) memory for each per-cpu queue (nr_cpu_ids),
+ * (3) memory for each per-node alien queues (nr_cpu_ids), and
+ * (4) some amount of memory for the slab management structures
+ *
+ * XXX: truely account these
+ */
+ nr_pages += 1 + ilog2(nr_pages);
+
+ return nr_pages;
+}
+
+/*
+ * Calculate the upper bound of pages required to sequentially allocate
+ * @count objects of @size bytes from kmalloc given @flags.
+ */
+unsigned kmalloc_estimate_objs(size_t size, gfp_t flags, int count)
+{
+ struct kmem_cache *s = kmem_find_general_cachep(size, flags);
+ if (!s)
+ return 0;
+
+ return kmem_alloc_estimate(s, flags, count);
+}
+EXPORT_SYMBOL_GPL(kmalloc_estimate_objs);
+
+/*
+ * Calculate the upper bound of pages requires to sequentially allocate @bytes
+ * from kmalloc in an unspecified number of allocations of nonuniform size.
+ */
+unsigned kmalloc_estimate_bytes(gfp_t flags, size_t bytes)
+{
+ unsigned long pages;
+ struct cache_sizes *csizep = malloc_sizes;
+
+ /*
+ * multiply by two, in order to account the worst case slack space
+ * due to the power-of-two allocation sizes.
+ */
+ pages = DIV_ROUND_UP(2 * bytes, PAGE_SIZE);
+
+ /*
+ * add the kmem_cache overhead of each possible kmalloc cache
+ */
+ for (csizep = malloc_sizes; csizep->cs_cachep; csizep++) {
+ struct kmem_cache *s;
+
+#ifdef CONFIG_ZONE_DMA
+ if (unlikely(flags & __GFP_DMA))
+ s = csizep->cs_dmacachep;
+ else
+#endif
+ s = csizep->cs_cachep;
+
+ if (s)
+ pages += kmem_alloc_estimate(s, flags, 0);
+ }
+
+ return pages;
+}
+EXPORT_SYMBOL_GPL(kmalloc_estimate_bytes);
+
+/*
* This initializes kmem_list3 or resizes various caches for all nodes.
*/
static int alloc_kmemlist(struct kmem_cache *cachep, gfp_t gfp)
diff --git a/mm/slob.c b/mm/slob.c
index b84b611..0caf938 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -695,6 +695,73 @@ int slab_is_available(void)
return slob_ready;
}

+static __slob_estimate(unsigned size, unsigned align, unsigned objects)
+{
+ unsigned nr_pages;
+
+ size = SLOB_UNIT * SLOB_UNITS(size + align - 1);
+
+ if (size <= PAGE_SIZE) {
+ nr_pages = DIV_ROUND_UP(objects, PAGE_SIZE / size);
+ } else {
+ nr_pages = objects << get_order(size);
+ }
+
+ return nr_pages;
+}
+
+/*
+ * Calculate the upper bound of pages required to sequentially allocate
+ * @objects objects from @cachep.
+ */
+unsigned kmem_alloc_estimate(struct kmem_cache *c, gfp_t flags, int objects)
+{
+ unsigned size = c->size;
+
+ if (c->flags & SLAB_DESTROY_BY_RCU)
+ size += sizeof(struct slob_rcu);
+
+ return __slob_estimate(size, c->align, objects);
+}
+
+/*
+ * Calculate the upper bound of pages required to sequentially allocate
+ * @count objects of @size bytes from kmalloc given @flags.
+ */
+unsigned kmalloc_estimate_objs(size_t size, gfp_t flags, int count)
+{
+ unsigned align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
+
+ return __slob_estimate(size, align, count);
+}
+EXPORT_SYMBOL_GPL(kmalloc_estimate_objs);
+
+/*
+ * Calculate the upper bound of pages requires to sequentially allocate @bytes
+ * from kmalloc in an unspecified number of allocations of nonuniform size.
+ */
+unsigned kmalloc_estimate_bytes(gfp_t flags, size_t bytes)
+{
+ unsigned long pages;
+
+ /*
+ * Multiply by two, in order to account the worst case slack space
+ * due to the power-of-two allocation sizes.
+ *
+ * While not true for slob, it cannot do worse than that for sequential
+ * allocations.
+ */
+ pages = DIV_ROUND_UP(2 * bytes, PAGE_SIZE);
+
+ /*
+ * Our power of two series starts at PAGE_SIZE, so add one page.
+ */
+ pages++;
+
+ return pages;
+}
+EXPORT_SYMBOL_GPL(kmalloc_estimate_bytes);
+
void __init kmem_cache_init(void)
{
slob_ready = 1;
diff --git a/mm/slub.c b/mm/slub.c
index 7a5d6dc..056545e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2435,6 +2435,42 @@ const char *kmem_cache_name(struct kmem_cache *s)
}
EXPORT_SYMBOL(kmem_cache_name);

+/*
+ * Calculate the upper bound of pages required to sequentially allocate
+ * @objects objects from @cachep.
+ *
+ * We should use s->min_objects because those are the least efficient.
+ */
+unsigned kmem_alloc_estimate(struct kmem_cache *s, gfp_t flags, int objects)
+{
+ unsigned long pages;
+ struct kmem_cache_order_objects x;
+
+ if (WARN_ON(!s) || WARN_ON(!oo_objects(s->min)))
+ return 0;
+
+ x = s->min;
+ pages = DIV_ROUND_UP(objects, oo_objects(x)) << oo_order(x);
+
+ /*
+ * Account the possible additional overhead if the slab holds more that
+ * one object. Use s->max_objects because that's the worst case.
+ */
+ x = s->oo;
+ if (oo_objects(x) > 1) {
+ /*
+ * Account the possible additional overhead if per cpu slabs
+ * are currently empty and have to be allocated. This is very
+ * unlikely but a possible scenario immediately after
+ * kmem_cache_shrink.
+ */
+ pages += num_possible_cpus() << oo_order(x);
+ }
+
+ return pages;
+}
+EXPORT_SYMBOL_GPL(kmem_alloc_estimate);
+
static void list_slab_objects(struct kmem_cache *s, struct page *page,
const char *text)
{
@@ -2868,6 +2904,57 @@ void kfree(const void *x)
EXPORT_SYMBOL(kfree);

/*
+ * Calculate the upper bound of pages required to sequentially allocate
+ * @count objects of @size bytes from kmalloc given @flags.
+ */
+unsigned kmalloc_estimate_objs(size_t size, gfp_t flags, int count)
+{
+ struct kmem_cache *s = get_slab(size, flags);
+ if (!s)
+ return 0;
+
+ return kmem_alloc_estimate(s, flags, count);
+
+}
+EXPORT_SYMBOL_GPL(kmalloc_estimate_objs);
+
+/*
+ * Calculate the upper bound of pages requires to sequentially allocate @bytes
+ * from kmalloc in an unspecified number of allocations of nonuniform size.
+ */
+unsigned kmalloc_estimate_bytes(gfp_t flags, size_t bytes)
+{
+ int i;
+ unsigned long pages;
+
+ /*
+ * multiply by two, in order to account the worst case slack space
+ * due to the power-of-two allocation sizes.
+ */
+ pages = DIV_ROUND_UP(2 * bytes, PAGE_SIZE);
+
+ /*
+ * add the kmem_cache overhead of each possible kmalloc cache
+ */
+ for (i = 1; i < PAGE_SHIFT; i++) {
+ struct kmem_cache *s;
+
+#ifdef CONFIG_ZONE_DMA
+ if (unlikely(flags & SLUB_DMA))
+ s = dma_kmalloc_cache(i, flags);
+ else
+#endif
+ s = &kmalloc_caches[i];
+
+ if (s)
+ pages += kmem_alloc_estimate(s, flags, 0);
+ }
+
+ return pages;
+}
+EXPORT_SYMBOL_GPL(kmalloc_estimate_bytes);
+
+/*
* kmem_cache_shrink removes empty slabs from the partial lists and sorts
* the remaining slabs by the number of items in use. The slabs with the
* most items in use come first. New allocations will then fill those up
--
1.7.1.1
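
A sketch of how the reserve framework introduced later in the series can turn
object demand into a page reservation with these estimators; the wrapper and
the numbers are purely illustrative:

static unsigned int estimate_reserve_pages(struct kmem_cache *cache)
{
	unsigned int pages;

	/* worst-case pages to back 128 objects from 'cache' ... */
	pages = kmem_alloc_estimate(cache, GFP_ATOMIC, 128);
	/* ... plus 64KB of kmalloc()ed metadata of assorted sizes */
	pages += kmalloc_estimate_bytes(GFP_ATOMIC, 64 * 1024);

	return pages;
}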


2010-07-13 10:18:10

by Xiaotian Feng

Subject: [PATCH -mmotm 07/30] mm: allow PF_MEMALLOC from softirq context

From 32f474cffc76551ddb792454845bd473634219b5 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 10:41:53 +0800
Subject: [PATCH 07/30] mm: allow PF_MEMALLOC from softirq context

This is needed to allow network softirq packet processing to make use of
PF_MEMALLOC.

Currently softirq context cannot use PF_MEMALLOC because it is not associated
with a task, and therefore has no task flags to fiddle with - thus the gfp
to alloc flags mapping ignores the task flags when in interrupt (hard or soft)
context.

Allowing softirqs to make use of PF_MEMALLOC therefore requires some trickery.
We basically borrow the task flags from whatever process happens to be
preempted by the softirq.

So we modify the gfp to alloc flags mapping to not exclude task flags in
softirq context, and modify the softirq code to save, clear and restore the
PF_MEMALLOC flag.

The save and clear ensure that the preempted task's PF_MEMALLOC flag doesn't
leak into the softirq. The restore ensures a softirq's PF_MEMALLOC flag cannot
leak back into the preempted process.

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/linux/sched.h | 7 +++++++
kernel/softirq.c | 3 +++
mm/page_alloc.c | 7 ++++---
3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 1f25798..85b74b0 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1765,6 +1765,13 @@ static inline void rcu_copy_process(struct task_struct *p)

#endif

+static inline void tsk_restore_flags(struct task_struct *p,
+ unsigned long pflags, unsigned long mask)
+{
+ p->flags &= ~mask;
+ p->flags |= pflags & mask;
+}
+
#ifdef CONFIG_SMP
extern int set_cpus_allowed_ptr(struct task_struct *p,
const struct cpumask *new_mask);
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 07b4f1b..0770e78 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -194,6 +194,8 @@ asmlinkage void __do_softirq(void)
__u32 pending;
int max_restart = MAX_SOFTIRQ_RESTART;
int cpu;
+ unsigned long pflags = current->flags;
+ current->flags &= ~PF_MEMALLOC;

pending = local_softirq_pending();
account_system_vtime(current);
@@ -246,6 +248,7 @@ restart:

account_system_vtime(current);
_local_bh_enable();
+ tsk_restore_flags(current, pflags, PF_MEMALLOC);
}

#ifndef __ARCH_HAS_DO_SOFTIRQ
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b9989c5..4b1fa10 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1928,9 +1928,10 @@ int gfp_to_alloc_flags(gfp_t gfp_mask)
alloc_flags |= ALLOC_HARDER;

if (likely(!(gfp_mask & __GFP_NOMEMALLOC))) {
- if (!in_interrupt() &&
- ((p->flags & PF_MEMALLOC) ||
- unlikely(test_thread_flag(TIF_MEMDIE))))
+ if (!in_irq() && (p->flags & PF_MEMALLOC))
+ alloc_flags |= ALLOC_NO_WATERMARKS;
+ else if (!in_interrupt() &&
+ unlikely(test_thread_flag(TIF_MEMDIE)))
alloc_flags |= ALLOC_NO_WATERMARKS;
}

--
1.7.1.1


2010-07-13 10:18:21

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 08/30] mm: emergency pool

From 973ce2ee797025f055ccb61359180e53c3ecc681 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 10:45:48 +0800
Subject: [PATCH 08/30] mm: emergency pool

Provide means to reserve a specific amount of pages.

The emergency pool is separated from the min watermark because ALLOC_HARDER
and ALLOC_HIGH modify the watermark in a relative way and thus do not ensure
a strict minimum.
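
A minimal, purely illustrative caller of the new interface (the 256 pages are
an arbitrary number):

	int err = adjust_memalloc_reserve(256);	/* grow the emergency pool */
	if (err)	/* reclaim could not satisfy the higher watermarks */
		return err;
	...
	adjust_memalloc_reserve(-256);		/* give the pages back on teardown */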

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/linux/mmzone.h | 3 ++
mm/page_alloc.c | 84 ++++++++++++++++++++++++++++++++++++++++++------
mm/vmstat.c | 6 ++--
3 files changed, 80 insertions(+), 13 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 9ed9c45..75a0871 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -279,6 +279,7 @@ struct zone_reclaim_stat {

struct zone {
/* Fields commonly accessed by the page allocator */
+ unsigned long pages_emerg; /* emergency pool */

/* zone watermarks, access with *_wmark_pages(zone) macros */
unsigned long watermark[NR_WMARK];
@@ -770,6 +771,8 @@ int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *, int,
int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *, int,
void __user *, size_t *, loff_t *);

+int adjust_memalloc_reserve(int pages);
+
extern int numa_zonelist_order_handler(struct ctl_table *, int,
void __user *, size_t *, loff_t *);
extern char numa_zonelist_order[];
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4b1fa10..6c85e9e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -167,6 +167,8 @@ static char * const zone_names[MAX_NR_ZONES] = {

static DEFINE_SPINLOCK(min_free_lock);
int min_free_kbytes = 1024;
+static DEFINE_MUTEX(var_free_mutex);
+int var_free_kbytes;

static unsigned long __meminitdata nr_kernel_pages;
static unsigned long __meminitdata nr_all_pages;
@@ -1457,7 +1459,7 @@ int zone_watermark_ok(struct zone *z, int order, unsigned long mark,
if (alloc_flags & ALLOC_HARDER)
min -= min / 4;

- if (free_pages <= min + z->lowmem_reserve[classzone_idx])
+ if (free_pages <= min+z->lowmem_reserve[classzone_idx]+z->pages_emerg)
return 0;
for (o = 0; o < order; o++) {
/* At the next order, this order's pages become unavailable */
@@ -1946,7 +1948,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
{
const gfp_t wait = gfp_mask & __GFP_WAIT;
struct page *page = NULL;
- int alloc_flags;
+ int alloc_flags = 0;
unsigned long pages_reclaimed = 0;
unsigned long did_some_progress;
struct task_struct *p = current;
@@ -2078,8 +2080,8 @@ rebalance:
nopage:
if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit()) {
printk(KERN_WARNING "%s: page allocation failure."
- " order:%d, mode:0x%x\n",
- p->comm, order, gfp_mask);
+ " order:%d, mode:0x%x, alloc_flags:0x%x, pflags:0x%x\n",
+ p->comm, order, gfp_mask, alloc_flags, p->flags);
dump_stack();
show_mem();
}
@@ -2414,9 +2416,9 @@ void show_free_areas(void)
"\n",
zone->name,
K(zone_page_state(zone, NR_FREE_PAGES)),
- K(min_wmark_pages(zone)),
- K(low_wmark_pages(zone)),
- K(high_wmark_pages(zone)),
+ K(zone->pages_emerg + min_wmark_pages(zone)),
+ K(zone->pages_emerg + low_wmark_pages(zone)),
+ K(zone->pages_emerg + high_wmark_pages(zone)),
K(zone_page_state(zone, NR_ACTIVE_ANON)),
K(zone_page_state(zone, NR_INACTIVE_ANON)),
K(zone_page_state(zone, NR_ACTIVE_FILE)),
@@ -4779,7 +4781,7 @@ static void calculate_totalreserve_pages(void)
}

/* we treat the high watermark as reserved pages. */
- max += high_wmark_pages(zone);
+ max += high_wmark_pages(zone) + zone->pages_emerg;

if (max > zone->present_pages)
max = zone->present_pages;
@@ -4837,7 +4839,8 @@ static void setup_per_zone_lowmem_reserve(void)
*/
static void __setup_per_zone_wmarks(void)
{
- unsigned long pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
+ unsigned pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
+ unsigned pages_emerg = var_free_kbytes >> (PAGE_SHIFT - 10);
unsigned long lowmem_pages = 0;
struct zone *zone;
unsigned long flags;
@@ -4849,11 +4852,13 @@ static void __setup_per_zone_wmarks(void)
}

for_each_zone(zone) {
- u64 tmp;
+ u64 tmp, tmp_emerg;

spin_lock_irqsave(&zone->lock, flags);
tmp = (u64)pages_min * zone->present_pages;
do_div(tmp, lowmem_pages);
+ tmp_emerg = (u64)pages_emerg * zone->present_pages;
+ do_div(tmp_emerg, lowmem_pages);
if (is_highmem(zone)) {
/*
* __GFP_HIGH and PF_MEMALLOC allocations usually don't
@@ -4872,12 +4877,14 @@ static void __setup_per_zone_wmarks(void)
if (min_pages > 128)
min_pages = 128;
zone->watermark[WMARK_MIN] = min_pages;
+ zone->pages_emerg = 0;
} else {
/*
* If it's a lowmem zone, reserve a number of pages
* proportionate to the zone's size.
*/
zone->watermark[WMARK_MIN] = tmp;
+ zone->pages_emerg = tmp_emerg;
}

zone->watermark[WMARK_LOW] = min_wmark_pages(zone) + (tmp >> 2);
@@ -4942,6 +4949,63 @@ void setup_per_zone_wmarks(void)
spin_unlock_irqrestore(&min_free_lock, flags);
}

+static void __adjust_memalloc_reserve(int pages)
+{
+ var_free_kbytes += pages << (PAGE_SHIFT - 10);
+ BUG_ON(var_free_kbytes < 0);
+ setup_per_zone_wmarks();
+}
+
+static int test_reserve_limits(void)
+{
+ struct zone *zone;
+ int node;
+
+ for_each_zone(zone)
+ wakeup_kswapd(zone, 0);
+
+ for_each_online_node(node) {
+ struct page *page = alloc_pages_node(node, GFP_KERNEL, 0);
+ if (!page)
+ return -ENOMEM;
+
+ __free_page(page);
+ }
+
+ return 0;
+}
+
+/**
+ * adjust_memalloc_reserve - adjust the memalloc reserve
+ * @pages: number of pages to add
+ *
+ * It adds a number of pages to the memalloc reserve; if
+ * the number was positive it kicks reclaim into action to
+ * satisfy the higher watermarks.
+ *
+ * returns -ENOMEM when it failed to satisfy the watermarks.
+ */
+int adjust_memalloc_reserve(int pages)
+{
+ int err = 0;
+
+ mutex_lock(&var_free_mutex);
+ __adjust_memalloc_reserve(pages);
+ if (pages > 0) {
+ err = test_reserve_limits();
+ if (err) {
+ __adjust_memalloc_reserve(-pages);
+ goto unlock;
+ }
+ }
+ printk(KERN_DEBUG "Emergency reserve: %d\n", var_free_kbytes);
+
+unlock:
+ mutex_unlock(&var_free_mutex);
+ return err;
+}
+EXPORT_SYMBOL_GPL(adjust_memalloc_reserve);
+
/*
* Initialise min_free_kbytes.
*
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 15a14b1..af91d5c 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -814,9 +814,9 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
"\n spanned %lu"
"\n present %lu",
zone_page_state(zone, NR_FREE_PAGES),
- min_wmark_pages(zone),
- low_wmark_pages(zone),
- high_wmark_pages(zone),
+ zone->pages_emerg + min_wmark_pages(zone),
+ zone->pages_emerg + low_wmark_pages(zone),
+ zone->pages_emerg + high_wmark_pages(zone),
zone->pages_scanned,
zone->spanned_pages,
zone->present_pages);
--
1.7.1.1


2010-07-13 10:18:32

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 09/30] mm: system wide ALLOC_NO_WATERMARK

From 6d9ae36dfe34cdbc6e1fc52e6db0b27286eb4b58 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 10:46:50 +0800
Subject: [PATCH 09/30] mm: system wide ALLOC_NO_WATERMARK

The reserve is proportionally distributed over all (!highmem) zones in the
system. So we need to allow an emergency allocation access to all zones. In
order to do that we need to break out of any mempolicy boundaries we might
have.

In my opinion that does not break mempolicies as those are user oriented
and not system oriented. That is, system allocations are not guaranteed to be
within mempolicy boundaries. For instance IRQs don't even have a mempolicy.

So breaking out of mempolicy boundaries for 'rare' emergency allocations,
which are always system allocations (as opposed to user ones), is ok.

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
mm/page_alloc.c | 5 +++++
1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6c85e9e..21d64f7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1995,6 +1995,11 @@ restart:
rebalance:
/* Allocate without watermarks if the context allows */
if (alloc_flags & ALLOC_NO_WATERMARKS) {
+ /*
+ * break out mempolicy boundaries
+ */
+ zonelist = node_zonelist(numa_node_id(), gfp_mask);
+
page = __alloc_pages_high_priority(gfp_mask, order,
zonelist, high_zoneidx, nodemask,
preferred_zone, migratetype);
--
1.7.1.1


2010-07-13 10:18:43

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 10/30] mm: __GFP_MEMALLOC

From 78618cc83e6a21b876c30a8fb3940ccc1f5b99e1 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 10:49:16 +0800
Subject: [PATCH 10/30] mm: __GFP_MEMALLOC

__GFP_MEMALLOC will allow the allocation to disregard the watermarks,
much like PF_MEMALLOC.

It allows one to pass along the memalloc state in object-related allocation
flags as opposed to task-related flags, such as sk->sk_allocation.
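
For example (hypothetical caller), a receive-path allocation that must be able
to dip into the reserves regardless of which task it runs under becomes:

	/* the object says "memalloc"; no PF_MEMALLOC or task context required */
	data = kmalloc(size, GFP_ATOMIC | __GFP_MEMALLOC);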

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/linux/gfp.h | 3 ++-
mm/page_alloc.c | 4 +++-
2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 975609c..c608e26 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -47,6 +47,7 @@ struct vm_area_struct;
#define __GFP_REPEAT ((__force gfp_t)0x400u) /* See above */
#define __GFP_NOFAIL ((__force gfp_t)0x800u) /* See above */
#define __GFP_NORETRY ((__force gfp_t)0x1000u)/* See above */
+#define __GFP_MEMALLOC ((__force gfp_t)0x2000u)/* Use emergency reserves */
#define __GFP_COMP ((__force gfp_t)0x4000u)/* Add compound page metadata */
#define __GFP_ZERO ((__force gfp_t)0x8000u)/* Return zeroed page on success */
#define __GFP_NOMEMALLOC ((__force gfp_t)0x10000u) /* Don't use emergency reserves */
@@ -98,7 +99,7 @@ struct vm_area_struct;
/* Control page allocator reclaim behavior */
#define GFP_RECLAIM_MASK (__GFP_WAIT|__GFP_HIGH|__GFP_IO|__GFP_FS|\
__GFP_NOWARN|__GFP_REPEAT|__GFP_NOFAIL|\
- __GFP_NORETRY|__GFP_NOMEMALLOC)
+ __GFP_NORETRY|__GFP_MEMALLOC|__GFP_NOMEMALLOC)

/* Control slab gfp mask during early boot */
#define GFP_BOOT_MASK (__GFP_BITS_MASK & ~(__GFP_WAIT|__GFP_IO|__GFP_FS))
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 21d64f7..f7d3060 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1930,7 +1930,9 @@ int gfp_to_alloc_flags(gfp_t gfp_mask)
alloc_flags |= ALLOC_HARDER;

if (likely(!(gfp_mask & __GFP_NOMEMALLOC))) {
- if (!in_irq() && (p->flags & PF_MEMALLOC))
+ if (gfp_mask & __GFP_MEMALLOC)
+ alloc_flags |= ALLOC_NO_WATERMARKS;
+ else if (!in_irq() && (p->flags & PF_MEMALLOC))
alloc_flags |= ALLOC_NO_WATERMARKS;
else if (!in_interrupt() &&
unlikely(test_thread_flag(TIF_MEMDIE)))
--
1.7.1.1


2010-07-13 10:18:54

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 11/30] mm: memory reserve management

From c598d3bbb8b5e900ad0f8397b1acbe922003ac5d Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 11:01:48 +0800
Subject: [PATCH 11/30] mm: memory reserve management

Generic reserve management code.

It provides methods to reserve and charge. Upon this, generic alloc/free style
reserve pools could be built, which could fully replace mempool_t
functionality.

It should also allow for a Banker's algorithm replacement of __GFP_NOFAIL.
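
Purely as an illustration (foo_cachep and the object count are made up), a
user of this API would look something like:

	static struct mem_reserve foo_reserve;
	int emerg;
	void *obj;

	mem_reserve_init(&foo_reserve, "foo objects", &mem_reserve_root);
	mem_reserve_kmem_cache_set(&foo_reserve, foo_cachep, 32);

	obj = kmem_cache_alloc_reserve(foo_cachep, GFP_ATOMIC, -1,
				       &foo_reserve, &emerg);
	...
	kmem_cache_free_reserve(foo_cachep, obj, &foo_reserve, emerg);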

[[email protected]: build fix for CONFIG_SLAB]
Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/linux/reserve.h | 198 +++++++++++++++
include/linux/slab.h | 25 +-
mm/Makefile | 2 +-
mm/reserve.c | 637 +++++++++++++++++++++++++++++++++++++++++++++++
mm/slub.c | 2 +-
5 files changed, 851 insertions(+), 13 deletions(-)
create mode 100644 include/linux/reserve.h
create mode 100644 mm/reserve.c

diff --git a/include/linux/reserve.h b/include/linux/reserve.h
new file mode 100644
index 0000000..e1d3dbb
--- /dev/null
+++ b/include/linux/reserve.h
@@ -0,0 +1,198 @@
+/*
+ * Memory reserve management.
+ *
+ * Copyright (C) 2007-2008 Red Hat, Inc.,
+ * Peter Zijlstra <[email protected]>
+ *
+ * This file contains the public data structure and API definitions.
+ */
+
+#ifndef _LINUX_RESERVE_H
+#define _LINUX_RESERVE_H
+
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/wait.h>
+#include <linux/slab.h>
+
+struct mem_reserve {
+ struct mem_reserve *parent;
+ struct list_head children;
+ struct list_head siblings;
+
+ const char *name;
+
+ long pages;
+ long limit;
+ long usage;
+ spinlock_t lock; /* protects limit and usage */
+
+ wait_queue_head_t waitqueue;
+};
+
+extern struct mem_reserve mem_reserve_root;
+
+void mem_reserve_init(struct mem_reserve *res, const char *name,
+ struct mem_reserve *parent);
+int mem_reserve_connect(struct mem_reserve *new_child,
+ struct mem_reserve *node);
+void mem_reserve_disconnect(struct mem_reserve *node);
+
+int mem_reserve_pages_set(struct mem_reserve *res, long pages);
+int mem_reserve_pages_add(struct mem_reserve *res, long pages);
+int mem_reserve_pages_charge(struct mem_reserve *res, long pages);
+
+int mem_reserve_kmalloc_set(struct mem_reserve *res, long bytes);
+int mem_reserve_kmalloc_charge(struct mem_reserve *res, long bytes);
+
+struct kmem_cache;
+
+int mem_reserve_kmem_cache_set(struct mem_reserve *res,
+ struct kmem_cache *s,
+ int objects);
+int mem_reserve_kmem_cache_charge(struct mem_reserve *res,
+ struct kmem_cache *s, long objs);
+
+void *___kmalloc_reserve(size_t size, gfp_t flags, int node, unsigned long ip,
+ struct mem_reserve *res, int *emerg);
+
+static inline
+void *__kmalloc_reserve(size_t size, gfp_t flags, int node, unsigned long ip,
+ struct mem_reserve *res, int *emerg)
+{
+ void *obj;
+
+ obj = __kmalloc_node_track_caller(size,
+ flags | __GFP_NOMEMALLOC | __GFP_NOWARN, node, ip);
+ if (!obj)
+ obj = ___kmalloc_reserve(size, flags, node, ip, res, emerg);
+
+ return obj;
+}
+
+/**
+ * kmalloc_reserve() - kmalloc() and charge against @res for @emerg allocations
+ * @size - size of the requested memory region
+ * @gfp - allocation flags to use for this allocation
+ * @node - preferred memory node for this allocation
+ * @res - reserve to charge emergency allocations against
+ * @emerg - bit 0 is set when the allocation was an emergency allocation
+ *
+ * Returns NULL on failure
+ */
+#define kmalloc_reserve(size, gfp, node, res, emerg) \
+ __kmalloc_reserve(size, gfp, node, \
+ _RET_IP_, res, emerg)
+
+void __kfree_reserve(void *obj, struct mem_reserve *res, int emerg);
+
+/**
+ * kfree_reserve() - kfree() and uncharge against @res for @emerg allocations
+ * @obj - memory to free
+ * @res - reserve to uncharge emergency allocations from
+ * @emerg - was this an emergency allocation
+ */
+static inline
+void kfree_reserve(void *obj, struct mem_reserve *res, int emerg)
+{
+ if (unlikely(obj && res && emerg))
+ __kfree_reserve(obj, res, emerg);
+ else
+ kfree(obj);
+}
+
+void *__kmem_cache_alloc_reserve(struct kmem_cache *s, gfp_t flags, int node,
+ struct mem_reserve *res, int *emerg);
+
+/**
+ * kmem_cache_alloc_reserve() - kmem_cache_alloc() and charge against @res
+ * @s - kmem_cache to allocate from
+ * @gfp - allocation flags to use for this allocation
+ * @node - preferred memory node for this allocation
+ * @res - reserve to charge emergency allocations against
+ * @emerg - bit 0 is set when the allocation was an emergency allocation
+ *
+ * Returns NULL on failure
+ */
+static inline
+void *kmem_cache_alloc_reserve(struct kmem_cache *s, gfp_t flags, int node,
+ struct mem_reserve *res, int *emerg)
+{
+ void *obj;
+
+ obj = kmem_cache_alloc_node(s,
+ flags | __GFP_NOMEMALLOC | __GFP_NOWARN, node);
+ if (!obj)
+ obj = __kmem_cache_alloc_reserve(s, flags, node, res, emerg);
+
+ return obj;
+}
+
+void __kmem_cache_free_reserve(struct kmem_cache *s, void *obj,
+ struct mem_reserve *res, int emerg);
+
+/**
+ * kmem_cache_free_reserve() - kmem_cache_free() and uncharge against @res
+ * @s - kmem_cache to free to
+ * @obj - memory to free
+ * @res - reserve to uncharge emergency allocations from
+ * @emerg - was this an emergency allocation
+ */
+static inline
+void kmem_cache_free_reserve(struct kmem_cache *s, void *obj,
+ struct mem_reserve *res, int emerg)
+{
+ if (unlikely(obj && res && emerg))
+ __kmem_cache_free_reserve(s, obj, res, emerg);
+ else
+ kmem_cache_free(s, obj);
+}
+
+struct page *__alloc_pages_reserve(int node, gfp_t flags, int order,
+ struct mem_reserve *res, int *emerg);
+
+/**
+ * alloc_pages_reserve() - alloc_pages() and charge against @res
+ * @node - preferred memory node for this allocation
+ * @gfp - allocation flags to use for this allocation
+ * @order - page order
+ * @res - reserve to charge emergency allocations against
+ * @emerg - bit 0 is set when the allocation was an emergency allocation
+ *
+ * Returns NULL on failure
+ */
+static inline
+struct page *alloc_pages_reserve(int node, gfp_t flags, int order,
+ struct mem_reserve *res, int *emerg)
+{
+ struct page *page;
+
+ page = alloc_pages_node(node,
+ flags | __GFP_NOMEMALLOC | __GFP_NOWARN, order);
+ if (!page)
+ page = __alloc_pages_reserve(node, flags, order, res, emerg);
+
+ return page;
+}
+
+void __free_pages_reserve(struct page *page, int order,
+ struct mem_reserve *res, int emerg);
+
+/**
+ * free_pages_reserve() - __free_pages() and uncharge against @res
+ * @page - page to free
+ * @order - page order
+ * @res - reserve to uncharge emergency allocations from
+ * @emerg - was this an emergency allocation
+ */
+static inline
+void free_pages_reserve(struct page *page, int order,
+ struct mem_reserve *res, int emerg)
+{
+ if (unlikely(page && res && emerg))
+ __free_pages_reserve(page, order, res, emerg);
+ else
+ __free_pages(page, order);
+}
+
+#endif /* _LINUX_RESERVE_H */
diff --git a/include/linux/slab.h b/include/linux/slab.h
index b57b9ca..68195a7 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -272,15 +272,17 @@ static inline void *kmem_cache_alloc_node(struct kmem_cache *cachep,
* allocator where we care about the real place the memory allocation
* request comes from.
*/
-#if defined(CONFIG_DEBUG_SLAB) || defined(CONFIG_SLUB)
+#if defined(CONFIG_DEBUG_SLAB) || defined(CONFIG_SLUB) || \
+ (defined(CONFIG_SLAB) && defined(CONFIG_TRACING))
extern void *__kmalloc_track_caller(size_t, gfp_t, unsigned long);
-#define kmalloc_track_caller(size, flags) \
- __kmalloc_track_caller(size, flags, _RET_IP_)
#else
-#define kmalloc_track_caller(size, flags) \
+#define __kmalloc_track_caller(size, flags, ip) \
__kmalloc(size, flags)
#endif /* DEBUG_SLAB */

+#define kmalloc_track_caller(size, flags) \
+ __kmalloc_track_caller(size, flags, _RET_IP_)
+
#ifdef CONFIG_NUMA
/*
* kmalloc_node_track_caller is a special version of kmalloc_node that
@@ -290,23 +292,24 @@ extern void *__kmalloc_track_caller(size_t, gfp_t, unsigned long);
* standard allocator where we care about the real place the memory
* allocation request comes from.
*/
-#if defined(CONFIG_DEBUG_SLAB) || defined(CONFIG_SLUB)
+#if defined(CONFIG_DEBUG_SLAB) || defined(CONFIG_SLUB) || \
+ (defined(CONFIG_SLAB) && defined(CONFIG_TRACING))
extern void *__kmalloc_node_track_caller(size_t, gfp_t, int, unsigned long);
-#define kmalloc_node_track_caller(size, flags, node) \
- __kmalloc_node_track_caller(size, flags, node, \
- _RET_IP_)
#else
-#define kmalloc_node_track_caller(size, flags, node) \
+#define __kmalloc_node_track_caller(size, flags, node, ip) \
__kmalloc_node(size, flags, node)
#endif

#else /* CONFIG_NUMA */

-#define kmalloc_node_track_caller(size, flags, node) \
- kmalloc_track_caller(size, flags)
+#define __kmalloc_node_track_caller(size, flags, node, ip) \
+ __kmalloc_track_caller(size, flags, ip)

#endif /* CONFIG_NUMA */

+#define kmalloc_node_track_caller(size, flags, node) \
+ __kmalloc_node_track_caller(size, flags, node, \
+ _RET_IP_)
/*
* Shortcuts
*/
diff --git a/mm/Makefile b/mm/Makefile
index 8982504..d1717bb 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -11,7 +11,7 @@ obj-y := bootmem.o filemap.o mempool.o oom_kill.o fadvise.o \
maccess.o page_alloc.o page-writeback.o \
readahead.o swap.o truncate.o vmscan.o shmem.o \
prio_tree.o util.o mmzone.o vmstat.o backing-dev.o \
- page_isolation.o mm_init.o mmu_context.o \
+ page_isolation.o mm_init.o mmu_context.o reserve.o \
$(mmu-y)
obj-y += init-mm.o

diff --git a/mm/reserve.c b/mm/reserve.c
new file mode 100644
index 0000000..2a00d72
--- /dev/null
+++ b/mm/reserve.c
@@ -0,0 +1,637 @@
+/*
+ * Memory reserve management.
+ *
+ * Copyright (C) 2007-2008, Red Hat, Inc.,
+ * Peter Zijlstra <[email protected]>
+ *
+ * Description:
+ *
+ * Manage a set of memory reserves.
+ *
+ * A memory reserve is a reserve for a specified number of objects of a specified
+ * size. Since memory is managed in pages, this reserve demand is then
+ * translated into a page unit.
+ *
+ * So each reserve has a specified object limit, an object usage count and a
+ * number of pages required to back these objects.
+ *
+ * Usage is charged against a reserve, if the charge fails, the resource must
+ * not be allocated/used.
+ *
+ * The reserves are managed in a tree, and the resource demands (pages and
+ * limit) are propagated up the tree. Obviously the object limit will be
+ * meaningless as soon as the unit starts mixing, but the required page reserve
+ * (being of one unit) is still valid at the root.
+ *
+ * It is the page demand of the root node that is used to set the global
+ * reserve (adjust_memalloc_reserve() which sets zone->pages_emerg).
+ *
+ * As long as a subtree has the same usage unit, an aggregate node can be used
+ * to charge against, instead of the leaf nodes. However, do be consistent with
+ * who is charged; resource usage is not propagated up the tree (for
+ * performance reasons).
+ */
+
+#include <linux/reserve.h>
+#include <linux/mutex.h>
+#include <linux/mmzone.h>
+#include <linux/log2.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include "internal.h"
+
+static DEFINE_MUTEX(mem_reserve_mutex);
+
+/**
+ * @mem_reserve_root - the global reserve root
+ *
+ * The global reserve is empty and has no limit unit; it merely
+ * acts as an aggregation point for reserves and an interface to
+ * adjust_memalloc_reserve().
+ */
+struct mem_reserve mem_reserve_root = {
+ .children = LIST_HEAD_INIT(mem_reserve_root.children),
+ .siblings = LIST_HEAD_INIT(mem_reserve_root.siblings),
+ .name = "total reserve",
+ .lock = __SPIN_LOCK_UNLOCKED(mem_reserve_root.lock),
+ .waitqueue = __WAIT_QUEUE_HEAD_INITIALIZER(mem_reserve_root.waitqueue),
+};
+EXPORT_SYMBOL_GPL(mem_reserve_root);
+
+/**
+ * mem_reserve_init() - initialize a memory reserve object
+ * @res - the new reserve object
+ * @name - a name for this reserve
+ * @parent - when non NULL, the parent to connect to.
+ */
+void mem_reserve_init(struct mem_reserve *res, const char *name,
+ struct mem_reserve *parent)
+{
+ memset(res, 0, sizeof(*res));
+ INIT_LIST_HEAD(&res->children);
+ INIT_LIST_HEAD(&res->siblings);
+ res->name = name;
+ spin_lock_init(&res->lock);
+ init_waitqueue_head(&res->waitqueue);
+
+ if (parent)
+ mem_reserve_connect(res, parent);
+}
+EXPORT_SYMBOL_GPL(mem_reserve_init);
+
+/*
+ * propagate the pages and limit changes up the (sub)tree.
+ */
+static void __calc_reserve(struct mem_reserve *res, long pages, long limit)
+{
+ unsigned long flags;
+
+ for ( ; res; res = res->parent) {
+ res->pages += pages;
+
+ if (limit) {
+ spin_lock_irqsave(&res->lock, flags);
+ res->limit += limit;
+ spin_unlock_irqrestore(&res->lock, flags);
+ }
+ }
+}
+
+/**
+ * __mem_reserve_add() - primitive to change the size of a reserve
+ * @res - reserve to change
+ * @pages - page delta
+ * @limit - usage limit delta
+ *
+ * Returns -ENOMEM when a size increase is not possible atm.
+ */
+static int __mem_reserve_add(struct mem_reserve *res, long pages, long limit)
+{
+ int ret = 0;
+ long reserve;
+
+ /*
+ * This looks more complex than need be, that is because we handle
+ * the case where @res isn't actually connected to mem_reserve_root.
+ *
+ * So, by propagating the new pages up the (sub)tree and computing
+ * the difference in mem_reserve_root.pages we find if this action
+ * affects the actual reserve.
+ *
+ * The (partial) propagation also makes that mem_reserve_connect()
+ * needs only look at the direct child, since each disconnected
+ * sub-tree is fully up-to-date.
+ */
+ reserve = mem_reserve_root.pages;
+ __calc_reserve(res, pages, 0);
+ reserve = mem_reserve_root.pages - reserve;
+
+ if (reserve) {
+ ret = adjust_memalloc_reserve(reserve);
+ if (ret)
+ __calc_reserve(res, -pages, 0);
+ }
+
+ /*
+ * Delay updating the limits until we've acquired the resources to
+ * back it.
+ */
+ if (!ret)
+ __calc_reserve(res, 0, limit);
+
+ return ret;
+}
+
+/**
+ * __mem_reserve_charge() - primitive to charge object usage of a reserve
+ * @res - reserve to charge
+ * @charge - size of the charge
+ *
+ * Returns non-zero on success, zero on failure.
+ */
+static
+int __mem_reserve_charge(struct mem_reserve *res, long charge)
+{
+ unsigned long flags;
+ int ret = 0;
+
+ spin_lock_irqsave(&res->lock, flags);
+ if (charge < 0 || res->usage + charge < res->limit) {
+ res->usage += charge;
+ if (unlikely(res->usage < 0))
+ res->usage = 0;
+ ret = 1;
+ }
+ if (charge < 0)
+ wake_up_all(&res->waitqueue);
+ spin_unlock_irqrestore(&res->lock, flags);
+
+ return ret;
+}
+
+/**
+ * mem_reserve_connect() - connect a reserve to another in a child-parent relation
+ * @new_child - the reserve node to connect (child)
+ * @node - the reserve node to connect to (parent)
+ *
+ * Connecting a node results in an increase of the reserve by the amount of
+ * pages in @new_child->pages if @node has a connection to mem_reserve_root.
+ *
+ * Returns -ENOMEM when the new connection would increase the reserve (parent
+ * is connected to mem_reserve_root) and there is no memory to do so.
+ *
+ * On error, the child is _NOT_ connected.
+ */
+int mem_reserve_connect(struct mem_reserve *new_child, struct mem_reserve *node)
+{
+ int ret;
+
+ WARN_ON(!new_child->name);
+
+ mutex_lock(&mem_reserve_mutex);
+ if (new_child->parent) {
+ ret = -EEXIST;
+ goto unlock;
+ }
+ new_child->parent = node;
+ list_add(&new_child->siblings, &node->children);
+ ret = __mem_reserve_add(node, new_child->pages, new_child->limit);
+ if (ret) {
+ new_child->parent = NULL;
+ list_del_init(&new_child->siblings);
+ }
+unlock:
+ mutex_unlock(&mem_reserve_mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mem_reserve_connect);
+
+/**
+ * mem_reserve_disconnect() - sever a nodes connection to the reserve tree
+ * @node - the node to disconnect
+ *
+ * Disconnecting a node results in a reduction of the reserve by @node->pages
+ * if node had a connection to mem_reserve_root.
+ */
+void mem_reserve_disconnect(struct mem_reserve *node)
+{
+ int ret;
+
+ BUG_ON(!node->parent);
+
+ mutex_lock(&mem_reserve_mutex);
+ if (!node->parent) {
+ ret = -ENOENT;
+ goto unlock;
+ }
+ ret = __mem_reserve_add(node->parent, -node->pages, -node->limit);
+ if (!ret) {
+ node->parent = NULL;
+ list_del_init(&node->siblings);
+ }
+unlock:
+ mutex_unlock(&mem_reserve_mutex);
+
+ /*
+ * We cannot fail to shrink the reserves, can we?
+ */
+ WARN_ON(ret);
+}
+EXPORT_SYMBOL_GPL(mem_reserve_disconnect);
+
+#ifdef CONFIG_PROC_FS
+
+/*
+ * Simple output of the reserve tree in: /proc/reserve_info
+ * Example:
+ *
+ * localhost ~ # cat /proc/reserve_info
+ * 1:0 "total reserve" 6232K 0/278581
+ * 2:1 "total network reserve" 6232K 0/278581
+ * 3:2 "network TX reserve" 212K 0/53
+ * 4:3 "protocol TX pages" 212K 0/53
+ * 5:2 "network RX reserve" 6020K 0/278528
+ * 6:5 "IPv4 route cache" 5508K 0/16384
+ * 7:5 "SKB data reserve" 512K 0/262144
+ * 8:7 "IPv4 fragment cache" 512K 0/262144
+ */
+
+static void mem_reserve_show_item(struct seq_file *m, struct mem_reserve *res,
+ unsigned int parent, unsigned int *id)
+{
+ struct mem_reserve *child;
+ unsigned int my_id = ++*id;
+
+ seq_printf(m, "%d:%d \"%s\" %ldK %ld/%ld\n",
+ my_id, parent, res->name,
+ res->pages << (PAGE_SHIFT - 10),
+ res->usage, res->limit);
+
+ list_for_each_entry(child, &res->children, siblings)
+ mem_reserve_show_item(m, child, my_id, id);
+}
+
+static int mem_reserve_show(struct seq_file *m, void *v)
+{
+ unsigned int ident = 0;
+
+ mutex_lock(&mem_reserve_mutex);
+ mem_reserve_show_item(m, &mem_reserve_root, ident, &ident);
+ mutex_unlock(&mem_reserve_mutex);
+
+ return 0;
+}
+
+static int mem_reserve_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, mem_reserve_show, NULL);
+}
+
+static const struct file_operations mem_reserve_operations = {
+ .open = mem_reserve_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static __init int mem_reserve_proc_init(void)
+{
+ proc_create("reserve_info", S_IRUSR, NULL, &mem_reserve_operations);
+ return 0;
+}
+
+module_init(mem_reserve_proc_init);
+
+#endif
+
+/*
+ * alloc_page helpers
+ */
+
+/**
+ * mem_reserve_pages_set() - set reserves size in pages
+ * @res - reserve to set
+ * @pages - size in pages to set it to
+ *
+ * Returns -ENOMEM when it fails to set the reserve. On failure the old size
+ * is preserved.
+ */
+int mem_reserve_pages_set(struct mem_reserve *res, long pages)
+{
+ int ret;
+
+ mutex_lock(&mem_reserve_mutex);
+ pages -= res->pages;
+ ret = __mem_reserve_add(res, pages, pages * PAGE_SIZE);
+ mutex_unlock(&mem_reserve_mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mem_reserve_pages_set);
+
+/**
+ * mem_reserve_pages_add() - change the size in a relative way
+ * @res - reserve to change
+ * @pages - number of pages to add (or subtract when negative)
+ *
+ * Similar to mem_reserve_pages_set, except that the argument is relative
+ * instead of absolute.
+ *
+ * Returns -ENOMEM when it fails to increase.
+ */
+int mem_reserve_pages_add(struct mem_reserve *res, long pages)
+{
+ int ret;
+
+ mutex_lock(&mem_reserve_mutex);
+ ret = __mem_reserve_add(res, pages, pages * PAGE_SIZE);
+ mutex_unlock(&mem_reserve_mutex);
+
+ return ret;
+}
+
+/**
+ * mem_reserve_pages_charge() - charge page usage to a reserve
+ * @res - reserve to charge
+ * @pages - size to charge
+ *
+ * Returns non-zero on success.
+ */
+int mem_reserve_pages_charge(struct mem_reserve *res, long pages)
+{
+ return __mem_reserve_charge(res, pages * PAGE_SIZE);
+}
+EXPORT_SYMBOL_GPL(mem_reserve_pages_charge);
+
+/*
+ * kmalloc helpers
+ */
+
+/**
+ * mem_reserve_kmalloc_set() - set this reserve to bytes worth of kmalloc
+ * @res - reserve to change
+ * @bytes - size in bytes to reserve
+ *
+ * Returns -ENOMEM on failure.
+ */
+int mem_reserve_kmalloc_set(struct mem_reserve *res, long bytes)
+{
+ int ret;
+ long pages;
+
+ mutex_lock(&mem_reserve_mutex);
+ pages = kmalloc_estimate_bytes(GFP_ATOMIC, bytes);
+ pages -= res->pages;
+ bytes -= res->limit;
+ ret = __mem_reserve_add(res, pages, bytes);
+ mutex_unlock(&mem_reserve_mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mem_reserve_kmalloc_set);
+
+/**
+ * mem_reserve_kmalloc_charge() - charge bytes to a reserve
+ * @res - reserve to charge
+ * @bytes - bytes to charge
+ *
+ * Returns non-zero on success.
+ */
+int mem_reserve_kmalloc_charge(struct mem_reserve *res, long bytes)
+{
+ if (bytes < 0)
+ bytes = -roundup_pow_of_two(-bytes);
+ else
+ bytes = roundup_pow_of_two(bytes);
+
+ return __mem_reserve_charge(res, bytes);
+}
+EXPORT_SYMBOL_GPL(mem_reserve_kmalloc_charge);
+
+/*
+ * kmem_cache helpers
+ */
+
+/**
+ * mem_reserve_kmem_cache_set() - set reserve to @objects worth of kmem_cache_alloc of @s
+ * @res - reserve to set
+ * @s - kmem_cache to reserve from
+ * @objects - number of objects to reserve
+ *
+ * Returns -ENOMEM on failure.
+ */
+int mem_reserve_kmem_cache_set(struct mem_reserve *res, struct kmem_cache *s,
+ int objects)
+{
+ int ret;
+ long pages, bytes;
+
+ mutex_lock(&mem_reserve_mutex);
+ pages = kmem_alloc_estimate(s, GFP_ATOMIC, objects);
+ pages -= res->pages;
+ bytes = objects * kmem_cache_size(s) - res->limit;
+ ret = __mem_reserve_add(res, pages, bytes);
+ mutex_unlock(&mem_reserve_mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mem_reserve_kmem_cache_set);
+
+/**
+ * mem_reserve_kmem_cache_charge() - charge (or uncharge) usage of objs
+ * @res - reserve to charge
+ * @objs - objects to charge for
+ *
+ * Returns non-zero on success.
+ */
+int mem_reserve_kmem_cache_charge(struct mem_reserve *res, struct kmem_cache *s,
+ long objs)
+{
+ return __mem_reserve_charge(res, objs * kmem_cache_size(s));
+}
+EXPORT_SYMBOL_GPL(mem_reserve_kmem_cache_charge);
+
+/*
+ * Alloc wrappers.
+ *
+ * Actual usage is commented in linux/reserve.h where the interface functions
+ * live. Furthermore, the code is 3 instances of the same paradigm, hence only
+ * the first contains extensive comments.
+ */
+
+/*
+ * kmalloc/kfree
+ */
+
+void *___kmalloc_reserve(size_t size, gfp_t flags, int node, unsigned long ip,
+ struct mem_reserve *res, int *emerg)
+{
+ void *obj;
+ gfp_t gfp;
+
+ /*
+ * Try a regular allocation, when that fails and we're not entitled
+ * to the reserves, fail.
+ */
+ gfp = flags | __GFP_NOMEMALLOC | __GFP_NOWARN;
+ obj = __kmalloc_node_track_caller(size, gfp, node, ip);
+
+ if (obj || !(gfp_to_alloc_flags(flags) & ALLOC_NO_WATERMARKS))
+ goto out;
+
+ /*
+ * If we were given a reserve to charge against, try that.
+ */
+ if (res && !mem_reserve_kmalloc_charge(res, size)) {
+ /*
+ * If we failed to charge and we're not allowed to wait for
+ * it to succeed, bail.
+ */
+ if (!(flags & __GFP_WAIT))
+ goto out;
+
+ /*
+ * Wait for a successful charge against the reserve. All
+ * uncharge operations against this reserve will wake us up.
+ */
+ wait_event(res->waitqueue,
+ mem_reserve_kmalloc_charge(res, size));
+
+ /*
+ * After waiting for it, again try a regular allocation.
+ * Pressure could have lifted during our sleep. If this
+ * succeeds, uncharge the reserve.
+ */
+ obj = __kmalloc_node_track_caller(size, gfp, node, ip);
+ if (obj) {
+ mem_reserve_kmalloc_charge(res, -size);
+ goto out;
+ }
+ }
+
+ /*
+ * Regular allocation failed, and we've successfully charged our
+ * requested usage against the reserve. Do the emergency allocation.
+ */
+ obj = __kmalloc_node_track_caller(size, flags, node, ip);
+ WARN_ON(!obj);
+ if (emerg)
+ *emerg = 1;
+
+out:
+ return obj;
+}
+
+void __kfree_reserve(void *obj, struct mem_reserve *res, int emerg)
+{
+ /*
+ * ksize gives the full allocated size vs the requested size we used to
+ * charge; however since we round up to the nearest power of two, this
+ * should all work nicely.
+ */
+ size_t size = ksize(obj);
+
+ kfree(obj);
+ /*
+ * Free before uncharge, this ensures memory is actually present when
+ * a subsequent charge succeeds.
+ */
+ mem_reserve_kmalloc_charge(res, -size);
+}
+
+/*
+ * kmem_cache_alloc/kmem_cache_free
+ */
+
+void *__kmem_cache_alloc_reserve(struct kmem_cache *s, gfp_t flags, int node,
+ struct mem_reserve *res, int *emerg)
+{
+ void *obj;
+ gfp_t gfp;
+
+ gfp = flags | __GFP_NOMEMALLOC | __GFP_NOWARN;
+ obj = kmem_cache_alloc_node(s, gfp, node);
+
+ if (obj || !(gfp_to_alloc_flags(flags) & ALLOC_NO_WATERMARKS))
+ goto out;
+
+ if (res && !mem_reserve_kmem_cache_charge(res, s, 1)) {
+ if (!(flags & __GFP_WAIT))
+ goto out;
+
+ wait_event(res->waitqueue,
+ mem_reserve_kmem_cache_charge(res, s, 1));
+
+ obj = kmem_cache_alloc_node(s, gfp, node);
+ if (obj) {
+ mem_reserve_kmem_cache_charge(res, s, -1);
+ goto out;
+ }
+ }
+
+ obj = kmem_cache_alloc_node(s, flags, node);
+ WARN_ON(!obj);
+ if (emerg)
+ *emerg = 1;
+
+out:
+ return obj;
+}
+
+void __kmem_cache_free_reserve(struct kmem_cache *s, void *obj,
+ struct mem_reserve *res, int emerg)
+{
+ kmem_cache_free(s, obj);
+ mem_reserve_kmem_cache_charge(res, s, -1);
+}
+
+/*
+ * alloc_pages/free_pages
+ */
+
+struct page *__alloc_pages_reserve(int node, gfp_t flags, int order,
+ struct mem_reserve *res, int *emerg)
+{
+ struct page *page;
+ gfp_t gfp;
+ long pages = 1 << order;
+
+ gfp = flags | __GFP_NOMEMALLOC | __GFP_NOWARN;
+ page = alloc_pages_node(node, gfp, order);
+
+ if (page || !(gfp_to_alloc_flags(flags) & ALLOC_NO_WATERMARKS))
+ goto out;
+
+ if (res && !mem_reserve_pages_charge(res, pages)) {
+ if (!(flags & __GFP_WAIT))
+ goto out;
+
+ wait_event(res->waitqueue,
+ mem_reserve_pages_charge(res, pages));
+
+ page = alloc_pages_node(node, gfp, order);
+ if (page) {
+ mem_reserve_pages_charge(res, -pages);
+ goto out;
+ }
+ }
+
+ page = alloc_pages_node(node, flags, order);
+ WARN_ON(!page);
+ if (emerg)
+ *emerg = 1;
+
+out:
+ return page;
+}
+
+void __free_pages_reserve(struct page *page, int order,
+ struct mem_reserve *res, int emerg)
+{
+ __free_pages(page, order);
+ mem_reserve_pages_charge(res, -(1 << order));
+}
diff --git a/mm/slub.c b/mm/slub.c
index 056545e..ffcd315 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2799,6 +2799,7 @@ void *__kmalloc(size_t size, gfp_t flags)
}
EXPORT_SYMBOL(__kmalloc);

+#ifdef CONFIG_NUMA
static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
{
struct page *page;
@@ -2813,7 +2814,6 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
return ptr;
}

-#ifdef CONFIG_NUMA
void *__kmalloc_node(size_t size, gfp_t flags, int node)
{
struct kmem_cache *s;
--
1.7.1.1


2010-07-13 10:19:06

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 12/30] selinux: tag avc cache alloc as non-critical

From 6c3a91091b2910c23908a9f9953efcf3df14e522 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 11:02:41 +0800
Subject: [PATCH 12/30] selinux: tag avc cache alloc as non-critical

Failing to allocate a cache entry will only harm performance not correctness.
Do not consume valuable reserve pages for something like that.

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
security/selinux/avc.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/security/selinux/avc.c b/security/selinux/avc.c
index 3662b0f..9029395 100644
--- a/security/selinux/avc.c
+++ b/security/selinux/avc.c
@@ -284,7 +284,7 @@ static struct avc_node *avc_alloc_node(void)
{
struct avc_node *node;

- node = kmem_cache_zalloc(avc_node_cachep, GFP_ATOMIC);
+ node = kmem_cache_zalloc(avc_node_cachep, GFP_ATOMIC|__GFP_NOMEMALLOC);
if (!node)
goto out;

--
1.7.1.1


2010-07-13 10:19:17

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 13/30] net: packet split receive api

From 8d908090b5314bed0c3318d82891b8c3bbf27815 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 11:03:55 +0800
Subject: [PATCH 13/30] net: packet split receive api

Add some packet-split receive hooks.

For one this allows us to do NUMA node affine page allocs. Later on these hooks
will be extended to do emergency reserve allocations for fragments.

Thanks to Jiri Bohac for fixing a bug in bnx2.
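
The driver-side pattern all the hunks below converge on is, schematically:

	page = netdev_alloc_page(netdev);	/* was: alloc_page(GFP_ATOMIC) */
	if (!page)
		return -ENOMEM;
	...
	/* attaches the fragment and updates skb->len/data_len/truesize */
	skb_add_rx_frag(skb, i, page, 0, frag_len);
	...
	netdev_free_page(netdev, page);		/* was: __free_page(page) */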

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Jiri Bohac <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
drivers/net/bnx2.c | 9 +++------
drivers/net/e1000e/netdev.c | 7 ++-----
drivers/net/igb/igb_main.c | 6 +-----
drivers/net/ixgbe/ixgbe_main.c | 14 ++++++--------
drivers/net/sky2.c | 16 ++++++----------
5 files changed, 18 insertions(+), 34 deletions(-)

diff --git a/drivers/net/bnx2.c b/drivers/net/bnx2.c
index a5dd81f..f6f83d0 100644
--- a/drivers/net/bnx2.c
+++ b/drivers/net/bnx2.c
@@ -2670,7 +2670,7 @@ bnx2_alloc_rx_page(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, u16 index)
struct sw_pg *rx_pg = &rxr->rx_pg_ring[index];
struct rx_bd *rxbd =
&rxr->rx_pg_desc_ring[RX_RING(index)][RX_IDX(index)];
- struct page *page = alloc_page(GFP_ATOMIC);
+ struct page *page = netdev_alloc_page(bp->dev);

if (!page)
return -ENOMEM;
@@ -2700,7 +2700,7 @@ bnx2_free_rx_page(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, u16 index)
pci_unmap_page(bp->pdev, dma_unmap_addr(rx_pg, mapping), PAGE_SIZE,
PCI_DMA_FROMDEVICE);

- __free_page(page);
+ netdev_free_page(bp->dev, page);
rx_pg->page = NULL;
}

@@ -3035,7 +3035,7 @@ bnx2_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, struct sk_buff *skb,
if (i == pages - 1)
frag_len -= 4;

- skb_fill_page_desc(skb, i, rx_pg->page, 0, frag_len);
+ skb_add_rx_frag(skb, i, rx_pg->page, 0, frag_len);
rx_pg->page = NULL;

err = bnx2_alloc_rx_page(bp, rxr,
@@ -3052,9 +3052,6 @@ bnx2_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, struct sk_buff *skb,
PAGE_SIZE, PCI_DMA_FROMDEVICE);

frag_size -= frag_len;
- skb->data_len += frag_len;
- skb->truesize += frag_len;
- skb->len += frag_len;

pg_prod = NEXT_RX_BD(pg_prod);
pg_cons = RX_PG_RING_IDX(NEXT_RX_BD(pg_cons));
diff --git a/drivers/net/e1000e/netdev.c b/drivers/net/e1000e/netdev.c
index 20c5ecf..a381e18 100644
--- a/drivers/net/e1000e/netdev.c
+++ b/drivers/net/e1000e/netdev.c
@@ -604,7 +604,7 @@ static void e1000_alloc_rx_buffers_ps(struct e1000_adapter *adapter,
continue;
}
if (!ps_page->page) {
- ps_page->page = alloc_page(GFP_ATOMIC);
+ ps_page->page = netdev_alloc_page(netdev);
if (!ps_page->page) {
adapter->alloc_rx_buff_failed++;
goto no_buffers;
@@ -1185,11 +1185,8 @@ static bool e1000_clean_rx_irq_ps(struct e1000_adapter *adapter,
dma_unmap_page(&pdev->dev, ps_page->dma, PAGE_SIZE,
DMA_FROM_DEVICE);
ps_page->dma = 0;
- skb_fill_page_desc(skb, j, ps_page->page, 0, length);
+ skb_add_rx_frag(skb, j, ps_page->page, 0, length);
ps_page->page = NULL;
- skb->len += length;
- skb->data_len += length;
- skb->truesize += length;
}

/* strip the ethernet crc, problem is we're using pages now so
diff --git a/drivers/net/igb/igb_main.c b/drivers/net/igb/igb_main.c
index 3881918..7361864 100644
--- a/drivers/net/igb/igb_main.c
+++ b/drivers/net/igb/igb_main.c
@@ -5574,7 +5574,7 @@ static bool igb_clean_rx_irq_adv(struct igb_q_vector *q_vector,
PAGE_SIZE / 2, DMA_FROM_DEVICE);
buffer_info->page_dma = 0;

- skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags,
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
buffer_info->page,
buffer_info->page_offset,
length);
@@ -5584,10 +5584,6 @@ static bool igb_clean_rx_irq_adv(struct igb_q_vector *q_vector,
buffer_info->page = NULL;
else
get_page(buffer_info->page);
-
- skb->len += length;
- skb->data_len += length;
- skb->truesize += length;
}

if (!(staterr & E1000_RXD_STAT_EOP)) {
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index dd46345..60d789c 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -1037,6 +1037,7 @@ static void ixgbe_alloc_rx_buffers(struct ixgbe_adapter *adapter,
int cleaned_count)
{
struct pci_dev *pdev = adapter->pdev;
+ struct net_device *netdev = adapter->netdev;
union ixgbe_adv_rx_desc *rx_desc;
struct ixgbe_rx_buffer *bi;
unsigned int i;
@@ -1050,7 +1051,7 @@ static void ixgbe_alloc_rx_buffers(struct ixgbe_adapter *adapter,
if (!bi->page_dma &&
(rx_ring->flags & IXGBE_RING_RX_PS_ENABLED)) {
if (!bi->page) {
- bi->page = alloc_page(GFP_ATOMIC);
+ bi->page = netdev_alloc_page(netdev);
if (!bi->page) {
adapter->alloc_rx_page_failed++;
goto no_buffers;
@@ -1242,10 +1243,10 @@ static bool ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
dma_unmap_page(&pdev->dev, rx_buffer_info->page_dma,
PAGE_SIZE / 2, DMA_FROM_DEVICE);
rx_buffer_info->page_dma = 0;
- skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags,
- rx_buffer_info->page,
- rx_buffer_info->page_offset,
- upper_len);
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+ rx_buffer_info->page,
+ rx_buffer_info->page_offset,
+ upper_len);

if ((rx_ring->rx_buf_len > (PAGE_SIZE / 2)) ||
(page_count(rx_buffer_info->page) != 1))
@@ -1253,9 +1254,6 @@ static bool ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
else
get_page(rx_buffer_info->page);

- skb->len += upper_len;
- skb->data_len += upper_len;
- skb->truesize += upper_len;
}

i++;
diff --git a/drivers/net/sky2.c b/drivers/net/sky2.c
index c762c6a..5753b8b 100644
--- a/drivers/net/sky2.c
+++ b/drivers/net/sky2.c
@@ -1394,7 +1394,7 @@ static struct sk_buff *sky2_rx_alloc(struct sky2_port *sky2)
skb_reserve(skb, NET_IP_ALIGN);

for (i = 0; i < sky2->rx_nfrags; i++) {
- struct page *page = alloc_page(GFP_ATOMIC);
+ struct page *page = netdev_alloc_page(sky2->netdev);

if (!page)
goto free_partial;
@@ -2353,8 +2353,8 @@ static struct sk_buff *receive_copy(struct sky2_port *sky2,
}

/* Adjust length of skb with fragments to match received data */
-static void skb_put_frags(struct sk_buff *skb, unsigned int hdr_space,
- unsigned int length)
+static void skb_put_frags(struct sky2_port *sky2, struct sk_buff *skb,
+ unsigned int hdr_space, unsigned int length)
{
int i, num_frags;
unsigned int size;
@@ -2371,15 +2371,11 @@ static void skb_put_frags(struct sk_buff *skb, unsigned int hdr_space,

if (length == 0) {
/* don't need this page */
- __free_page(frag->page);
+ netdev_free_page(sky2->netdev, frag->page);
--skb_shinfo(skb)->nr_frags;
} else {
size = min(length, (unsigned) PAGE_SIZE);
-
- frag->size = size;
- skb->data_len += size;
- skb->truesize += size;
- skb->len += size;
+ skb_add_rx_frag(skb, i, frag->page, 0, size);
length -= size;
}
}
@@ -2407,7 +2403,7 @@ static struct sk_buff *receive_new(struct sky2_port *sky2,
*re = nre;

if (skb_shinfo(skb)->nr_frags)
- skb_put_frags(skb, hdr_space, length);
+ skb_put_frags(sky2, skb, hdr_space, length);
else
skb_put(skb, length);
return skb;
--
1.7.1.1


2010-07-13 10:19:28

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 14/30] net: sk_allocation() - concentrate socket related allocations

From 3bc4f5211d8716267891ff85385177f181e418ea Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 11:04:45 +0800
Subject: [PATCH 14/30] net: sk_allocation() - concentrate socket related allocations

Introduce sk_allocation(); this function allows injecting sock-specific flags
into each sock-related allocation.
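
For example (a sketch of the pattern the hunks below apply), a socket-tied
allocation becomes:

	/* sk_allocation() is a pass-through for now; a later patch ORs in
	 * __GFP_MEMALLOC for SOCK_MEMALLOC sockets */
	skb = alloc_skb(MAX_TCP_HEADER, sk_allocation(sk, GFP_ATOMIC));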

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/linux/skbuff.h | 3 +++
include/net/sock.h | 5 +++++
net/ipv4/tcp.c | 3 ++-
net/ipv4/tcp_output.c | 11 ++++++-----
net/ipv6/tcp_ipv6.c | 15 +++++++++++----
5 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index ac74ee0..988a4dc 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1119,6 +1119,9 @@ static inline void skb_fill_page_desc(struct sk_buff *skb, int i,
extern void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page,
int off, int size);

+extern void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page,
+ int off, int size);
+
#define SKB_PAGE_ASSERT(skb) BUG_ON(skb_shinfo(skb)->nr_frags)
#define SKB_FRAG_ASSERT(skb) BUG_ON(skb_has_frags(skb))
#define SKB_LINEAR_ASSERT(skb) BUG_ON(skb_is_nonlinear(skb))
diff --git a/include/net/sock.h b/include/net/sock.h
index 4f26f2f..9ddb37b 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -563,6 +563,11 @@ static inline int sock_flag(struct sock *sk, enum sock_flags flag)
return test_bit(flag, &sk->sk_flags);
}

+static inline gfp_t sk_allocation(struct sock *sk, gfp_t gfp_mask)
+{
+ return gfp_mask;
+}
+
static inline void sk_acceptq_removed(struct sock *sk)
{
sk->sk_ack_backlog--;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 4e6ddfb..8ffe2c8 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -683,7 +683,8 @@ struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp)
/* The TCP header must be at least 32-bit aligned. */
size = ALIGN(size, 4);

- skb = alloc_skb_fclone(size + sk->sk_prot->max_header, gfp);
+ skb = alloc_skb_fclone(size + sk->sk_prot->max_header,
+ sk_allocation(sk, gfp));
if (skb) {
if (sk_wmem_schedule(sk, skb->truesize)) {
/*
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 25ff62e..a5ca337 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2307,7 +2307,7 @@ void tcp_send_fin(struct sock *sk)
/* Socket is locked, keep trying until memory is available. */
for (;;) {
skb = alloc_skb_fclone(MAX_TCP_HEADER,
- sk->sk_allocation);
+ sk_allocation(sk, sk->sk_allocation));
if (skb)
break;
yield();
@@ -2333,7 +2333,7 @@ void tcp_send_active_reset(struct sock *sk, gfp_t priority)
struct sk_buff *skb;

/* NOTE: No TCP options attached and we never retransmit this. */
- skb = alloc_skb(MAX_TCP_HEADER, priority);
+ skb = alloc_skb(MAX_TCP_HEADER, sk_allocation(sk, priority));
if (!skb) {
NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTFAILED);
return;
@@ -2406,7 +2406,8 @@ struct sk_buff *tcp_make_synack(struct sock *sk, struct dst_entry *dst,

if (cvp != NULL && cvp->s_data_constant && cvp->s_data_desired)
s_data_desired = cvp->s_data_desired;
- skb = sock_wmalloc(sk, MAX_TCP_HEADER + 15 + s_data_desired, 1, GFP_ATOMIC);
+ skb = sock_wmalloc(sk, MAX_TCP_HEADER + 15 + s_data_desired, 1,
+ sk_allocation(sk, GFP_ATOMIC));
if (skb == NULL)
return NULL;

@@ -2686,7 +2687,7 @@ void tcp_send_ack(struct sock *sk)
* tcp_transmit_skb() will set the ownership to this
* sock.
*/
- buff = alloc_skb(MAX_TCP_HEADER, GFP_ATOMIC);
+ buff = alloc_skb(MAX_TCP_HEADER, sk_allocation(sk, GFP_ATOMIC));
if (buff == NULL) {
inet_csk_schedule_ack(sk);
inet_csk(sk)->icsk_ack.ato = TCP_ATO_MIN;
@@ -2721,7 +2722,7 @@ static int tcp_xmit_probe_skb(struct sock *sk, int urgent)
struct sk_buff *skb;

/* We don't queue it, tcp_transmit_skb() sets ownership. */
- skb = alloc_skb(MAX_TCP_HEADER, GFP_ATOMIC);
+ skb = alloc_skb(MAX_TCP_HEADER, sk_allocation(sk, GFP_ATOMIC));
if (skb == NULL)
return -1;

diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 5ebc27e..cb8bd13 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -589,7 +589,8 @@ static int tcp_v6_md5_do_add(struct sock *sk, struct in6_addr *peer,
} else {
/* reallocate new list if current one is full. */
if (!tp->md5sig_info) {
- tp->md5sig_info = kzalloc(sizeof(*tp->md5sig_info), GFP_ATOMIC);
+ tp->md5sig_info = kzalloc(sizeof(*tp->md5sig_info),
+ sk_allocation(sk, GFP_ATOMIC));
if (!tp->md5sig_info) {
kfree(newkey);
return -ENOMEM;
@@ -602,7 +603,8 @@ static int tcp_v6_md5_do_add(struct sock *sk, struct in6_addr *peer,
}
if (tp->md5sig_info->alloced6 == tp->md5sig_info->entries6) {
keys = kmalloc((sizeof (tp->md5sig_info->keys6[0]) *
- (tp->md5sig_info->entries6 + 1)), GFP_ATOMIC);
+ (tp->md5sig_info->entries6 + 1)),
+ sk_allocation(sk, GFP_ATOMIC));

if (!keys) {
tcp_free_md5sig_pool();
@@ -726,7 +728,8 @@ static int tcp_v6_parse_md5_keys (struct sock *sk, char __user *optval,
struct tcp_sock *tp = tcp_sk(sk);
struct tcp_md5sig_info *p;

- p = kzalloc(sizeof(struct tcp_md5sig_info), GFP_KERNEL);
+ p = kzalloc(sizeof(struct tcp_md5sig_info),
+ sk_allocation(sk, GFP_KERNEL));
if (!p)
return -ENOMEM;

@@ -997,6 +1000,7 @@ static void tcp_v6_send_response(struct sk_buff *skb, u32 seq, u32 ack, u32 win,
unsigned int tot_len = sizeof(struct tcphdr);
struct dst_entry *dst;
__be32 *topt;
+ gfp_t gfp_mask = GFP_ATOMIC;

if (ts)
tot_len += TCPOLEN_TSTAMP_ALIGNED;
@@ -1006,7 +1010,7 @@ static void tcp_v6_send_response(struct sk_buff *skb, u32 seq, u32 ack, u32 win,
#endif

buff = alloc_skb(MAX_HEADER + sizeof(struct ipv6hdr) + tot_len,
- GFP_ATOMIC);
+ gfp_mask);
if (buff == NULL)
return;

@@ -1083,6 +1087,7 @@ static void tcp_v6_send_reset(struct sock *sk, struct sk_buff *skb)
struct tcphdr *th = tcp_hdr(skb);
u32 seq = 0, ack_seq = 0;
struct tcp_md5sig_key *key = NULL;
+ gfp_t gfp_mask = GFP_ATOMIC;

if (th->rst)
return;
@@ -1094,6 +1099,8 @@ static void tcp_v6_send_reset(struct sock *sk, struct sk_buff *skb)
if (sk)
key = tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->daddr);
#endif
+ if (sk)
+ gfp_mask = sk_allocation(sk, gfp_mask);

if (th->ack)
seq = ntohl(th->ack_seq);
--
1.7.1.1


2010-07-13 10:19:40

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 15/30] netvm: network reserve infrastructure

From e8a09c013cf8416ece804ddcbc0d016d2f936e6d Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 11:06:54 +0800
Subject: [PATCH 15/30] netvm: network reserve infrastructure

Provide the basic infrastructure to reserve and charge/account network memory.

We provide the following reserve tree:

1) total network reserve
2) network TX reserve
3) protocol TX pages
4) network RX reserve
5) SKB data reserve

[1] is used to make all the network reserves a single subtree, for easy
manipulation.

[2] and [4] are merely for aesthetic reasons.

The TX pages reserve [3] is assumed to be bounded, since it is the upper bound
of memory that can be used for sending pages (not quite true, but good enough).

The SKB reserve [5] is an aggregate reserve, which is used to charge SKB data
against in the fallback path.

The consumers for these reserves are sockets marked with:
SOCK_MEMALLOC

Such sockets are to be used to service the VM (iow. to swap over). They
must be handled kernel side; exposing such a socket to user-space is a BUG.
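
A kernel-side consumer (say, a swap-over-NFS transport; this is only a sketch
and the error handling is schematic) would flip the flag like so:

	err = sk_set_memalloc(sock->sk);	/* also grows the reserves */
	if (err < 0)
		goto out;
	...
	sk_clear_memalloc(sock->sk);		/* on swapoff / teardown */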

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/net/sock.h | 43 ++++++++++++++++++++-
net/Kconfig | 3 +
net/core/sock.c | 107 ++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 152 insertions(+), 1 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 9ddb37b..1de14b6 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -52,6 +52,7 @@
#include <linux/mm.h>
#include <linux/security.h>
#include <linux/slab.h>
+#include <linux/reserve.h>

#include <linux/filter.h>
#include <linux/rculist_nulls.h>
@@ -532,6 +533,7 @@ enum sock_flags {
SOCK_RCVTSTAMPNS, /* %SO_TIMESTAMPNS setting */
SOCK_LOCALROUTE, /* route locally only, %SO_DONTROUTE setting */
SOCK_QUEUE_SHRUNK, /* write queue has been shrunk recently */
+ SOCK_MEMALLOC, /* the VM depends on us - make sure we're serviced */
SOCK_TIMESTAMPING_TX_HARDWARE, /* %SOF_TIMESTAMPING_TX_HARDWARE */
SOCK_TIMESTAMPING_TX_SOFTWARE, /* %SOF_TIMESTAMPING_TX_SOFTWARE */
SOCK_TIMESTAMPING_RX_HARDWARE, /* %SOF_TIMESTAMPING_RX_HARDWARE */
@@ -563,9 +565,48 @@ static inline int sock_flag(struct sock *sk, enum sock_flags flag)
return test_bit(flag, &sk->sk_flags);
}

+static inline int sk_has_memalloc(struct sock *sk)
+{
+ return sock_flag(sk, SOCK_MEMALLOC);
+}
+
+extern struct mem_reserve net_rx_reserve;
+extern struct mem_reserve net_skb_reserve;
+
+#ifdef CONFIG_NETVM
+/*
+ * Guestimate the per request queue TX upper bound.
+ *
+ * Max packet size is 64k, and we need to reserve that much since the data
+ * might need to be bounced. Double it to be on the safe side.
+ */
+#define TX_RESERVE_PAGES DIV_ROUND_UP(2*65536, PAGE_SIZE)
+
+extern int memalloc_socks;
+
+static inline int sk_memalloc_socks(void)
+{
+ return memalloc_socks;
+}
+
+extern int sk_adjust_memalloc(int socks, long tx_reserve_pages);
+extern int sk_set_memalloc(struct sock *sk);
+extern int sk_clear_memalloc(struct sock *sk);
+#else
+static inline int sk_memalloc_socks(void)
+{
+ return 0;
+}
+
+static inline int sk_clear_memalloc(struct sock *sk)
+{
+ return 0;
+}
+#endif
+
static inline gfp_t sk_allocation(struct sock *sk, gfp_t gfp_mask)
{
- return gfp_mask;
+ return gfp_mask | (sk->sk_allocation & __GFP_MEMALLOC);
}

static inline void sk_acceptq_removed(struct sock *sk)
diff --git a/net/Kconfig b/net/Kconfig
index 0d68b40..2b61a85 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -284,4 +284,7 @@ source "net/9p/Kconfig"
source "net/caif/Kconfig"


+config NETVM
+ def_bool n
+
endif # if NET
diff --git a/net/core/sock.c b/net/core/sock.c
index fef2434..6bd5765 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -111,6 +111,7 @@
#include <linux/init.h>
#include <linux/highmem.h>
#include <linux/user_namespace.h>
+#include <linux/reserve.h>

#include <asm/uaccess.h>
#include <asm/system.h>
@@ -224,6 +225,105 @@ int net_cls_subsys_id = -1;
EXPORT_SYMBOL_GPL(net_cls_subsys_id);
#endif

+static struct mem_reserve net_reserve;
+struct mem_reserve net_rx_reserve;
+EXPORT_SYMBOL_GPL(net_rx_reserve); /* modular ipv6 only */
+struct mem_reserve net_skb_reserve;
+EXPORT_SYMBOL_GPL(net_skb_reserve); /* modular ipv6 only */
+static struct mem_reserve net_tx_reserve;
+static struct mem_reserve net_tx_pages;
+
+#ifdef CONFIG_NETVM
+static DEFINE_MUTEX(memalloc_socks_lock);
+int memalloc_socks;
+
+/**
+ * sk_adjust_memalloc - adjust the global memalloc reserve for critical RX
+ * @socks: number of new %SOCK_MEMALLOC sockets
+ * @tx_reserve_pages: number of pages to (un)reserve for TX
+ *
+ * This function adjusts the memalloc reserve based on system demand.
+ * The RX reserve is a limit, and only added once, not for each socket.
+ *
+ * NOTE:
+ * @tx_reserve_pages is an upper-bound of memory used for TX hence
+ * we need not account the pages like we do for RX pages.
+ */
+int sk_adjust_memalloc(int socks, long tx_reserve_pages)
+{
+ int err;
+
+ mutex_lock(&memalloc_socks_lock);
+ err = mem_reserve_pages_add(&net_tx_pages, tx_reserve_pages);
+ if (err)
+ goto unlock;
+
+ /*
+ * either socks is positive and we need to check for 0 -> !0
+ * transition and connect the reserve tree when we observe it.
+ */
+ if (!memalloc_socks && socks > 0) {
+ err = mem_reserve_connect(&net_reserve, &mem_reserve_root);
+ if (err) {
+ /*
+ * if we failed to connect the tree, undo the tx
+ * reserve so that failure has no side effects.
+ */
+ mem_reserve_pages_add(&net_tx_pages, -tx_reserve_pages);
+ goto unlock;
+ }
+ }
+ memalloc_socks += socks;
+ /*
+ * or socks is negative and we must observe the !0 -> 0 transition
+ * and disconnect the reserve tree.
+ */
+ if (!memalloc_socks && socks)
+ mem_reserve_disconnect(&net_reserve);
+
+unlock:
+ mutex_unlock(&memalloc_socks_lock);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(sk_adjust_memalloc);
+
+/**
+ * sk_set_memalloc - sets %SOCK_MEMALLOC
+ * @sk: socket to set it on
+ *
+ * Set %SOCK_MEMALLOC on a socket and increase the memalloc reserve
+ * accordingly.
+ */
+int sk_set_memalloc(struct sock *sk)
+{
+ int set = sock_flag(sk, SOCK_MEMALLOC);
+
+ if (!set) {
+ int err = sk_adjust_memalloc(1, 0);
+ if (err)
+ return err;
+
+ sock_set_flag(sk, SOCK_MEMALLOC);
+ sk->sk_allocation |= __GFP_MEMALLOC;
+ }
+ return !set;
+}
+EXPORT_SYMBOL_GPL(sk_set_memalloc);
+
+int sk_clear_memalloc(struct sock *sk)
+{
+ int set = sock_flag(sk, SOCK_MEMALLOC);
+ if (set) {
+ sk_adjust_memalloc(-1, 0);
+ sock_reset_flag(sk, SOCK_MEMALLOC);
+ sk->sk_allocation &= ~__GFP_MEMALLOC;
+ }
+ return set;
+}
+EXPORT_SYMBOL_GPL(sk_clear_memalloc);
+#endif
+
static int sock_set_timeout(long *timeo_p, char __user *optval, int optlen)
{
struct timeval tv;
@@ -1121,6 +1221,7 @@ static void __sk_free(struct sock *sk)
{
struct sk_filter *filter;

+ sk_clear_memalloc(sk);
if (sk->sk_destruct)
sk->sk_destruct(sk);

@@ -1300,6 +1401,12 @@ void __init sk_init(void)
sysctl_wmem_max = 131071;
sysctl_rmem_max = 131071;
}
+
+ mem_reserve_init(&net_reserve, "total network reserve", NULL);
+ mem_reserve_init(&net_rx_reserve, "network RX reserve", &net_reserve);
+ mem_reserve_init(&net_skb_reserve, "SKB data reserve", &net_rx_reserve);
+ mem_reserve_init(&net_tx_reserve, "network TX reserve", &net_reserve);
+ mem_reserve_init(&net_tx_pages, "protocol TX pages", &net_tx_reserve);
}

/*
--
1.7.1.1


2010-07-13 10:19:52

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 16/30] netvm: INET reserves

From e1a39da88a7b093474c48bb5f22f3b715e5ec205 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 11:17:23 +0800
Subject: [PATCH 16/30] netvm: INET reserves.

Add reserves for INET.

The two big users seem to be the route cache and ip-fragment cache.

Reserve the route cache under the generic RX reserve; its usage is bounded by
the high reclaim watermark and thus does not need further accounting.

Reserve the ip-fragment caches under the SKB data reserve; these add to the
SKB RX limit. By ensuring we can at least receive as much data as fits in
the reassembly line, we avoid fragment-attack deadlocks.

Adds to the reserve tree:

total network reserve
network TX reserve
protocol TX pages
network RX reserve
+ IPv6 route cache
+ IPv4 route cache
SKB data reserve
+ IPv6 fragment cache
+ IPv4 fragment cache
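
As a minimal sketch of the pattern this patch repeats for each of these caches
(my_reserve, my_cachep and my_max_objects are placeholders, not part of the
patch): a protocol sizes a mem_reserve after the cache it backs and hangs it
off the RX branch of the tree above.

#include <linux/init.h>
#include <linux/reserve.h>
#include <linux/slab.h>
#include <net/sock.h>	/* net_rx_reserve */

static struct mem_reserve my_reserve;		/* placeholder reserve */
static struct kmem_cache *my_cachep;		/* placeholder object cache */
static int my_max_objects = 4096;		/* placeholder upper bound */

static int __init my_proto_reserve_init(void)
{
	/* hang the reserve off the RX branch; it is only backed while
	 * SOCK_MEMALLOC sockets exist and the tree is connected */
	mem_reserve_init(&my_reserve, "my protocol cache", &net_rx_reserve);

	/* size it for my_max_objects objects from my_cachep */
	return mem_reserve_kmem_cache_set(&my_reserve, my_cachep, my_max_objects);
}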

[[email protected]: PROCFS typo fix]
[[email protected]: build fix for removing .strategy]
Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Jeff Mahoney <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/net/inet_frag.h | 7 ++++++
include/net/netns/ipv6.h | 4 +++
net/ipv4/inet_fragment.c | 3 ++
net/ipv4/ip_fragment.c | 34 +++++++++++++++++++++++++++-
net/ipv4/route.c | 42 ++++++++++++++++++++++++++++++++++-
net/ipv6/reassembly.c | 55 ++++++++++++++++++++++++++++++++++++++++++++-
net/ipv6/route.c | 47 +++++++++++++++++++++++++++++++++++++-
7 files changed, 186 insertions(+), 6 deletions(-)

diff --git a/include/net/inet_frag.h b/include/net/inet_frag.h
index 16ff29a..958ee27 100644
--- a/include/net/inet_frag.h
+++ b/include/net/inet_frag.h
@@ -1,6 +1,9 @@
#ifndef __NET_FRAG_H__
#define __NET_FRAG_H__

+#include <linux/reserve.h>
+#include <linux/mutex.h>
+
struct netns_frags {
int nqueues;
atomic_t mem;
@@ -10,6 +13,10 @@ struct netns_frags {
int timeout;
int high_thresh;
int low_thresh;
+
+ /* reserves */
+ struct mutex lock;
+ struct mem_reserve reserve;
};

struct inet_frag_queue {
diff --git a/include/net/netns/ipv6.h b/include/net/netns/ipv6.h
index 81abfcb..1f644b2 100644
--- a/include/net/netns/ipv6.h
+++ b/include/net/netns/ipv6.h
@@ -25,6 +25,8 @@ struct netns_sysctl_ipv6 {
int ip6_rt_mtu_expires;
int ip6_rt_min_advmss;
int icmpv6_time;
+
+ struct mutex ip6_rt_lock;
};

struct netns_ipv6 {
@@ -58,6 +60,8 @@ struct netns_ipv6 {
struct sock *ndisc_sk;
struct sock *tcp_sk;
struct sock *igmp_sk;
+
+ struct mem_reserve ip6_rt_reserve;
#ifdef CONFIG_IPV6_MROUTE
#ifndef CONFIG_IPV6_MROUTE_MULTIPLE_TABLES
struct mr6_table *mrt6;
diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
index a2ca6ae..58270a3 100644
--- a/net/ipv4/inet_fragment.c
+++ b/net/ipv4/inet_fragment.c
@@ -20,6 +20,7 @@
#include <linux/skbuff.h>
#include <linux/rtnetlink.h>
#include <linux/slab.h>
+#include <linux/reserve.h>

#include <net/inet_frag.h>

@@ -75,6 +76,8 @@ void inet_frags_init_net(struct netns_frags *nf)
nf->nqueues = 0;
atomic_set(&nf->mem, 0);
INIT_LIST_HEAD(&nf->lru_list);
+ mutex_init(&nf->lock);
+ mem_reserve_init(&nf->reserve, "IP fragment cache", NULL);
}
EXPORT_SYMBOL(inet_frags_init_net);

diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
index dd0dbf0..a2e2e05 100644
--- a/net/ipv4/ip_fragment.c
+++ b/net/ipv4/ip_fragment.c
@@ -45,6 +45,8 @@
#include <linux/udp.h>
#include <linux/inet.h>
#include <linux/netfilter_ipv4.h>
+#include <linux/reserve.h>
+#include <linux/nsproxy.h>

/* NOTE. Logic of IP defragmentation is parallel to corresponding IPv6
* code now. If you change something here, _PLEASE_ update ipv6/reassembly.c
@@ -634,6 +636,34 @@ int ip_defrag(struct sk_buff *skb, u32 user)
}

#ifdef CONFIG_SYSCTL
+static int
+proc_dointvec_fragment(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+ struct net *net = container_of(table->data, struct net,
+ ipv4.frags.high_thresh);
+ ctl_table tmp = *table;
+ int new_bytes, ret;
+
+ mutex_lock(&net->ipv4.frags.lock);
+ if (write) {
+ tmp.data = &new_bytes;
+ table = &tmp;
+ }
+
+ ret = proc_dointvec(table, write, buffer, lenp, ppos);
+
+ if (!ret && write) {
+ ret = mem_reserve_kmalloc_set(&net->ipv4.frags.reserve,
+ new_bytes);
+ if (!ret)
+ net->ipv4.frags.high_thresh = new_bytes;
+ }
+ mutex_unlock(&net->ipv4.frags.lock);
+
+ return ret;
+}
+
static int zero;

static struct ctl_table ip4_frags_ns_ctl_table[] = {
@@ -642,7 +672,7 @@ static struct ctl_table ip4_frags_ns_ctl_table[] = {
.data = &init_net.ipv4.frags.high_thresh,
.maxlen = sizeof(int),
.mode = 0644,
- .proc_handler = proc_dointvec
+ .proc_handler = &proc_dointvec_fragment,
},
{
.procname = "ipfrag_low_thresh",
@@ -740,6 +770,8 @@ static inline void ip4_frags_ctl_register(void)

static int __net_init ipv4_frags_init_net(struct net *net)
{
+ int ret;
+
/*
* Fragment cache limits. We will commit 256K at one time. Should we
* cross that limit we will prune down to 192K. This should cope with
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 03430de..548aa37 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -108,6 +108,7 @@
#ifdef CONFIG_SYSCTL
#include <linux/sysctl.h>
#endif
+#include <linux/reserve.h>

#define RT_FL_TOS(oldflp) \
((u32)(oldflp->fl4_tos & (IPTOS_RT_MASK | RTO_ONLINK)))
@@ -268,6 +269,8 @@ static inline int rt_genid(struct net *net)
return atomic_read(&net->ipv4.rt_genid);
}

+static struct mem_reserve ipv4_route_reserve;
+
#ifdef CONFIG_PROC_FS
struct rt_cache_iter_state {
struct seq_net_private p;
@@ -398,6 +401,34 @@ static int rt_cache_seq_show(struct seq_file *seq, void *v)
return 0;
}

+static struct mutex ipv4_route_lock;
+
+static int
+proc_dointvec_route(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+ ctl_table tmp = *table;
+ int new_size, ret;
+
+ mutex_lock(&ipv4_route_lock);
+ if (write) {
+ tmp.data = &new_size;
+ table = &tmp;
+ }
+
+ ret = proc_dointvec(table, write, buffer, lenp, ppos);
+
+ if (!ret && write) {
+ ret = mem_reserve_kmem_cache_set(&ipv4_route_reserve,
+ ipv4_dst_ops.kmem_cachep, new_size);
+ if (!ret)
+ ip_rt_max_size = new_size;
+ }
+ mutex_unlock(&ipv4_route_lock);
+
+ return ret;
+}
+
static const struct seq_operations rt_cache_seq_ops = {
.start = rt_cache_seq_start,
.next = rt_cache_seq_next,
@@ -3096,7 +3127,7 @@ static ctl_table ipv4_route_table[] = {
.data = &ip_rt_max_size,
.maxlen = sizeof(int),
.mode = 0644,
- .proc_handler = proc_dointvec,
+ .proc_handler = &proc_dointvec_route,
},
{
/* Deprecated. Use gc_min_interval_ms */
@@ -3327,6 +3358,15 @@ int __init ip_rt_init(void)
ipv4_dst_ops.gc_thresh = (rt_hash_mask + 1);
ip_rt_max_size = (rt_hash_mask + 1) * 16;

+#ifdef CONFIG_PROC_FS
+ mutex_init(&ipv4_route_lock);
+#endif
+
+ mem_reserve_init(&ipv4_route_reserve, "IPv4 route cache",
+ &net_rx_reserve);
+ mem_reserve_kmem_cache_set(&ipv4_route_reserve,
+ ipv4_dst_ops.kmem_cachep, ip_rt_max_size);
+
devinet_init();
ip_fib_init();

diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c
index 545c414..6f02d8c 100644
--- a/net/ipv6/reassembly.c
+++ b/net/ipv6/reassembly.c
@@ -42,6 +42,7 @@
#include <linux/jhash.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
+#include <linux/reserve.h>

#include <net/sock.h>
#include <net/snmp.h>
@@ -639,13 +640,41 @@ static const struct inet6_protocol frag_protocol =
};

#ifdef CONFIG_SYSCTL
+static int
+proc_dointvec_fragment(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+ struct net *net = container_of(table->data, struct net,
+ ipv6.frags.high_thresh);
+ ctl_table tmp = *table;
+ int new_bytes, ret;
+
+ mutex_lock(&net->ipv6.frags.lock);
+ if (write) {
+ tmp.data = &new_bytes;
+ table = &tmp;
+ }
+
+ ret = proc_dointvec(table, write, buffer, lenp, ppos);
+
+ if (!ret && write) {
+ ret = mem_reserve_kmalloc_set(&net->ipv6.frags.reserve,
+ new_bytes);
+ if (!ret)
+ net->ipv6.frags.high_thresh = new_bytes;
+ }
+ mutex_unlock(&net->ipv6.frags.lock);
+
+ return ret;
+}
+
static struct ctl_table ip6_frags_ns_ctl_table[] = {
{
.procname = "ip6frag_high_thresh",
.data = &init_net.ipv6.frags.high_thresh,
.maxlen = sizeof(int),
.mode = 0644,
- .proc_handler = proc_dointvec
+ .proc_handler = &proc_dointvec_fragment,
},
{
.procname = "ip6frag_low_thresh",
@@ -750,17 +779,39 @@ static inline void ip6_frags_sysctl_unregister(void)

static int __net_init ipv6_frags_init_net(struct net *net)
{
+ int ret;
+
net->ipv6.frags.high_thresh = IPV6_FRAG_HIGH_THRESH;
net->ipv6.frags.low_thresh = IPV6_FRAG_LOW_THRESH;
net->ipv6.frags.timeout = IPV6_FRAG_TIMEOUT;

inet_frags_init_net(&net->ipv6.frags);

- return ip6_frags_ns_sysctl_register(net);
+ ret = ip6_frags_ns_sysctl_register(net);
+ if (ret)
+ goto out_reg;
+
+ mem_reserve_init(&net->ipv6.frags.reserve, "IPv6 fragment cache",
+ &net_skb_reserve);
+ ret = mem_reserve_kmalloc_set(&net->ipv6.frags.reserve,
+ net->ipv6.frags.high_thresh);
+ if (ret)
+ goto out_reserve;
+
+ return 0;
+
+out_reserve:
+ mem_reserve_disconnect(&net->ipv6.frags.reserve);
+ ip6_frags_ns_sysctl_unregister(net);
+out_reg:
+ inet_frags_exit_net(&net->ipv6.frags, &ip6_frags);
+
+ return ret;
}

static void __net_exit ipv6_frags_exit_net(struct net *net)
{
+ mem_reserve_disconnect(&net->ipv6.frags.reserve);
ip6_frags_ns_sysctl_unregister(net);
inet_frags_exit_net(&net->ipv6.frags, &ip6_frags);
}
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index 8f2d040..67aa341 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -37,6 +37,7 @@
#include <linux/mroute6.h>
#include <linux/init.h>
#include <linux/if_arp.h>
+#include <linux/reserve.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/nsproxy.h>
@@ -2532,6 +2533,34 @@ int ipv6_sysctl_rtcache_flush(ctl_table *ctl, int write,
return -EINVAL;
}

+static int
+proc_dointvec_route(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+ struct net *net = container_of(table->data, struct net,
+ ipv6.sysctl.ip6_rt_max_size);
+ ctl_table tmp = *table;
+ int new_size, ret;
+
+ mutex_lock(&net->ipv6.sysctl.ip6_rt_lock);
+ if (write) {
+ tmp.data = &new_size;
+ table = &tmp;
+ }
+
+ ret = proc_dointvec(table, write, buffer, lenp, ppos);
+
+ if (!ret && write) {
+ ret = mem_reserve_kmem_cache_set(&net->ipv6.ip6_rt_reserve,
+ net->ipv6.ip6_dst_ops.kmem_cachep, new_size);
+ if (!ret)
+ net->ipv6.sysctl.ip6_rt_max_size = new_size;
+ }
+ mutex_unlock(&net->ipv6.sysctl.ip6_rt_lock);
+
+ return ret;
+}
+
ctl_table ipv6_route_table_template[] = {
{
.procname = "flush",
@@ -2552,7 +2581,7 @@ ctl_table ipv6_route_table_template[] = {
.data = &init_net.ipv6.sysctl.ip6_rt_max_size,
.maxlen = sizeof(int),
.mode = 0644,
- .proc_handler = proc_dointvec,
+ .proc_handler = &proc_dointvec_route,
},
{
.procname = "gc_min_interval",
@@ -2627,6 +2656,8 @@ struct ctl_table * __net_init ipv6_route_sysctl_init(struct net *net)
table[9].data = &net->ipv6.sysctl.ip6_rt_gc_min_interval;
}

+ mutex_init(&net->ipv6.sysctl.ip6_rt_lock);
+
return table;
}
#endif
@@ -2676,6 +2707,14 @@ static int __net_init ip6_route_net_init(struct net *net)
net->ipv6.sysctl.ip6_rt_mtu_expires = 10*60*HZ;
net->ipv6.sysctl.ip6_rt_min_advmss = IPV6_MIN_MTU - 20 - 40;

+ mem_reserve_init(&net->ipv6.ip6_rt_reserve, "IPv6 route cache",
+ &net_rx_reserve);
+ ret = mem_reserve_kmem_cache_set(&net->ipv6.ip6_rt_reserve,
+ net->ipv6.ip6_dst_ops.kmem_cachep,
+ net->ipv6.sysctl.ip6_rt_max_size);
+ if (ret)
+ goto out_reserve_fail;
+
#ifdef CONFIG_PROC_FS
proc_net_fops_create(net, "ipv6_route", 0, &ipv6_route_proc_fops);
proc_net_fops_create(net, "rt6_stats", S_IRUGO, &rt6_stats_seq_fops);
@@ -2686,12 +2725,15 @@ static int __net_init ip6_route_net_init(struct net *net)
out:
return ret;

+out_reserve_fail:
+ mem_reserve_disconnect(&net->ipv6.ip6_rt_reserve);
#ifdef CONFIG_IPV6_MULTIPLE_TABLES
+ kfree(net->ipv6.ip6_blk_hole_entry);
out_ip6_prohibit_entry:
kfree(net->ipv6.ip6_prohibit_entry);
out_ip6_null_entry:
- kfree(net->ipv6.ip6_null_entry);
#endif
+ kfree(net->ipv6.ip6_null_entry);
out_ip6_dst_ops:
goto out;
}
@@ -2702,6 +2744,7 @@ static void __net_exit ip6_route_net_exit(struct net *net)
proc_net_remove(net, "ipv6_route");
proc_net_remove(net, "rt6_stats");
#endif
+ mem_reserve_disconnect(&net->ipv6.ip6_rt_reserve);
kfree(net->ipv6.ip6_null_entry);
#ifdef CONFIG_IPV6_MULTIPLE_TABLES
kfree(net->ipv6.ip6_prohibit_entry);
--
1.7.1.1


2010-07-13 10:20:03

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 17/30] netvm: hook skb allocation to reserves

From 3e824860934af5aa0150608314693c5e0e3608b6 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 11:30:27 +0800
Subject: [PATCH 17/30] netvm: hook skb allocation to reserves

Change the skb allocation api to indicate RX usage and use this to fall back to
the reserve when needed. SKBs allocated from the reserve are tagged in
skb->emergency.

Teach all other skb ops about emergency skbs and the reserve accounting.

Use the (new) packet split API to allocate and track fragment pages from the
emergency reserve. Do this using an atomic counter in page->index. This is
needed because the fragments have a different sharing semantic than that
indicated by skb_shinfo()->dataref.

Note that the decision to distinguish between regular and emergency SKBs allows
the accounting overhead to be limited to the latter kind.
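
Driver call sites need no changes; the existing helpers are simply routed
through the reserve-aware allocators. A hedged sketch of a receive refill
path follows; my_rx_refill() and the 1500-byte size are illustrative only.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Sketch only: both allocations below may now be satisfied from the
 * emergency reserve when memalloc sockets exist; the resulting skb is
 * then tagged via skb->emergency and accounted when it is freed. */
static int my_rx_refill(struct net_device *dev)
{
	struct sk_buff *skb;
	struct page *page;

	skb = __netdev_alloc_skb(dev, 1500, GFP_ATOMIC);
	if (!skb)
		return -ENOMEM;

	page = netdev_alloc_page(dev);		/* ps-rx fragment page */
	if (!page) {
		dev_kfree_skb(skb);
		return -ENOMEM;
	}

	/* attaches the page and charges the reserve if needed */
	skb_add_rx_frag(skb, 0, page, 0, PAGE_SIZE);

	/* a real driver would post the skb to its RX ring here; the sketch
	 * just frees it again, which also releases the fragment page */
	dev_kfree_skb(skb);
	return 0;
}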

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/linux/mm_types.h | 1 +
include/linux/skbuff.h | 27 +++++++--
net/core/skbuff.c | 137 +++++++++++++++++++++++++++++++++++++---------
3 files changed, 133 insertions(+), 32 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index a95a202..73e0526 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -72,6 +72,7 @@ struct page {
pgoff_t index; /* Our offset within mapping. */
void *freelist; /* SLUB: freelist req. slab lock */
int reserve; /* page_alloc: page is a reserve page */
+ atomic_t frag_count; /* skb fragment use count */
};
struct list_head lru; /* Pageout list, eg. active_list
* protected by zone->lru_lock !
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 988a4dc..4ac45ad 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -385,9 +385,12 @@ struct sk_buff {
#else
__u8 deliver_no_wcard:1;
#endif
+#ifdef CONFIG_NETVM
+ __u8 emergency:1;
+#endif
kmemcheck_bitfield_end(flags2);

- /* 0/14 bit hole */
+ /* 0/13/14 bit hole */

#ifdef CONFIG_NET_DMA
dma_cookie_t dma_cookie;
@@ -429,6 +432,18 @@ struct sk_buff {
#define SKB_DST_NOREF 1UL
#define SKB_DST_PTRMASK ~(SKB_DST_NOREF)

+#define SKB_ALLOC_FCLONE 0x01
+#define SKB_ALLOC_RX 0x02
+
+static inline bool skb_emergency(const struct sk_buff *skb)
+{
+#ifdef CONFIG_NETVM
+ return unlikely(skb->emergency);
+#else
+ return false;
+#endif
+}
+
/**
* skb_dst - returns skb dst_entry
* @skb: buffer
@@ -491,7 +506,7 @@ extern void kfree_skb(struct sk_buff *skb);
extern void consume_skb(struct sk_buff *skb);
extern void __kfree_skb(struct sk_buff *skb);
extern struct sk_buff *__alloc_skb(unsigned int size,
- gfp_t priority, int fclone, int node);
+ gfp_t priority, int flags, int node);
static inline struct sk_buff *alloc_skb(unsigned int size,
gfp_t priority)
{
@@ -501,7 +516,7 @@ static inline struct sk_buff *alloc_skb(unsigned int size,
static inline struct sk_buff *alloc_skb_fclone(unsigned int size,
gfp_t priority)
{
- return __alloc_skb(size, priority, 1, -1);
+ return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, -1);
}

extern bool skb_recycle_check(struct sk_buff *skb, int skb_size);
@@ -1516,7 +1531,8 @@ static inline void __skb_queue_purge(struct sk_buff_head *list)
static inline struct sk_buff *__dev_alloc_skb(unsigned int length,
gfp_t gfp_mask)
{
- struct sk_buff *skb = alloc_skb(length + NET_SKB_PAD, gfp_mask);
+ struct sk_buff *skb =
+ __alloc_skb(length + NET_SKB_PAD, gfp_mask, SKB_ALLOC_RX, -1);
if (likely(skb))
skb_reserve(skb, NET_SKB_PAD);
return skb;
@@ -1557,6 +1573,7 @@ static inline struct sk_buff *netdev_alloc_skb_ip_align(struct net_device *dev,
}

extern struct page *__netdev_alloc_page(struct net_device *dev, gfp_t gfp_mask);
+extern void __netdev_free_page(struct net_device *dev, struct page *page);

/**
* netdev_alloc_page - allocate a page for ps-rx on a specific device
@@ -1573,7 +1590,7 @@ static inline struct page *netdev_alloc_page(struct net_device *dev)

static inline void netdev_free_page(struct net_device *dev, struct page *page)
{
- __free_page(page);
+ __netdev_free_page(dev, page);
}

/**
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 34432b4..9e36dc2 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -168,14 +168,21 @@ static void skb_under_panic(struct sk_buff *skb, int sz, void *here)
* %GFP_ATOMIC.
*/
struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
- int fclone, int node)
+ int flags, int node)
{
struct kmem_cache *cache;
struct skb_shared_info *shinfo;
struct sk_buff *skb;
u8 *data;
+ int emergency = 0;
+ int memalloc = sk_memalloc_socks();

- cache = fclone ? skbuff_fclone_cache : skbuff_head_cache;
+ size = SKB_DATA_ALIGN(size);
+ cache = (flags & SKB_ALLOC_FCLONE)
+ ? skbuff_fclone_cache : skbuff_head_cache;
+
+ if (memalloc && (flags & SKB_ALLOC_RX))
+ gfp_mask |= __GFP_MEMALLOC;

/* Get the HEAD */
skb = kmem_cache_alloc_node(cache, gfp_mask & ~__GFP_DMA, node);
@@ -183,9 +190,8 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
goto out;
prefetchw(skb);

- size = SKB_DATA_ALIGN(size);
- data = kmalloc_node_track_caller(size + sizeof(struct skb_shared_info),
- gfp_mask, node);
+ data = kmalloc_reserve(size + sizeof(struct skb_shared_info),
+ gfp_mask, node, &net_skb_reserve, &emergency);
if (!data)
goto nodata;
prefetchw(data + size);
@@ -196,6 +202,9 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
* the tail pointer in struct sk_buff!
*/
memset(skb, 0, offsetof(struct sk_buff, tail));
+#ifdef CONFIG_NETVM
+ skb->emergency = emergency;
+#endif
skb->truesize = size + sizeof(struct sk_buff);
atomic_set(&skb->users, 1);
skb->head = data;
@@ -213,7 +222,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
atomic_set(&shinfo->dataref, 1);

- if (fclone) {
+ if (flags & SKB_ALLOC_FCLONE) {
struct sk_buff *child = skb + 1;
atomic_t *fclone_ref = (atomic_t *) (child + 1);

@@ -223,6 +232,9 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
atomic_set(fclone_ref, 1);

child->fclone = SKB_FCLONE_UNAVAILABLE;
+#ifdef CONFIG_NETVM
+ child->emergency = skb->emergency;
+#endif
}
out:
return skb;
@@ -252,7 +264,7 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev,
int node = dev->dev.parent ? dev_to_node(dev->dev.parent) : -1;
struct sk_buff *skb;

- skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask, 0, node);
+ skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask, SKB_ALLOC_RX, node);
if (likely(skb)) {
skb_reserve(skb, NET_SKB_PAD);
skb->dev = dev;
@@ -266,11 +278,19 @@ struct page *__netdev_alloc_page(struct net_device *dev, gfp_t gfp_mask)
int node = dev->dev.parent ? dev_to_node(dev->dev.parent) : -1;
struct page *page;

- page = alloc_pages_node(node, gfp_mask, 0);
+ page = alloc_pages_reserve(node, gfp_mask | __GFP_MEMALLOC, 0,
+ &net_skb_reserve, NULL);
+
return page;
}
EXPORT_SYMBOL(__netdev_alloc_page);

+void __netdev_free_page(struct net_device *dev, struct page *page)
+{
+ free_pages_reserve(page, 0, &net_skb_reserve, page->reserve);
+}
+EXPORT_SYMBOL(__netdev_free_page);
+
void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
int size)
{
@@ -278,6 +298,27 @@ void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
skb->len += size;
skb->data_len += size;
skb->truesize += size;
+
+#ifdef CONFIG_NETVM
+ /*
+ * In the rare case that skb_emergency() != page->reserve we'll
+ * skew the accounting slightly, but since it's only a 'small' constant
+ * shift it's ok.
+ */
+ if (skb_emergency(skb)) {
+ /*
+ * We need to track fragment pages so that we properly
+ * release their reserve in skb_put_page().
+ */
+ atomic_set(&page->frag_count, 1);
+ } else if (unlikely(page->reserve)) {
+ /*
+ * Release the reserve now, because normal skbs don't
+ * do the emergency accounting.
+ */
+ mem_reserve_pages_charge(&net_skb_reserve, -1);
+ }
+#endif
}
EXPORT_SYMBOL(skb_add_rx_frag);

@@ -329,21 +370,38 @@ static void skb_clone_fraglist(struct sk_buff *skb)
skb_get(list);
}

+static void skb_get_page(struct sk_buff *skb, struct page *page)
+{
+ get_page(page);
+ if (skb_emergency(skb))
+ atomic_inc(&page->frag_count);
+}
+
+static void skb_put_page(struct sk_buff *skb, struct page *page)
+{
+ if (skb_emergency(skb) && atomic_dec_and_test(&page->frag_count))
+ mem_reserve_pages_charge(&net_skb_reserve, -1);
+ put_page(page);
+}
+
static void skb_release_data(struct sk_buff *skb)
{
if (!skb->cloned ||
!atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
&skb_shinfo(skb)->dataref)) {
+
if (skb_shinfo(skb)->nr_frags) {
int i;
- for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
- put_page(skb_shinfo(skb)->frags[i].page);
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ skb_put_page(skb,
+ skb_shinfo(skb)->frags[i].page);
+ }
}

if (skb_has_frags(skb))
skb_drop_fraglist(skb);

- kfree(skb->head);
+ kfree_reserve(skb->head, &net_skb_reserve, skb_emergency(skb));
}
}

@@ -536,6 +594,9 @@ static void __copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
#if defined(CONFIG_IP_VS) || defined(CONFIG_IP_VS_MODULE)
new->ipvs_property = old->ipvs_property;
#endif
+#ifdef CONFIG_NETVM
+ new->emergency = old->emergency;
+#endif
new->protocol = old->protocol;
new->mark = old->mark;
new->skb_iif = old->skb_iif;
@@ -630,6 +691,9 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask)
n->fclone = SKB_FCLONE_CLONE;
atomic_inc(fclone_ref);
} else {
+ if (skb_emergency(skb))
+ gfp_mask |= __GFP_MEMALLOC;
+
n = kmem_cache_alloc(skbuff_head_cache, gfp_mask);
if (!n)
return NULL;
@@ -666,6 +730,14 @@ static void copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
skb_shinfo(new)->gso_type = skb_shinfo(old)->gso_type;
}

+static inline int skb_alloc_rx_flag(const struct sk_buff *skb)
+{
+ if (skb_emergency(skb))
+ return SKB_ALLOC_RX;
+
+ return 0;
+}
+
/**
* skb_copy - create private copy of an sk_buff
* @skb: buffer to copy
@@ -686,15 +758,17 @@ static void copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask)
{
int headerlen = skb->data - skb->head;
+ int size;
/*
* Allocate the copy buffer
*/
struct sk_buff *n;
#ifdef NET_SKBUFF_DATA_USES_OFFSET
- n = alloc_skb(skb->end + skb->data_len, gfp_mask);
+ size = skb->end + skb->data_len;
#else
- n = alloc_skb(skb->end - skb->head + skb->data_len, gfp_mask);
+ size = skb->end - skb->head + skb->data_len;
#endif
+ n = __alloc_skb(size, gfp_mask, skb_alloc_rx_flag(skb), -1);
if (!n)
return NULL;

@@ -729,12 +803,14 @@ struct sk_buff *pskb_copy(struct sk_buff *skb, gfp_t gfp_mask)
/*
* Allocate the copy buffer
*/
+ int size;
struct sk_buff *n;
#ifdef NET_SKBUFF_DATA_USES_OFFSET
- n = alloc_skb(skb->end, gfp_mask);
+ size = skb->end;
#else
- n = alloc_skb(skb->end - skb->head, gfp_mask);
+ size = skb->end - skb->head;
#endif
+ n = __alloc_skb(size, gfp_mask, skb_alloc_rx_flag(skb), -1);
if (!n)
goto out;

@@ -753,8 +829,9 @@ struct sk_buff *pskb_copy(struct sk_buff *skb, gfp_t gfp_mask)
int i;

for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
- skb_shinfo(n)->frags[i] = skb_shinfo(skb)->frags[i];
- get_page(skb_shinfo(n)->frags[i].page);
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ skb_shinfo(n)->frags[i] = *frag;
+ skb_get_page(n, frag->page);
}
skb_shinfo(n)->nr_frags = i;
}
@@ -805,7 +882,11 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,

size = SKB_DATA_ALIGN(size);

- data = kmalloc(size + sizeof(struct skb_shared_info), gfp_mask);
+ if (skb_emergency(skb))
+ gfp_mask |= __GFP_MEMALLOC;
+
+ data = kmalloc_reserve(size + sizeof(struct skb_shared_info),
+ gfp_mask, -1, &net_skb_reserve, NULL);
if (!data)
goto nodata;

@@ -820,7 +901,7 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
sizeof(struct skb_shared_info));

for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
- get_page(skb_shinfo(skb)->frags[i].page);
+ skb_get_page(skb, skb_shinfo(skb)->frags[i].page);

if (skb_has_frags(skb))
skb_clone_fraglist(skb);
@@ -901,8 +982,8 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb,
/*
* Allocate the copy buffer
*/
- struct sk_buff *n = alloc_skb(newheadroom + skb->len + newtailroom,
- gfp_mask);
+ struct sk_buff *n = __alloc_skb(newheadroom + skb->len + newtailroom,
+ gfp_mask, skb_alloc_rx_flag(skb), -1);
int oldheadroom = skb_headroom(skb);
int head_copy_len, head_copy_off;
int off;
@@ -1094,7 +1175,7 @@ drop_pages:
skb_shinfo(skb)->nr_frags = i;

for (; i < nfrags; i++)
- put_page(skb_shinfo(skb)->frags[i].page);
+ skb_put_page(skb, skb_shinfo(skb)->frags[i].page);

if (skb_has_frags(skb))
skb_drop_fraglist(skb);
@@ -1263,7 +1344,7 @@ pull_pages:
k = 0;
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
if (skb_shinfo(skb)->frags[i].size <= eat) {
- put_page(skb_shinfo(skb)->frags[i].page);
+ skb_put_page(skb, skb_shinfo(skb)->frags[i].page);
eat -= skb_shinfo(skb)->frags[i].size;
} else {
skb_shinfo(skb)->frags[k] = skb_shinfo(skb)->frags[i];
@@ -2045,6 +2126,7 @@ static inline void skb_split_no_header(struct sk_buff *skb,
skb_shinfo(skb1)->frags[k] = skb_shinfo(skb)->frags[i];

if (pos < len) {
+ struct page *page = skb_shinfo(skb)->frags[i].page;
/* Split frag.
* We have two variants in this case:
* 1. Move all the frag to the second
@@ -2053,7 +2135,7 @@ static inline void skb_split_no_header(struct sk_buff *skb,
* where splitting is expensive.
* 2. Split is accurately. We make this.
*/
- get_page(skb_shinfo(skb)->frags[i].page);
+ skb_get_page(skb1, page);
skb_shinfo(skb1)->frags[0].page_offset += len - pos;
skb_shinfo(skb1)->frags[0].size -= len - pos;
skb_shinfo(skb)->frags[i].size = len - pos;
@@ -2552,8 +2634,9 @@ struct sk_buff *skb_segment(struct sk_buff *skb, int features)
skb_release_head_state(nskb);
__skb_push(nskb, doffset);
} else {
- nskb = alloc_skb(hsize + doffset + headroom,
- GFP_ATOMIC);
+ nskb = __alloc_skb(hsize + doffset + headroom,
+ GFP_ATOMIC, skb_alloc_rx_flag(skb),
+ -1);

if (unlikely(!nskb))
goto err;
@@ -2595,7 +2678,7 @@ struct sk_buff *skb_segment(struct sk_buff *skb, int features)

while (pos < offset + len && i < nfrags) {
*frag = skb_shinfo(skb)->frags[i];
- get_page(frag->page);
+ skb_get_page(nskb, frag->page);
size = frag->size;

if (pos < offset) {
--
1.7.1.1


2010-07-13 10:20:15

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 18/30] netvm: filter emergency skbs

From b0240dd1e2ee0b4dc30f98c67cfe35e8c1833753 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 11:36:53 +0800
Subject: [PATCH 18/30] netvm: filter emergency skbs.

Toss all emergency packets not for a SOCK_MEMALLOC socket. This ensures our
precious memory reserve doesn't get stuck waiting for user-space.

The correctness of this approach relies on the fact that networks must be
assumed lossy.

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
net/core/filter.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 52b051f..bdcbc14 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -82,6 +82,9 @@ int sk_filter(struct sock *sk, struct sk_buff *skb)
int err;
struct sk_filter *filter;

+ if (skb_emergency(skb) && !sk_has_memalloc(sk))
+ return -ENOMEM;
+
err = security_sock_rcv_skb(sk, skb);
if (err)
return err;
--
1.7.1.1


2010-07-13 10:20:26

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 19/30] netvm: prevent a stream specific deadlock

From 97cab9c6b5964ba48bee576214d71edbef74d0a6 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 11:37:56 +0800
Subject: [PATCH 19/30] netvm: prevent a stream specific deadlock

It could happen that all !SOCK_MEMALLOC sockets have buffered so much data
that we're over the global rmem limit. This would prevent SOCK_MEMALLOC sockets
from receiving data, which would prevent userspace from running, which is in
turn needed to reduce the buffered data.

Fix this by exempting the SOCK_MEMALLOC sockets from the rmem limit.

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/net/sock.h | 7 ++++---
net/core/sock.c | 2 +-
net/ipv4/tcp_input.c | 12 ++++++------
net/sctp/ulpevent.c | 2 +-
4 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 1de14b6..ac87f6f 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -977,12 +977,13 @@ static inline int sk_wmem_schedule(struct sock *sk, int size)
__sk_mem_schedule(sk, size, SK_MEM_SEND);
}

-static inline int sk_rmem_schedule(struct sock *sk, int size)
+static inline int sk_rmem_schedule(struct sock *sk, struct sk_buff *skb)
{
if (!sk_has_account(sk))
return 1;
- return size <= sk->sk_forward_alloc ||
- __sk_mem_schedule(sk, size, SK_MEM_RECV);
+ return skb->truesize <= sk->sk_forward_alloc ||
+ __sk_mem_schedule(sk, skb->truesize, SK_MEM_RECV) ||
+ skb_emergency(skb);
}

static inline void sk_mem_reclaim(struct sock *sk)
diff --git a/net/core/sock.c b/net/core/sock.c
index 6bd5765..f24560c 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -399,7 +399,7 @@ int sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
if (err)
return err;

- if (!sk_rmem_schedule(sk, skb->truesize)) {
+ if (!sk_rmem_schedule(sk, skb)) {
atomic_inc(&sk->sk_drops);
return -ENOBUFS;
}
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 0433466..cea2bc2 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4340,19 +4340,19 @@ static void tcp_ofo_queue(struct sock *sk)
static int tcp_prune_ofo_queue(struct sock *sk);
static int tcp_prune_queue(struct sock *sk);

-static inline int tcp_try_rmem_schedule(struct sock *sk, unsigned int size)
+static inline int tcp_try_rmem_schedule(struct sock *sk, struct sk_buff *skb)
{
if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf ||
- !sk_rmem_schedule(sk, size)) {
+ !sk_rmem_schedule(sk, skb)) {

if (tcp_prune_queue(sk) < 0)
return -1;

- if (!sk_rmem_schedule(sk, size)) {
+ if (!sk_rmem_schedule(sk, skb)) {
if (!tcp_prune_ofo_queue(sk))
return -1;

- if (!sk_rmem_schedule(sk, size))
+ if (!sk_rmem_schedule(sk, skb))
return -1;
}
}
@@ -4405,7 +4405,7 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
if (eaten <= 0) {
queue_and_out:
if (eaten < 0 &&
- tcp_try_rmem_schedule(sk, skb->truesize))
+ tcp_try_rmem_schedule(sk, skb))
goto drop;

skb_set_owner_r(skb, sk);
@@ -4476,7 +4476,7 @@ drop:

TCP_ECN_check_ce(tp, skb);

- if (tcp_try_rmem_schedule(sk, skb->truesize))
+ if (tcp_try_rmem_schedule(sk, skb))
goto drop;

/* Disable header prediction. */
diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c
index aa72e89..ebbbfea 100644
--- a/net/sctp/ulpevent.c
+++ b/net/sctp/ulpevent.c
@@ -702,7 +702,7 @@ struct sctp_ulpevent *sctp_ulpevent_make_rcvmsg(struct sctp_association *asoc,
if (rx_count >= asoc->base.sk->sk_rcvbuf) {

if ((asoc->base.sk->sk_userlocks & SOCK_RCVBUF_LOCK) ||
- (!sk_rmem_schedule(asoc->base.sk, chunk->skb->truesize)))
+ (!sk_rmem_schedule(asoc->base.sk, chunk->skb)))
goto fail;
}

--
1.7.1.1


2010-07-13 10:20:37

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 20/30] netfilter: NF_QUEUE vs emergency skbs

From 6c5ccad4c45a73a6d9ebecbfcb1bce8ff3ca462f Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 11:38:29 +0800
Subject: [PATCH 20/30] netfilter: NF_QUEUE vs emergency skbs

To avoid memory getting stuck waiting for userspace, drop all emergency packets.
This of course requires that the regular storage route not include an NF_QUEUE
target ;-)

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
net/netfilter/core.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/net/netfilter/core.c b/net/netfilter/core.c
index 78b505d..cc04549 100644
--- a/net/netfilter/core.c
+++ b/net/netfilter/core.c
@@ -176,9 +176,12 @@ next_hook:
if (verdict == NF_ACCEPT || verdict == NF_STOP) {
ret = 1;
} else if (verdict == NF_DROP) {
+drop:
kfree_skb(skb);
ret = -EPERM;
} else if ((verdict & NF_VERDICT_MASK) == NF_QUEUE) {
+ if (skb_emergency(skb))
+ goto drop;
if (!nf_queue(skb, elem, pf, hook, indev, outdev, okfn,
verdict >> NF_VERDICT_BITS))
goto next_hook;
--
1.7.1.1


2010-07-13 10:20:48

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 21/30] netvm: skb processing

From 15437174f171e197ecdfa5fe71ae89334bb58fd2 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 13:07:28 +0800
Subject: [PATCH 21/30] netvm: skb processing

In order to make sure emergency packets receive all the memory needed to proceed,
ensure processing of emergency SKBs happens under PF_MEMALLOC.

Use the (new) sk_backlog_rcv() wrapper to ensure this for backlog processing.

Skip taps, since those are user-space again.
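
The flag scoping this relies on (and which __sk_backlog_rcv() below implements
for the backlog case) is simply: set PF_MEMALLOC around the critical work and
restore the previous state afterwards. A stripped-down sketch, with
process_one_emergency_skb() standing in for the actual protocol work:

#include <linux/sched.h>
#include <linux/skbuff.h>

/* Sketch only: give the current task access to the reserves for one unit
 * of work, then put its flags back exactly as they were;
 * tsk_restore_flags() (added earlier in this series) only touches the
 * PF_MEMALLOC bit. */
static int process_one_emergency_skb(struct sk_buff *skb)
{
	unsigned long pflags = current->flags;
	int ret;

	current->flags |= PF_MEMALLOC;
	ret = 0;	/* ... do the memory-critical processing of skb here ... */
	tsk_restore_flags(current, pflags, PF_MEMALLOC);

	return ret;
}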

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/net/sock.h | 5 ++++
net/core/dev.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++---
net/core/sock.c | 16 +++++++++++++++
3 files changed, 72 insertions(+), 4 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index ac87f6f..aadf15c 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -680,8 +680,13 @@ static inline __must_check int sk_add_backlog(struct sock *sk, struct sk_buff *s
return 0;
}

+extern int __sk_backlog_rcv(struct sock *sk, struct sk_buff *skb);
+
static inline int sk_backlog_rcv(struct sock *sk, struct sk_buff *skb)
{
+ if (skb_emergency(skb))
+ return __sk_backlog_rcv(sk, skb);
+
return sk->sk_backlog_rcv(sk, skb);
}

diff --git a/net/core/dev.c b/net/core/dev.c
index e85cc5f..7169b9b 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2801,6 +2801,7 @@ static int __netif_receive_skb(struct sk_buff *skb)
struct net_device *orig_or_bond;
int ret = NET_RX_DROP;
__be16 type;
+ unsigned long pflags = current->flags;

if (!netdev_tstamp_prequeue)
net_timestamp_check(skb);
@@ -2808,9 +2809,21 @@ static int __netif_receive_skb(struct sk_buff *skb)
if (vlan_tx_tag_present(skb) && vlan_hwaccel_do_receive(skb))
return NET_RX_SUCCESS;

+ /* Emergency skb are special, they should
+ * - be delivered to SOCK_MEMALLOC sockets only
+ * - stay away from userspace
+ * - have bounded memory usage
+ *
+ * Use PF_MEMALLOC as a poor man's memory pool - the grouping kind.
+ * This saves us from propagating the allocation context down to all
+ * allocation sites.
+ */
+ if (skb_emergency(skb))
+ current->flags |= PF_MEMALLOC;
+
/* if we've gotten here through NAPI, check netpoll */
if (netpoll_receive_skb(skb))
- return NET_RX_DROP;
+ goto out;

if (!skb->skb_iif)
skb->skb_iif = skb->dev->ifindex;
@@ -2852,6 +2865,9 @@ static int __netif_receive_skb(struct sk_buff *skb)
}
#endif

+ if (skb_emergency(skb))
+ goto skip_taps;
+
list_for_each_entry_rcu(ptype, &ptype_all, list) {
if (ptype->dev == null_or_orig || ptype->dev == skb->dev ||
ptype->dev == orig_dev) {
@@ -2861,13 +2877,17 @@ static int __netif_receive_skb(struct sk_buff *skb)
}
}

+skip_taps:
#ifdef CONFIG_NET_CLS_ACT
skb = handle_ing(skb, &pt_prev, &ret, orig_dev);
if (!skb)
- goto out;
+ goto unlock;
ncls:
#endif

+ if (!skb_emergency_protocol(skb))
+ goto drop;
+
/* Handle special case of bridge or macvlan */
rx_handler = rcu_dereference(skb->dev->rx_handler);
if (rx_handler) {
@@ -2877,7 +2897,7 @@ ncls:
}
skb = rx_handler(skb);
if (!skb)
- goto out;
+ goto unlock;
}

/*
@@ -2907,6 +2927,7 @@ ncls:
if (pt_prev) {
ret = pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
} else {
+drop:
kfree_skb(skb);
/* Jamal, now you will not able to escape explaining
* me how you were going to use this. :-)
@@ -2914,11 +2935,37 @@ ncls:
ret = NET_RX_DROP;
}

-out:
+unlock:
rcu_read_unlock();
+out:
+ tsk_restore_flags(current, pflags, PF_MEMALLOC);
return ret;
}

+/*
+ * Filter the protocols for which the reserves are adequate.
+ *
+ * Before adding a protocol make sure that it is either covered by the existing
+ * reserves, or add reserves covering the memory need of the new protocol's
+ * packet processing.
+ */
+static int skb_emergency_protocol(struct sk_buff *skb)
+{
+ if (skb_emergency(skb))
+ switch (skb->protocol) {
+ case __constant_htons(ETH_P_ARP):
+ case __constant_htons(ETH_P_IP):
+ case __constant_htons(ETH_P_IPV6):
+ case __constant_htons(ETH_P_8021Q):
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
/**
* netif_receive_skb - process receive buffer from network
* @skb: buffer to process
diff --git a/net/core/sock.c b/net/core/sock.c
index f24560c..dfc2dfe 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -322,6 +322,22 @@ int sk_clear_memalloc(struct sock *sk)
return set;
}
EXPORT_SYMBOL_GPL(sk_clear_memalloc);
+
+int __sk_backlog_rcv(struct sock *sk, struct sk_buff *skb)
+{
+ int ret;
+ unsigned long pflags = current->flags;
+
+ /* these should have been dropped before queueing */
+ BUG_ON(!sk_has_memalloc(sk));
+
+ current->flags |= PF_MEMALLOC;
+ ret = sk->sk_backlog_rcv(sk, skb);
+ tsk_restore_flags(current, pflags, PF_MEMALLOC);
+
+ return ret;
+}
+EXPORT_SYMBOL(__sk_backlog_rcv);
#endif

static int sock_set_timeout(long *timeo_p, char __user *optval, int optlen)
--
1.7.1.1


2010-07-13 10:20:59

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 22/30] mm: add support for non block device backed swap files

From 0c3f6a5db5c61a222135286e8a6ada7411b3ac3b Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 13:08:45 +0800
Subject: [PATCH 22/30] mm: add support for non block device backed swap files

New address_space_operations methods are added:
int swapon(struct file *);
int swapoff(struct file *);
int swap_out(struct file *, struct page *, struct writeback_control *);
int swap_in(struct file *, struct page *);

When, during sys_swapon(), the ->swapon() method is found and returns no error,
the swapper_space.a_ops will proxy to sis->swap_file->f_mapping->a_ops and
make use of ->swap_{out,in}() to write/read swapcache pages.

The ->swapon() method will be used to communicate to the file that the VM
relies on it, and the address_space should take adequate measures (like
reserving memory for mempools or the like). The ->swapoff() method will be
called on sys_swapoff() when ->swapon() was found and returned no error.

This new interface can be used to obviate the need for ->bmap in the swapfile
code. A filesystem would need to load (and maybe even allocate) the full block
map for a file into memory and pin it there on ->swapon() so that
->swap_{out,in}() have instant access to it. It can be released on ->swapoff().

The reason to provide ->swap_{out,in}() instead of reusing {write,read}page() is to
1) make a distinction between swapcache and pagecache pages, and
2) provide a struct file * for credential context. That context is normally
not needed for writepage, as page content is normally dirtied through one of
the following interfaces, all of which do have the file context:
write_{begin,end}()
{prepare,commit}_write()
page_mkwrite()
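
For illustration only (not part of this patch): a filesystem opting in would
wire the new methods into its address_space_operations roughly as below. The
myfs_* names are placeholders, and a real ->swapon() would pin the block map
and reserve memory as described above.

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/writeback.h>

/* Placeholder stubs for this sketch; swap_out/swap_in are entered with the
 * page locked and must unlock it, so even the failing stubs do that. */
static int myfs_swapon(struct file *file)	{ return 0; }
static int myfs_swapoff(struct file *file)	{ return 0; }

static int myfs_swap_out(struct file *file, struct page *page,
			 struct writeback_control *wbc)
{
	unlock_page(page);
	return -EIO;		/* a real implementation writes the page */
}

static int myfs_swap_in(struct file *file, struct page *page)
{
	unlock_page(page);
	return -EIO;		/* a real implementation reads the page */
}

static const struct address_space_operations myfs_aops = {
	/* ... readpage, writepage and friends go here ... */
	.swapon   = myfs_swapon,	/* pin the block map, reserve memory */
	.swapoff  = myfs_swapoff,	/* undo whatever ->swapon() set up */
	.swap_out = myfs_swap_out,
	.swap_in  = myfs_swap_in,
};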

[[email protected]: cleanups]
[[email protected]: fix get_swap_info return value]
[[email protected]: fix wrong SWP_FILE enum]
Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
Documentation/filesystems/Locking | 22 +++++++++++++++
Documentation/filesystems/vfs.txt | 18 ++++++++++++
include/linux/buffer_head.h | 1 +
include/linux/fs.h | 9 ++++++
include/linux/swap.h | 5 +++-
mm/page_io.c | 54 +++++++++++++++++++++++++++++++++++++
mm/swap_state.c | 4 +-
mm/swapfile.c | 36 +++++++++++++++++++++++-
8 files changed, 144 insertions(+), 5 deletions(-)

diff --git a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking
index 96d4293..9e221ad 100644
--- a/Documentation/filesystems/Locking
+++ b/Documentation/filesystems/Locking
@@ -174,6 +174,10 @@ prototypes:
int (*direct_IO)(int, struct kiocb *, const struct iovec *iov,
loff_t offset, unsigned long nr_segs);
int (*launder_page) (struct page *);
+ int (*swapon) (struct file *);
+ int (*swapoff) (struct file *);
+ int (*swap_out) (struct file *, struct page *, struct writeback_control *);
+ int (*swap_in) (struct file *, struct page *);

locking rules:
All except set_page_dirty may block
@@ -193,6 +197,10 @@ invalidatepage: no yes
releasepage: no yes
direct_IO: no
launder_page: no yes
+swapon no
+swapoff no
+swap_out no yes, unlocks
+swap_in no yes, unlocks

->write_begin(), ->write_end(), ->sync_page() and ->readpage()
may be called from the request handler (/dev/loop).
@@ -292,6 +300,20 @@ cleaned, or an error value if not. Note that in order to prevent the page
getting mapped back in and redirtied, it needs to be kept locked
across the entire operation.

+ ->swapon() will be called with a non-zero argument on files backing
+(non block device backed) swapfiles. A return value of zero indicates success,
+in which case this file can be used for backing swapspace. The swapspace
+operations will be proxied to the address space operations.
+
+ ->swapoff() will be called in the sys_swapoff() path when ->swapon()
+returned success.
+
+ ->swap_out() when ->swapon() returned success, this method is used to
+write the swap page.
+
+ ->swap_in() when ->swapon() returned success, this method is used to
+read the swap page.
+
Note: currently almost all instances of address_space methods are
using BKL for internal serialization and that's one of the worst sources
of contention. Normally they are calling library functions (in fs/buffer.c)
diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt
index 94677e7..209ae81 100644
--- a/Documentation/filesystems/vfs.txt
+++ b/Documentation/filesystems/vfs.txt
@@ -542,6 +542,11 @@ struct address_space_operations {
int (*migratepage) (struct page *, struct page *);
int (*launder_page) (struct page *);
int (*error_remove_page) (struct mapping *mapping, struct page *page);
+ int (*swapon)(struct file *);
+ int (*swapoff)(struct file *);
+ int (*swap_out)(struct file *file, struct page *page,
+ struct writeback_control *wbc);
+ int (*swap_in)(struct file *file, struct page *page);
};

writepage: called by the VM to write a dirty page to backing store.
@@ -706,6 +711,19 @@ struct address_space_operations {
unless you have them locked or reference counts increased.


+ swapon: Called when swapon is used on a file. A
+ return value of zero indicates success, in which case this
+ file can be used to back swapspace. The swapspace operations
+ will be proxied to this address space's ->swap_{out,in} methods.
+
+ swapoff: Called during swapoff on files where swapon was successful.
+
+ swap_out: Called to write a swapcache page to a backing store, similar to
+ writepage.
+
+ swap_in: Called to read a swapcache page from a backing store, similar to
+ readpage.
+
The File Object
===============

diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 5aa3850..7ec96e7 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -329,6 +329,7 @@ static inline int inode_has_buffers(struct inode *inode) { return 0; }
static inline void invalidate_inode_buffers(struct inode *inode) {}
static inline int remove_inode_buffers(struct inode *inode) { return 1; }
static inline int sync_mapping_buffers(struct address_space *mapping) { return 0; }
+static inline void block_sync_page(struct page *page) { }

#endif /* CONFIG_BLOCK */
#endif /* _LINUX_BUFFER_HEAD_H */
diff --git a/include/linux/fs.h b/include/linux/fs.h
index dc9d185..ef11408 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -612,6 +612,15 @@ struct address_space_operations {
int (*is_partially_uptodate) (struct page *, read_descriptor_t *,
unsigned long);
int (*error_remove_page)(struct address_space *, struct page *);
+
+ /*
+ * swapfile support
+ */
+ int (*swapon)(struct file *file);
+ int (*swapoff)(struct file *file);
+ int (*swap_out)(struct file *file, struct page *page,
+ struct writeback_control *wbc);
+ int (*swap_in)(struct file *file, struct page *page);
};

/*
diff --git a/include/linux/swap.h b/include/linux/swap.h
index ff4acea..dafea65 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -147,7 +147,7 @@ enum {
SWP_SOLIDSTATE = (1 << 4), /* blkdev seeks are cheap */
SWP_CONTINUED = (1 << 5), /* swap_map has count continuation */
SWP_BLKDEV = (1 << 6), /* its a block device */
- /* add others here before... */
+ SWP_FILE = (1 << 7), /* file swap area */
SWP_SCANNING = (1 << 8), /* refcount in scan_swap_map */
};

@@ -293,6 +293,8 @@ extern void swap_unplug_io_fn(struct backing_dev_info *, struct page *);
/* linux/mm/page_io.c */
extern int swap_readpage(struct page *);
extern int swap_writepage(struct page *page, struct writeback_control *wbc);
+extern void swap_sync_page(struct page *page);
+extern int swap_set_page_dirty(struct page *page);
extern void end_swap_bio_read(struct bio *bio, int err);

/* linux/mm/swap_state.c */
@@ -329,6 +331,7 @@ extern int swap_type_of(dev_t, sector_t, struct block_device **);
extern unsigned int count_swap_pages(int, int);
extern sector_t map_swap_page(struct page *, struct block_device **);
extern sector_t swapdev_block(int, pgoff_t);
+extern struct swap_info_struct *page_swap_info(struct page *);
extern int reuse_swap_page(struct page *);
extern int try_to_free_swap(struct page *);
struct backing_dev_info;
diff --git a/mm/page_io.c b/mm/page_io.c
index 2dee975..012b9ef 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -18,6 +18,7 @@
#include <linux/bio.h>
#include <linux/swapops.h>
#include <linux/writeback.h>
+#include <linux/buffer_head.h>
#include <asm/pgtable.h>

static struct bio *get_swap_bio(gfp_t gfp_flags,
@@ -98,6 +99,17 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
unlock_page(page);
goto out;
}
+
+ if (sis->flags & SWP_FILE) {
+ struct file *swap_file = sis->swap_file;
+ struct address_space *mapping = swap_file->f_mapping;
+
+ ret = mapping->a_ops->swap_out(swap_file, page, wbc);
+ if (!ret)
+ count_vm_event(PSWPOUT);
+ return ret;
+ }
+
bio = get_swap_bio(GFP_NOIO, page, end_swap_bio_write);
if (bio == NULL) {
set_page_dirty(page);
@@ -115,13 +127,55 @@ out:
return ret;
}

+void swap_sync_page(struct page *page)
+{
+ struct swap_info_struct *sis = page_swap_info(page);
+
+ if (!sis)
+ return;
+
+ if (sis->flags & SWP_FILE) {
+ struct address_space *mapping = sis->swap_file->f_mapping;
+
+ if (mapping->a_ops->sync_page)
+ mapping->a_ops->sync_page(page);
+ } else {
+ block_sync_page(page);
+ }
+}
+
+int swap_set_page_dirty(struct page *page)
+{
+ struct swap_info_struct *sis = page_swap_info(page);
+
+ if (sis->flags & SWP_FILE) {
+ struct address_space *mapping = sis->swap_file->f_mapping;
+
+ return mapping->a_ops->set_page_dirty(page);
+ } else {
+ return __set_page_dirty_nobuffers(page);
+ }
+}
+
int swap_readpage(struct page *page)
{
struct bio *bio;
int ret = 0;
+ struct swap_info_struct *sis = page_swap_info(page);

VM_BUG_ON(!PageLocked(page));
VM_BUG_ON(PageUptodate(page));
+
+ if (sis->flags & SWP_FILE) {
+ struct file *swap_file = sis->swap_file;
+ struct address_space *mapping = swap_file->f_mapping;
+
+ ret = mapping->a_ops->swap_in(swap_file, page);
+ if (!ret)
+ count_vm_event(PSWPIN);
+ return ret;
+ }
+
bio = get_swap_bio(GFP_KERNEL, page, end_swap_bio_read);
if (bio == NULL) {
unlock_page(page);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 8d5399f..041428b 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -28,8 +28,8 @@
*/
static const struct address_space_operations swap_aops = {
.writepage = swap_writepage,
- .sync_page = block_sync_page,
- .set_page_dirty = __set_page_dirty_nobuffers,
+ .sync_page = swap_sync_page,
+ .set_page_dirty = swap_set_page_dirty,
.migratepage = migrate_page,
};

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 03aa2d5..a7baef1 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1353,6 +1353,14 @@ static void destroy_swap_extents(struct swap_info_struct *sis)
list_del(&se->list);
kfree(se);
}
+
+ if (sis->flags & SWP_FILE) {
+ struct file *swap_file = sis->swap_file;
+ struct address_space *mapping = swap_file->f_mapping;
+
+ sis->flags &= ~SWP_FILE;
+ mapping->a_ops->swapoff(swap_file);
+ }
}

/*
@@ -1434,7 +1442,9 @@ add_swap_extent(struct swap_info_struct *sis, unsigned long start_page,
*/
static int setup_swap_extents(struct swap_info_struct *sis, sector_t *span)
{
- struct inode *inode;
+ struct file *swap_file = sis->swap_file;
+ struct address_space *mapping = swap_file->f_mapping;
+ struct inode *inode = mapping->host;
unsigned blocks_per_page;
unsigned long page_no;
unsigned blkbits;
@@ -1445,13 +1455,22 @@ static int setup_swap_extents(struct swap_info_struct *sis, sector_t *span)
int nr_extents = 0;
int ret;

- inode = sis->swap_file->f_mapping->host;
if (S_ISBLK(inode->i_mode)) {
ret = add_swap_extent(sis, 0, sis->max, 0);
*span = sis->pages;
goto out;
}

+ if (mapping->a_ops->swapon) {
+ ret = mapping->a_ops->swapon(swap_file);
+ if (!ret) {
+ sis->flags |= SWP_FILE;
+ ret = add_swap_extent(sis, 0, sis->max, 0);
+ *span = sis->pages;
+ }
+ goto out;
+ }
+
blkbits = inode->i_blkbits;
blocks_per_page = PAGE_SIZE >> blkbits;

@@ -2228,6 +2247,19 @@ int swapcache_prepare(swp_entry_t entry)
return __swap_duplicate(entry, SWAP_HAS_CACHE);
}

+struct swap_info_struct *page_swap_info(struct page *page)
+{
+ swp_entry_t swap = { .val = page_private(page) };
+ if (!PageSwapCache(page) || !swap.val) {
+ /* This should only happen from sync_page.
+ * In other cases the page should be locked and
+ * should be in a SwapCache
+ */
+ return NULL;
+ }
+ return swap_info[swp_type(swap)];
+}
+
/*
* swap_lock prevents swap_map being freed. Don't grab an extra
* reference on the swaphandle, it doesn't matter if it becomes unused.
--
1.7.1.1


2010-07-13 10:21:10

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 23/30] mm: methods for teaching filesystems about PG_swapcache pages

From 50e813068c51de733bbbdd04eb4af9c43919cd57 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 13:09:50 +0800
Subject: [PATCH 23/30] mm: methods for teaching filesystems about PG_swapcache pages

In order to teach filesystems to handle swap cache pages, three new page
functions are introduced:

pgoff_t page_file_index(struct page *);
loff_t page_file_offset(struct page *);
struct address_space *page_file_mapping(struct page *);

page_file_index() - gives the offset of this page in the file in
PAGE_CACHE_SIZE blocks. Like page->index is for mapped pages, this function
also gives the correct index for PG_swapcache pages.

page_file_offset() - uses page_file_index(), so that it will give the expected
result, even for PG_swapcache pages.

page_file_mapping() - gives the mapping backing the actual page; that is for
swap cache pages it will give swap_file->f_mapping.
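
As a usage sketch only (this mirrors what nfs_page_length() in fs/nfs/internal.h
becomes in a later patch of this series; the name example_page_length below is
invented for illustration), a length calculation that must also work for
swapcache pages would read:

	static unsigned int example_page_length(struct page *page)
	{
		/* For a swapcache page, page_file_mapping() returns the swap
		 * file's mapping and page_file_index() the offset within that
		 * file, so i_size and the index line up even though
		 * page->mapping points at the swapper address space. */
		loff_t i_size = i_size_read(page_file_mapping(page)->host);

		if (i_size > 0) {
			pgoff_t index = page_file_index(page);
			pgoff_t end_index = (i_size - 1) >> PAGE_CACHE_SHIFT;

			if (index < end_index)
				return PAGE_CACHE_SIZE;
			if (index == end_index)
				return ((i_size - 1) & ~PAGE_CACHE_MASK) + 1;
		}
		return 0;
	}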

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
include/linux/mm.h | 25 +++++++++++++++++++++++++
include/linux/pagemap.h | 5 +++++
mm/swapfile.c | 19 +++++++++++++++++++
3 files changed, 49 insertions(+), 0 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 32033ba..0cf97fc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -663,6 +663,17 @@ static inline void *page_rmapping(struct page *page)
return (void *)((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
}

+extern struct address_space *__page_file_mapping(struct page *);
+
+static inline
+struct address_space *page_file_mapping(struct page *page)
+{
+ if (unlikely(PageSwapCache(page)))
+ return __page_file_mapping(page);
+
+ return page->mapping;
+}
+
static inline int PageAnon(struct page *page)
{
return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
@@ -679,6 +690,20 @@ static inline pgoff_t page_index(struct page *page)
return page->index;
}

+extern pgoff_t __page_file_index(struct page *page);
+
+/*
+ * Return the file index of the page. Regular pagecache pages use ->index
+ * whereas swapcache pages use swp_offset(->private)
+ */
+static inline pgoff_t page_file_index(struct page *page)
+{
+ if (unlikely(PageSwapCache(page)))
+ return __page_file_index(page);
+
+ return page->index;
+}
+
/*
* The atomic page->_mapcount, like _count, starts from -1:
* so that transitions both from it and to it can be tracked,
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e12cdc6..64eda5b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -285,6 +285,11 @@ static inline loff_t page_offset(struct page *page)
extern pgoff_t linear_hugepage_index(struct vm_area_struct *vma,
unsigned long address);

+static inline loff_t page_file_offset(struct page *page)
+{
+ return ((loff_t)page_file_index(page)) << PAGE_CACHE_SHIFT;
+}
+
static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
unsigned long address)
{
diff --git a/mm/swapfile.c b/mm/swapfile.c
index a7baef1..d8a05e4 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2261,6 +2261,25 @@ struct swap_info_struct *page_swap_info(struct page *page)
}

/*
+ * out-of-line __page_file_ methods to avoid include hell.
+ */
+
+struct address_space *__page_file_mapping(struct page *page)
+{
+ VM_BUG_ON(!PageSwapCache(page));
+ return page_swap_info(page)->swap_file->f_mapping;
+}
+EXPORT_SYMBOL_GPL(__page_file_mapping);
+
+pgoff_t __page_file_index(struct page *page)
+{
+ swp_entry_t swap = { .val = page_private(page) };
+ VM_BUG_ON(!PageSwapCache(page));
+ return swp_offset(swap);
+}
+EXPORT_SYMBOL_GPL(__page_file_index);
+
+/*
* swap_lock prevents swap_map being freed. Don't grab an extra
* reference on the swaphandle, it doesn't matter if it becomes unused.
*/
--
1.7.1.1


2010-07-13 10:21:21

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 24/30] nfs: teach the NFS client how to treat PG_swapcache pages

From 743090cf0c129f3c83506260866f525a9f181f99 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 13:10:26 +0800
Subject: [PATCH 24/30] nfs: teach the NFS client how to treat PG_swapcache pages

Replace all relevant occurrences of page->index and page->mapping in the NFS
client with the new page_file_index() and page_file_mapping() functions.
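
(The mechanical transformation is, for example:

	- struct inode *inode = page->mapping->host;
	+ struct inode *inode = page_file_mapping(page)->host;

which is a no-op for ordinary pagecache pages and resolves to the swap file's
mapping for PG_swapcache pages.)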

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
fs/nfs/dir.c | 4 ++--
fs/nfs/file.c | 14 +++++++-------
fs/nfs/fscache-index.c | 2 +-
fs/nfs/fscache.c | 14 +++++++-------
fs/nfs/internal.h | 7 ++++---
fs/nfs/pagelist.c | 6 +++---
fs/nfs/read.c | 6 +++---
fs/nfs/write.c | 45 +++++++++++++++++++++++----------------------
8 files changed, 50 insertions(+), 48 deletions(-)

diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index 782b431..0305786 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -182,7 +182,7 @@ int nfs_readdir_filler(nfs_readdir_descriptor_t *desc, struct page *page)

dfprintk(DIRCACHE, "NFS: %s: reading cookie %Lu into page %lu\n",
__func__, (long long)desc->entry->cookie,
- page->index);
+ page_file_index(page));

again:
timestamp = jiffies;
@@ -207,7 +207,7 @@ int nfs_readdir_filler(nfs_readdir_descriptor_t *desc, struct page *page)
* Note: assumes we have exclusive access to this mapping either
* through inode->i_mutex or some other mechanism.
*/
- if (invalidate_inode_pages2_range(inode->i_mapping, page->index + 1, -1) < 0) {
+ if (invalidate_inode_pages2_range(inode->i_mapping, page_file_index(page) + 1, -1) < 0) {
/* Should never happen */
nfs_zap_mapping(inode, inode->i_mapping);
}
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 36a5e74..8f066fe 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -480,9 +480,9 @@ static void nfs_invalidate_page(struct page *page, unsigned long offset)
if (offset != 0)
return;
/* Cancel any unstarted writes on this page */
- nfs_wb_page_cancel(page->mapping->host, page);
+ nfs_wb_page_cancel(page_file_mapping(page)->host, page);

- nfs_fscache_invalidate_page(page, page->mapping->host);
+ nfs_fscache_invalidate_page(page, page_file_mapping(page)->host);
}

/*
@@ -497,7 +497,7 @@ static int nfs_release_page(struct page *page, gfp_t gfp)

/* Only do I/O if gfp is a superset of GFP_KERNEL */
if ((gfp & GFP_KERNEL) == GFP_KERNEL)
- nfs_wb_page(page->mapping->host, page);
+ nfs_wb_page(page_file_mapping(page)->host, page);
/* If PagePrivate() is set, then the page is not freeable */
if (PagePrivate(page))
return 0;
@@ -514,11 +514,11 @@ static int nfs_release_page(struct page *page, gfp_t gfp)
*/
static int nfs_launder_page(struct page *page)
{
- struct inode *inode = page->mapping->host;
+ struct inode *inode = page_file_mapping(page)->host;
struct nfs_inode *nfsi = NFS_I(inode);

dfprintk(PAGECACHE, "NFS: launder_page(%ld, %llu)\n",
- inode->i_ino, (long long)page_offset(page));
+ inode->i_ino, (long long)page_file_offset(page));

nfs_fscache_wait_on_page_write(nfsi, page);
return nfs_wb_page(inode, page);
@@ -557,13 +557,13 @@ static int nfs_vm_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
dfprintk(PAGECACHE, "NFS: vm_page_mkwrite(%s/%s(%ld), offset %lld)\n",
dentry->d_parent->d_name.name, dentry->d_name.name,
filp->f_mapping->host->i_ino,
- (long long)page_offset(page));
+ (long long)page_file_offset(page));

/* make sure the cache has finished storing the page */
nfs_fscache_wait_on_page_write(NFS_I(dentry->d_inode), page);

lock_page(page);
- mapping = page->mapping;
+ mapping = page_file_mapping(page);
if (mapping != dentry->d_inode->i_mapping)
goto out_unlock;

diff --git a/fs/nfs/fscache-index.c b/fs/nfs/fscache-index.c
index 5b10064..9aa62a0 100644
--- a/fs/nfs/fscache-index.c
+++ b/fs/nfs/fscache-index.c
@@ -283,7 +283,7 @@ static void nfs_fscache_inode_now_uncached(void *cookie_netfs_data)
for (loop = 0; loop < nr_pages; loop++)
ClearPageFsCache(pvec.pages[loop]);

- first = pvec.pages[nr_pages - 1]->index + 1;
+ first = page_file_index(pvec.pages[nr_pages - 1]) + 1;

pvec.nr = nr_pages;
pagevec_release(&pvec);
diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
index ce153a6..c6642ea 100644
--- a/fs/nfs/fscache.c
+++ b/fs/nfs/fscache.c
@@ -356,7 +356,7 @@ void nfs_fscache_reset_inode_cookie(struct inode *inode)
int nfs_fscache_release_page(struct page *page, gfp_t gfp)
{
if (PageFsCache(page)) {
- struct nfs_inode *nfsi = NFS_I(page->mapping->host);
+ struct nfs_inode *nfsi = NFS_I(page_file_mapping(page)->host);
struct fscache_cookie *cookie = nfsi->fscache;

BUG_ON(!cookie);
@@ -366,7 +366,7 @@ int nfs_fscache_release_page(struct page *page, gfp_t gfp)
if (!fscache_maybe_release_page(cookie, page, gfp))
return 0;

- nfs_add_fscache_stats(page->mapping->host,
+ nfs_add_fscache_stats(page_file_mapping(page)->host,
NFSIOS_FSCACHE_PAGES_UNCACHED, 1);
}

@@ -391,7 +391,7 @@ void __nfs_fscache_invalidate_page(struct page *page, struct inode *inode)

BUG_ON(!PageLocked(page));
fscache_uncache_page(cookie, page);
- nfs_add_fscache_stats(page->mapping->host,
+ nfs_add_fscache_stats(page_file_mapping(page)->host,
NFSIOS_FSCACHE_PAGES_UNCACHED, 1);
}

@@ -413,7 +413,7 @@ static void nfs_readpage_from_fscache_complete(struct page *page,
SetPageUptodate(page);
unlock_page(page);
} else {
- error = nfs_readpage_async(context, page->mapping->host, page);
+ error = nfs_readpage_async(context, page_file_mapping(page)->host, page);
if (error)
unlock_page(page);
}
@@ -429,7 +429,7 @@ int __nfs_readpage_from_fscache(struct nfs_open_context *ctx,

dfprintk(FSCACHE,
"NFS: readpage_from_fscache(fsc:%p/p:%p(i:%lx f:%lx)/0x%p)\n",
- NFS_I(inode)->fscache, page, page->index, page->flags, inode);
+ NFS_I(inode)->fscache, page, page_file_index(page), page->flags, inode);

ret = fscache_read_or_alloc_page(NFS_I(inode)->fscache,
page,
@@ -518,12 +518,12 @@ void __nfs_readpage_to_fscache(struct inode *inode, struct page *page, int sync)

dfprintk(FSCACHE,
"NFS: readpage_to_fscache(fsc:%p/p:%p(i:%lx f:%lx)/%d)\n",
- NFS_I(inode)->fscache, page, page->index, page->flags, sync);
+ NFS_I(inode)->fscache, page, page_file_index(page), page->flags, sync);

ret = fscache_write_page(NFS_I(inode)->fscache, page, GFP_KERNEL);
dfprintk(FSCACHE,
"NFS: readpage_to_fscache: p:%p(i:%lu f:%lx) ret %d\n",
- page, page->index, page->flags, ret);
+ page, page_file_index(page), page->flags, ret);

if (ret != 0) {
fscache_uncache_page(NFS_I(inode)->fscache, page);
diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index d8bd619..1110617 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -342,13 +342,14 @@ void nfs_super_set_maxbytes(struct super_block *sb, __u64 maxfilesize)
static inline
unsigned int nfs_page_length(struct page *page)
{
- loff_t i_size = i_size_read(page->mapping->host);
+ loff_t i_size = i_size_read(page_file_mapping(page)->host);

if (i_size > 0) {
+ pgoff_t page_index = page_file_index(page);
pgoff_t end_index = (i_size - 1) >> PAGE_CACHE_SHIFT;
- if (page->index < end_index)
+ if (page_index < end_index)
return PAGE_CACHE_SIZE;
- if (page->index == end_index)
+ if (page_index == end_index)
return ((i_size - 1) & ~PAGE_CACHE_MASK) + 1;
}
return 0;
diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
index a3654e5..2be94bb 100644
--- a/fs/nfs/pagelist.c
+++ b/fs/nfs/pagelist.c
@@ -70,11 +70,11 @@ nfs_create_request(struct nfs_open_context *ctx, struct inode *inode,
* update_nfs_request below if the region is not locked. */
req->wb_page = page;
atomic_set(&req->wb_complete, 0);
- req->wb_index = page->index;
+ req->wb_index = page_file_index(page);
page_cache_get(page);
BUG_ON(PagePrivate(page));
BUG_ON(!PageLocked(page));
- BUG_ON(page->mapping->host != inode);
+ BUG_ON(page_file_mapping(page)->host != inode);
req->wb_offset = offset;
req->wb_pgbase = offset;
req->wb_bytes = count;
@@ -363,7 +363,7 @@ void nfs_pageio_cond_complete(struct nfs_pageio_descriptor *desc, pgoff_t index)
* nfs_scan_list - Scan a list for matching requests
* @nfsi: NFS inode
* @dst: Destination list
- * @idx_start: lower bound of page->index to scan
+ * @idx_start: lower bound of page_file_index(page) to scan
* @npages: idx_start + npages sets the upper bound to scan.
* @tag: tag to scan for
*
diff --git a/fs/nfs/read.c b/fs/nfs/read.c
index 5a33a92..5cbae00 100644
--- a/fs/nfs/read.c
+++ b/fs/nfs/read.c
@@ -501,11 +501,11 @@ static const struct rpc_call_ops nfs_read_full_ops = {
int nfs_readpage(struct file *file, struct page *page)
{
struct nfs_open_context *ctx;
- struct inode *inode = page->mapping->host;
+ struct inode *inode = page_file_mapping(page)->host;
int error;

dprintk("NFS: nfs_readpage (%p %ld@%lu)\n",
- page, PAGE_CACHE_SIZE, page->index);
+ page, PAGE_CACHE_SIZE, page_file_index(page));
nfs_inc_stats(inode, NFSIOS_VFSREADPAGE);
nfs_add_stats(inode, NFSIOS_READPAGES, 1);

@@ -559,7 +559,7 @@ static int
readpage_async_filler(void *data, struct page *page)
{
struct nfs_readdesc *desc = (struct nfs_readdesc *)data;
- struct inode *inode = page->mapping->host;
+ struct inode *inode = page_file_mapping(page)->host;
struct nfs_page *new;
unsigned int len;
int error;
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index 03df228..109a970 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -123,7 +123,7 @@ static struct nfs_page *nfs_page_find_request_locked(struct page *page)

static struct nfs_page *nfs_page_find_request(struct page *page)
{
- struct inode *inode = page->mapping->host;
+ struct inode *inode = page_file_mapping(page)->host;
struct nfs_page *req = NULL;

spin_lock(&inode->i_lock);
@@ -135,16 +135,16 @@ static struct nfs_page *nfs_page_find_request(struct page *page)
/* Adjust the file length if we're writing beyond the end */
static void nfs_grow_file(struct page *page, unsigned int offset, unsigned int count)
{
- struct inode *inode = page->mapping->host;
+ struct inode *inode = page_file_mapping(page)->host;
loff_t end, i_size;
pgoff_t end_index;

spin_lock(&inode->i_lock);
i_size = i_size_read(inode);
end_index = (i_size - 1) >> PAGE_CACHE_SHIFT;
- if (i_size > 0 && page->index < end_index)
+ if (i_size > 0 && page_file_index(page) < end_index)
goto out;
- end = ((loff_t)page->index << PAGE_CACHE_SHIFT) + ((loff_t)offset+count);
+ end = page_file_offset(page) + ((loff_t)offset+count);
if (i_size >= end)
goto out;
i_size_write(inode, end);
@@ -157,7 +157,7 @@ out:
static void nfs_set_pageerror(struct page *page)
{
SetPageError(page);
- nfs_zap_mapping(page->mapping->host, page->mapping);
+ nfs_zap_mapping(page_file_mapping(page)->host, page_file_mapping(page));
}

/* We can set the PG_uptodate flag if we see that a write request
@@ -198,7 +198,7 @@ static int nfs_set_page_writeback(struct page *page)
int ret = test_set_page_writeback(page);

if (!ret) {
- struct inode *inode = page->mapping->host;
+ struct inode *inode = page_file_mapping(page)->host;
struct nfs_server *nfss = NFS_SERVER(inode);

page_cache_get(page);
@@ -213,7 +213,7 @@ static int nfs_set_page_writeback(struct page *page)

static void nfs_end_page_writeback(struct page *page)
{
- struct inode *inode = page->mapping->host;
+ struct inode *inode = page_file_mapping(page)->host;
struct nfs_server *nfss = NFS_SERVER(inode);

end_page_writeback(page);
@@ -224,7 +224,7 @@ static void nfs_end_page_writeback(struct page *page)

static struct nfs_page *nfs_find_and_lock_request(struct page *page)
{
- struct inode *inode = page->mapping->host;
+ struct inode *inode = page_file_mapping(page)->host;
struct nfs_page *req;
int ret;

@@ -282,12 +282,12 @@ out:

static int nfs_do_writepage(struct page *page, struct writeback_control *wbc, struct nfs_pageio_descriptor *pgio)
{
- struct inode *inode = page->mapping->host;
+ struct inode *inode = page_file_mapping(page)->host;

nfs_inc_stats(inode, NFSIOS_VFSWRITEPAGE);
nfs_add_stats(inode, NFSIOS_WRITEPAGES, 1);

- nfs_pageio_cond_complete(pgio, page->index);
+ nfs_pageio_cond_complete(pgio, page_file_index(page));
return nfs_page_async_flush(pgio, page);
}

@@ -299,7 +299,7 @@ static int nfs_writepage_locked(struct page *page, struct writeback_control *wbc
struct nfs_pageio_descriptor pgio;
int err;

- nfs_pageio_init_write(&pgio, page->mapping->host, wb_priority(wbc));
+ nfs_pageio_init_write(&pgio, page_file_mapping(page)->host, wb_priority(wbc));
err = nfs_do_writepage(page, wbc, &pgio);
nfs_pageio_complete(&pgio);
if (err < 0)
@@ -423,7 +423,7 @@ static void
nfs_mark_request_dirty(struct nfs_page *req)
{
__set_page_dirty_nobuffers(req->wb_page);
- __mark_inode_dirty(req->wb_page->mapping->host, I_DIRTY_DATASYNC);
+ __mark_inode_dirty(page_file_mapping(req->wb_page)->host, I_DIRTY_DATASYNC);
}

#if defined(CONFIG_NFS_V3) || defined(CONFIG_NFS_V4)
@@ -444,7 +444,8 @@ nfs_mark_request_commit(struct nfs_page *req)
nfsi->ncommit++;
spin_unlock(&inode->i_lock);
inc_zone_page_state(req->wb_page, NR_UNSTABLE_NFS);
- inc_bdi_stat(req->wb_page->mapping->backing_dev_info, BDI_RECLAIMABLE);
+ inc_bdi_stat(page_file_mapping(req->wb_page)->backing_dev_info,
+ BDI_RECLAIMABLE);
__mark_inode_dirty(inode, I_DIRTY_DATASYNC);
}

@@ -455,7 +456,7 @@ nfs_clear_request_commit(struct nfs_page *req)

if (test_and_clear_bit(PG_CLEAN, &(req)->wb_flags)) {
dec_zone_page_state(page, NR_UNSTABLE_NFS);
- dec_bdi_stat(page->mapping->backing_dev_info, BDI_RECLAIMABLE);
+ dec_bdi_stat(page_file_mapping(page)->backing_dev_info, BDI_RECLAIMABLE);
return 1;
}
return 0;
@@ -516,7 +517,7 @@ nfs_need_commit(struct nfs_inode *nfsi)
* nfs_scan_commit - Scan an inode for commit requests
* @inode: NFS inode to scan
* @dst: destination list
- * @idx_start: lower bound of page->index to scan.
+ * @idx_start: lower bound of page_file_index(page) to scan.
* @npages: idx_start + npages sets the upper bound to scan.
*
* Moves requests from the inode's 'commit' request list.
@@ -636,7 +637,7 @@ out_err:
static struct nfs_page * nfs_setup_write_request(struct nfs_open_context* ctx,
struct page *page, unsigned int offset, unsigned int bytes)
{
- struct inode *inode = page->mapping->host;
+ struct inode *inode = page_file_mapping(page)->host;
struct nfs_page *req;
int error;

@@ -693,7 +694,7 @@ int nfs_flush_incompatible(struct file *file, struct page *page)
nfs_release_request(req);
if (!do_flush)
return 0;
- status = nfs_wb_page(page->mapping->host, page);
+ status = nfs_wb_page(page_file_mapping(page)->host, page);
} while (status == 0);
return status;
}
@@ -719,7 +720,7 @@ int nfs_updatepage(struct file *file, struct page *page,
unsigned int offset, unsigned int count)
{
struct nfs_open_context *ctx = nfs_file_open_context(file);
- struct inode *inode = page->mapping->host;
+ struct inode *inode = page_file_mapping(page)->host;
int status = 0;

nfs_inc_stats(inode, NFSIOS_VFSUPDATEPAGE);
@@ -727,7 +728,7 @@ int nfs_updatepage(struct file *file, struct page *page,
dprintk("NFS: nfs_updatepage(%s/%s %d@%lld)\n",
file->f_path.dentry->d_parent->d_name.name,
file->f_path.dentry->d_name.name, count,
- (long long)(page_offset(page) + offset));
+ (long long)(page_file_offset(page) + offset));

/* If we're not using byte range locks, and we know the page
* is up to date, it may be more efficient to extend the write
@@ -1009,7 +1010,7 @@ static void nfs_writeback_release_partial(void *calldata)
}

if (nfs_write_need_commit(data)) {
- struct inode *inode = page->mapping->host;
+ struct inode *inode = page_file_mapping(page)->host;

spin_lock(&inode->i_lock);
if (test_bit(PG_NEED_RESCHED, &req->wb_flags)) {
@@ -1307,7 +1308,7 @@ nfs_commit_list(struct inode *inode, struct list_head *head, int how)
nfs_list_remove_request(req);
nfs_mark_request_commit(req);
dec_zone_page_state(req->wb_page, NR_UNSTABLE_NFS);
- dec_bdi_stat(req->wb_page->mapping->backing_dev_info,
+ dec_bdi_stat(page_file_mapping(req->wb_page)->backing_dev_info,
BDI_RECLAIMABLE);
nfs_clear_page_tag_locked(req);
}
@@ -1508,7 +1509,7 @@ int nfs_wb_page_cancel(struct inode *inode, struct page *page)
*/
int nfs_wb_page(struct inode *inode, struct page *page)
{
- loff_t range_start = page_offset(page);
+ loff_t range_start = page_file_offset(page);
loff_t range_end = range_start + (loff_t)(PAGE_CACHE_SIZE - 1);
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
--
1.7.1.1


2010-07-13 10:21:33

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 25/30] nfs: disable data cache revalidation for swapfiles

From ee72952409a0b811d61f435682e6d161e3b5189b Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 13:10:49 +0800
Subject: [PATCH 25/30] nfs: disable data cache revalidation for swapfiles

Do as Trond suggested:
http://lkml.org/lkml/2006/8/25/348

Disable NFS data cache revalidation on swap files since it doesn't really
make sense to have other clients change the file while you are using it.

Thereby we can stop setting PG_private on swap pages, since there ought to
be no further races with invalidate_inode_pages2() to deal with.

And since we cannot set PG_private we cannot use page->private (which is
already used to hold the swap entry for PG_swapcache pages anyway) to store
the nfs_page. Thus the nfs_page_find_request logic is augmented to fall back
to a lookup in the per-inode radix tree for swapcache pages.
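
A condensed sketch of the augmented lookup, for orientation only (the real
code is in the fs/nfs/write.c hunk below):

	/* Regular pages keep the request in page->private; swapcache pages
	 * cannot, so fall back to the per-inode radix tree. */
	if (PagePrivate(page))
		req = (struct nfs_page *)page_private(page);
	else if (unlikely(PageSwapCache(page)))
		req = radix_tree_lookup(&nfsi->nfs_page_tree,
					page_file_index(page));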

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
fs/nfs/inode.c | 6 +++++
fs/nfs/write.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++---------
2 files changed, 63 insertions(+), 12 deletions(-)

diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
index 099b351..45293af 100644
--- a/fs/nfs/inode.c
+++ b/fs/nfs/inode.c
@@ -798,6 +798,12 @@ int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping)
struct nfs_inode *nfsi = NFS_I(inode);
int ret = 0;

+ /*
+ * swapfiles are not supposed to be shared.
+ */
+ if (IS_SWAPFILE(inode))
+ goto out;
+
if ((nfsi->cache_validity & NFS_INO_REVAL_PAGECACHE)
|| nfs_attribute_cache_expired(inode)
|| NFS_STALE(inode)) {
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index 109a970..0d7ea95 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -109,25 +109,62 @@ static void nfs_context_set_write_error(struct nfs_open_context *ctx, int error)
set_bit(NFS_CONTEXT_ERROR_WRITE, &ctx->flags);
}

-static struct nfs_page *nfs_page_find_request_locked(struct page *page)
+static struct nfs_page *
+__nfs_page_find_request_locked(struct nfs_inode *nfsi, struct page *page, int get)
{
struct nfs_page *req = NULL;

- if (PagePrivate(page)) {
+ if (PagePrivate(page))
req = (struct nfs_page *)page_private(page);
- if (req != NULL)
- kref_get(&req->wb_kref);
- }
+ else if (unlikely(PageSwapCache(page)))
+ req = radix_tree_lookup(&nfsi->nfs_page_tree, page_file_index(page));
+
+ if (get && req)
+ kref_get(&req->wb_kref);
+
return req;
}

+static inline struct nfs_page *
+nfs_page_find_request_locked(struct nfs_inode *nfsi, struct page *page)
+{
+ return __nfs_page_find_request_locked(nfsi, page, 1);
+}
+
+static int __nfs_page_has_request(struct page *page)
+{
+ struct inode *inode = page_file_mapping(page)->host;
+ struct nfs_page *req = NULL;
+
+ spin_lock(&inode->i_lock);
+ req = __nfs_page_find_request_locked(NFS_I(inode), page, 0);
+ spin_unlock(&inode->i_lock);
+
+ /*
+ * hole here plugged by the caller holding onto PG_locked
+ */
+
+ return req != NULL;
+}
+
+static inline int nfs_page_has_request(struct page *page)
+{
+ if (PagePrivate(page))
+ return 1;
+
+ if (unlikely(PageSwapCache(page)))
+ return __nfs_page_has_request(page);
+
+ return 0;
+}
+
static struct nfs_page *nfs_page_find_request(struct page *page)
{
struct inode *inode = page_file_mapping(page)->host;
struct nfs_page *req = NULL;

spin_lock(&inode->i_lock);
- req = nfs_page_find_request_locked(page);
+ req = nfs_page_find_request_locked(NFS_I(inode), page);
spin_unlock(&inode->i_lock);
return req;
}
@@ -230,7 +267,7 @@ static struct nfs_page *nfs_find_and_lock_request(struct page *page)

spin_lock(&inode->i_lock);
for (;;) {
- req = nfs_page_find_request_locked(page);
+ req = nfs_page_find_request_locked(NFS_I(inode), page);
if (req == NULL)
break;
if (nfs_set_page_tag_locked(req))
@@ -383,8 +420,14 @@ static int nfs_inode_add_request(struct inode *inode, struct nfs_page *req)
if (nfs_have_delegation(inode, FMODE_WRITE))
nfsi->change_attr++;
}
- SetPagePrivate(req->wb_page);
- set_page_private(req->wb_page, (unsigned long)req);
+ /*
+ * Swap-space should not get truncated. Hence no need to plug the race
+ * with invalidate/truncate.
+ */
+ if (likely(!PageSwapCache(req->wb_page))) {
+ SetPagePrivate(req->wb_page);
+ set_page_private(req->wb_page, (unsigned long)req);
+ }
nfsi->npages++;
kref_get(&req->wb_kref);
radix_tree_tag_set(&nfsi->nfs_page_tree, req->wb_index,
@@ -406,8 +449,10 @@ static void nfs_inode_remove_request(struct nfs_page *req)
BUG_ON (!NFS_WBACK_BUSY(req));

spin_lock(&inode->i_lock);
- set_page_private(req->wb_page, 0);
- ClearPagePrivate(req->wb_page);
+ if (likely(!PageSwapCache(req->wb_page))) {
+ set_page_private(req->wb_page, 0);
+ ClearPagePrivate(req->wb_page);
+ }
radix_tree_delete(&nfsi->nfs_page_tree, req->wb_index);
nfsi->npages--;
if (!nfsi->npages) {
@@ -575,7 +620,7 @@ static struct nfs_page *nfs_try_to_update_request(struct inode *inode,
spin_lock(&inode->i_lock);

for (;;) {
- req = nfs_page_find_request_locked(page);
+ req = nfs_page_find_request_locked(NFS_I(inode), page);
if (req == NULL)
goto out_unlock;

--
1.7.1.1


2010-07-13 10:21:44

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 26/30] nfs: enable swap on NFS

From 61388a8872071bb5b0015b9f5e3183410a98d949 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 13:11:13 +0800
Subject: [PATCH 26/30] nfs: enable swap on NFS

Implement all the new swapfile a_ops for NFS. This will set the NFS socket to
SOCK_MEMALLOC and run socket reconnect under PF_MEMALLOC as well as reset
SOCK_MEMALLOC before engaging the protocol ->connect() method.

PF_MEMALLOC should allow the allocation of struct socket and related objects,
and the early (re)setting of SOCK_MEMALLOC should allow us to receive the
packets required for the TCP connection buildup.

(swapping continues over a server reset during heavy network traffic)
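
The connect workers all follow the same save/restore pattern; an illustrative
sketch (sk_set_memalloc()/xs_set_memalloc() and tsk_restore_flags() come from
earlier patches in this series, the rest of the worker body is elided):

	unsigned long pflags = current->flags;

	if (xprt->swapper)
		current->flags |= PF_MEMALLOC;	/* reconnect may dip into reserves */

	/* ... reset the transport, create the socket, connect ...
	 * xs_set_memalloc() tags the new socket SOCK_MEMALLOC before the
	 * protocol ->connect() runs, so the handshake packets can still be
	 * received from the reserves. */

	tsk_restore_flags(current, pflags, PF_MEMALLOC);	/* restore only PF_MEMALLOC */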

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
fs/nfs/Kconfig | 10 ++++++
fs/nfs/file.c | 18 +++++++++++
fs/nfs/write.c | 22 ++++++++++++++
include/linux/nfs_fs.h | 2 +
include/linux/sunrpc/xprt.h | 5 ++-
mm/page_io.c | 1 +
net/sunrpc/Kconfig | 5 +++
net/sunrpc/sched.c | 9 ++++-
net/sunrpc/xprtsock.c | 68 +++++++++++++++++++++++++++++++++++++++++++
9 files changed, 137 insertions(+), 3 deletions(-)

diff --git a/fs/nfs/Kconfig b/fs/nfs/Kconfig
index a43d07e..b022858 100644
--- a/fs/nfs/Kconfig
+++ b/fs/nfs/Kconfig
@@ -74,6 +74,16 @@ config NFS_V4

If unsure, say N.

+config NFS_SWAP
+ bool "Provide swap over NFS support"
+ default n
+ depends on NFS_FS
+ select SUNRPC_SWAP
+ help
+ This option enables swapon to work on files located on NFS mounts.
+
+ For more details, see Documentation/network-swap.txt
+
config NFS_V4_1
bool "NFS client support for NFSv4.1 (DEVELOPER ONLY)"
depends on NFS_V4 && EXPERIMENTAL
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 8f066fe..319c2b3 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -524,6 +524,18 @@ static int nfs_launder_page(struct page *page)
return nfs_wb_page(inode, page);
}

+#ifdef CONFIG_NFS_SWAP
+static int nfs_swapon(struct file *file)
+{
+ return xs_swapper(NFS_CLIENT(file->f_mapping->host)->cl_xprt, 1);
+}
+
+static int nfs_swapoff(struct file *file)
+{
+ return xs_swapper(NFS_CLIENT(file->f_mapping->host)->cl_xprt, 0);
+}
+#endif
+
const struct address_space_operations nfs_file_aops = {
.readpage = nfs_readpage,
.readpages = nfs_readpages,
@@ -538,6 +550,12 @@ const struct address_space_operations nfs_file_aops = {
.migratepage = nfs_migrate_page,
.launder_page = nfs_launder_page,
.error_remove_page = generic_error_remove_page,
+#ifdef CONFIG_NFS_SWAP
+ .swapon = nfs_swapon,
+ .swapoff = nfs_swapoff,
+ .swap_out = nfs_swap_out,
+ .swap_in = nfs_readpage,
+#endif
};

/*
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index 0d7ea95..5852b20 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -355,6 +355,28 @@ int nfs_writepage(struct page *page, struct writeback_control *wbc)
return ret;
}

+static int nfs_writepage_setup(struct nfs_open_context *ctx, struct page *page,
+ unsigned int offset, unsigned int count);
+
+int nfs_swap_out(struct file *file, struct page *page,
+ struct writeback_control *wbc)
+{
+ struct nfs_open_context *ctx = nfs_file_open_context(file);
+ int status;
+
+ status = nfs_writepage_setup(ctx, page, 0, nfs_page_length(page));
+ if (status < 0) {
+ nfs_set_pageerror(page);
+ goto out;
+ }
+
+ status = nfs_writepage_locked(page, wbc);
+
+out:
+ unlock_page(page);
+ return status;
+}
+
static int nfs_writepages_callback(struct page *page, struct writeback_control *wbc, void *data)
{
int ret;
diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
index 77c2ae5..fc1bbfb 100644
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -484,6 +484,8 @@ extern int nfs_writepages(struct address_space *, struct writeback_control *);
extern int nfs_flush_incompatible(struct file *file, struct page *page);
extern int nfs_updatepage(struct file *, struct page *, unsigned int, unsigned int);
extern int nfs_writeback_done(struct rpc_task *, struct nfs_write_data *);
+extern int nfs_swap_out(struct file *file, struct page *page,
+ struct writeback_control *wbc);

/*
* Try to write back everything synchronously (but check the
diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index b514703..ba2330d 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -171,7 +171,9 @@ struct rpc_xprt {
unsigned int max_reqs; /* total slots */
unsigned long state; /* transport state */
unsigned char shutdown : 1, /* being shut down */
- resvport : 1; /* use a reserved port */
+ resvport : 1, /* use a reserved port */
+ swapper : 1; /* we're swapping over this
+ transport */
unsigned int bind_index; /* bind function index */

/*
@@ -303,6 +305,7 @@ void xprt_release_rqst_cong(struct rpc_task *task);
void xprt_disconnect_done(struct rpc_xprt *xprt);
void xprt_force_disconnect(struct rpc_xprt *xprt);
void xprt_conditional_disconnect(struct rpc_xprt *xprt, unsigned int cookie);
+int xs_swapper(struct rpc_xprt *xprt, int enable);

/*
* Reserved bit positions in xprt->state
diff --git a/mm/page_io.c b/mm/page_io.c
index 012b9ef..c8d7d8d 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -94,6 +94,7 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
{
struct bio *bio;
int ret = 0, rw = WRITE;
+ struct swap_info_struct *sis = page_swap_info(page);

if (try_to_free_swap(page)) {
unlock_page(page);
diff --git a/net/sunrpc/Kconfig b/net/sunrpc/Kconfig
index 443c161..521eadb 100644
--- a/net/sunrpc/Kconfig
+++ b/net/sunrpc/Kconfig
@@ -17,6 +17,11 @@ config SUNRPC_XPRT_RDMA

If unsure, say N.

+config SUNRPC_SWAP
+ def_bool n
+ depends on SUNRPC
+ select NETVM
+
config RPCSEC_GSS_KRB5
tristate "Secure RPC: Kerberos V mechanism (EXPERIMENTAL)"
depends on SUNRPC && EXPERIMENTAL
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index 4a843b8..6a16dc0 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -742,7 +742,10 @@ static void rpc_async_schedule(struct work_struct *work)
void *rpc_malloc(struct rpc_task *task, size_t size)
{
struct rpc_buffer *buf;
- gfp_t gfp = RPC_IS_SWAPPER(task) ? GFP_ATOMIC : GFP_NOWAIT;
+ gfp_t gfp = GFP_NOWAIT;
+
+ if (RPC_IS_SWAPPER(task))
+ gfp |= __GFP_MEMALLOC;

size += sizeof(struct rpc_buffer);
if (size <= RPC_BUFFER_MAXSIZE)
@@ -813,6 +816,8 @@ static void rpc_init_task(struct rpc_task *task, const struct rpc_task_setup *ta
kref_get(&task->tk_client->cl_kref);
if (task->tk_client->cl_softrtry)
task->tk_flags |= RPC_TASK_SOFT;
+ if (task->tk_client->cl_xprt->swapper)
+ task->tk_flags |= RPC_TASK_SWAPPER;
}

if (task->tk_ops->rpc_call_prepare != NULL)
@@ -838,7 +843,7 @@ static void rpc_init_task(struct rpc_task *task, const struct rpc_task_setup *ta
static struct rpc_task *
rpc_alloc_task(void)
{
- return (struct rpc_task *)mempool_alloc(rpc_task_mempool, GFP_NOFS);
+ return (struct rpc_task *)mempool_alloc(rpc_task_mempool, GFP_NOIO);
}

/*
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 49a62f0..5c8b918 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1631,6 +1631,55 @@ static inline void xs_reclassify_socket6(struct socket *sock)
}
#endif

+#ifdef CONFIG_SUNRPC_SWAP
+static void xs_set_memalloc(struct rpc_xprt *xprt)
+{
+ struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
+
+ if (xprt->swapper)
+ sk_set_memalloc(transport->inet);
+}
+
+#define RPC_BUF_RESERVE_PAGES \
+ kmalloc_estimate_objs(sizeof(struct rpc_rqst), GFP_KERNEL, RPC_MAX_SLOT_TABLE)
+#define RPC_RESERVE_PAGES (RPC_BUF_RESERVE_PAGES + TX_RESERVE_PAGES)
+
+/**
+ * xs_swapper - Tag this transport as being used for swap.
+ * @xprt: transport to tag
+ * @enable: enable/disable
+ *
+ */
+int xs_swapper(struct rpc_xprt *xprt, int enable)
+{
+ struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
+ int err = 0;
+
+ if (enable) {
+ /*
+ * keep one extra sock reference so the reserve won't dip
+ * when the socket gets reconnected.
+ */
+ err = sk_adjust_memalloc(1, RPC_RESERVE_PAGES);
+ if (!err) {
+ xprt->swapper = 1;
+ xs_set_memalloc(xprt);
+ }
+ } else if (xprt->swapper) {
+ xprt->swapper = 0;
+ sk_clear_memalloc(transport->inet);
+ sk_adjust_memalloc(-1, -RPC_RESERVE_PAGES);
+ }
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(xs_swapper);
+#else
+static void xs_set_memalloc(struct rpc_xprt *xprt)
+{
+}
+#endif
+
static void xs_udp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
{
struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
@@ -1655,6 +1704,8 @@ static void xs_udp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
transport->sock = sock;
transport->inet = sk;

+ xs_set_memalloc(xprt);
+
write_unlock_bh(&sk->sk_callback_lock);
}
xs_udp_do_set_buffer_size(xprt);
@@ -1672,11 +1723,15 @@ static void xs_udp_connect_worker4(struct work_struct *work)
container_of(work, struct sock_xprt, connect_worker.work);
struct rpc_xprt *xprt = &transport->xprt;
struct socket *sock = transport->sock;
+ unsigned long pflags = current->flags;
int err, status = -EIO;

if (xprt->shutdown)
goto out;

+ if (xprt->swapper)
+ current->flags |= PF_MEMALLOC;
+
/* Start by resetting any existing state */
xs_reset_transport(transport);

@@ -1703,6 +1758,7 @@ static void xs_udp_connect_worker4(struct work_struct *work)
out:
xprt_clear_connecting(xprt);
xprt_wake_pending_tasks(xprt, status);
+ tsk_restore_flags(current, pflags, PF_MEMALLOC);
}

/**
@@ -1717,11 +1773,15 @@ static void xs_udp_connect_worker6(struct work_struct *work)
container_of(work, struct sock_xprt, connect_worker.work);
struct rpc_xprt *xprt = &transport->xprt;
struct socket *sock = transport->sock;
+ unsigned long pflags = current->flags;
int err, status = -EIO;

if (xprt->shutdown)
goto out;

+ if (xprt->swapper)
+ current->flags |= PF_MEMALLOC;
+
/* Start by resetting any existing state */
xs_reset_transport(transport);

@@ -1748,6 +1808,7 @@ static void xs_udp_connect_worker6(struct work_struct *work)
out:
xprt_clear_connecting(xprt);
xprt_wake_pending_tasks(xprt, status);
+ tsk_restore_flags(current, pflags, PF_MEMALLOC);
}

/*
@@ -1822,6 +1883,8 @@ static int xs_tcp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
if (!xprt_bound(xprt))
return -ENOTCONN;

+ xs_set_memalloc(xprt);
+
/* Tell the socket layer to start connecting... */
xprt->stat.connect_count++;
xprt->stat.connect_start = jiffies;
@@ -1842,11 +1905,15 @@ static void xs_tcp_setup_socket(struct rpc_xprt *xprt,
struct sock_xprt *))
{
struct socket *sock = transport->sock;
+ unsigned long pflags = current->flags;
int status = -EIO;

if (xprt->shutdown)
goto out;

+ if (xprt->swapper)
+ current->flags |= PF_MEMALLOC;
+
if (!sock) {
clear_bit(XPRT_CONNECTION_ABORT, &xprt->state);
sock = create_sock(xprt, transport);
@@ -1907,6 +1974,7 @@ out_eagain:
out:
xprt_clear_connecting(xprt);
xprt_wake_pending_tasks(xprt, status);
+ tsk_restore_flags(current, pflags, PF_MEMALLOC);
}

static struct socket *xs_create_tcp_sock4(struct rpc_xprt *xprt,
--
1.7.1.1


2010-07-13 10:21:55

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 27/30] nfs: fix various memory recursions possible with swap over NFS

From df0106f58d7ac2337f74efb1d8caaf27f635e050 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 13:11:32 +0800
Subject: [PATCH 27/30] nfs: fix various memory recursions possible with swap over NFS.

GFP_NOFS is _more_ permissive than GFP_NOIO in that it will initiate IO,
just not of any filesystem data.

The problem is that GFP_NOFS used to be correct here because it avoided
recursion into the NFS code; it no longer is, because IO (swap) can now also
lead to this recursion.
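
For reference, the relationship between the flags (as defined in
include/linux/gfp.h at the time; listed here only to make the argument
concrete):

	GFP_KERNEL = __GFP_WAIT | __GFP_IO | __GFP_FS	/* may recurse into fs, may start I/O */
	GFP_NOFS   = __GFP_WAIT | __GFP_IO		/* no fs callbacks, but may still start (swap) I/O */
	GFP_NOIO   = __GFP_WAIT				/* must not start any I/O */

With swap over NFS, "swap I/O" means NFS writeback, so only GFP_NOIO keeps an
allocation on the NFS writeback path from recursing back into NFS.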

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Suresh Jayaraman <[email protected]>
Signed-off-by: Xiaotian Feng <[email protected]>
---
fs/nfs/pagelist.c | 2 +-
fs/nfs/write.c | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
index 2be94bb..c0247e9 100644
--- a/fs/nfs/pagelist.c
+++ b/fs/nfs/pagelist.c
@@ -27,7 +27,7 @@ static inline struct nfs_page *
nfs_page_alloc(void)
{
struct nfs_page *p;
- p = kmem_cache_alloc(nfs_page_cachep, GFP_KERNEL);
+ p = kmem_cache_alloc(nfs_page_cachep, GFP_NOIO);
if (p) {
memset(p, 0, sizeof(*p));
INIT_LIST_HEAD(&p->wb_list);
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index 5852b20..dfa08cb 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -50,7 +50,7 @@ static mempool_t *nfs_commit_mempool;

struct nfs_write_data *nfs_commitdata_alloc(void)
{
- struct nfs_write_data *p = mempool_alloc(nfs_commit_mempool, GFP_NOFS);
+ struct nfs_write_data *p = mempool_alloc(nfs_commit_mempool, GFP_NOIO);

if (p) {
memset(p, 0, sizeof(*p));
@@ -69,7 +69,7 @@ void nfs_commit_free(struct nfs_write_data *p)

struct nfs_write_data *nfs_writedata_alloc(unsigned int pagecount)
{
- struct nfs_write_data *p = mempool_alloc(nfs_wdata_mempool, GFP_NOFS);
+ struct nfs_write_data *p = mempool_alloc(nfs_wdata_mempool, GFP_NOIO);

if (p) {
memset(p, 0, sizeof(*p));
@@ -79,7 +79,7 @@ struct nfs_write_data *nfs_writedata_alloc(unsigned int pagecount)
if (pagecount <= ARRAY_SIZE(p->page_array))
p->pagevec = p->page_array;
else {
- p->pagevec = kcalloc(pagecount, sizeof(struct page *), GFP_NOFS);
+ p->pagevec = kcalloc(pagecount, sizeof(struct page *), GFP_NOIO);
if (!p->pagevec) {
mempool_free(p, nfs_wdata_mempool);
p = NULL;
--
1.7.1.1


2010-07-13 10:22:07

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 28/30] build fix for skb_emergency_protocol

From 50d2e72527b3e821544cc97c4dd5b1e5a44b6659 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 13:21:10 +0800
Subject: [PATCH 28/30] build fix for skb_emergency_protocol

Signed-off-by: Xiaotian Feng <[email protected]>
---
net/core/dev.c | 48 ++++++++++++++++++++++++------------------------
1 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 7169b9b..fd7f8ac 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2791,6 +2791,30 @@ int __skb_bond_should_drop(struct sk_buff *skb, struct net_device *master)
}
EXPORT_SYMBOL(__skb_bond_should_drop);

+/*
+ * Filter the protocols for which the reserves are adequate.
+ *
+ * Before adding a protocol make sure that it is either covered by the existing
+ * reserves, or add reserves covering the memory need of the new protocol's
+ * packet processing.
+ */
+static int skb_emergency_protocol(struct sk_buff *skb)
+{
+ if (skb_emergency(skb))
+ switch (skb->protocol) {
+ case __constant_htons(ETH_P_ARP):
+ case __constant_htons(ETH_P_IP):
+ case __constant_htons(ETH_P_IPV6):
+ case __constant_htons(ETH_P_8021Q):
+ break;
+
+ default:
+ return 0;
+ }
+
+ return 1;
+}
+
static int __netif_receive_skb(struct sk_buff *skb)
{
struct packet_type *ptype, *pt_prev;
@@ -2942,30 +2966,6 @@ out:
return ret;
}

-/*
- * Filter the protocols for which the reserves are adequate.
- *
- * Before adding a protocol make sure that it is either covered by the existing
- * reserves, or add reserves covering the memory need of the new protocol's
- * packet processing.
- */
-static int skb_emergency_protocol(struct sk_buff *skb)
-{
- if (skb_emergency(skb))
- switch (skb->protocol) {
- case __constant_htons(ETH_P_ARP):
- case __constant_htons(ETH_P_IP):
- case __constant_htons(ETH_P_IPV6):
- case __constant_htons(ETH_P_8021Q):
- break;
-
- default:
- return 0;
- }
-
- return 1;
-}
-
/**
* netif_receive_skb - process receive buffer from network
* @skb: buffer to process
--
1.7.1.1


2010-07-13 10:22:18

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 29/30] fix null pointer deref in swap_entry_free

From ea7b13006f42f7dcadd1bfb874d5e525b4c259e3 Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 13:44:08 +0800
Subject: [PATCH 29/30] fix null pointer deref in swap_entry_free

Commit b3a27d dereferences p->bdev->bd_disk unconditionally; this leads to a
null pointer dereference with swap over NFS, where p->bdev is NULL.

Signed-off-by: Xiaotian Feng <[email protected]>
---
mm/swapfile.c | 9 +++++----
1 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index d8a05e4..3eb53fc 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -577,7 +577,6 @@ static unsigned char swap_entry_free(struct swap_info_struct *p,

/* free if no reference */
if (!usage) {
- struct gendisk *disk = p->bdev->bd_disk;
if (offset < p->lowest_bit)
p->lowest_bit = offset;
if (offset > p->highest_bit)
@@ -587,9 +586,11 @@ static unsigned char swap_entry_free(struct swap_info_struct *p,
swap_list.next = p->type;
nr_swap_pages++;
p->inuse_pages--;
- if ((p->flags & SWP_BLKDEV) &&
- disk->fops->swap_slot_free_notify)
- disk->fops->swap_slot_free_notify(p->bdev, offset);
+ if (p->flags & SWP_BLKDEV) {
+ struct gendisk *disk = p->bdev->bd_disk;
+ if (disk->fops->swap_slot_free_notify)
+ disk->fops->swap_slot_free_notify(p->bdev, offset);
+ }
}

return usage;
--
1.7.1.1


2010-07-13 10:22:29

by Xiaotian Feng

[permalink] [raw]
Subject: [PATCH -mmotm 30/30] fix mess up on swap with multi files from same nfs server

From fd03848cadf5719228f617b72039cc8302d892ef Mon Sep 17 00:00:00 2001
From: Xiaotian Feng <[email protected]>
Date: Tue, 13 Jul 2010 14:00:02 +0800
Subject: [PATCH 30/30] fix mess up on swap with multi files from same nfs server

xs_swapper() sets xprt->swapper when an NFS file is swapped on and clears it
when the file is swapped off. This leads to a bug if we swapon multiple files
from the same NFS server: they share the same xprt, so the reserved memory
cannot be fully returned once we swapoff all of the files. Make xprt->swapper
a count of the swap files using the transport, as sketched below.
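
A sketch of the intended accounting (illustration only; the hunks below simply
turn xprt->swapper into a counter):

	/* swapon  file[i] : xprt->swapper: N -> N+1, reserve RPC_RESERVE_PAGES
	 * swapoff file[i] : xprt->swapper: N -> N-1, return RPC_RESERVE_PAGES
	 *
	 * With the old one-bit flag the first swapoff already cleared the
	 * flag, so the reservations taken for the remaining swap files on the
	 * same transport were never returned. */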

Signed-off-by: Xiaotian Feng <[email protected]>
---
include/linux/sunrpc/xprt.h | 4 ++--
net/sunrpc/xprtsock.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h
index ba2330d..bc49091 100644
--- a/include/linux/sunrpc/xprt.h
+++ b/include/linux/sunrpc/xprt.h
@@ -171,8 +171,8 @@ struct rpc_xprt {
unsigned int max_reqs; /* total slots */
unsigned long state; /* transport state */
unsigned char shutdown : 1, /* being shut down */
- resvport : 1, /* use a reserved port */
- swapper : 1; /* we're swapping over this
+ resvport : 1; /* use a reserved port */
+ unsigned int swapper; /* we're swapping over this
transport */
unsigned int bind_index; /* bind function index */

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 5c8b918..30bb8ce 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1662,11 +1662,11 @@ int xs_swapper(struct rpc_xprt *xprt, int enable)
*/
err = sk_adjust_memalloc(1, RPC_RESERVE_PAGES);
if (!err) {
- xprt->swapper = 1;
+ xprt->swapper++;
xs_set_memalloc(xprt);
}
} else if (xprt->swapper) {
- xprt->swapper = 0;
+ xprt->swapper--;
sk_clear_memalloc(transport->inet);
sk_adjust_memalloc(-1, -RPC_RESERVE_PAGES);
}
--
1.7.1.1


2010-07-13 10:55:55

by Mitchell Erblich

[permalink] [raw]
Subject: Re: [PATCH -mmotm 12/30] selinux: tag avc cache alloc as non-critical


On Jul 13, 2010, at 3:19 AM, Xiaotian Feng wrote:

> From 6c3a91091b2910c23908a9f9953efcf3df14e522 Mon Sep 17 00:00:00 2001
> From: Xiaotian Feng <[email protected]>
> Date: Tue, 13 Jul 2010 11:02:41 +0800
> Subject: [PATCH 12/30] selinux: tag avc cache alloc as non-critical
>
> Failing to allocate a cache entry will only harm performance not correctness.
> Do not consume valuable reserve pages for something like that.
>
> Signed-off-by: Peter Zijlstra <[email protected]>
> Signed-off-by: Suresh Jayaraman <[email protected]>
> Signed-off-by: Xiaotian Feng <[email protected]>
> ---
> security/selinux/avc.c | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/security/selinux/avc.c b/security/selinux/avc.c
> index 3662b0f..9029395 100644
> --- a/security/selinux/avc.c
> +++ b/security/selinux/avc.c
> @@ -284,7 +284,7 @@ static struct avc_node *avc_alloc_node(void)
> {
> struct avc_node *node;
>
> - node = kmem_cache_zalloc(avc_node_cachep, GFP_ATOMIC);
> + node = kmem_cache_zalloc(avc_node_cachep, GFP_ATOMIC|__GFP_NOMEMALLOC);
> if (!node)
> goto out;
>
> --
> 1.7.1.1
>

Why not just replace GFP_ATOMIC with GFP_NOWAIT?

This would NOT consume the valuable last pages.

Mitchell Erblich


2010-07-13 20:33:17

by Pekka Enberg

[permalink] [raw]
Subject: Re: [PATCH -mmotm 05/30] mm: sl[au]b: add knowledge of reserve pages

Hi Xiaotian!

I would actually prefer that the SLAB, SLOB, and SLUB changes were in
separate patches to make reviewing easier.

Looking at SLUB:

On Tue, Jul 13, 2010 at 1:17 PM, Xiaotian Feng <[email protected]> wrote:
> diff --git a/mm/slub.c b/mm/slub.c
> index 7bb7940..7a5d6dc 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -27,6 +27,8 @@
>  #include <linux/memory.h>
>  #include <linux/math64.h>
>  #include <linux/fault-inject.h>
> +#include "internal.h"
> +
>
>  /*
>  * Lock order:
> @@ -1139,7 +1141,8 @@ static void setup_object(struct kmem_cache *s, struct page *page,
>                s->ctor(object);
>  }
>
> -static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
> +static
> +struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node, int *reserve)
>  {
>        struct page *page;
>        void *start;
> @@ -1153,6 +1156,8 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>        if (!page)
>                goto out;
>
> +       *reserve = page->reserve;
> +
>        inc_slabs_node(s, page_to_nid(page), page->objects);
>        page->slab = s;
>        page->flags |= 1 << PG_slab;
> @@ -1606,10 +1611,20 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>  {
>        void **object;
>        struct page *new;
> +       int reserve;
>
>        /* We handle __GFP_ZERO in the caller */
>        gfpflags &= ~__GFP_ZERO;
>
> +       if (unlikely(c->reserve)) {
> +               /*
> +                * If the current slab is a reserve slab and the current
> +                * allocation context does not allow access to the reserves we
> +                * must force an allocation to test the current levels.
> +                */
> +               if (!(gfp_to_alloc_flags(gfpflags) & ALLOC_NO_WATERMARKS))
> +                       goto grow_slab;

OK, so assume that:

(1) c->reserve is set to one

(2) GFP flags don't allow dipping into the reserves

(3) we've managed to free enough pages so normal
allocations are fine

(4) the page from reserves is not yet empty

we will call flush_slab() and put the "emergency page" on partial list
and clear c->reserve. This effectively means that now some other
allocation can fetch the partial page and start to use it. Is this OK?
Who makes sure the emergency reserves are large enough for the next
out-of-memory condition where we swap over NFS?

> +       }
>        if (!c->page)
>                goto new_slab;
>
> @@ -1623,8 +1638,8 @@ load_freelist:
>        object = c->page->freelist;
>        if (unlikely(!object))
>                goto another_slab;
> -       if (unlikely(SLABDEBUG && PageSlubDebug(c->page)))
> -               goto debug;
> +       if (unlikely(SLABDEBUG && PageSlubDebug(c->page) || c->reserve))
> +               goto slow_path;
>
>        c->freelist = get_freepointer(s, object);
>        c->page->inuse = c->page->objects;
> @@ -1646,16 +1661,18 @@ new_slab:
>                goto load_freelist;
>        }
>
> +grow_slab:
>        if (gfpflags & __GFP_WAIT)
>                local_irq_enable();
>
> -       new = new_slab(s, gfpflags, node);
> +       new = new_slab(s, gfpflags, node, &reserve);
>
>        if (gfpflags & __GFP_WAIT)
>                local_irq_disable();
>
>        if (new) {
>                c = __this_cpu_ptr(s->cpu_slab);
> +               c->reserve = reserve;
>                stat(s, ALLOC_SLAB);
>                if (c->page)
>                        flush_slab(s, c);
> @@ -1667,10 +1684,20 @@ new_slab:
>        if (!(gfpflags & __GFP_NOWARN) && printk_ratelimit())
>                slab_out_of_memory(s, gfpflags, node);
>        return NULL;
> -debug:
> -       if (!alloc_debug_processing(s, c->page, object, addr))
> +
> +slow_path:
> +       if (!c->reserve && !alloc_debug_processing(s, c->page, object, addr))
>                goto another_slab;
>
> +       /*
> +        * Avoid the slub fast path in slab_alloc() by not setting
> +        * c->freelist and the fast path in slab_free() by making
> +        * node_match() fail by setting c->node to -1.
> +        *
> +        * We use this for for debug and reserve checks which need
> +        * to be done for each allocation.
> +        */
> +
>        c->page->inuse++;
>        c->page->freelist = get_freepointer(s, object);
>        c->node = -1;
> @@ -2095,10 +2122,11 @@ static void early_kmem_cache_node_alloc(gfp_t gfpflags, int node)
>        struct page *page;
>        struct kmem_cache_node *n;
>        unsigned long flags;
> +       int reserve;
>
>        BUG_ON(kmalloc_caches->size < sizeof(struct kmem_cache_node));
>
> -       page = new_slab(kmalloc_caches, gfpflags, node);
> +       page = new_slab(kmalloc_caches, gfpflags, node, &reserve);
>
>        BUG_ON(!page);
>        if (page_to_nid(page) != node) {
> --
> 1.7.1.1
>

2010-08-03 01:44:29

by NeilBrown

[permalink] [raw]
Subject: Re: [PATCH -mmotm 05/30] mm: sl[au]b: add knowledge of reserve pages

On Tue, 13 Jul 2010 23:33:14 +0300
Pekka Enberg <[email protected]> wrote:

> Hi Xiaotian!
>
> I would actually prefer that the SLAB, SLOB, and SLUB changes were in
> separate patches to make reviewing easier.
>
> Looking at SLUB:
>
> On Tue, Jul 13, 2010 at 1:17 PM, Xiaotian Feng <[email protected]> wrote:
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 7bb7940..7a5d6dc 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -27,6 +27,8 @@
> >  #include <linux/memory.h>
> >  #include <linux/math64.h>
> >  #include <linux/fault-inject.h>
> > +#include "internal.h"
> > +
> >
> >  /*
> >  * Lock order:
> > @@ -1139,7 +1141,8 @@ static void setup_object(struct kmem_cache *s, struct page *page,
> >                s->ctor(object);
> >  }
> >
> > -static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
> > +static
> > +struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node, int *reserve)
> >  {
> >        struct page *page;
> >        void *start;
> > @@ -1153,6 +1156,8 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
> >        if (!page)
> >                goto out;
> >
> > +       *reserve = page->reserve;
> > +
> >        inc_slabs_node(s, page_to_nid(page), page->objects);
> >        page->slab = s;
> >        page->flags |= 1 << PG_slab;
> > @@ -1606,10 +1611,20 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> >  {
> >        void **object;
> >        struct page *new;
> > +       int reserve;
> >
> >        /* We handle __GFP_ZERO in the caller */
> >        gfpflags &= ~__GFP_ZERO;
> >
> > +       if (unlikely(c->reserve)) {
> > +               /*
> > +                * If the current slab is a reserve slab and the current
> > +                * allocation context does not allow access to the reserves we
> > +                * must force an allocation to test the current levels.
> > +                */
> > +               if (!(gfp_to_alloc_flags(gfpflags) & ALLOC_NO_WATERMARKS))
> > +                       goto grow_slab;
>
> OK, so assume that:
>
> (1) c->reserve is set to one
>
> (2) GFP flags don't allow dipping into the reserves
>
> (3) we've managed to free enough pages so normal
> allocations are fine
>
> (4) the page from reserves is not yet empty
>
> we will call flush_slab() and put the "emergency page" on partial list
> and clear c->reserve. This effectively means that now some other
> allocation can fetch the partial page and start to use it. Is this OK?
> Who makes sure the emergency reserves are large enough for the next
> out-of-memory condition where we swap over NFS?

Yes, this is OK. The emergency reserves are maintained at a lower level -
within alloc_page.
The fact that (3) normal allocations are fine means that there are enough
free pages to satisfy any swap-out allocation - so any pages that were
previously allocated as 'emergency' pages can have their emergency status
forgotten (the emergency has passed).
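
To make that concrete, the relevant part of the quoted __slab_alloc() hunk
does roughly the following once a fresh page can be allocated normally (a
condensed sketch of the quoted code, not the exact patch text):

	new = new_slab(s, gfpflags, node, &reserve);
	if (new) {
		c = __this_cpu_ptr(s->cpu_slab);
		/* reserve is 0 here: the watermarks are satisfied again,
		 * so the emergency status simply is not carried forward */
		c->reserve = reserve;
		if (c->page)
			/* the old "emergency" page becomes an ordinary
			 * partial slab */
			flush_slab(s, c);
	}

After that point the old page is just memory on the partial list; nothing
remembers that it once came from the reserves, and nothing needs to.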

This is a subtle but important aspect of the emergency reservation scheme in
swap-over-NFS. It is the act-of-allocating that is emergency-or-not. The
memory itself, once allocated, is not special.

c->reserve means "the last page allocated required an emergency allocation".
While it is set, objects from that page (or from any other page this cpu slab
hands out) should only be given to allocations that are themselves entitled
to the emergency reserves. Once the slab succeeds at a non-emergency page
allocation, the flag should obviously be cleared.
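
In code terms that is all the gate at the top of __slab_alloc() does; this is
the quoted hunk restated with a longer comment (illustrative, not the literal
patch text):

	if (unlikely(c->reserve)) {
		/*
		 * The page backing this cpu slab was allocated below the
		 * watermarks.  Callers that may themselves ignore the
		 * watermarks (ALLOC_NO_WATERMARKS) can keep taking objects
		 * from it; everyone else is forced into new_slab(), and
		 * whatever that returns determines the next c->reserve.
		 */
		if (!(gfp_to_alloc_flags(gfpflags) & ALLOC_NO_WATERMARKS))
			goto grow_slab;
	}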

Similarly the page->reserve flag does not mean "this is a reserve page", but
simply "when this page was allocated, it was an emergency allocation". The
flag is often soon lost as it is in a union with e.g. freelist. But that
doesn't matter as it is only really meaningful at the moment of allocation.
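
That is also why the quoted new_slab() hunk latches the value straight after
the page is obtained (the same lines as above, with a comment added for
emphasis):

	if (!page)
		goto out;

	/* capture "this allocation had to dip into the reserves" before
	 * page->reserve is lost to the union with e.g. freelist */
	*reserve = page->reserve;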

I hope that clarifies the situation,

NeilBrown

>
> > +       }
> >        if (!c->page)
> >                goto new_slab;
> >
> > @@ -1623,8 +1638,8 @@ load_freelist:
> >        object = c->page->freelist;
> >        if (unlikely(!object))
> >                goto another_slab;
> > -       if (unlikely(SLABDEBUG && PageSlubDebug(c->page)))
> > -               goto debug;
> > +       if (unlikely(SLABDEBUG && PageSlubDebug(c->page) || c->reserve))
> > +               goto slow_path;
> >
> >        c->freelist = get_freepointer(s, object);
> >        c->page->inuse = c->page->objects;
> > @@ -1646,16 +1661,18 @@ new_slab:
> >                goto load_freelist;
> >        }
> >
> > +grow_slab:
> >        if (gfpflags & __GFP_WAIT)
> >                local_irq_enable();
> >
> > -       new = new_slab(s, gfpflags, node);
> > +       new = new_slab(s, gfpflags, node, &reserve);
> >
> >        if (gfpflags & __GFP_WAIT)
> >                local_irq_disable();
> >
> >        if (new) {
> >                c = __this_cpu_ptr(s->cpu_slab);
> > +               c->reserve = reserve;
> >                stat(s, ALLOC_SLAB);
> >                if (c->page)
> >                        flush_slab(s, c);
> > @@ -1667,10 +1684,20 @@ new_slab:
> >        if (!(gfpflags & __GFP_NOWARN) && printk_ratelimit())
> >                slab_out_of_memory(s, gfpflags, node);
> >        return NULL;
> > -debug:
> > -       if (!alloc_debug_processing(s, c->page, object, addr))
> > +
> > +slow_path:
> > +       if (!c->reserve && !alloc_debug_processing(s, c->page, object, addr))
> >                goto another_slab;
> >
> > +       /*
> > +        * Avoid the slub fast path in slab_alloc() by not setting
> > +        * c->freelist and the fast path in slab_free() by making
> > +        * node_match() fail by setting c->node to -1.
> > +        *
> > +        * We use this for for debug and reserve checks which need
> > +        * to be done for each allocation.
> > +        */
> > +
> >        c->page->inuse++;
> >        c->page->freelist = get_freepointer(s, object);
> >        c->node = -1;
> > @@ -2095,10 +2122,11 @@ static void early_kmem_cache_node_alloc(gfp_t gfpflags, int node)
> >        struct page *page;
> >        struct kmem_cache_node *n;
> >        unsigned long flags;
> > +       int reserve;
> >
> >        BUG_ON(kmalloc_caches->size < sizeof(struct kmem_cache_node));
> >
> > -       page = new_slab(kmalloc_caches, gfpflags, node);
> > +       page = new_slab(kmalloc_caches, gfpflags, node, &reserve);
> >
> >        BUG_ON(!page);
> >        if (page_to_nid(page) != node) {
> > --
> > 1.7.1.1
> >