2007-10-30 16:23:58

by Peter Zijlstra

Subject: [PATCH 00/33] Swap over NFS -v14


Hi,

Another posting of the full swap over NFS series.

[ I tried posting just the first part last time around, but
that only caused more confusion for lack of the general picture ]

[ patches against 2.6.23-mm1, also to be found online at:
http://programming.kicks-ass.net/kernel-patches/vm_deadlock/v2.6.23-mm1/ ]

The patch-set can be split into roughly 5 parts, for each of which I shall give
a description.


Part 1, patches 1-12

The problem with swap over network is the generic swap problem: needing memory
to free memory. Normally this is solved using mempools, as can be seen in the
BIO layer.

Swap over network has the problem that the network subsystem does not use fixed
sized allocations, but heavily relies on kmalloc(). This makes mempools
unusable.
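
For contrast, a mempool only works when all objects are one size - here is
a minimal sketch, not taken from the series; every name in it is made up
for illustration:

#include <linux/init.h>
#include <linux/mempool.h>
#include <linux/slab.h>

/*
 * A mempool preallocates min_nr objects of ONE size and falls back on
 * them when the page allocator fails; this is what guarantees forward
 * progress for the BIO layer.
 */
static struct kmem_cache *obj_cache;
static mempool_t *obj_pool;

static int __init reserve_example_init(void)
{
        /* every object is the same size, so a pool can back them */
        obj_cache = kmem_cache_create("example_objs", 256, 0, 0, NULL);
        if (!obj_cache)
                return -ENOMEM;

        /* always keep 16 objects in reserve for memory pressure */
        obj_pool = mempool_create_slab_pool(16, obj_cache);
        if (!obj_pool) {
                kmem_cache_destroy(obj_cache);
                return -ENOMEM;
        }
        return 0;
}

No such single pool can back kmalloc()'s arbitrary sizes.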

This first part provides a generic reserve framework.

Care is taken to only affect the slow paths - when we're low on memory.

Caveats: it is currently SLUB only.

1 - mm: gfp_to_alloc_flags()
2 - mm: tag reserve pages
3 - mm: slub: add knowledge of reserve pages
4 - mm: allow mempool to fall back to memalloc reserves
5 - mm: kmem_estimate_pages()
6 - mm: allow PF_MEMALLOC from softirq context
7 - mm: serialize access to min_free_kbytes
8 - mm: emergency pool
9 - mm: system wide ALLOC_NO_WATERMARK
10 - mm: __GFP_MEMALLOC
11 - mm: memory reserve management
12 - selinux: tag avc cache alloc as non-critical


Part 2, patches 13-15

Provide some generic network infrastructure needed later on.

13 - net: wrap sk->sk_backlog_rcv()
14 - net: packet split receive api
15 - net: sk_allocation() - concentrate socket related allocations


Part 3, patches 16-23

Now that we have a generic memory reserve system, use it on the network stack.
The thing that makes this interesting is that, contrary to BIO, both the
transmit and receive path require memory allocations.

That is, in the BIO layer writeback completion is usually just an ISR flipping
a bit and waking stuff up. Network writeback completion involves receiving
packets, which, when there is no memory, is rather hard. And even when there is
memory, there is no guarantee that the required packet comes in within the
window that memory buys us.

The solution to this problem is found in the fact that the network must be
assumed lossy. Even now, when there is no memory to receive packets, the
network card has to discard packets. What we do is move this dropping into
the network stack.

So we reserve a little pool to act as a receive buffer; this allows us to
inspect packets before tossing them. This way, we can pick out those packets
that ensure progress (writeback completion) and disregard the others (which
would have been dropped anyway). [ NOTE: this is a stable mode of operation
with limited memory usage, exactly the kind of thing we need ]

Again, care is taken to confine most of this overhead to the slow path. Only
packets allocated from the reserves will suffer the extra atomic overhead
needed for accounting.
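
In (pseudo) code the filter looks roughly like this - a sketch only;
skb_emergency() and sk_is_memalloc() are illustrative names, not
necessarily the ones used in these patches:

#include <linux/skbuff.h>
#include <net/sock.h>

/*
 * Only deliver a reserve-allocated skb when the destination socket
 * guarantees progress (e.g. the socket doing swap writeback); drop
 * everything else, as the NIC would have done had we not reserved the
 * memory to inspect it. skb_emergency()/sk_is_memalloc() are made up.
 */
static int example_deliver(struct sock *sk, struct sk_buff *skb)
{
        if (skb_emergency(skb) && !sk_is_memalloc(sk)) {
                kfree_skb(skb);
                return NET_RX_DROP;
        }
        return sk->sk_backlog_rcv(sk, skb);
}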

16 - netvm: network reserve infrastructure
17 - sysctl: propagate conv errors
18 - netvm: INET reserves.
19 - netvm: hook skb allocation to reserves
20 - netvm: filter emergency skbs.
21 - netvm: prevent a TCP specific deadlock
22 - netfilter: NF_QUEUE vs emergency skbs
23 - netvm: skb processing


Part 4, patches 24-26

Generic VM infrastructure to handle swapping to a filesystem instead of a block
device. The approach here has been questioned; people would like to see
something less invasive.

One suggestion is to create and use a_ops->swap_{in,out}().

24 - mm: prepare swap entry methods for use in page methods
25 - mm: add support for non block device backed swap files
26 - mm: methods for teaching filesystems about PG_swapcache pages


Part 5, patches 27-33

Finally, convert NFS to make use of the new network and vm infrastructure to
provide swap over NFS.

27 - nfs: remove mempools
28 - nfs: teach the NFS client how to treat PG_swapcache pages
29 - nfs: disable data cache revalidation for swapfiles
30 - nfs: swap vs nfs_writepage
31 - nfs: enable swap on NFS
32 - nfs: fix various memory recursions possible with swap over NFS.
33 - nfs: do not warn on radix tree node allocation failures




2007-10-31 04:33:27

by Nick Piggin

Subject: Re: [PATCH 00/33] Swap over NFS -v14

On Wednesday 31 October 2007 03:04, Peter Zijlstra wrote:
> Hi,
>
> Another posting of the full swap over NFS series.

Hi,

Is it really worth all the added complexity of making swap
over NFS files work, given that you could use a network block
device instead?

Also, have you ensured that page_file_index, page_file_mapping
and page_offset are only ever used on anonymous pages when the
page is locked? (otherwise PageSwapCache could change)
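
(For reference, the helper in question conceptually looks like the
sketch below - simplified, with swap_file_mapping() an illustrative
name; the PageSwapCache() test is where an unlocked page could change
under us.)

#include <linux/mm.h>

/*
 * Conceptual sketch of page_file_mapping(): a swap-cache page's
 * backing "file" mapping is the swap file's, not page->mapping. If
 * the page is not locked, PageSwapCache() can change between this
 * test and the use of the result.
 */
static inline struct address_space *page_file_mapping(struct page *page)
{
        if (unlikely(PageSwapCache(page)))
                return swap_file_mapping(page);
        return page->mapping;
}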

2007-10-31 04:38:11

by David Miller

Subject: Re: [PATCH 00/33] Swap over NFS -v14

From: Nick Piggin <[email protected]>
Date: Wed, 31 Oct 2007 14:26:32 +1100

> Is it really worth all the added complexity of making swap
> over NFS files work, given that you could use a network block
> device instead?

Don't be misled. Swapping over NFS is just a scarecrow for the
seemingly real impetus behind these changes which is network storage
stuff like iSCSI.

2007-10-31 05:11:21

by Nick Piggin

Subject: Re: [PATCH 00/33] Swap over NFS -v14

On Wednesday 31 October 2007 15:37, David Miller wrote:
> From: Nick Piggin <[email protected]>
> Date: Wed, 31 Oct 2007 14:26:32 +1100
>
> > Is it really worth all the added complexity of making swap
> > over NFS files work, given that you could use a network block
> > device instead?
>
> Don't be misled. Swapping over NFS is just a scarecrow for the
> seemingly real impetus behind these changes which is network storage
> stuff like iSCSI.

Oh, I'm OK with the network reserves stuff (not the actual patch,
which I'm not really qualified to review, but at least the idea
of it...).

And also I'm not as such against the idea of swap over network.

However, I'm questioning specifically the change to make swapfiles work
through the filesystem layer (ATM it goes straight to the block layer,
modulo some initialisation stuff which uses block-filesystem-specific
calls).
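
(For reference, the current path - simplified from mm/page_io.c circa
2.6.23 - goes straight to the block layer:)

#include <linux/bio.h>
#include <linux/swap.h>
#include <linux/writeback.h>

/*
 * Simplified: swap-out builds a bio against the swap device's blocks
 * and submits it directly; the filesystem is never involved.
 */
int swap_writepage(struct page *page, struct writeback_control *wbc)
{
        struct bio *bio = get_swap_bio(GFP_NOIO, page_private(page),
                                       page, end_swap_bio_write);

        if (!bio) {
                set_page_dirty(page);
                unlock_page(page);
                return -ENOMEM;
        }
        count_vm_event(PSWPOUT);
        set_page_writeback(page);
        unlock_page(page);
        submit_bio(WRITE, bio);
        return 0;
}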

I mean, I assume that anybody trying to swap over network *today*
has to be using a network block device anyway, so the idea of
just being able to transparently improve that case seems better
than adding new complexities for seemingly not much gain.

2007-10-31 08:50:57

by Christoph Hellwig

Subject: Re: [PATCH 00/33] Swap over NFS -v14

On Tue, Oct 30, 2007 at 09:37:53PM -0700, David Miller wrote:
> Don't be misled. Swapping over NFS is just a scarecrow for the
> seemingly real impetus behind these changes which is network storage
> stuff like iSCSI.

So can we please do swap over network storage only first? All these
VM bits look conceptually sane to me, while the changes to the swap
code to support nfs are real crackpipe material. Then again doing
that part properly by adding address_space methods for swap I/O without
the abuse might be a really good idea, especially as the way we
do swapfiles on block-based filesystems is a horrible hack already.

So please get the VM bits for swap over network blockdevices in first,
and then we can look into a complete revamp of the swapfile support
that cleans up the current mess and adds support for nfs instead of
making the mess even worse.

2007-10-31 09:53:33

by Peter Zijlstra

Subject: Re: [PATCH 00/33] Swap over NFS -v14

On Tue, 2007-10-30 at 21:37 -0700, David Miller wrote:
> From: Nick Piggin <[email protected]>
> Date: Wed, 31 Oct 2007 14:26:32 +1100
>
> > Is it really worth all the added complexity of making swap
> > over NFS files work, given that you could use a network block
> > device instead?
>
> Don't be misled. Swapping over NFS is just a scarecrow for the
> seemingly real impetus behind these changes which is network storage
> stuff like iSCSI.

Not quite. Yes, iSCSI is also on the 'want' list of quite a few people,
but swap over NFS is, on its own, also a feature in great demand.



2007-10-31 10:56:59

by Peter Zijlstra

Subject: Re: [PATCH 00/33] Swap over NFS -v14

On Wed, 2007-10-31 at 08:50 +0000, Christoph Hellwig wrote:
> On Tue, Oct 30, 2007 at 09:37:53PM -0700, David Miller wrote:
> > Don't be misled. Swapping over NFS is just a scarecrow for the
> > seemingly real impetus behind these changes which is network storage
> > stuff like iSCSI.
>
> So can we please do swap over network storage only first? All these
> VM bits look conceptually sane to me, while the changes to the swap
> code to support nfs are real crackpipe material.

Yeah, I know where you stand on that. I just wanted to post all this
before going off into the woods reworking it all.

> Then again doing
> that part properly by adding address_space methods for swap I/O without
> the abuse might be a really good idea, especially as the way we
> do swapfiles on block-based filesystems is a horrible hack already.

It is planned. What do you think of the proposed a_ops extension to
accomplish this? That is,

->swapfile() - is this address space willing to back swap
->swapout() - write out a page
->swapin() - read in a page
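
In code, the extension would amount to something like this - a sketch
only; the exact signatures are guesses from the descriptions above:

struct address_space_operations {
        /* ... existing methods (readpage, writepage, ...) ... */

        /* is this address space willing and able to back swap? */
        int (*swapfile)(struct address_space *mapping, int enable);
        /* write out a swap page */
        int (*swapout)(struct address_space *mapping, struct page *page,
                       struct writeback_control *wbc);
        /* read in a swap page */
        int (*swapin)(struct address_space *mapping, struct page *page);
};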

> So please get the VM bits for swap over network blockdevices in first,

Trouble with that part is that we don't have any sane network block
devices atm, NBD is utter crap, and iSCSI is too complex to be called
sane.

Maybe Evgeniy's Distributed storage thingy would work, will have a look
at that.

> and then we can look into a complete revamp of the swapfile support
> that cleans up the current mess and adds support for nfs instead of
> making the mess even worse.

Sure, concrete suggestions are always welcome. Just being told something
is utter crap only goes so far.



2007-10-31 11:18:23

by Pavel Machek

Subject: NBD was Re: [PATCH 00/33] Swap over NFS -v14

Hi!

> > So please get the VM bits for swap over network blockdevices in first,
>
> Trouble with that part is that we don't have any sane network block
> devices atm, NBD is utter crap, and iSCSI is too complex to be called
> sane.

Hey, NBD was designed to be _simple_. And I think it works okay in
that area... so can you elaborate on "utter crap"? [Ok, performance is
not great.]

Plus, I'd suggest you look at ata-over-ethernet. It is in tree
today, quite simple, but should have better performance than nbd.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2007-10-31 11:25:14

by Peter Zijlstra

Subject: Re: NBD was Re: [PATCH 00/33] Swap over NFS -v14

On Wed, 2007-10-31 at 12:18 +0100, Pavel Machek wrote:
> Hi!
>
> > > So please get the VM bits for swap over network blockdevices in first,
> >
> > Trouble with that part is that we don't have any sane network block
> > devices atm, NBD is utter crap, and iSCSI is too complex to be called
> > sane.
>
> Hey, NBD was designed to be _simple_. And I think it works okay in
> that area... so can you elaborate on "utter crap"? [Ok, performance is
> not great.]

Yeah, sorry, perhaps I was overly strong.

It doesn't work for me, because:

- it does connection management in user-space, which makes it
impossible to reconnect. I'd want a fully kernel-based client.

- it had some plugging issues, and after talking to Jens about it
he suggested a rewrite using ->make_request() a la AoE. [ sorry if
I'm short on details here; it was a long time ago and I
forgot - maybe Jens remembers ]

> Plus, I'd suggest you look at ata-over-ethernet. It is in tree
> today, quite simple, but should have better performance than nbd.

Ah, right, I keep forgetting about that one. The only drawback to that
one is that it's raw ethernet, and not some IP protocol.



2007-10-31 11:27:30

by Peter Zijlstra

Subject: Re: [PATCH 00/33] Swap over NFS -v14

On Wed, 2007-10-31 at 14:26 +1100, Nick Piggin wrote:
> On Wednesday 31 October 2007 03:04, Peter Zijlstra wrote:
> > Hi,
> >
> > Another posting of the full swap over NFS series.
>
> Hi,
>
> Is it really worth all the added complexity of making swap
> over NFS files work, given that you could use a network block
> device instead?

As it stands, we don't have a usable network block device IMHO.
NFS is by far the most used and usable network storage solution out
there; anybody with half a brain knows how to set it up and use it.

> Also, have you ensured that page_file_index, page_file_mapping
> and page_offset are only ever used on anonymous pages when the
> page is locked? (otherwise PageSwapCache could change)

Good point. I hope so: both ->readpage() and ->writepage() take a locked
page; I'd have to check whether it remains locked throughout the NFS call
chain.

Then again, it might become obsolete with the extended swap a_ops.



2007-10-31 12:16:51

by Jeff Garzik

Subject: Re: [PATCH 00/33] Swap over NFS -v14

Thoughts:

1) I absolutely agree that NFS is far more prominent and useful than any
network block device, at the present time.


2) Nonetheless, swap over NFS is a pretty rare case. I view this work
as interesting, but I really don't see a huge need for swapping over
NBD or swapping over NFS. I tend to think swapping to a remote resource
starts to approach "migration" rather than merely swapping. Yes, we can
do it... but given the lack of burning need one must examine the price.


3) You note
> Swap over network has the problem that the network subsystem does not use fixed
> sized allocations, but heavily relies on kmalloc(). This makes mempools
> unusable.

True, but IMO there are mitigating factors that should be researched and
taken into account:

a) To give you some net driver background/history, most mainstream net
drivers were coded to allocate RX skbs of size 1538, under the theory
that they would all be allocating out of the same underlying slab cache.
It would not be difficult to update a great many of the [non-jumbo]
cases to create a fixed size allocation pattern.
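
(Roughly the pattern meant here - a sketch; the ring array is
hypothetical:)

#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define RX_BUF_LEN 1538 /* 1500 MTU + 14 ethernet header + slack */

/*
 * Classic fixed-size RX refill: every buffer is the same length, so
 * all non-jumbo drivers effectively share one slab size.
 */
static int example_rx_refill(struct net_device *dev,
                             struct sk_buff **ring, int n)
{
        int i;

        for (i = 0; i < n; i++) {
                if (ring[i])
                        continue;
                ring[i] = netdev_alloc_skb(dev, RX_BUF_LEN);
                if (!ring[i])
                        return -ENOMEM; /* refill again from NAPI poll */
        }
        return 0;
}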

b) Spare-time experiments and anecdotal evidence point to RX and TX skb
recycling as a potentially valuable area of research. If you are able
to do something like that, then memory suddenly becomes a lot more
bounded and predictable.


So my gut feeling is that taking a hard look at how net drivers function
in the field should give you a lot of good ideas that approach the
shared goal of making network memory allocations more predictable and
bounded.

Jeff


2007-10-31 12:57:18

by Peter Zijlstra

Subject: Re: [PATCH 00/33] Swap over NFS -v14

On Wed, 2007-10-31 at 08:16 -0400, Jeff Garzik wrote:
> Thoughts:
>
> 1) I absolutely agree that NFS is far more prominent and useful than any
> network block device, at the present time.
>
>
> 2) Nonetheless, swap over NFS is a pretty rare case. I view this work
> as interesting, but I really don't see a huge need for swapping over
> NBD or swapping over NFS. I tend to think swapping to a remote resource
> starts to approach "migration" rather than merely swapping. Yes, we can
> do it... but given the lack of burning need one must examine the price.

There is a large corporate demand for this, which is why I'm doing this.

The typical usage scenarios are:
- cluster/blades, where having local disks is a cost issue (maintenance
of failures, heat, etc)
- virtualisation, where dumping the storage on a networked storage unit
makes for trivial migration and what not..

But please, people who want this (I'm sure some of you are reading) do
speak up. I'm just the motivated corporate drone implementing the
feature :-)

> 3) You note
> > Swap over network has the problem that the network subsystem does not use fixed
> > sized allocations, but heavily relies on kmalloc(). This makes mempools
> > unusable.
>
> True, but IMO there are mitigating factors that should be researched and
> taken into account:
>
> a) To give you some net driver background/history, most mainstream net
> drivers were coded to allocate RX skbs of size 1538, under the theory
> that they would all be allocating out of the same underlying slab cache.
> It would not be difficult to update a great many of the [non-jumbo]
> cases to create a fixed size allocation pattern.

One issue that comes to mind is how to ensure we'd still overflow the
IP-reassembly buffers. Currently those are managed on the number of
bytes present, not the number of fragments.

One of the goals of my approach was to not rewrite the network subsystem
to accommodate this feature (and I hope I succeeded).

> b) Spare-time experiments and anecdotal evidence point to RX and TX skb
> recycling as a potentially valuable area of research. If you are able
> to do something like that, then memory suddenly becomes a lot more
> bounded and predictable.
>
>
> So my gut feeling is that taking a hard look at how net drivers function
> in the field should give you a lot of good ideas that approach the
> shared goal of making network memory allocations more predictable and
> bounded.

Note that being bounded only comes from dropping most packets before
tying them to a socket. That is the crucial part of the RX path: to
receive all packets from the NIC (regardless of their size) but to not pass
them on to the network stack - unless they belong to a 'special' socket
that promises undelayed processing.

Thanks for these ideas, I'll look into them.



2007-10-31 13:20:19

by Arnaldo Carvalho de Melo

Subject: Re: [PATCH 00/33] Swap over NFS -v14

Em Wed, Oct 31, 2007 at 01:56:53PM +0100, Peter Zijlstra escreveu:
> On Wed, 2007-10-31 at 08:16 -0400, Jeff Garzik wrote:
> > Thoughts:
> >
> > 1) I absolutely agree that NFS is far more prominent and useful than any
> > network block device, at the present time.
> >
> >
> > 2) Nonetheless, swap over NFS is a pretty rare case. I view this work
> > as interesting, but I really don't see a huge need for swapping over
> > NBD or swapping over NFS. I tend to think swapping to a remote resource
> > starts to approach "migration" rather than merely swapping. Yes, we can
> > do it... but given the lack of burning need one must examine the price.
>
> There is a large corporate demand for this, which is why I'm doing this.
>
> The typical usage scenarios are:
> - cluster/blades, where having local disks is a cost issue (maintenance
> of failures, heat, etc)
> - virtualisation, where dumping the storage on a networked storage unit
> makes for trivial migration and what not..
>
> But please, people who want this (I'm sure some of you are reading) do
> speak up. I'm just the motivated corporate drone implementing the
> feature :-)

Keep it up. Dave already mentioned iSCSI, there is AoE, there are RT
sockets, you name it. The networking bits we've talked about several
times and they look OK, so I'm sorry for not going over all of them in
detail, but you have my support nevertheless.

- Arnaldo

2007-10-31 13:44:43

by Gregory Haskins

Subject: Re: [PATCH 00/33] Swap over NFS -v14

Peter Zijlstra wrote:

>
> But please, people who want this (I'm sure some of you are reading) do
> speak up. I'm just the motivated corporate drone implementing the
> feature :-)

FWIW, I could have used a "swap to network technology X"-like system at
my last job. We were building a large networking switch with blades,
and the IO cards didn't have anywhere near the resources that the
control modules had (no persistent storage, small ram, etc). We were
already doing userspace coredumps over NFS to the control cards. It
would have been nice to swap as well.

2007-10-31 14:33:57

by Byron Stanoszek

Subject: Re: [PATCH 00/33] Swap over NFS -v14

On Wed, 31 Oct 2007, Nick Piggin wrote:

> On Wednesday 31 October 2007 15:37, David Miller wrote:
>> From: Nick Piggin <[email protected]>
>> Date: Wed, 31 Oct 2007 14:26:32 +1100
>>
>>> Is it really worth all the added complexity of making swap
>>> over NFS files work, given that you could use a network block
>>> device instead?
>>
>> Don't be misled. Swapping over NFS is just a scarecrow for the
>> seemingly real impetus behind these changes which is network storage
>> stuff like iSCSI.
>
> Oh, I'm OK with the network reserves stuff (not the actual patch,
> which I'm not really qualified to review, but at least the idea
> of it...).
>
> And also I'm not as such against the idea of swap over network.
>
> However, specifically the change to make swapfiles work through
> the filesystem layer (ATM it goes straight to the block layer,
> modulo some initialisation stuff which uses block filesystem-
> specific calls).
>
> I mean, I assume that anybody trying to swap over network *today*
> has to be using a network block device anyway, so the idea of
> just being able to transparently improve that case seems better
> than adding new complexities for seemingly not much gain.

I have some embedded diskless devices that have 16 MB of RAM and >500 MB of
swap. Their root fs and swap device are both done over NBD, because NFS is too
expensive in 16 MB of RAM. Any memory contention (i.e. needing memory to swap
memory over the network), however infrequent, causes the system to freeze when
about 50 MB of VM is used up. I would love to see some work done in this area.

-Byron

--
Byron Stanoszek Ph: (330) 644-3059
Systems Programmer Fax: (330) 644-8110
Commercial Timesharing Inc. Email: [email protected]

2007-10-31 14:54:21

by Mike Snitzer

Subject: Re: [PATCH 00/33] Swap over NFS -v14

On 10/31/07, Peter Zijlstra <[email protected]> wrote:
> On Wed, 2007-10-31 at 08:50 +0000, Christoph Hellwig wrote:
> > On Tue, Oct 30, 2007 at 09:37:53PM -0700, David Miller wrote:
> > > Don't be misled. Swapping over NFS is just a scarecrow for the
> > > seemingly real impetus behind these changes which is network storage
> > > stuff like iSCSI.
> >
> > So can we please do swap over network storage only first? All these
> > VM bits look conceptually sane to me, while the changes to the swap
> > code to support nfs are real crackpipe material.
>
> Yeah, I know how you stand on that. I just wanted to post all this
> before going off into the woods reworking it all.
...
> > So please get the VM bits for swap over network blockdevices in first,
>
> Trouble with that part is that we don't have any sane network block
> devices atm, NBD is utter crap, and iSCSI is too complex to be called
> sane.
>
> Maybe Evgeniy's Distributed storage thingy would work, will have a look
> at that.

Andrew recently asked Evgeniy if his DST was ready for merging; to
which Evgeniy basically said yes:
http://lkml.org/lkml/2007/10/27/54

It would be great if DST could be merged, thereby addressing the fact
that NBD is lacking for net-vm. If DST were scrutinized in the
context of net-vm, that should help it get the review needed for
merging.

Mike

2007-10-31 16:37:30

by Evgeniy Polyakov

Subject: Re: [PATCH 00/33] Swap over NFS -v14

Hi.

On Wed, Oct 31, 2007 at 10:54:02AM -0400, Mike Snitzer ([email protected]) wrote:
> > Trouble with that part is that we don't have any sane network block
> > devices atm, NBD is utter crap, and iSCSI is too complex to be called
> > sane.
> >
> > Maybe Evgeniy's Distributed storage thingy would work, will have a look
> > at that.
>
> Andrew recently asked Evgeniy if his DST was ready for merging; to
> which Evgeniy basically said yes:
> http://lkml.org/lkml/2007/10/27/54
>
> It would be great if DST could be merged; whereby addressing the fact
> that NBD is lacking for net-vm. If DST were scrutinized in the
> context of net-vm it should help it get the review that is needed for
> merging.

By popular request I'm working on adding strong checksumming of the data
transferred, so I can not say that Andrew will want to merge this during
the development phase. I expect to complete it quite soon (it is in the
testing stage right now), with a new release scheduled this week. It will
also include some small features for userspace (happiness).

Memory management is not changed.

--
Evgeniy Polyakov

2007-11-02 17:17:08

by Pavel Machek

Subject: Re: [PATCH 00/33] Swap over NFS -v14

Hi!

> > 2) Nonetheless, swap over NFS is a pretty rare case. I view this work
> > as interesting, but I really don't see a huge need for swapping over
> > NBD or swapping over NFS. I tend to think swapping to a remote resource
> > starts to approach "migration" rather than merely swapping. Yes, we can
> > do it... but given the lack of burning need one must examine the price.
>
> There is a large corporate demand for this, which is why I'm doing this.
>
> The typical usage scenarios are:
> - cluster/blades, where having local disks is a cost issue (maintenance
> of failures, heat, etc)
> - virtualisation, where dumping the storage on a networked storage unit
> makes for trivial migration and what not..
>
> But please, people who want this (I'm sure some of you are reading) do
> speak up. I'm just the motivated corporate drone implementing the
> feature :-)

I have a Wyse thin client here: Geode (or something) CPU, 128 MB flash,
256 MB RAM (IIRC). You want to swap on this one, and no, you don't want
to swap to flash.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2007-11-18 18:20:31

by Robin Humble

[permalink] [raw]
Subject: Re: [PATCH 00/33] Swap over NFS -v14

<apologies for being insanely late into this thread>

On Wed, Oct 31, 2007 at 01:56:53PM +0100, Peter Zijlstra wrote:
>On Wed, 2007-10-31 at 08:16 -0400, Jeff Garzik wrote:
>> Thoughts:
>> 1) I absolutely agree that NFS is far more prominent and useful than any
>> network block device, at the present time.
>>
>> 2) Nonetheless, swap over NFS is a pretty rare case. I view this work
>> as interesting, but I really don't see a huge need for swapping over
>> NBD or swapping over NFS. I tend to think swapping to a remote resource
>> starts to approach "migration" rather than merely swapping. Yes, we can
>> do it... but given the lack of burning need one must examine the price.
>
>There is a large corporate demand for this, which is why I'm doing this.
>
>The typical usage scenarios are:
> - cluster/blades, where having local disks is a cost issue (maintenance
> of failures, heat, etc)

HPC clusters are increasingly diskless, especially at the high end -
for all the reasons you mention, but also because networks are faster
than disks.

>But please, people who want this (I'm sure some of you are reading) do
>speak up. I'm just the motivated corporate drone implementing the
>feature :-)

Swap to iSCSI has worked well in the past with your anti-deadlock
patches, and I'd definitely like to see that continue and be merged
into mainline! Swap-to-network is a highly desirable feature for
modern clusters.

Performance and scalability of NFS are poor, so it's not a good option.

Actually, swap to a file on Lustre(*) would be best, but iSER and iSCSI
would be my next choices. iSER is better than iSCSI as it's ~5x faster
in practice, and InfiniBand seems to be here to stay.

Hmm - any idea what the issues are with RDMA in low memory situations?
Presumably if DMA regions are mapped early then there's not actually
much of a problem? I might try it with tgtd's iSER...

cheers,
robin

(*) obviously not your responsibility, although Lustre (Sun/CFS) could
presumably use your infrastructure once you have it in mainline.


>> 3) You note
>> > Swap over network has the problem that the network subsystem does not use fixed
>> > sized allocations, but heavily relies on kmalloc(). This makes mempools
>> > unusable.
>>
>> True, but IMO there are mitigating factors that should be researched and
>> taken into account:
>>
>> a) To give you some net driver background/history, most mainstream net
>> drivers were coded to allocate RX skbs of size 1538, under the theory
>> that they would all be allocating out of the same underlying slab cache.
>> It would not be difficult to update a great many of the [non-jumbo]
>> cases to create a fixed size allocation pattern.
>
>One issue that comes to mind is how to ensure we'd still overflow the
>IP-reassembly buffers. Currently those are managed on the number of
>bytes present, not the number of fragments.
>
>One of the goals of my approach was to not rewrite the network subsystem
>to accommodate this feature (and I hope I succeeded).
>
>> b) Spare-time experiments and anecdotal evidence point to RX and TX skb
>> recycling as a potentially valuable area of research. If you are able
>> to do something like that, then memory suddenly becomes a lot more
>> bounded and predictable.
>>
>>
>> So my gut feeling is that taking a hard look at how net drivers function
>> in the field should give you a lot of good ideas that approach the
>> shared goal of making network memory allocations more predictable and
>> bounded.
>
>Note that being bounded only comes from dropping most packets before
>tying them to a socket. That is the crucial part of the RX path: to
>receive all packets from the NIC (regardless of their size) but to not pass
>them on to the network stack - unless they belong to a 'special' socket
>that promises undelayed processing.
>
>Thanks for these ideas, I'll look into them.