2002-11-11 06:00:27

by Mark Mielke

Subject: PROT_SEM + FUTEX

Is PROT_SEM necessary anymore? 2.5.46 does not seem to include any
references to it that adjust behaviour for pages. Would it be
reasonable to remove it, or #define PROT_SEM to (0) to avoid
confusion?

I am beginning to play with the FUTEX system call. I am hoping that
PROT_SEM is not required, as I intend to scatter the words throughout
memory, and it would be a real pain to mprotect(PROT_SEM) each page
that contains a FUTEX word.
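
For concreteness, this is roughly how I am calling it -- a minimal
sketch going through syscall(), since there is no glibc wrapper; the
constants and argument order follow my reading of the 2.5.46 source
and may well be wrong:

    #include <linux/futex.h>        /* FUTEX_WAIT, FUTEX_WAKE */
    #include <sys/syscall.h>        /* __NR_futex */
    #include <unistd.h>             /* syscall() */
    #include <time.h>               /* struct timespec */

    /* Sketch: block while *uaddr == val, or wake up to nwake waiters. */
    static int futex_wait(int *uaddr, int val, struct timespec *timeout)
    {
            return syscall(__NR_futex, uaddr, FUTEX_WAIT, val, timeout);
    }

    static int futex_wake(int *uaddr, int nwake)
    {
            return syscall(__NR_futex, uaddr, FUTEX_WAKE, nwake, NULL);
    }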

For systems that do not support the FUTEX system call (2.4.x?),
is sched_yield() the best alternative?
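
(The kind of fallback I have in mind, sketched below -- the names are
mine, and the IA-32 xchg helper is only illustrative:)

    #include <sched.h>

    /* IA-32 atomic exchange; xchg carries an implicit lock on x86. */
    static inline int xchg_word(volatile int *addr, int val)
    {
            __asm__ __volatile__("xchgl %0, %1"
                                 : "=r" (val), "+m" (*addr)
                                 : "0" (val)
                                 : "memory");
            return val;
    }

    /* Fallback lock for kernels without sys_futex: spin a little on
     * the word, then sched_yield() instead of sleeping in the kernel.
     * Burns CPU under contention, but needs nothing from the kernel. */
    static void lock_yielding(volatile int *word)
    {
            int spins = 100;        /* arbitrary spin budget */

            while (xchg_word(word, 1) != 0) {
                    if (--spins <= 0) {
                            sched_yield();
                            spins = 100;
                    }
            }
    }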

Thanks,
mark

--
[email protected]/[email protected]/[email protected] __________________________
. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder
|\/| |_| |_| |/ |_ |\/| | |_ | |/ |_ |
| | | | | \ | \ |__ . | | .|. |__ |__ | \ |__ | Ottawa, Ontario, Canada

One ring to rule them all, one ring to find them, one ring to bring them all
and in the darkness bind them...

http://mark.mielke.cc/


2002-11-11 20:21:51

by Perez-Gonzalez, Inaky

Subject: RE: PROT_SEM + FUTEX


> I am beginning to play with the FUTEX system call. I am hoping that
> PROT_SEM is not required, as I intend to scatter the words throughout
> memory, and it would be a real pain to mprotect(PROT_SEM) each page
> that contains a FUTEX word.

Still, you want to group them as much as possible - each time you lock
a futex you are pinning the containing page into physical memory, which
means that if you have, for example, 4000 futexes locked in 4000
different pages, there is going to be 16 MB of memory locked in ... it
helps to have an allocator that ties them together.
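
Something as dumb as this already does it [sketch only -- no error
handling, and the names are made up]:

    #include <stdlib.h>     /* valloc() */

    #define PAGE_BYTES      4096
    #define WORDS_PER_PAGE  (PAGE_BYTES / sizeof(int))

    static int *pool;
    static unsigned int pool_used = WORDS_PER_PAGE;

    /* Hand out futex words carved from one page at a time, so a
     * thousand locked futexes pin one page instead of a thousand. */
    static int *futex_word_alloc(void)
    {
            if (pool_used == WORDS_PER_PAGE) {
                    pool = valloc(PAGE_BYTES);      /* page-aligned */
                    pool_used = 0;
            }
            return &pool[pool_used++];
    }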

Cheers,

Inaky Perez-Gonzalez -- Not speaking for Intel - opinions are my own [or my fault]

2002-11-11 21:34:23

by Mark Mielke

Subject: Re: PROT_SEM + FUTEX

On Mon, Nov 11, 2002 at 12:28:32PM -0800, Perez-Gonzalez, Inaky wrote:
> > I am beginning to play with the FUTEX system call. I am hoping that
> > PROT_SEM is not required, as I intend to scatter the words throughout
> > memory, and it would be a real pain to mprotect(PROT_SEM) each page
> > that contains a FUTEX word.
> Still, you want to group them as much as possible - each time you lock
> a futex you are pinning the containing page into physical memory, which
> means that if you have, for example, 4000 futexes locked in 4000
> different pages, there is going to be 16 MB of memory locked in ... it
> helps to have an allocator that ties them together.

This is not necessarily correct for a high-capacity, low-latency
application (i.e. a poorly designed thread architecture might suffer
from this problem -- but a poorly designed thread architecture suffers
from more problems than pinned pages).

As long as the person attempting to manipulate the FUTEX word succeeds
(i.e. 0 -> 1, or 0 -> -1, or whatever), futex_wait() need not be issued.
futex_wake() only pins the page for a brief period of time.
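
In code, the fast path I mean looks roughly like this [my own sketch,
not from any library; note that the unlock side must issue an
unconditional futex_wake() for this naive two-state word to be
correct]:

    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/futex.h>

    /* IA-32 compare-and-exchange: if *addr == oldval, set it to newval;
     * returns the value *addr held before the operation. */
    static inline int cmpxchg_word(volatile int *addr, int oldval, int newval)
    {
            int prev;
            __asm__ __volatile__("lock; cmpxchgl %2, %1"
                                 : "=a" (prev), "+m" (*addr)
                                 : "r" (newval), "0" (oldval)
                                 : "memory");
            return prev;
    }

    static void futex_lock(int *word)
    {
            /* Uncontended case: one locked cmpxchg, no system call. */
            while (cmpxchg_word(word, 0, 1) != 0)
                    /* Contended: sleep while the word still reads 1;
                     * EWOULDBLOCK just sends us around the loop again. */
                    syscall(__NR_futex, word, FUTEX_WAIT, 1, NULL);
    }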

On IA-32, especially for the PIII and P4, my understanding is that on
an SMP machine two independent memory words used for thread
synchronization should never occupy the same cache line. Also, if the
memory word is used to synchronize access to a smaller data structure
(<128 bytes), it is actually optimal to include the memory word used
to synchronize access to the data, and the data itself, in the same
cache line.
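
Concretely, something like this [sketch; 128 bytes being the P4's
line-pair size, and the names made up]:

    /* One lock word plus the small structure it guards, padded and
     * aligned so the pair owns a whole 128-byte line: the lock and
     * its data travel together between CPUs, and no unrelated lock
     * can share (and ping-pong) the line. */
    struct counter_lock {
            int     futex;          /* 0 = free, 1 = held */
            int     count;          /* the data the futex protects */
    } __attribute__ ((aligned (128)));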

Since the synchronization word + data for a data structure may be
128 bytes or more (to at least put the synchronization words on
separate cache lines), 4000 such data structures would use up 100+
pages (4000 x 128 bytes = ~500 KB, i.e. ~125 4-KB pages) whether or
not the memory was scattered. The real benefit of the FUTEX concept
is that the locks are as 'cheap' as spinlocks and can be used at such
a granularity that the possibility of contention is extremely low.

As far as I understand - PROT_SEM has no effect on the behaviour of
FUTEX operations. I think it should be removed.

mark

--
[email protected]/[email protected]/[email protected] __________________________
. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder
|\/| |_| |_| |/ |_ |\/| | |_ | |/ |_ |
| | | | | \ | \ |__ . | | .|. |__ |__ | \ |__ | Ottawa, Ontario, Canada

One ring to rule them all, one ring to find them, one ring to bring them all
and in the darkness bind them...

http://mark.mielke.cc/

2002-11-11 22:24:49

by Perez-Gonzalez, Inaky

Subject: RE: PROT_SEM + FUTEX


> > > I am beginning to play with the FUTEX system call. I am hoping that
> > > PROT_SEM is not required, as I intend to scatter the words throughout
> > > memory, and it would be a real pain to mprotect(PROT_SEM) each page
> > > that contains a FUTEX word.
> >
> > Still, you want to group them as much as possible - each time you lock
> > a futex you are pinning the containing page into physical memory, which
> > means that if you have, for example, 4000 futexes locked in 4000
> > different pages, there is going to be 16 MB of memory locked in ... it
> > helps to have an allocator that ties them together.
>
> This is not necessarily correct for a high-capacity, low-latency
> application (i.e. a poorly designed thread architecture might suffer
> from this problem -- but a poorly designed thread architecture suffers
> from more problems than pinned pages).

Too much thinking about the general case - you are absolutely right,
but only as long as you are sure that the contention rate across the
many futexes you have is going to be _really_ low; that, or your
application is always mlocked in memory - then you are OK.

I keep thinking about multi-thousand-thread programs and their locks,
so forgive me if I forget about the well-designed cases.

> As far as I understand - PROT_SEM has no effect on the behaviour of
> FUTEX operations. I think it should be removed.

Rusty (Russell) declared that Linus declared that platforms that don't
implement a sane cache architecture cannot implement futexes ... so I
guess this means we can forget about PROT_SEM for futexes.

> As long as the person attempting to manipulate the FUTEX word succeeds
> (i.e. 0 -> 1, or 0 -> -1, or whatever), futex_wait() need not be issued.
> futex_wake() only pins the page for a brief period of time.

Define brief - remember that if the futex is locked, the page is already
pinned, futex_wake() is just making sure it is there while it uses it - and
again, as you said before, this is completely application specific; the
kernel cannot count on any specific behaviour on the user side.

> same cache line. Also, if the memory word is used to synchronize
> access to a smaller data structure (<128 bytes), it is actually
> optimal to include the memory word used to synchronize access to the
> data, and the data itself, in the same cache line.

Sure, this makes full sense if you are using the futexes straight from
the kernel for synchronization; however, when used by something like NGPT's
mutex system, the story changes, because you cannot assume anything, you
have to be generic - and there is my bias.

Lucky you that don't need to worry about that :)


2002-11-11 22:56:16

by Mark Mielke

Subject: Re: PROT_SEM + FUTEX

On Mon, Nov 11, 2002 at 02:31:28PM -0800, Perez-Gonzalez, Inaky wrote:
> > As long as the person attempting to manipulate the FUTEX word succeeds
> > (i.e. 0 -> 1, or 0 -> -1, or whatever), futex_wait() need not
> > be issued. futex_wake() only pins the page for a brief period of time.
> Define brief - remember that if the futex is locked, the page is already
> pinned, futex_wake() is just making sure it is there while it uses it - and
> again, as you said before, this is completely application specific; the
> kernel cannot count on any specific behaviour on the user side.

I am defining "brief" as the length of time that futex_wake() takes to
pin and unpin the page, which I hope is quite short, as the internal
futex locks are also held during this time.

I might be doing something wrong -- but it seems to me that using inc,
dec, xchg or cmpxchg (depending on the object being implemented) is
all that is necessary for IA-32. futex_wait() should only be executed
by threads that decide they need to wait, which, in an application
with a well-designed thread architecture, should not occur
frequently. I would find any application that needed to actively wait
on 4000 futex objects to be either incorrectly designed, or under
enough load that I think an investment in a few more CPUs would be
worthwhile... :-)
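
The matching unlock side, again as a sketch [with a bare 0/1 word the
wake has to be unconditional, since the word does not record whether
anyone is asleep; a three-state word could avoid the spurious
syscall]:

    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/futex.h>

    /* xchg rather than a plain store: it is a full barrier on IA-32
     * and returns the old value if we ever want to inspect it. */
    static inline int xchg_word(volatile int *addr, int val)
    {
            __asm__ __volatile__("xchgl %0, %1"
                                 : "=r" (val), "+m" (*addr)
                                 : "0" (val)
                                 : "memory");
            return val;
    }

    static void futex_unlock(int *word)
    {
            xchg_word(word, 0);     /* release */
            syscall(__NR_futex, word, FUTEX_WAKE, 1, NULL); /* wake one */
    }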

> > same cache line. Also, if the memory word is used to synchronize
> > access to a smaller data structure (<128 bytes), it is actually
> > optimal to include the memory word used to synchronize access to the
> > data, and the data itself, in the same cache line.
> Sure, this makes full sense if you are using the futexes straight from
> the kernel for synchronization; however, when used by something like NGPT's
> mutex system, the story changes, because you cannot assume anything, you
> have to be generic - and there is my bias.
> Lucky you that don't need to worry about that :)

In this case it isn't luck -- although I am certain that NGPT, and the
other recent projects to improve the speed of threads and thread
synchronization on Linux, are doing very well, I have been dabbling in
purposefully avoiding 'pthreads-like' libraries for synchronization
primitives. Originally my goal was to reduce the overhead of a
MUTEX-like object and a RWLOCK-like object to a single word each. The
increased efficiency and reduced storage requirements of these
primitives would allow me to use them at a more granular level,
which reduces the potential for contention.
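
As an illustration of what fits in one word [my own encoding,
uncontended paths only -- the contended paths would loop through
sys_futex as above]:

    /* One-word RWLOCK: word > 0 counts readers, -1 marks a writer,
     * 0 is idle. */
    #define RW_WRITER       (-1)

    /* cmpxchg_word() as in the futex_lock() sketch above */
    static inline int cmpxchg_word(volatile int *addr, int oldval, int newval)
    {
            int prev;
            __asm__ __volatile__("lock; cmpxchgl %2, %1"
                                 : "=a" (prev), "+m" (*addr)
                                 : "r" (newval), "0" (oldval)
                                 : "memory");
            return prev;
    }

    static int rw_tryrdlock(volatile int *word)
    {
            int v = *word;
            /* readers may enter as long as no writer holds the word */
            return v >= 0 && cmpxchg_word(word, v, v + 1) == v;
    }

    static int rw_trywrlock(volatile int *word)
    {
            /* a writer needs the word completely idle */
            return cmpxchg_word(word, 0, RW_WRITER) == 0;
    }

    static void rw_rdunlock(volatile int *word)
    {
            int v;
            do {
                    v = *word;
            } while (cmpxchg_word(word, v, v - 1) != v);
    }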

At some point, the need to be absolutely general and portable gets in
the way of being efficient. You seem to be trying to accomplish all
three goals (NGPT), a task that I can appreciate, but one that I
cannot envy... :-)

mark

--
[email protected]/[email protected]/[email protected] __________________________
. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder
|\/| |_| |_| |/ |_ |\/| | |_ | |/ |_ |
| | | | | \ | \ |__ . | | .|. |__ |__ | \ |__ | Ottawa, Ontario, Canada

One ring to rule them all, one ring to find them, one ring to bring them all
and in the darkness bind them...

http://mark.mielke.cc/

2002-11-12 00:25:42

by Perez-Gonzalez, Inaky

Subject: RE: PROT_SEM + FUTEX


> frequently. I would find any application that needed to actively wait
> on 4000 futex objects to be either incorrectly designed, or under
> enough load that I think an investment in a few more CPUs would be
> worthwhile... :-)

Not really; well, yes really, but from the other side: the case
I am talking about is threaded applications with thousands of threads
[no kidding], where, for example, 2K are producers and 4K are consumers;
there you have a rough 50% rate of contention for each lock [assume
each producer has a lock that its two consumers need to acquire]; the
contention might not be very bad, but on average you might have
around 1000 futexes locked, which in the worst case adds up to 1000
pinned pages (~4 MB) ...

However, if you run some thousands of threads, you had better have
enough memory that 1000 locked pages do not really matter :]


2002-11-12 03:40:08

by Jamie Lokier

Subject: Users locking memory using futexes

Perez-Gonzalez, Inaky wrote:
> [...] each time you lock a futex you are pinning the containing page
> into physical memory, which means that if you have, for example, 4000
> futexes locked in 4000 different pages, there is going to be 16 MB of
> memory locked in [...]

Ouch! It looks to me like userspace can use FUTEX_FD to lock many
pages of memory, achieving the same as mlock() but without the
resource checks.

Denial of service attack?
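
Something like this sketch, if I am reading futex_fd() right
[FUTEX_FD usage from the source; untested]:

    #include <linux/futex.h>        /* FUTEX_FD */
    #include <sys/syscall.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
            for (;;) {
                    int *word = valloc(4096);       /* one fresh page */
                    if (word == NULL)
                            break;
                    *word = 0;                      /* fault the page in */
                    /* val is the signal to deliver on wakeup; 0 = none.
                     * The returned fd keeps the page pinned until it is
                     * closed -- and we never close it. The only bound is
                     * the fd table. */
                    if (syscall(__NR_futex, word, FUTEX_FD, 0, NULL) < 0)
                            break;                  /* out of fds, probably */
            }
            pause();        /* sit on the pinned pages */
            return 0;
    }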

-- Jamie

2002-11-12 04:09:23

by Perez-Gonzalez, Inaky

Subject: RE: Users locking memory using futexes


> Perez-Gonzalez, Inaky wrote:
> > [...] each time you lock a futex you are pinning the containing page
> > into physical memory, which means that if you have, for example, 4000
> > futexes locked in 4000 different pages, there is going to be 16 MB of
> > memory locked in [...]
>
> Ouch! It looks to me like userspace can use FUTEX_FD to lock many
> pages of memory, achieving the same as mlock() but without the
> resource checks.

This raises a good point - I guess we should be doing something like
checking user limits (against locked memory, 'ulimit -l'). Something along
the lines of this [warning: dirty, hastily scratched draft, untested]:

diff -u futex.c.orig futex.c
--- futex.c.orig        2002-11-11 20:06:22.000000000 -0800
+++ futex.c     2002-11-11 20:08:48.000000000 -0800
@@ -261,8 +261,12 @@
         struct page *page;
         struct futex_q q;
 
+        if (current->mm->total_vm + 1 >
+            (current->rlim[RLIMIT_MEMLOCK].rlim_cur >> PAGE_SHIFT))
+                return -ENOMEM;
+
         init_waitqueue_head(&q.waiters);
-
+
         lock_futex_mm();
 
         page = __pin_page(uaddr - offset);
@@ -358,6 +362,11 @@
         if (signal < 0 || signal > _NSIG)
                 goto out;
 
+        ret = -ENOMEM;
+        if (current->mm->total_vm + 1 >
+            (current->rlim[RLIMIT_MEMLOCK].rlim_cur >> PAGE_SHIFT))
+                goto out;
+
         ret = get_unused_fd();
         if (ret < 0)
                 goto out;

However, we could break the semantics of other programs that expect the
memory they explicitly lock to be the only thing counted against that
rlimit ...

What else could be done?


2002-11-12 05:14:35

by Jamie Lokier

Subject: Re: Users locking memory using futexes

Perez-Gonzalez, Inaky wrote:
> This raises a good point - I guess we should be doing something like
> checking user limits (against locked memory, 'ulimit -l').

If futexes are limited by user limits, that's going to mean some
threading program gets a surprise when too many threads decide to
block on a resource. That's really nasty. (Of course, a program can
get a surprise due to just running out of memory in sys_futex() too,
but that's much rarer).

It would be nice if the futex waitqueues could be re-hashed against
swap entries when pages are swapped out, somehow, but this sounds hard.

-- Jamie

2002-11-12 05:49:48

by Perez-Gonzalez, Inaky

Subject: RE: Users locking memory using futexes


> > This raises a good point - I guess we should be doing something like
> > checking user limits (against locked memory, 'ulimit -l').

> If futexes are limited by user limits, that's going to mean some
> threading program gets a surprise when too many threads decide to
> block on a resource. That's really nasty. (Of course, a program can
> get a surprise due to just running out of memory in sys_futex() too,
> but that's much rarer).

Sure, as I mentioned in my email, that'd be _a_ way to do it, but I am not
convinced at all that it is the best -- of course, I don't know what the
best way would be; maybe a capability? a per-process tunable in /proc?
another rlimit, and we break POSIX? [do we?]

Good thing is - I just found out after reading twice - that FUTEX_FD does
not lock the page in memory, so that is one case less to worry about.

In this context I was wondering if it really makes sense to worry about too
many threads of a DoS process blocking in futex_wait() to lock memory out.
At least as an exercise ...

> It would be nice if the futex waitqueues could be re-hashed against
> swap entries when pages are swapped out, somehow, but this
> sounds hard.

I am starting to think it could be done with almost no effort -- just off
the top of my little-knowledgeable head -- let's say it can be done:

In futex_wait(), we pin the page, store it and the offset [and whatever
else] as now, and then release it just after queueing in the hash table;
this way the page is free to go to swap.

Say some other process has locked the futex, goes on to do something else,
and the page ends up in swap. Whenever we call _wake() - or tell_waiters()
- we need to make sure the page is in RAM - if not, we can page it in
(__pin_page() does it already) and pin it, do the thing, unpin it.

So, this would mean this patch should suffice:
--- futex.c     12 Nov 2002 05:38:55 -0000      1.1.1.3.8.1
+++ futex.c     12 Nov 2002 05:50:35 -0000
@@ -281,10 +277,12 @@
         /* Page is pinned, but may no longer be in this address space. */
         if (get_user(curval, (int *)uaddr) != 0) {
                 ret = -EFAULT;
+                unpin_page(page);
                 goto out;
         }
         if (curval != val) {
                 ret = -EWOULDBLOCK;
+                unpin_page(page);
                 goto out;
         }
         /*
@@ -295,6 +293,7 @@
          * the waiter from the list.
          */
         add_wait_queue(&q.waiters, &wait);
+        unpin_page(page);
         set_current_state(TASK_INTERRUPTIBLE);
         if (!list_empty(&q.list))
                 time = schedule_timeout(time);
@@ -313,7 +312,6 @@
         /* Were we woken up anyway? */
         if (!unqueue_me(&q))
                 ret = 0;
-        unpin_page(page);
 
         return ret;
 }

Rusty, Ingo: am I missing something big here? I am still kind of green on
the interactions between the address spaces.


2002-11-12 07:49:31

by Ingo Molnar

Subject: Re: Users locking memory using futexes


On Tue, 12 Nov 2002, Jamie Lokier wrote:

> It would be nice if the futex waitqueues could be re-hashed against swap
> entries when pages are swapped out, somehow, but this sounds hard.

yes it sounds hard (and somewhat expensive). The simple solution would be
to hash against the pte address, which is an invariant over swapout - but
that breaks inter-process futexes. The hard way would be to rehash the
futex at the pte address upon swapout, and rehash it with the new physical
page upon swapin. The pte chain case has to be careful, and rehashing
should only be done when the physical page is truly unmapped even in the
last process context.

but this should indeed solve the page lockdown problem.

Ingo

2002-11-12 17:17:34

by Rusty Russell

Subject: Re: Users locking memory using futexes

In message <[email protected]> you write:
> Perez-Gonzalez, Inaky wrote:
> > [...] each time you lock a futex you are pinning the containing page
> > into physical memory, which means that if you have, for example, 4000
> > futexes locked in 4000 different pages, there is going to be 16 MB of
> > memory locked in [...]
>
> Ouch! It looks to me like userspace can use FUTEX_FD to lock many
> pages of memory, achieving the same as mlock() but without the
> resource checks.
>
> Denial of service attack?

See "pipe".

Rusty.
--
Anyone who quotes me in their sig is an idiot. -- Rusty Russell.

2002-11-12 17:30:14

by Jamie Lokier

Subject: Re: Users locking memory using futexes

Perez-Gonzalez, Inaky wrote:
> Good thing is - I just found out after reading twice - that FUTEX_FD does
> not lock the page in memory, so that is one case less to worry about.

Oh yes it does - the page isn't unpinned until wakeup or close.
See where it says in futex_fd():

        page = NULL;
 out:
        if (page)
                unpin_page(page);

Rusty's got a good point about pipe() though.

Btw, maybe GnuPG can use this "feature" to lock its crypto memory in RAM :)

-- Jamie

2002-11-12 17:37:39

by Alan

Subject: Re: Users locking memory using futexes

On Tue, 2002-11-12 at 17:17, Rusty Russell wrote:
> > Ouch! It looks to me like userspace can use FUTEX_FD to lock many
> > pages of memory, achieving the same as mlock() but without the
> > resource checks.
> >
> > Denial of service attack?
>
> See "pipe".

That's not an excuse. If the futex stuff allows arbitrary memory locking
and it isn't properly accounted, then it's a bug, with the added problem
that it's easier to have nasty accidents with than pipes.

We have a per-user object nowadays, so accounting per-user locked memory
looks rather doable, both for mlock, pipe, af_unix sockets and for other
things.

2002-11-12 17:33:34

by Jamie Lokier

Subject: Re: Users locking memory using futexes

Ingo Molnar wrote:
> > It would be nice if the futex waitqueues could be re-hashed against swap
> > entries when pages are swapped out, somehow, but this sounds hard.
>
> yes it sounds hard (and somewhat expensive). The simple solution would be
> to hash against the pte address, which is an invariant over swapout - but
> that breaks inter-process futexes. The hard way would be to rehash the
> futex at the pte address upon swapout, and rehash it with the new physical
> page upon swapin. The pte chain case has to be careful, and rehashing
> should only be done when the physical page is truly unmapped even in the
> last process context.

Can't it be hashed against (address space, offset) for shared
mappings, and against (mm, pte address) for private mappings?
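
i.e. something like this [sketch only, not a patch; field names
invented]:

        /* Kernel-side sketch: key a futex by something invariant over
         * swapout instead of by the struct page. */
        union futex_key {
                struct {                        /* shared mapping */
                        struct address_space    *mapping;
                        unsigned long           pgoff;
                } shared;
                struct {                        /* private mapping */
                        struct mm_struct        *mm;
                        unsigned long           pte_addr;
                } private;
        };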

-- Jamie

2002-11-12 17:50:41

by Perez-Gonzalez, Inaky

Subject: RE: Users locking memory using futexes


> > Good thing is - I just found out after reading twice - that FUTEX_FD does
> > not lock the page in memory, so that is one case less to worry about.
>
> Oh yes it does - the page isn't unpinned until wakeup or close.
> See where it says in futex_fd():
>
>         page = NULL;
>  out:
>         if (page)
>                 unpin_page(page);

Bang, bang, bang ... assshoooole [hearing whispers in my ears]. Great point:
Inaky 0, Jamie 1 - this will teach me to read _three_ times on Monday
evenings. I am supposed to know all that code by heart ... oh well.

> Rusty's got a good point about pipe() though.

He does; grumble, grumble ... let's see ... with pipe you have an implicit
limit that controls you, the number of open files, and you hit the same
one with futex_fd() (in get_unused_fd()) - so that is covered. OTOH, with
plain futex_wait(), if you set out to pin one page per futex you wait on,
you are also limited by RLIMIT_NPROC, because you need a process per futex
you wait on [aside from wasting a lot of memory] - so it looks like there
is another roadblock there to control it.

Hum ... still, I want to try Ingo's approach with the ptes; that is the
part I was missing [knowing that the struct page * is not invariant the
way the pte address is ... even being as obvious as it is].


2002-11-12 19:10:40

by Jamie Lokier

Subject: Re: Users locking memory using futexes

Perez-Gonzalez, Inaky wrote:
> Hum ... still I want to try Ingo's approach on the ptes; that is the part I
> was missing [knowing that struct page * is not invariant as the pte number
> ... even being as obvious as it is].

Btw, the pte address of a private mapping is not invariant over mremap(),
but otherwise I think it is fine.

- Jamie

2002-11-13 07:09:40

by Rusty Russell

Subject: Re: Users locking memory using futexes

In message <[email protected]> you write:
> On Tue, 2002-11-12 at 17:17, Rusty Russell wrote:
> > > Ouch! It looks to me like userspace can use FUTEX_FD to lock many
> > > pages of memory, achieving the same as mlock() but without the
> > > resource checks.
> > >
> > > Denial of service attack?
> >
> > See "pipe".
>
> That's not an excuse. If the futex stuff allows arbitrary memory locking
> and it isn't properly accounted, then it's a bug, with the added problem
> that it's easier to have nasty accidents with than pipes.

It's bounded by one page per fd. If you want better than that, then
yes, we'll need to think harder.

Frobbing futexes on COW and page-in/out is a possible solution, but
requires careful thought.

Rusty.

2002-11-13 13:55:15

by Alan

[permalink] [raw]
Subject: Re: Users locking memory using futexes

On Tue, 2002-11-12 at 18:13, Rusty Russell wrote:
> It's bounded by one page per fd. If you want better than that, then
> yes, we'll need to think harder.

One page per fd is "unbounded" to all intents and purposes. Doing the
page accounting per user doesn't look too scary if we ignore stuff like
page tables for a first cut.