2010-06-05 16:51:20

by Marcin Slusarz

Subject: [PATCH] kmmio/mmiotrace: fix double free of kmmio_fault_pages

After every iounmap mmiotrace has to free kmmio_fault_pages, but it
can't do it directly, so it defers freeing by RCU.

It usually works, but when mmiotraced code calls ioremap-iounmap
multiple times without sleeping in between (so RCU won't kick in and
start freeing), ioremap can return the same virtual address, so at
every iounmap mmiotrace will schedule the same pages for release.
Obviously it will explode on the second free.

Fix it by marking kmmio_fault_pages which are already scheduled for
release and not adding them a second time.
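
The pattern the fix relies on, reduced to its core, looks roughly like
this (an illustrative sketch with simplified, hypothetical names, not
the actual kmmio code):

	/* Illustrative sketch only -- simplified names, not kmmio itself. */
	struct fault_page {
		struct fault_page *release_next;  /* singly linked release list */
		int count;                        /* references from active probes */
		bool scheduled_for_release;       /* already queued for the RCU callback? */
	};

	/*
	 * Called under the lock when a probe is unregistered.  Without the
	 * flag, a page whose count drops to zero again before RCU has run
	 * (same virtual address handed out by ioremap) would be linked into
	 * a second release list while still on the first one, and later
	 * freed twice.
	 */
	static void schedule_release(struct fault_page *f,
				     struct fault_page **release_list)
	{
		if (f->count)
			return;
		if (!f->scheduled_for_release) {
			f->release_next = *release_list;
			*release_list = f;
			f->scheduled_for_release = true;
		}
	}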

Signed-off-by: Marcin Slusarz <[email protected]>
Cc: Pekka Paalanen <[email protected]>
Cc: Stuart Bennett <[email protected]>
---
arch/x86/mm/kmmio.c | 16 +++++++++++++---
1 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c
index 5d0e67f..e5d5e2c 100644
--- a/arch/x86/mm/kmmio.c
+++ b/arch/x86/mm/kmmio.c
@@ -45,6 +45,8 @@ struct kmmio_fault_page {
 	 * Protected by kmmio_lock, when linked into kmmio_page_table.
 	 */
 	int count;
+
+	bool scheduled_for_release;
 };
 
 struct kmmio_delayed_release {
@@ -398,8 +400,11 @@ static void release_kmmio_fault_page(unsigned long page,
 	BUG_ON(f->count < 0);
 	if (!f->count) {
 		disarm_kmmio_fault_page(f);
-		f->release_next = *release_list;
-		*release_list = f;
+		if (!f->scheduled_for_release) {
+			f->release_next = *release_list;
+			*release_list = f;
+			f->scheduled_for_release = true;
+		}
 	}
 }
 
@@ -471,8 +476,10 @@ static void remove_kmmio_fault_pages(struct rcu_head *head)
 			prevp = &f->release_next;
 		} else {
 			*prevp = f->release_next;
+			f->release_next = NULL;
+			f->scheduled_for_release = false;
 		}
-		f = f->release_next;
+		f = *prevp;
 	}
 	spin_unlock_irqrestore(&kmmio_lock, flags);
 
@@ -510,6 +517,9 @@ void unregister_kmmio_probe(struct kmmio_probe *p)
 	kmmio_count--;
 	spin_unlock_irqrestore(&kmmio_lock, flags);
 
+	if (!release_list)
+		return;
+
 	drelease = kmalloc(sizeof(*drelease), GFP_ATOMIC);
 	if (!drelease) {
 		pr_crit("leaking kmmio_fault_page objects.\n");
--
1.7.1


2010-06-05 17:29:42

by Pekka Paalanen

Subject: Re: [PATCH] kmmio/mmiotrace: fix double free of kmmio_fault_pages

On Sat, 5 Jun 2010 18:49:42 +0200
Marcin Slusarz <[email protected]> wrote:

> After every iounmap mmiotrace has to free kmmio_fault_pages, but
> it can't do it directly, so it defers freeing by RCU.
>
> It usually works, but when mmiotraced code calls ioremap-iounmap
> multiple times without sleeping between (so RCU won't kick in and
> start freeing) it can be given the same virtual address, so at
> every iounmap mmiotrace will schedule the same pages for release.
> Obviously it will explode on second free.
>
> Fix it by marking kmmio_fault_pages which are scheduled for
> release and not adding them second time.
>
> Signed-off-by: Marcin Slusarz <[email protected]>
> Cc: Pekka Paalanen <[email protected]>
> Cc: Stuart Bennett <[email protected]>

Excellent work! Unfortunately I cannot review this patch
right now, as I am sick. The description sounds good, though,
and I have no objections.


Thank you very much!

> ---
> arch/x86/mm/kmmio.c | 16 +++++++++++++---
> 1 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c
> index 5d0e67f..e5d5e2c 100644
> --- a/arch/x86/mm/kmmio.c
> +++ b/arch/x86/mm/kmmio.c
> @@ -45,6 +45,8 @@ struct kmmio_fault_page {
> * Protected by kmmio_lock, when linked into kmmio_page_table.
> */
> int count;
> +
> + bool scheduled_for_release;
> };
>
> struct kmmio_delayed_release {
> @@ -398,8 +400,11 @@ static void release_kmmio_fault_page(unsigned long page,
> BUG_ON(f->count < 0);
> if (!f->count) {
> disarm_kmmio_fault_page(f);
> - f->release_next = *release_list;
> - *release_list = f;
> + if (!f->scheduled_for_release) {
> + f->release_next = *release_list;
> + *release_list = f;
> + f->scheduled_for_release = true;
> + }
> }
> }
>
> @@ -471,8 +476,10 @@ static void remove_kmmio_fault_pages(struct rcu_head *head)
> prevp = &f->release_next;
> } else {
> *prevp = f->release_next;
> + f->release_next = NULL;
> + f->scheduled_for_release = false;
> }
> - f = f->release_next;
> + f = *prevp;
> }
> spin_unlock_irqrestore(&kmmio_lock, flags);
>
> @@ -510,6 +517,9 @@ void unregister_kmmio_probe(struct kmmio_probe *p)
> kmmio_count--;
> spin_unlock_irqrestore(&kmmio_lock, flags);
>
> + if (!release_list)
> + return;
> +
> drelease = kmalloc(sizeof(*drelease), GFP_ATOMIC);
> if (!drelease) {
> pr_crit("leaking kmmio_fault_page objects.\n");
> --
> 1.7.1
>
>


--
Pekka Paalanen
http://www.iki.fi/pq/

2010-06-05 19:34:40

by Marcin Slusarz

Subject: Re: [PATCH] kmmio/mmiotrace: fix double free of kmmio_fault_pages

On Sat, Jun 05, 2010 at 06:49:42PM +0200, Marcin Slusarz wrote:
> After every iounmap mmiotrace has to free kmmio_fault_pages, but it
> can't do it directly, so it defers freeing by RCU.
>
> It usually works, but when mmiotraced code calls ioremap-iounmap
> multiple times without sleeping between (so RCU won't kick in and
> start freeing) it can be given the same virtual address, so at
> every iounmap mmiotrace will schedule the same pages for release.
> Obviously it will explode on second free.
>
> Fix it by marking kmmio_fault_pages which are scheduled for release
> and not adding them second time.
>

The attached patch for the mmiotrace testing module makes it possible to
reliably reproduce the bug. It can be folded into the main patch.

---
diff --git a/arch/x86/mm/testmmiotrace.c b/arch/x86/mm/testmmiotrace.c
index 8565d94..5f0937b 100644
--- a/arch/x86/mm/testmmiotrace.c
+++ b/arch/x86/mm/testmmiotrace.c
@@ -90,6 +90,19 @@ static void do_test(unsigned long size)
 	iounmap(p);
 }
 
+static void do_test2(void)
+{
+	void __iomem *p;
+	int i;
+
+	for (i = 0; i < 10; ++i) {
+		p = ioremap_nocache(mmio_address, 4096);
+		if (p)
+			iounmap(p);
+	}
+	synchronize_rcu(); /* will freeing work? */
+}
+
 static int __init init(void)
 {
 	unsigned long size = (read_far) ? (8 << 20) : (16 << 10);
@@ -104,6 +117,7 @@ static int __init init(void)
 		"and writing 16 kB of rubbish in there.\n",
 		size >> 10, mmio_address);
 	do_test(size);
+	do_test2();
 	pr_info("All done.\n");
 	return 0;
 }

2010-06-05 20:45:38

by Pekka Paalanen

Subject: Re: [PATCH] kmmio/mmiotrace: fix double free of kmmio_fault_pages

On Sat, 5 Jun 2010 21:33:01 +0200
Marcin Slusarz <[email protected]> wrote:

> On Sat, Jun 05, 2010 at 06:49:42PM +0200, Marcin Slusarz wrote:
> > After every iounmap mmiotrace has to free kmmio_fault_pages,
> > but it can't do it directly, so it defers freeing by RCU.
> >
> > It usually works, but when mmiotraced code calls ioremap-iounmap
> > multiple times without sleeping between (so RCU won't kick in
> > and start freeing) it can be given the same virtual address, so
> > at every iounmap mmiotrace will schedule the same pages for
> > release. Obviously it will explode on second free.
> >
> > Fix it by marking kmmio_fault_pages which are scheduled for
> > release and not adding them second time.
> >
>
> Attached patch for mmiotrace testing module allows to reliably
> reproduce the bug. It can be folded into the main patch.
>
> ---
> diff --git a/arch/x86/mm/testmmiotrace.c b/arch/x86/mm/testmmiotrace.c
> index 8565d94..5f0937b 100644
> --- a/arch/x86/mm/testmmiotrace.c
> +++ b/arch/x86/mm/testmmiotrace.c
> @@ -90,6 +90,19 @@ static void do_test(unsigned long size)
> iounmap(p);
> }
>
> +static void do_test2(void)
> +{
> + void __iomem *p;
> + int i;
> +
> + for (i = 0; i < 10; ++i) {
> + p = ioremap_nocache(mmio_address, 4096);
> + if (p)
> + iounmap(p);
> + }
> + synchronize_rcu(); /* will freeing work? */
> +}
> +
> static int __init init(void)
> {
> unsigned long size = (read_far) ? (8 << 20) : (16 << 10);
> @@ -104,6 +117,7 @@ static int __init init(void)
> "and writing 16 kB of rubbish in there.\n",
> size >> 10, mmio_address);
> do_test(size);
> + do_test2();
> pr_info("All done.\n");
> return 0;
> }

Acked-by: Pekka Paalanen <[email protected]>

--
Pekka Paalanen
http://www.iki.fi/pq/

2010-06-07 13:33:47

by Ingo Molnar

Subject: Re: [PATCH] kmmio/mmiotrace: fix double free of kmmio_fault_pages


* Marcin Slusarz <[email protected]> wrote:

> On Sat, Jun 05, 2010 at 06:49:42PM +0200, Marcin Slusarz wrote:
> > After every iounmap mmiotrace has to free kmmio_fault_pages, but it
> > can't do it directly, so it defers freeing by RCU.
> >
> > It usually works, but when mmiotraced code calls ioremap-iounmap
> > multiple times without sleeping between (so RCU won't kick in and
> > start freeing) it can be given the same virtual address, so at
> > every iounmap mmiotrace will schedule the same pages for release.
> > Obviously it will explode on second free.
> >
> > Fix it by marking kmmio_fault_pages which are scheduled for release
> > and not adding them second time.
> >
>
> Attached patch for mmiotrace testing module allows to reliably reproduce
> the bug. It can be folded into the main patch.
>
> ---
> diff --git a/arch/x86/mm/testmmiotrace.c b/arch/x86/mm/testmmiotrace.c
> index 8565d94..5f0937b 100644
> --- a/arch/x86/mm/testmmiotrace.c
> +++ b/arch/x86/mm/testmmiotrace.c
> @@ -90,6 +90,19 @@ static void do_test(unsigned long size)
> iounmap(p);
> }
>
> +static void do_test2(void)

Please add a comment about what the test function achieves.

> +{
> + void __iomem *p;
> + int i;
> +
> + for (i = 0; i < 10; ++i) {
> + p = ioremap_nocache(mmio_address, 4096);

s/4096/PAGE_SIZE

> + if (p)
> + iounmap(p);
> + }
> + synchronize_rcu(); /* will freeing work? */
> +}
> +
> static int __init init(void)
> {
> unsigned long size = (read_far) ? (8 << 20) : (16 << 10);
> @@ -104,6 +117,7 @@ static int __init init(void)
> "and writing 16 kB of rubbish in there.\n",
> size >> 10, mmio_address);
> do_test(size);
> + do_test2();

Please name the new function in a bit more meaningful way (such as
do_test_remap()).

Looks good, please send the full (folded back) patch anew, with Pekka's Ack in
place as well.

Thanks,

Ingo
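
The respun patch with Ingo's comments folded in is not part of this
thread, but under those suggestions the test helper would presumably
end up looking something like this (a sketch only -- the name
do_test_remap() and the comment are taken from the review request,
not from a committed version):

	/*
	 * Map and unmap the same physical address repeatedly without
	 * sleeping, so ioremap() is likely to hand back the same virtual
	 * address each time.  Before the kmmio fix this queued the same
	 * kmmio_fault_pages for release twice and crashed in the RCU
	 * callback; with the fix, the final synchronize_rcu() must
	 * complete without incident.
	 */
	static void do_test_remap(void)
	{
		void __iomem *p;
		int i;

		for (i = 0; i < 10; ++i) {
			p = ioremap_nocache(mmio_address, PAGE_SIZE);
			if (p)
				iounmap(p);
		}
		synchronize_rcu(); /* will freeing work? */
	}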