GFP_ATOMIC is not a single gfp flag, but a macro which expands to other
flags, the significant part being the LACK of the __GFP_WAIT flag. To check
whether a caller wanted an atomic allocation, the code must test for the
presence of the __GFP_WAIT flag. This patch fixes the issue introduced in
v3.5-rc1.
CC: [email protected]
Signed-off-by: Marek Szyprowski <[email protected]>
---
arch/x86/kernel/pci-dma.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index 872079a..32a81c9 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -100,7 +100,7 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,
flag |= __GFP_ZERO;
again:
page = NULL;
- if (!(flag & GFP_ATOMIC))
+ if (flag & __GFP_WAIT)
page = dma_alloc_from_contiguous(dev, count, get_order(size));
if (!page)
page = alloc_pages_node(dev_to_node(dev), flag, get_order(size));
--
1.7.9.5
On Fri, Jan 17, 2014 at 8:46 AM, Marek Szyprowski
<[email protected]> wrote:
> GFP_ATOMIC is not a single gfp flag, but a macro which expands to the other
> flags and LACK of __GFP_WAIT flag. To check if caller wanted to perform an
> atomic allocation, the code must test __GFP_WAIT flag presence. This patch
> fixes the issue introduced in v3.5-rc1
>
> CC: [email protected]
> Signed-off-by: Marek Szyprowski <[email protected]>
> ---
> arch/x86/kernel/pci-dma.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
> index 872079a..32a81c9 100644
> --- a/arch/x86/kernel/pci-dma.c
> +++ b/arch/x86/kernel/pci-dma.c
> @@ -100,7 +100,7 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,
> flag |= __GFP_ZERO;
> again:
> page = NULL;
> - if (!(flag & GFP_ATOMIC))
> + if (flag & __GFP_WAIT)
From that description, should this not actually be:
if (!(flag & (GFP_ATOMIC|__GFP_WAIT) == GFP_ATOMIC))
Else we will start using this pool for more than __GFP_HIGH allocations?
That said, it is possible this is right and the intent was to allow
__GFP_HIGH allocations (in general) to use this contiguous pool, but I
will let someone more intimate with the code comment on that. I would
have hoped the code would have been as below in that case:
if (!(flag & __GFP_HIGH))
Either way, once this is resolved a nice comment should be added to
make it really clear.
-apw
Hello,
On 2014-01-17 11:49, Andy Whitcroft wrote:
> On Fri, Jan 17, 2014 at 8:46 AM, Marek Szyprowski
> <[email protected]> wrote:
> > GFP_ATOMIC is not a single gfp flag, but a macro which expands to the other
> > flags and LACK of __GFP_WAIT flag. To check if caller wanted to perform an
> > atomic allocation, the code must test __GFP_WAIT flag presence. This patch
> > fixes the issue introduced in v3.5-rc1
> >
> > CC: [email protected]
> > Signed-off-by: Marek Szyprowski <[email protected]>
> > ---
> > arch/x86/kernel/pci-dma.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
> > index 872079a..32a81c9 100644
> > --- a/arch/x86/kernel/pci-dma.c
> > +++ b/arch/x86/kernel/pci-dma.c
> > @@ -100,7 +100,7 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,
> > flag |= __GFP_ZERO;
> > again:
> > page = NULL;
> > - if (!(flag & GFP_ATOMIC))
> > + if (flag & __GFP_WAIT)
>
> From that description should this not actually be:
>
> if (!(flag & (GFP_ATOMIC|__GFP_WAIT) == GFP_ATOMIC))
>
> Else we will start using this pool for more than __GFP_HIGH allocations?
>
> That said, it is possible this is right and the intent was to allow
> __GFP_HIGH allocations (in general) to use this contiguous pool, but I
> will let someone more intimate with the code comment to that. I would
> have hoped the code would have been as below in that case:
>
> if (!(flag & __GFP_HIGH))
>
> Either way once this is resolved a nice comment should be added to
> make it really clear:
Exactly. In this case, the GFP_ATOMIC check was (incorrectly) added by me in
commit 0a2b9a6ea936 ("X86: integrate CMA with DMA-mapping subsystem"). My
intention was to use CMA only if the caller uses allocation flags other than
GFP_ATOMIC, because CMA cannot be used from atomic context. The pool is not
aimed at __GFP_HIGH allocations. I will add an additional comment to make
clear why the __GFP_WAIT flag is being checked.
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
GFP_ATOMIC is not a single gfp flag, but a macro which expands to other
flags, where the meaningful part is the LACK of the __GFP_WAIT flag. To
check whether a caller wants to perform an atomic allocation, the code must
test for the absence of the __GFP_WAIT flag. This patch fixes the issue
introduced in v3.5-rc1.
CC: [email protected]
Signed-off-by: Marek Szyprowski <[email protected]>
---
arch/x86/kernel/pci-dma.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index 872079a..3519a78 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -100,8 +100,10 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,
flag |= __GFP_ZERO;
again:
page = NULL;
- if (!(flag & GFP_ATOMIC))
+ /* CMA can be used only in the context which allows sleeping */
+ if (flag & __GFP_WAIT)
page = dma_alloc_from_contiguous(dev, count, get_order(size));
+ /* fallback */
if (!page)
page = alloc_pages_node(dev_to_node(dev), flag, get_order(size));
if (!page)
--
1.7.9.5