Hi all,
I'm working on an MMC driver with DMA capability. Everything was working
well until at some point I got a bus error, when the MMC driver was
handed a buffer at physical RAM address 0x3000. The reason is that on
the Zynq architecture bus masters cannot access RAM below 0x80000. Hence
my question: how should I configure this in software?
The way I found was to use the ARM-specific struct dmabounce_device_info
and implement its .needs_bounce() method to return true for those
addresses. Is this the right way, or is there a better, more
straightforward one?
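For reference, here is roughly what I have in mind (untested sketch: the
zynq_mmc_* names and the bounce buffer sizes are made up, the callback
signature is the one dmabounce_register_dev() takes in
arch/arm/common/dmabounce.c):

#include <linux/device.h>
#include <asm/dma-mapping.h>

#define ZYNQ_DMA_MIN_ADDR	0x80000	/* bus masters can't reach RAM below this */

/* Bounce whenever a buffer starts below the bus masters' reach */
static int zynq_mmc_needs_bounce(struct device *dev, dma_addr_t addr,
				 size_t size)
{
	return addr < ZYNQ_DMA_MIN_ADDR;
}

/* in probe(); the small/large buffer sizes are picked arbitrarily here */
ret = dmabounce_register_dev(&pdev->dev, 512, 4096, zynq_mmc_needs_bounce);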
To do the above I have to enable CONFIG_DMABOUNCE, which then selects
CONFIG_ZONE_DMA. Having done just that, I suddenly discover that 0x3000
buffers aren't used any more, so I cannot actually verify my
implementation :) Looking at ZONE_DMA, it seems to still cover the
whole RAM range (/proc/zoneinfo shows start_pfn=0 in zone DMA), so I
don't see why 0x3000 should be excluded now.
So, is using the .needs_bounce() method the correct way to support DMA
on this architecture, or is there a better one?
Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/
On Mon, Jan 27, 2014 at 04:13:56PM +0100, Guennadi Liakhovetski wrote:
> Hi all,
>
> I'm working on an MMC driver with DMA capability. Everything was working
> well until at some point I got a bus error, when the MMC driver was
> handed a buffer at physical RAM address 0x3000. The reason is that on
> the Zynq architecture bus masters cannot access RAM below 0x80000. Hence
> my question: how should I configure this in software?
>
> The way I found was to use the ARM-specific struct dmabounce_device_info
> and implement its .needs_bounce() method to return true for those
> addresses. Is this the right way, or is there a better, more
> straightforward one?
>
> To do the above I have to enable CONFIG_DMABOUNCE, which then selects
> CONFIG_ZONE_DMA. Having done just that, I suddenly discover that 0x3000
> buffers aren't used any more, so I cannot actually verify my
> implementation :) Looking at ZONE_DMA, it seems to still cover the
> whole RAM range (/proc/zoneinfo shows start_pfn=0 in zone DMA), so I
> don't see why 0x3000 should be excluded now.
>
> So, is using the .needs_bounce() method the correct way to support DMA
> on this architecture, or is there a better one?
I have a similar issue with the Renesas R8A7790, where there is a bus
bridge that can only deal with transactions to one half of the available RAM.
--
Ben Dooks, [email protected], http://www.fluff.org/ben/
Large Hadron Colada: A large Pina Colada that makes the universe disappear.
Hi Ben,
On Mon, 27 Jan 2014, Ben Dooks wrote:
> On Mon, Jan 27, 2014 at 04:13:56PM +0100, Guennadi Liakhovetski wrote:
> > Hi all,
> >
> > I'm working on an MMC driver with DMA capability. Everything was working
> > well until at some point I got a bus error, when the MMC driver was
> > handed a buffer at physical RAM address 0x3000. The reason is that on
> > the Zynq architecture bus masters cannot access RAM below 0x80000. Hence
> > my question: how should I configure this in software?
> >
> > The way I found was to use the ARM-specific struct dmabounce_device_info
> > and implement its .needs_bounce() method to return true for those
> > addresses. Is this the right way, or is there a better, more
> > straightforward one?
> >
> > To do the above I have to enable CONFIG_DMABOUNCE, which then selects
> > CONFIG_ZONE_DMA. Having done just that, I suddenly discover that 0x3000
> > buffers aren't used any more, so I cannot actually verify my
> > implementation :) Looking at ZONE_DMA, it seems to still cover the
> > whole RAM range (/proc/zoneinfo shows start_pfn=0 in zone DMA), so I
> > don't see why 0x3000 should be excluded now.
> >
> > So, is using the .needs_bounce() method the correct way to support DMA
> > on this architecture, or is there a better one?
>
> I have a similar issue with the Renesas R8A7790, where there is a bus
> bridge that can only deal with transactions to one half of the available RAM.
Have you tried enabling CONFIG_DMABOUNCE?
Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/
On Mon, Jan 27, 2014 at 04:13:56PM +0100, Guennadi Liakhovetski wrote:
> I'm working on an MMC driver with DMA capability. Everything was working
> well until at some point I got a bus error, when the MMC driver was
> handed a buffer at physical RAM address 0x3000. The reason is that on
> the Zynq architecture bus masters cannot access RAM below 0x80000. Hence
> my question: how should I configure this in software?
You're going to run into all sorts of problems here. Normally, the
DMA-able memory is limited to the first N bytes of memory, not "you must
avoid the first N bytes of memory".
Linux has it hard-coded into the memory subsystems that the DMA zone
is from the start of memory to N, the normal zone is from N to H, and
high memory is from H upwards - and allocations for high can fall back
to normal, which can fall back to DMA but not the other way around.
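In allocator terms (an illustrative sketch, not from any driver):

	void *buf;

	/*
	 * GFP_DMA must be satisfied from ZONE_DMA and never falls back
	 * upwards; a plain GFP_KERNEL allocation prefers ZONE_NORMAL but
	 * may fall back down into ZONE_DMA, which is why low pages can
	 * end up being handed to any driver.
	 */
	buf = kmalloc(size, GFP_KERNEL | GFP_DMA);	/* ZONE_DMA only */
	buf = kmalloc(size, GFP_KERNEL);		/* may fall back to ZONE_DMA */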
Short of permanently reserving the first 0x80000 bytes of memory, I'm
not sure that there's much which can be done. You may wish to talk to
the MM gurus to see whether there's a modern alternative.
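If you did reserve it, that would be a one-liner in the machine's
.reserve hook (sketch only; overlapping the kernel image's own
reservation is fine, memblock merges overlapping regions):

	memblock_reserve(0, 0x80000);	/* withhold bus-unreachable RAM */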
--
FTTC broadband for 0.8mile line: 5.8Mbps down 500kbps up. Estimation
in database were 13.1 to 19Mbit for a good line, about 7.5+ for a bad.
Estimate before purchase was "up to 13.2Mbit".
On 01/27/2014 06:02 PM, Russell King - ARM Linux wrote:
> On Mon, Jan 27, 2014 at 04:13:56PM +0100, Guennadi Liakhovetski wrote:
>> I'm working on an MMC driver with DMA capability. Everything was working
>> well until at some point I got a bus error, when the MMC driver was
>> handed a buffer at physical RAM address 0x3000. The reason is that on
>> the Zynq architecture bus masters cannot access RAM below 0x80000. Hence
>> my question: how should I configure this in software?
>
> You're going to run into all sorts of problems here. Normally, the
> DMA-able memory is limited to the first N bytes of memory, not "you must
> avoid the first N bytes of memory".
>
> Linux has it hard-coded into the memory subsystems that the DMA zone
> is from the start of memory to N, the normal zone is from N to H, and
> high memory is from H upwards - and allocations for high can fall back
> to normal, which can fall back to DMA but not the other way around.
>
> Short of permanently reserving the first 0x80000 bytes of memory, I'm
> not sure that there's much which can be done. You may wish to talk to
> the MM gurus to see whether there's a modern alternative.
We use memblock_reserve() to reserve this space in the .reserve phase.
Look at the linux repo at git.xilinx.com, arch/arm/mach-zynq/common.c:
/**
* zynq_memory_init() - Initialize special memory
*
* We need to stop things allocating the low memory as DMA can't work in
* the 1st 512K of memory. Using reserve vs remove is not totally clear yet.
*/
static void __init zynq_memory_init(void)
{
	/*
	 * Reserve the 0-0x4000 addresses (before page tables and kernel)
	 * which can't be used for DMA
	 */
	if (!__pa(PAGE_OFFSET))
		memblock_reserve(0, 0x4000);
}
DT_MACHINE_START(XILINX_EP107, "Xilinx Zynq Platform")
	...
	.reserve = zynq_memory_init,
	...
MACHINE_END
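For what it's worth, my understanding of reserve vs remove is that
memblock_reserve() keeps the region in the memory map but permanently
withholds it from the page allocator, while memblock_remove() takes it
out of the memory map altogether. Either keeps the low pages away from
DMA; the remove variant would simply be:

	if (!__pa(PAGE_OFFSET))
		memblock_remove(0, 0x4000);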
I have checked why we reserve just 0 - 0x4000 when the kernel starts at
0x8000, and maybe Russell can help me with this better.
I was told that using memblock_reserve was recommended for this in the past.
Why 0x4000? IIRC, Linux on ARM uses that space for some purpose.
Russell knows this much better than I do.
Thanks,
Michal
--
Michal Simek, Ing. (M.Eng), OpenPGP -> KeyID: FE3D1F91
w: http://www.monstr.eu p: +42-0-721842854
Maintainer of Linux kernel - Microblaze cpu - http://www.monstr.eu/fdt/
Maintainer of Linux kernel - Xilinx Zynq ARM architecture
Microblaze U-BOOT custodian and responsible for u-boot arm zynq platform
On Mon, Jan 27, 2014 at 06:45:50PM +0100, Michal Simek wrote:
> Why 0x4000? IIRC, Linux on ARM uses that space for some purpose.
> Russell knows this much better than I do.
Probably because as the kernel is loaded at 0x8000, it will place the
swapper page table at 0x4000, thus covering from 0x4000 upwards.
Thus, the majority of your un-DMA-able memory will be kernel text or
swapper page tables.
--
FTTC broadband for 0.8mile line: 5.8Mbps down 500kbps up. Estimation
in database were 13.1 to 19Mbit for a good line, about 7.5+ for a bad.
Estimate before purchase was "up to 13.2Mbit".
On 01/27/2014 06:52 PM, Russell King - ARM Linux wrote:
> On Mon, Jan 27, 2014 at 06:45:50PM +0100, Michal Simek wrote:
>> Why 0x4000? IIRC, Linux on ARM uses that space for some purpose.
>> Russell knows this much better than I do.
>
> Probably because as the kernel is loaded at 0x8000, it will place the
> swapper page table at 0x4000, thus covering from 0x4000 upwards.
Ah yeah swapper.
>
> Thus, the majority of your un-DMA-able memory will be kernel text or
> swapper page tables.
Yes, exactly.
0x0    - 0x4000  - reserved so it is not handed out for DMA
0x4000 - 0x8000  - swapper page table
0x8000 - 0x80000 - kernel text and upwards
Thanks,
Michal
--
Michal Simek, Ing. (M.Eng), OpenPGP -> KeyID: FE3D1F91
w: http://www.monstr.eu p: +42-0-721842854
Maintainer of Linux kernel - Microblaze cpu - http://www.monstr.eu/fdt/
Maintainer of Linux kernel - Xilinx Zynq ARM architecture
Microblaze U-BOOT custodian and responsible for u-boot arm zynq platform
Hi Michal, Russell,
On Mon, 27 Jan 2014, Michal Simek wrote:
> On 01/27/2014 06:52 PM, Russell King - ARM Linux wrote:
> > On Mon, Jan 27, 2014 at 06:45:50PM +0100, Michal Simek wrote:
> >> Why 0x4000? IIRC, Linux on ARM uses that space for some purpose.
> >> Russell knows this much better than I do.
> >
> > Probably because as the kernel is loaded at 0x8000, it will place the
> > swapper page table at 0x4000, thus covering from 0x4000 upwards.
>
> Ah yeah swapper.
>
> >
> > Thus, the majority of your un-DMA-able memory will be kernel text or
> > swapper page tables.
>
> Yes, exactly.
> 0x0    - 0x4000  - reserved so it is not handed out for DMA
> 0x4000 - 0x8000  - swapper page table
> 0x8000 - 0x80000 - kernel text and upwards
Good, thanks for the explanations and examples; we'll do the same then!
Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/