I was looking at a problem with a new system we are trying to get up and
running. It has a 32-bit only PCI network device, but is a 64-bit
(x86_64) system. Looking at the code for NETIF_F_HIGHDMA (which, when
not set on a PCI network device, means that it cannot do 64-bit
accesses) in net/core/dev.c, it seems wrong to me.
It is dependent on HIGHMEM, but HIGHMEM has nothing to do with 32/64 bit
accesses. On 64-bit systems, HIGHMEM is not set, thus the network code
will pass any address (including those >32bits) to the driver. Plus,
highmem on 32-bit systems may very well be 32-bit accessible, possibly
resulting in unnecessary copies. AFAICT, the current code will only work
with i386 and PAE and is sub-optimal.
If I am right, it is a little messy to fix, but I think doable. I
propose the following:
* Create a new zone named ZONE_HIGHMEM32 for 32-bit HIGHMEM addresses.
* Modify 64-bit architectures (and i386 with HIGHMEM) to put the
proper pages into the new zone.
* Add a "PageIn32Bits()" function/macro to check for this, and use
it in illegal_highdma() in net/core/dev.c (see the sketch below this list).
* Allocate from ZONE_HIGHMEM32 if illegal_highdma() returns true.
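A rough sketch of what the modified check might look like (PageIn32Bits()
is the proposed, not-yet-existing macro; the loop mirrors the current
illegal_highdma() in net/core/dev.c):

static inline int illegal_highdma(struct net_device *dev, struct sk_buff *skb)
{
        int i;

        if (dev->features & NETIF_F_HIGHDMA)
                return 0;

        /* Reject any fragment whose page sits above the 4GB boundary,
         * independent of whether the architecture calls it "highmem". */
        for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
                if (!PageIn32Bits(skb_shinfo(skb)->frags[i].page))
                        return 1;

        return 0;
}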
I think this will solve the problem. I haven't looked at other parts of
the kernel (IDE, SCSI, etc.) to see if they have similar problems.
Anyway, does the above change sound reasonable? Maybe there's an easier
way? Maybe I've missed something?
Thanks,
-Corey
On Wed, 2006-02-22 at 12:47 -0600, Corey Minyard wrote:
> I was looking at a problem with a new system we are trying to get up and
> running. It has a 32-bit only PCI network device, but is a 64-bit
> (x86_64) system. Looking at the code for NETIF_F_HIGHDMA (which, when
> not set on a PCI network device, means that it cannot do 64-bit
> accesses) in net/core/dev.c, it seems wrong to me.
>
> It is dependent on HIGHMEM, but HIGHMEM has nothing to do with 32/64 bit
> accesses. On 64-bit systems, HIGHMEM is not set, thus the network code
> will pass any address (including those >32bits) to the driver. Plus,
> highmem on 32-bit systems may very well be 32-bit accessible, possibly
> resulting in unnecessary copies. AFAICT, the current code will only work
> with i386 and PAE and is sub-optimal.
You use the PCI mapping API, right? If you do that then there's no
problem; after PCI mapping, the addresses will be in the lower address
range perfectly fine...
* Corey Minyard ([email protected]) wrote:
> I was looking at a problem with a new system we are trying to get up and
> running. It has a 32-bit only PCI network device, but is a 64-bit
> (x86_64) system. Looking at the code for NETIF_F_HIGHDMA (which, when
> not set on a PCI network device, means that it cannot do 64-bit
> accesses) in net/core/dev.c, it seems wrong to me.
Why would NETIF_F_HIGHDMA be set if it's a 32-bit-only PCI network device?
Typically net drivers do pci_set_dma_mask, and set features accordingly.
Then pci_map_* should just work.
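Roughly this at probe time (a sketch, not from any particular driver;
pdev and dev are the usual probe-time locals, and the masks are the
2.6-era DMA API names):

        /* Try the full 64-bit mask first; fall back to 32 bits. */
        if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) {
                /* Device can DMA to any address: highmem pages are safe. */
                dev->features |= NETIF_F_HIGHDMA;
        } else if (pci_set_dma_mask(pdev, DMA_32BIT_MASK)) {
                /* Can't even reach 32-bit addresses: no usable DMA setup. */
                return -EIO;
        }
        /* From here on, pci_map_single()/pci_map_page() hand back bus
         * addresses that honor whichever mask was accepted. */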
thanks,
-chris
Arjan van de Ven wrote:
>On Wed, 2006-02-22 at 12:47 -0600, Corey Minyard wrote:
>
>
>>I was looking at a problem with a new system we are trying to get up and
>>running. It has a 32-bit only PCI network device, but is a 64-bit
>>(x86_64) system. Looking at the code for NETIF_F_HIGHDMA (which, when
>>not set on a PCI network device, means that it cannot do 64-bit
>>accesses) in net/core/dev.c, it seems wrong to me.
>>
>>It is dependent on HIGHMEM, but HIGHMEM has nothing to do with 32/64 bit
>>accesses. On 64-bit systems, HIGHMEM is not set, thus the network code
>>will pass any address (including those >32bits) to the driver. Plus,
>>highmem on 32-bit systems may very well be 32-bit accessible, possibly
>>resulting in unnecessary copies. AFAICT, the current code will only work
>>with i386 and PAE and is sub-optimal.
>>
>>
>
>You use the PCI mapping API, right? If you do that then there's no
>problem; after PCI mapping, the addresses will be in the lower address
>range perfectly fine...
>
>
Ah, cool, physical memory remapping. Then the problem lies elsewhere.
Thanks.
-Corey
On Wed, 2006-02-22 at 14:14 -0600, Corey Minyard wrote:
> >You use the PCI mapping API, right? If you do that then there's no
> >problem; after PCI mapping, the addresses will be in the lower address
> >range perfectly fine...
> >
> >
> Ah, cool, physical memory remapping. Then the problem lies elsewhere.
> Thanks.
iommu's are cool things ;)
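For illustration, the mapping step in a driver's transmit path looks
roughly like this (a sketch only; 'desc' is a made-up descriptor, pdev
the usual struct pci_dev pointer):

        /* Map the packet; the returned bus address is guaranteed to fit
         * the device's DMA mask -- via IOMMU remapping on x86-64, or
         * bounce buffers where no IOMMU is available. */
        dma_addr_t bus = pci_map_single(pdev, skb->data, skb->len,
                                        PCI_DMA_TODEVICE);

        desc->addr = cpu_to_le32(bus);  /* hypothetical descriptor field */

        /* ... later, in the TX-completion path: */
        pci_unmap_single(pdev, bus, skb->len, PCI_DMA_TODEVICE);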
Corey Minyard <[email protected]> writes:
> I was looking at a problem with a new system we are trying to get up and
> running. It has a 32-bit only PCI network device, but is a 64-bit
> (x86_64) system. Looking at the code for NETIF_F_HIGHDMA (which, when
> not set on a PCI network device, means that it cannot do 64-bit
> accesses) in net/core/dev.c, it seems wrong to me.
> It is dependent on HIGHMEM, but HIGHMEM has nothing to do with 32/64 bit
> accesses. On 64-bit systems, HIGHMEM is not set, thus the network code
> will pass any address (including those >32bits) to the driver. Plus,
> highmem on 32-bit systems may very well be 32-bit accessible, possibly
> resulting in unnecessary copies. AFAICT, the current code will only work
> with i386 and PAE and is sub-optimal.
x86-64 uses the PCI DMA API to handle this. NETIF_F_HIGHDMA is only
an i386-specific hack, mostly to work around the fact that not all
memory might be accessible from the CPU, which would break PIO drivers.
This problem doesn't exist on 64-bit.
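For example, a PIO driver has to touch packet data with the CPU, and a
highmem fragment has no permanent kernel mapping on i386, so it must be
kmap()ed first (a sketch; ioaddr stands in for the device's mapped I/O
window):

        skb_frag_t *frag = &skb_shinfo(skb)->frags[0];
        void *vaddr = kmap(frag->page);  /* temporary CPU mapping */

        memcpy_toio(ioaddr, vaddr + frag->page_offset, frag->size);
        kunmap(frag->page);

A driver that can't do this simply leaves NETIF_F_HIGHDMA unset, and
the stack keeps highmem pages away from it.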
-Andi
On Wed, 2006-02-22 at 21:21 +0100, Arjan van de Ven wrote:
> iommu's are cool things ;)
Say that again from your other email address :)
--
dwmw2