Hi all,
On 07/05/2021 22:00, osstest service owner wrote:
> flight 161829 linux-linus real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/161829/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
[...]
> test-arm64-arm64-examine 8 reboot fail REGR. vs. 152332
> test-arm64-arm64-libvirt-xsm 8 xen-boot fail REGR. vs. 152332
> test-arm64-arm64-xl-credit2 8 xen-boot fail REGR. vs. 152332
> test-arm64-arm64-xl-credit1 8 xen-boot fail REGR. vs. 152332
> test-arm64-arm64-xl 8 xen-boot fail REGR. vs. 152332
> test-arm64-arm64-xl-xsm 8 xen-boot fail REGR. vs. 152332
> test-amd64-amd64-amd64-pvgrub 20 guest-stop fail REGR. vs. 152332
> test-amd64-amd64-i386-pvgrub 20 guest-stop fail REGR. vs. 152332
> test-arm64-arm64-xl-thunderx 8 xen-boot fail REGR. vs. 152332
> test-amd64-amd64-xl-qcow2 13 guest-start fail REGR. vs. 152332
> test-amd64-amd64-libvirt-vhd 13 guest-start fail REGR. vs. 152332
> test-amd64-amd64-examine 4 memdisk-try-append fail REGR. vs. 152332
> test-arm64-arm64-xl-seattle 8 xen-boot fail REGR. vs. 152332
> test-armhf-armhf-xl-vhd 13 guest-start fail REGR. vs. 152332
Osstest reported a dom0 boot failure on all the arm64 platforms we have.
The stack trace is similar everywhere:
[ 18.101077] Unable to handle kernel NULL pointer dereference at
virtual address 0000000000000008
[ 18.101441] Mem abort info:
[ 18.101625] ESR = 0x96000004
[ 18.101839] EC = 0x25: DABT (current EL), IL = 32 bits
[ 18.102111] SET = 0, FnV = 0
[ 18.102327] EA = 0, S1PTW = 0
[ 18.102544] Data abort info:
[ 18.102747] ISV = 0, ISS = 0x00000004
[ 18.102968] CM = 0, WnR = 0
[ 18.103183] [0000000000000008] user address but active_mm is swapper
[ 18.103476] Internal error: Oops: 96000004 [#1] SMP
[ 18.103689] Modules linked in:
[ 18.103881] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.12.0-rc3+ #126
[ 18.104172] Hardware name: Foundation-v8A (DT)
[ 18.104376] pstate: 60000005 (nZCv daif -PAN -UAO -TCO BTYPE=--)
[ 18.104653] pc : xen_swiotlb_dma_supported+0x30/0xc8
[ 18.104893] lr : dma_supported+0x38/0x68
[ 18.105118] sp : ffff80001295bac0
[ 18.105289] x29: ffff80001295bac0 x28: ffff8000114f44c0
[ 18.105600] x27: 0000000000000007 x26: ffff8000113a1000
[ 18.105906] x25: 0000000000000000 x24: ffff800011d2e910
[ 18.106213] x23: ffff800011d4d000 x22: ffff000012fad810
[ 18.106525] x21: ffffffffffffffff x20: ffffffffffffffff
[ 18.106837] x19: ffff000012fad810 x18: 00000000ffffffeb
[ 18.107146] x17: 0000000000000000 x16: 00000000493f1445
[ 18.107450] x15: ffff80001132d000 x14: 000000001c131000
[ 18.107759] x13: 00000000498d0616 x12: ffff8000129f7000
[ 18.108068] x11: ffff000012c08710 x10: ffff800011a91000
[ 18.108380] x9 : 0000000000003000 x8 : ffff00001ffff000
[ 18.108722] x7 : ffff800011a90a88 x6 : ffff800010a7275c
[ 18.109031] x5 : 0000000000000000 x4 : 0000000000000001
[ 18.109331] x3 : 2cd8f9dc91b3df00 x2 : ffff8000109c7578
[ 18.109640] x1 : 0000000000000000 x0 : 0000000000000000
[ 18.109940] Call trace:
[ 18.110079] xen_swiotlb_dma_supported+0x30/0xc8
[ 18.110319] dma_supported+0x38/0x68
[ 18.110543] dma_set_mask+0x30/0x58
[ 18.110765] virtio_mmio_probe+0x1c8/0x238
[ 18.110979] platform_probe+0x6c/0x108
[ 18.111188] really_probe+0xfc/0x3c8
[ 18.111413] driver_probe_device+0x68/0xe8
[ 18.111647] device_driver_attach+0x74/0x98
[ 18.111883] __driver_attach+0x98/0xe0
[ 18.112111] bus_for_each_dev+0x84/0xd8
[ 18.112334] driver_attach+0x30/0x40
[ 18.112557] bus_add_driver+0x168/0x228
[ 18.112784] driver_register+0x64/0x110
[ 18.113016] __platform_driver_register+0x34/0x40
[ 18.113257] virtio_mmio_init+0x20/0x28
[ 18.113480] do_one_initcall+0x90/0x470
[ 18.113694] kernel_init_freeable+0x3ec/0x440
[ 18.113950] kernel_init+0x1c/0x138
[ 18.114158] ret_from_fork+0x10/0x18
[ 18.114409] Code: f000f641 f000a200 f944e821 f9410400 (f9400434)
[ 18.114718] ---[ end trace d39406719f25d228 ]---
[ 18.115044] Kernel panic - not syncing: Attempted to kill init!
exitcode=0x0000000b
[ 18.115339] SMP: stopping secondary CPUs
[ 18.115584] Kernel Offset: disabled
[ 18.115743] CPU features: 0x00240000,61802000
[ 18.115954] Memory Limit: none
[ 18.116173] ---[ end Kernel panic - not syncing: Attempted to kill
init! exitcode=0x0000000b ]---
I have bisected manually and pinpointed the following commit:
commit 2726bf3ff2520dba61fafc90a055640f7ad54e05 (refs/bisect/bad)
Author: Florian Fainelli <[email protected]>
Date: Mon Mar 22 18:53:49 2021 -0700
swiotlb: Make SWIOTLB_NO_FORCE perform no allocation
When SWIOTLB_NO_FORCE is used, there should really be no allocations of
default_nslabs to occur since we are not going to use those slabs. If a
platform was somehow setting swiotlb_no_force and a later call to
swiotlb_init() was to be made we would still be proceeding with
allocating the default SWIOTLB size (64MB), whereas if swiotlb=noforce
was set on the kernel command line we would have only allocated 2KB.
This would be inconsistent and the point of initializing default_nslabs
to 1, was intended to allocate the minimum amount of memory possible, so
simply remove that minimal allocation period.
Signed-off-by: Florian Fainelli <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
The pointer dereference seems to suggest that the swiotlb hasn't been
allocated. From what I can tell, this may be because, even though
swiotlb_force is set to SWIOTLB_NO_FORCE, we will still enable the
swiotlb when running on top of Xen.
I am not entirely sure what would be the correct fix. Any opinions?
Cheers,
>
> Regressions which are regarded as allowable (not blocking):
> test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10 fail REGR. vs. 152332
>
> Tests which did not succeed, but are not blocking:
> test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop fail like 152332
> test-armhf-armhf-libvirt 16 saverestore-support-check fail like 152332
> test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop fail like 152332
> test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
> test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 152332
> test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail like 152332
> test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 152332
> test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail never pass
> test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
> test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
> test-armhf-armhf-xl-credit2 15 migrate-support-check fail never pass
> test-armhf-armhf-xl-credit2 16 saverestore-support-check fail never pass
> test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail never pass
> test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail never pass
> test-armhf-armhf-xl 15 migrate-support-check fail never pass
> test-armhf-armhf-xl 16 saverestore-support-check fail never pass
> test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail never pass
> test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail never pass
> test-armhf-armhf-xl-credit1 15 migrate-support-check fail never pass
> test-armhf-armhf-xl-credit1 16 saverestore-support-check fail never pass
> test-armhf-armhf-libvirt 15 migrate-support-check fail never pass
> test-armhf-armhf-libvirt-raw 14 migrate-support-check fail never pass
> test-armhf-armhf-xl-arndale 15 migrate-support-check fail never pass
> test-armhf-armhf-xl-arndale 16 saverestore-support-check fail never pass
> test-armhf-armhf-xl-rtds 15 migrate-support-check fail never pass
> test-armhf-armhf-xl-rtds 16 saverestore-support-check fail never pass
>
> version targeted for testing:
> linux e48661230cc35b3d0f4367eddfc19f86463ab917
> baseline version:
> linux deacdb3e3979979016fcd0ffd518c320a62ad166
>
> Last test of basis 152332 2020-07-31 19:41:23 Z 280 days
> Failing since 152366 2020-08-01 20:49:34 Z 279 days 465 attempts
> Testing same since 161829 2021-05-07 05:12:46 Z 0 days 1 attempts
>
> ------------------------------------------------------------
> 5992 people touched revisions under test,
> not listing them all
>
> jobs:
> build-amd64-xsm pass
> build-arm64-xsm pass
> build-i386-xsm pass
> build-amd64 pass
> build-arm64 pass
> build-armhf pass
> build-i386 pass
> build-amd64-libvirt pass
> build-arm64-libvirt pass
> build-armhf-libvirt pass
> build-i386-libvirt pass
> build-amd64-pvops pass
> build-arm64-pvops pass
> build-armhf-pvops pass
> build-i386-pvops pass
> test-amd64-amd64-xl pass
> test-amd64-coresched-amd64-xl pass
> test-arm64-arm64-xl fail
> test-armhf-armhf-xl pass
> test-amd64-i386-xl fail
> test-amd64-coresched-i386-xl fail
> test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm pass
> test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm fail
> test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm pass
> test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm fail
> test-amd64-amd64-xl-qemut-debianhvm-i386-xsm pass
> test-amd64-i386-xl-qemut-debianhvm-i386-xsm fail
> test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm pass
> test-amd64-i386-xl-qemuu-debianhvm-i386-xsm fail
> test-amd64-amd64-libvirt-xsm pass
> test-arm64-arm64-libvirt-xsm fail
> test-amd64-i386-libvirt-xsm fail
> test-amd64-amd64-xl-xsm pass
> test-arm64-arm64-xl-xsm fail
> test-amd64-i386-xl-xsm fail
> test-amd64-amd64-qemuu-nested-amd fail
> test-amd64-amd64-xl-pvhv2-amd pass
> test-amd64-i386-qemut-rhel6hvm-amd fail
> test-amd64-i386-qemuu-rhel6hvm-amd fail
> test-amd64-amd64-dom0pvh-xl-amd pass
> test-amd64-amd64-xl-qemut-debianhvm-amd64 pass
> test-amd64-i386-xl-qemut-debianhvm-amd64 fail
> test-amd64-amd64-xl-qemuu-debianhvm-amd64 pass
> test-amd64-i386-xl-qemuu-debianhvm-amd64 fail
> test-amd64-i386-freebsd10-amd64 fail
> test-amd64-amd64-qemuu-freebsd11-amd64 pass
> test-amd64-amd64-qemuu-freebsd12-amd64 pass
> test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
> test-amd64-i386-xl-qemuu-ovmf-amd64 fail
> test-amd64-amd64-xl-qemut-win7-amd64 fail
> test-amd64-i386-xl-qemut-win7-amd64 fail
> test-amd64-amd64-xl-qemuu-win7-amd64 fail
> test-amd64-i386-xl-qemuu-win7-amd64 fail
> test-amd64-amd64-xl-qemut-ws16-amd64 fail
> test-amd64-i386-xl-qemut-ws16-amd64 fail
> test-amd64-amd64-xl-qemuu-ws16-amd64 fail
> test-amd64-i386-xl-qemuu-ws16-amd64 fail
> test-armhf-armhf-xl-arndale pass
> test-amd64-amd64-xl-credit1 pass
> test-arm64-arm64-xl-credit1 fail
> test-armhf-armhf-xl-credit1 pass
> test-amd64-amd64-xl-credit2 pass
> test-arm64-arm64-xl-credit2 fail
> test-armhf-armhf-xl-credit2 pass
> test-armhf-armhf-xl-cubietruck pass
> test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict pass
> test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict fail
> test-amd64-amd64-examine pass
> test-arm64-arm64-examine fail
> test-armhf-armhf-examine pass
> test-amd64-i386-examine fail
> test-amd64-i386-freebsd10-i386 fail
> test-amd64-amd64-qemuu-nested-intel pass
> test-amd64-amd64-xl-pvhv2-intel pass
> test-amd64-i386-qemut-rhel6hvm-intel fail
> test-amd64-i386-qemuu-rhel6hvm-intel fail
> test-amd64-amd64-dom0pvh-xl-intel pass
> test-amd64-amd64-libvirt pass
> test-armhf-armhf-libvirt pass
> test-amd64-i386-libvirt fail
> test-amd64-amd64-xl-multivcpu pass
> test-armhf-armhf-xl-multivcpu pass
> test-amd64-amd64-pair pass
> test-amd64-i386-pair fail
> test-amd64-amd64-libvirt-pair pass
> test-amd64-i386-libvirt-pair fail
> test-amd64-amd64-amd64-pvgrub fail
> test-amd64-amd64-i386-pvgrub fail
> test-amd64-amd64-xl-pvshim pass
> test-amd64-i386-xl-pvshim fail
> test-amd64-amd64-pygrub pass
> test-amd64-amd64-xl-qcow2 fail
> test-armhf-armhf-libvirt-raw pass
> test-amd64-i386-xl-raw fail
> test-amd64-amd64-xl-rtds fail
> test-armhf-armhf-xl-rtds pass
> test-arm64-arm64-xl-seattle fail
> test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow pass
> test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow fail
> test-amd64-amd64-xl-shadow pass
> test-amd64-i386-xl-shadow fail
> test-arm64-arm64-xl-thunderx fail
> test-amd64-amd64-libvirt-vhd fail
> test-armhf-armhf-xl-vhd fail
>
>
> ------------------------------------------------------------
> sg-report-flight on osstest.test-lab.xenproject.org
> logs: /home/logs/logs
> images: /home/logs/images
>
> Logs, config files, etc. are available at
> http://logs.test-lab.xenproject.org/osstest/logs
>
> Explanation of these reports, and of osstest in general, is at
> http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
> http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
>
> Test harness code can be found at
> http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
>
>
> Not pushing.
>
> (No revision log; it would be 1625261 lines long.)
>
--
Julien Grall
On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
> The pointer dereferenced seems to suggest that the swiotlb hasn't been
> allocated. From what I can tell, this may be because swiotlb_force is set
> to SWIOTLB_NO_FORCE, we will still enable the swiotlb when running on top
> of Xen.
>
> I am not entirely sure what would be the correct fix. Any opinions?
Can you try something like the patch below (not even compile tested, but
the intent should be obvious)?
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 16a2b2b1c54d..7671bc153fb1 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -44,6 +44,8 @@
#include <asm/tlb.h>
#include <asm/alternative.h>
+#include <xen/arm/swiotlb-xen.h>
+
/*
* We need to be able to catch inadvertent references to memstart_addr
* that occur (potentially in generic code) before arm64_memblock_init()
@@ -482,7 +484,7 @@ void __init mem_init(void)
if (swiotlb_force == SWIOTLB_FORCE ||
max_pfn > PFN_DOWN(arm64_dma_phys_limit))
swiotlb_init(1);
- else
+ else if (!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect())
swiotlb_force = SWIOTLB_NO_FORCE;
set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
Hi Christoph,
On 10/05/2021 09:40, Christoph Hellwig wrote:
> On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
>> The pointer dereferenced seems to suggest that the swiotlb hasn't been
>> allocated. From what I can tell, this may be because swiotlb_force is set
>> to SWIOTLB_NO_FORCE, we will still enable the swiotlb when running on top
>> of Xen.
>>
>> I am not entirely sure what would be the correct fix. Any opinions?
>
> Can you try something like the patch below (not even compile tested, but
> the intent should be obvious?
>
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 16a2b2b1c54d..7671bc153fb1 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -44,6 +44,8 @@
> #include <asm/tlb.h>
> #include <asm/alternative.h>
>
> +#include <xen/arm/swiotlb-xen.h>
> +
> /*
> * We need to be able to catch inadvertent references to memstart_addr
> * that occur (potentially in generic code) before arm64_memblock_init()
> @@ -482,7 +484,7 @@ void __init mem_init(void)
> if (swiotlb_force == SWIOTLB_FORCE ||
> max_pfn > PFN_DOWN(arm64_dma_phys_limit))
> swiotlb_init(1);
> - else
> + else if (!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect())
> swiotlb_force = SWIOTLB_NO_FORCE;
>
> set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
I have applied the patch on top of 5.13-rc1 and can confirm I am able to
boot dom0. Are you going to submit the patch?
Thank you for your help!
Best regards,
--
Julien Grall
On 5/10/2021 11:15 AM, Julien Grall wrote:
> Hi Christoph,
>
> On 10/05/2021 09:40, Christoph Hellwig wrote:
>> On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
>>> The pointer dereferenced seems to suggest that the swiotlb hasn't been
>>> allocated. From what I can tell, this may be because swiotlb_force is
>>> set
>>> to SWIOTLB_NO_FORCE, we will still enable the swiotlb when running on
>>> top
>>> of Xen.
>>>
>>> I am not entirely sure what would be the correct fix. Any opinions?
>>
>> Can you try something like the patch below (not even compile tested, but
>> the intent should be obvious?
>>
>>
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index 16a2b2b1c54d..7671bc153fb1 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -44,6 +44,8 @@
>> #include <asm/tlb.h>
>> #include <asm/alternative.h>
>> +#include <xen/arm/swiotlb-xen.h>
>> +
>> /*
>> * We need to be able to catch inadvertent references to memstart_addr
>> * that occur (potentially in generic code) before
>> arm64_memblock_init()
>> @@ -482,7 +484,7 @@ void __init mem_init(void)
>> if (swiotlb_force == SWIOTLB_FORCE ||
>> max_pfn > PFN_DOWN(arm64_dma_phys_limit))
>> swiotlb_init(1);
>> - else
>> + else if (!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect())
>> swiotlb_force = SWIOTLB_NO_FORCE;
>> set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
>
> I have applied the patch on top of 5.13-rc1 and can confirm I am able to
> boot dom0. Are you going to submit the patch?
Sorry about that Julien and thanks Christoph!
--
Florian
On Mon, 10 May 2021, Christoph Hellwig wrote:
> On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
> > The pointer dereferenced seems to suggest that the swiotlb hasn't been
> > allocated. From what I can tell, this may be because swiotlb_force is set
> > to SWIOTLB_NO_FORCE, we will still enable the swiotlb when running on top
> > of Xen.
> >
> > I am not entirely sure what would be the correct fix. Any opinions?
>
> Can you try something like the patch below (not even compile tested, but
> the intent should be obvious?
>
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 16a2b2b1c54d..7671bc153fb1 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -44,6 +44,8 @@
> #include <asm/tlb.h>
> #include <asm/alternative.h>
>
> +#include <xen/arm/swiotlb-xen.h>
> +
> /*
> * We need to be able to catch inadvertent references to memstart_addr
> * that occur (potentially in generic code) before arm64_memblock_init()
> @@ -482,7 +484,7 @@ void __init mem_init(void)
> if (swiotlb_force == SWIOTLB_FORCE ||
> max_pfn > PFN_DOWN(arm64_dma_phys_limit))
> swiotlb_init(1);
> - else
> + else if (!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect())
> swiotlb_force = SWIOTLB_NO_FORCE;
>
> set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
The "IS_ENABLED(CONFIG_XEN)" is not needed as the check is already part
of xen_swiotlb_detect().
But let me ask another question first. Do you think it makes sense to have:
if (swiotlb_force == SWIOTLB_NO_FORCE)
return 0;
at the beginning of swiotlb_late_init_with_tbl? I am asking because
swiotlb_late_init_with_tbl is meant for special late initializations,
right? The presence or absence of SWIOTLB_NO_FORCE shouldn't really
matter to swiotlb_late_init_with_tbl. Also the
commit message for "swiotlb: Make SWIOTLB_NO_FORCE perform no
allocation" says that "If a platform was somehow setting
swiotlb_no_force and a later call to swiotlb_init() was to be made we
would still be proceeding with allocating the default SWIOTLB size
(64MB)." Our case here is very similar, right? So the allocation should
proceed?
Which brings me to a separate unrelated issue, still affecting the path
xen_swiotlb_init -> swiotlb_late_init_with_tbl. If swiotlb_init(1) is
called by mem_init then swiotlb_late_init_with_tbl will fail due to the
check:
/* protect against double initialization */
if (WARN_ON_ONCE(io_tlb_default_mem))
return -ENOMEM;
xen_swiotlb_init is meant to ask Xen to make a bunch of pages physically
contiguous. Then, it initializes the swiotlb buffer based on those
pages. So it is a problem that swiotlb_late_init_with_tbl refuses to
continue. However, in practice it is not a problem today because on ARM
we don't actually make any special requests to Xen to make the pages
physically contiguous (yet). See the empty implementation of
arch/arm/xen/mm.c:xen_create_contiguous_region. I don't know about x86.
So maybe we should instead do something like the appended?
diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index f8f07469d259..f5a3638d1dee 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -152,6 +152,7 @@ static int __init xen_mm_init(void)
struct gnttab_cache_flush cflush;
if (!xen_swiotlb_detect())
return 0;
+ swiotlb_exit();
xen_swiotlb_init();
cflush.op = 0;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8ca7d505d61c..f17be37298a7 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -285,9 +285,6 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
struct io_tlb_mem *mem;
- if (swiotlb_force == SWIOTLB_NO_FORCE)
- return 0;
-
/* protect against double initialization */
if (WARN_ON_ONCE(io_tlb_default_mem))
return -ENOMEM;
On Mon, May 10, 2021 at 06:46:34PM -0700, Stefano Stabellini wrote:
> On Mon, 10 May 2021, Christoph Hellwig wrote:
> > On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
> > > The pointer dereferenced seems to suggest that the swiotlb hasn't been
> > > allocated. From what I can tell, this may be because swiotlb_force is set
> > > to SWIOTLB_NO_FORCE, we will still enable the swiotlb when running on top
> > > of Xen.
> > >
> > > I am not entirely sure what would be the correct fix. Any opinions?
> >
> > Can you try something like the patch below (not even compile tested, but
> > the intent should be obvious?
> >
> >
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index 16a2b2b1c54d..7671bc153fb1 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -44,6 +44,8 @@
> > #include <asm/tlb.h>
> > #include <asm/alternative.h>
> >
> > +#include <xen/arm/swiotlb-xen.h>
> > +
> > /*
> > * We need to be able to catch inadvertent references to memstart_addr
> > * that occur (potentially in generic code) before arm64_memblock_init()
> > @@ -482,7 +484,7 @@ void __init mem_init(void)
> > if (swiotlb_force == SWIOTLB_FORCE ||
> > max_pfn > PFN_DOWN(arm64_dma_phys_limit))
> > swiotlb_init(1);
> > - else
> > + else if (!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect())
> > swiotlb_force = SWIOTLB_NO_FORCE;
> >
> > set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
>
> The "IS_ENABLED(CONFIG_XEN)" is not needed as the check is already part
> of xen_swiotlb_detect().
As far as I can tell the x86 version of xen_swiotlb_detect has a
!CONFIG_XEN stub. The arm/arm64 version is unconditionally declared, but
the implementation is only compiled when Xen support is enabled.
>
>
> But let me ask another question first. Do you think it makes sense to have:
>
> if (swiotlb_force == SWIOTLB_NO_FORCE)
> return 0;
>
> at the beginning of swiotlb_late_init_with_tbl? I am asking because
> swiotlb_late_init_with_tbl is meant for special late initializations,
> right? It shouldn't really matter the presence or absence of
> SWIOTLB_NO_FORCE in regards to swiotlb_late_init_with_tbl. Also the
> commit message for "swiotlb: Make SWIOTLB_NO_FORCE perform no
> allocation" says that "If a platform was somehow setting
> swiotlb_no_force and a later call to swiotlb_init() was to be made we
> would still be proceeding with allocating the default SWIOTLB size
> (64MB)." Our case here is very similar, right? So the allocation should
> proceed?
Well, right now SWIOTLB_NO_FORCE is checked in dma_direct_map_page.
We need to clean all this up a bit, especially with the work to support
multiple swiotlb buffers, but I think for now this is the best we can
do.
> Which brings me to a separate unrelated issue, still affecting the path
> xen_swiotlb_init -> swiotlb_late_init_with_tbl. If swiotlb_init(1) is
> called by mem_init then swiotlb_late_init_with_tbl will fail due to the
> check:
>
> /* protect against double initialization */
> if (WARN_ON_ONCE(io_tlb_default_mem))
> return -ENOMEM;
>
> xen_swiotlb_init is meant to ask Xen to make a bunch of pages physically
> contiguous. Then, it initializes the swiotlb buffer based on those
> pages. So it is a problem that swiotlb_late_init_with_tbl refuses to
> continue. However, in practice it is not a problem today because on ARM
> we don't actually make any special requests to Xen to make the pages
> physically contiguous (yet). See the empty implementation of
> arch/arm/xen/mm.c:xen_create_contiguous_region. I don't know about x86.
>
> So maybe we should instead do something like the appended?
So I'd like to change the core swiotlb initialization to just use
a callback into the arch/xen code to make the pages contiguous and
kill all that code duplication. Together with the multiple swiotlb
buffer work I'd rather avoid churn that goes into a different direction
if possible.
On Tue, 11 May 2021, Christoph Hellwig wrote:
> On Mon, May 10, 2021 at 06:46:34PM -0700, Stefano Stabellini wrote:
> > On Mon, 10 May 2021, Christoph Hellwig wrote:
> > > On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
> > > > The pointer dereferenced seems to suggest that the swiotlb hasn't been
> > > > allocated. From what I can tell, this may be because swiotlb_force is set
> > > > to SWIOTLB_NO_FORCE, we will still enable the swiotlb when running on top
> > > > of Xen.
> > > >
> > > > I am not entirely sure what would be the correct fix. Any opinions?
> > >
> > > Can you try something like the patch below (not even compile tested, but
> > > the intent should be obvious?
> > >
> > >
> > > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > > index 16a2b2b1c54d..7671bc153fb1 100644
> > > --- a/arch/arm64/mm/init.c
> > > +++ b/arch/arm64/mm/init.c
> > > @@ -44,6 +44,8 @@
> > > #include <asm/tlb.h>
> > > #include <asm/alternative.h>
> > >
> > > +#include <xen/arm/swiotlb-xen.h>
> > > +
> > > /*
> > > * We need to be able to catch inadvertent references to memstart_addr
> > > * that occur (potentially in generic code) before arm64_memblock_init()
> > > @@ -482,7 +484,7 @@ void __init mem_init(void)
> > > if (swiotlb_force == SWIOTLB_FORCE ||
> > > max_pfn > PFN_DOWN(arm64_dma_phys_limit))
> > > swiotlb_init(1);
> > > - else
> > > + else if (!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect())
> > > swiotlb_force = SWIOTLB_NO_FORCE;
> > >
> > > set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
> >
> > The "IS_ENABLED(CONFIG_XEN)" is not needed as the check is already part
> > of xen_swiotlb_detect().
>
> As far as I can tell the x86 version of xen_swiotlb_detect has a
> !CONFIG_XEN stub. The arm/arm64 version in uncoditionally declared, but
> the implementation only compiled when Xen support is enabled.
The implementation of xen_swiotlb_detect should work fine if
!CONFIG_XEN, but the issue is that it is implemented in
arch/arm/xen/mm.c, so it is not going to be available.
I think it would be good to turn it into a static inline so that we can
call it from arch/arm64/mm/init.c and other similar places with or
without CONFIG_XEN, see appended patch below. It compiles without
CONFIG_XEN.
> > But let me ask another question first. Do you think it makes sense to have:
> >
> > if (swiotlb_force == SWIOTLB_NO_FORCE)
> > return 0;
> >
> > at the beginning of swiotlb_late_init_with_tbl? I am asking because
> > swiotlb_late_init_with_tbl is meant for special late initializations,
> > right? It shouldn't really matter the presence or absence of
> > SWIOTLB_NO_FORCE in regards to swiotlb_late_init_with_tbl. Also the
> > commit message for "swiotlb: Make SWIOTLB_NO_FORCE perform no
> > allocation" says that "If a platform was somehow setting
> > swiotlb_no_force and a later call to swiotlb_init() was to be made we
> > would still be proceeding with allocating the default SWIOTLB size
> > (64MB)." Our case here is very similar, right? So the allocation should
> > proceed?
>
> Well, right now SWIOTLB_NO_FORCE is checked in dma_direct_map_page.
> We need to clean all this up a bit, especially with the work to support
> multiple swiotlb buffers, but I think for now this is the best we can
> do.
OK
> > Which brings me to a separate unrelated issue, still affecting the path
> > xen_swiotlb_init -> swiotlb_late_init_with_tbl. If swiotlb_init(1) is
> > called by mem_init then swiotlb_late_init_with_tbl will fail due to the
> > check:
> >
> > /* protect against double initialization */
> > if (WARN_ON_ONCE(io_tlb_default_mem))
> > return -ENOMEM;
> >
> > xen_swiotlb_init is meant to ask Xen to make a bunch of pages physically
> > contiguous. Then, it initializes the swiotlb buffer based on those
> > pages. So it is a problem that swiotlb_late_init_with_tbl refuses to
> > continue. However, in practice it is not a problem today because on ARM
> > we don't actually make any special requests to Xen to make the pages
> > physically contiguous (yet). See the empty implementation of
> > arch/arm/xen/mm.c:xen_create_contiguous_region. I don't know about x86.
> >
> > So maybe we should instead do something like the appended?
>
> So I'd like to change the core swiotlb initialization to just use
> a callback into the arch/xen code to make the pages contigous and
> kill all that code duplication. Together with the multiple swiotlb
> buffer work I'd rather avoid churn that goes into a different direction
> if possible.
That's a much better plan. It is also not super urgent, so maybe for now
we could add an explicit check for io_tlb_default_mem != NULL at the
beginning of xen_swiotlb_init? That way we would at least fail or skip
explicitly rather than by accident.
---
diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index f8f07469d259..223b1151fd7d 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -135,18 +135,6 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
return;
}
-int xen_swiotlb_detect(void)
-{
- if (!xen_domain())
- return 0;
- if (xen_feature(XENFEAT_direct_mapped))
- return 1;
- /* legacy case */
- if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
- return 1;
- return 0;
-}
-
static int __init xen_mm_init(void)
{
struct gnttab_cache_flush cflush;
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 16a2b2b1c54d..e55409caaee3 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -43,6 +43,7 @@
#include <linux/sizes.h>
#include <asm/tlb.h>
#include <asm/alternative.h>
+#include <asm/xen/swiotlb-xen.h>
/*
* We need to be able to catch inadvertent references to memstart_addr
@@ -482,7 +483,7 @@ void __init mem_init(void)
if (swiotlb_force == SWIOTLB_FORCE ||
max_pfn > PFN_DOWN(arm64_dma_phys_limit))
swiotlb_init(1);
- else
+ else if (!xen_swiotlb_detect())
swiotlb_force = SWIOTLB_NO_FORCE;
set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
diff --git a/include/xen/arm/swiotlb-xen.h b/include/xen/arm/swiotlb-xen.h
index 2994fe6031a0..33336ab58afc 100644
--- a/include/xen/arm/swiotlb-xen.h
+++ b/include/xen/arm/swiotlb-xen.h
@@ -2,6 +2,19 @@
#ifndef _ASM_ARM_SWIOTLB_XEN_H
#define _ASM_ARM_SWIOTLB_XEN_H
-extern int xen_swiotlb_detect(void);
+#include <xen/features.h>
+#include <xen/xen.h>
+
+static inline int xen_swiotlb_detect(void)
+{
+ if (!xen_domain())
+ return 0;
+ if (xen_feature(XENFEAT_direct_mapped))
+ return 1;
+ /* legacy case */
+ if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
+ return 1;
+ return 0;
+}
#endif /* _ASM_ARM_SWIOTLB_XEN_H */
On Tue, May 11, 2021 at 09:47:33AM -0700, Stefano Stabellini wrote:
> That's a much better plan. It is also not super urgent, so maybe for now
> we could add an explicit check for io_tlb_default_mem != NULL at the
> beginning of xen_swiotlb_init? So that at least we can fail explicitly
> or ignore it explicitly rather than by accident.
Fine with me. Do you want to take over from here and test and submit
your version?