2015-04-03 08:41:20

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

> To fix this problem, we modify the behavior of the Intel VT-d driver in
> the crashdump kernel:
>
> For DMA Remapping:
> 1. Accept the VT-d hardware in an active state.
> 2. Do not disable and re-enable the translation; keep it enabled.
> 3. Use the old root entry table; do not rewrite the RTA register.
> 4. Allocate and use a new context entry table; copy data from the old ones
> used by the old kernel.
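
(For readers following the thread: steps 3 and 4 amount to roughly the
sketch below. This is not the actual patch code; oldmem_phys_to_virt() and
alloc_table_page() are hypothetical helpers, and the entry layout is
abbreviated.)

	struct root_entry    { u64 lo; u64 hi; };  /* one per PCI bus */
	struct context_entry { u64 lo; u64 hi; };  /* one per devfn   */

	#define ENTRY_NR        256
	#define ROOT_PRESENT(r) ((r)->lo & 1ULL)

	static int copy_old_context_tables(struct intel_iommu *iommu)
	{
		/* step 3: read the old root table address, don't rewrite it */
		u64 old_rta = dmar_readq(iommu->reg + DMAR_RTADDR_REG);
		struct root_entry *root = oldmem_phys_to_virt(old_rta);
		int bus;

		for (bus = 0; bus < ENTRY_NR; bus++) {
			struct context_entry *old_ctx, *new_ctx;

			if (!ROOT_PRESENT(&root[bus]))
				continue;

			/* step 4: allocate a fresh context table in the kdump
			 * kernel's own memory, filled from the old entries */
			old_ctx = oldmem_phys_to_virt(root[bus].lo & PAGE_MASK);
			new_ctx = alloc_table_page();
			if (!new_ctx)
				return -ENOMEM;
			memcpy(new_ctx, old_ctx, ENTRY_NR * sizeof(*new_ctx));

			/* repoint the root entry at the new context table */
			root[bus].lo = (root[bus].lo & ~PAGE_MASK) |
				       virt_to_phys(new_ctx);
		}
		return 0;
	}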

I have not read all the patches, but I have a question; I am not sure whether
this has been answered before. Old memory is not reliable: what if the old
memory gets corrupted before the panic? Is it safe to keep using it in the
2nd kernel? I worry that it will cause problems.

Hope I'm wrong though.

Thanks
Dave


2015-04-03 09:02:39

by Li, ZhenHua

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

Hi Dave,

There is some possibility that the old iommu data has been corrupted by
some other module. Currently we do not have a better solution for the
dmar faults.

But I think when this happens, we need to fix the module that corrupted
the old iommu data. I once met a similar problem in a normal kernel: the
queue used by the qi_* functions was overwritten by another module. The
fix was in that module, not in the iommu module.


Thanks
Zhenhua

On 04/03/2015 04:40 PM, Dave Young wrote:
> [...]

2015-04-03 09:22:03

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 04/03/15 at 05:01pm, Li, ZhenHua wrote:
> Hi Dave,
>
> There is some possibility that the old iommu data has been corrupted by
> some other module. Currently we do not have a better solution for the
> dmar faults.
>
> But I think when this happens, we need to fix the module that corrupted
> the old iommu data. I once met a similar problem in a normal kernel: the
> queue used by the qi_* functions was overwritten by another module. The
> fix was in that module, not in the iommu module.

By then it is too late; there will be no chance to save the vmcore.

Also, if using the old iommu tables makes it possible to keep corrupting
other areas of oldmem, that will cause even more problems.

So I think the tables at least need some verification before being used.

> [...]

2015-04-03 14:06:46

by Li, ZhenHua

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

The hardware does some verification, but not a complete one. If people think
the OS should also do this, then it should be another patchset, I think.

Thanks
Zhenhua

> On Apr 3, 2015, at 17:21, Dave Young <[email protected]> wrote:
>
>> On 04/03/15 at 05:01pm, Li, ZhenHua wrote:
>> [...]
>
> By then it is too late; there will be no chance to save the vmcore.
>
> Also, if using the old iommu tables makes it possible to keep corrupting
> other areas of oldmem, that will cause even more problems.
>
> So I think the tables at least need some verification before being used.
>
>> [...]

2015-04-05 01:55:48

by Baoquan He

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 04/03/15 at 05:21pm, Dave Young wrote:
> On 04/03/15 at 05:01pm, Li, ZhenHua wrote:
> > [...]
>
> By then it is too late; there will be no chance to save the vmcore.
>
> Also, if using the old iommu tables makes it possible to keep corrupting
> other areas of oldmem, that will cause even more problems.
>
> So I think the tables at least need some verification before being used.
>

Yes, that is good thinking, and verification is also an interesting idea.
kexec/kdump does a sha256 calculation on the loaded kernel and then verifies
it again in purgatory when a panic happens. This checks whether any code has
stomped into the region reserved for kexec/kernel and corrupted the loaded
kernel.
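
(Conceptually the same check could cover the old iommu tables. A userspace
sketch of the digest-then-verify pattern, using OpenSSL's SHA256() purely
for illustration -- the real kexec/purgatory code is different:)

	#include <openssl/sha.h>
	#include <stddef.h>
	#include <string.h>

	/* record a digest of the region at "load" time */
	static void region_digest(const void *base, size_t len,
	                          unsigned char out[SHA256_DIGEST_LENGTH])
	{
	        SHA256(base, len, out);
	}

	/* at "panic" time, recompute and compare; nonzero means corruption */
	static int region_corrupted(const void *base, size_t len,
	                            const unsigned char want[SHA256_DIGEST_LENGTH])
	{
	        unsigned char now[SHA256_DIGEST_LENGTH];

	        SHA256(base, len, now);
	        return memcmp(now, want, SHA256_DIGEST_LENGTH) != 0;
	}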

If we decide to do this, it should be an enhancement to the current patchset,
not a change of approach. Since this patchset is very close to the point the
maintainers expected, maybe it can be merged first and the enhancement thought
about afterwards. After all, without this patchset vt-d often raises error
messages and hangs.

By the way, I tested this patchset and it works very well on my HP Z420
workstation.

Thanks
Baoquan

2015-04-07 03:46:42

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 04/05/15 at 09:54am, Baoquan He wrote:
> On 04/03/15 at 05:21pm, Dave Young wrote:
> > [...]
>
> Yes, that is good thinking, and verification is also an interesting idea.
> kexec/kdump does a sha256 calculation on the loaded kernel and then verifies
> it again in purgatory when a panic happens. This checks whether any code has
> stomped into the region reserved for kexec/kernel and corrupted the loaded
> kernel.
>
> If we decide to do this, it should be an enhancement to the current patchset,
> not a change of approach. Since this patchset is very close to the point the
> maintainers expected, maybe it can be merged first and the enhancement thought
> about afterwards. After all, without this patchset vt-d often raises error
> messages and hangs.

That does not convince me; we should do it right from the beginning instead
of introducing something wrong.

I wonder why the old DMA cannot be remapped to a specific page in the kdump
kernel so that it will not corrupt more memory. But I may have missed
something; I will look for the old threads and catch up.

Thanks
Dave

2015-04-07 03:50:49

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 04/03/15 at 02:05pm, Li, Zhen-Hua wrote:
> The hardware does some verification, but not a complete one. If people think
> the OS should also do this, then it should be another patchset, I think.

If there is a chance of corrupting more memory, I think it is not the right
way. We should think about a better solution instead of fixing it later.

Thanks
Dave

2015-04-07 09:09:35

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 04/07/15 at 11:46am, Dave Young wrote:
> On 04/05/15 at 09:54am, Baoquan He wrote:
> > [...]
>
> That does not convince me; we should do it right from the beginning instead
> of introducing something wrong.
>
> I wonder why the old DMA cannot be remapped to a specific page in the kdump
> kernel so that it will not corrupt more memory. But I may have missed
> something; I will look for the old threads and catch up.

I have read the old discussion; the above approach was dropped because it
could corrupt the filesystem. Apologies for commenting so late.

But the current solution sounds bad to me because it uses old memory, which
is not reliable.

Thanks
Dave

2015-04-07 09:55:49

by Li, ZhenHua

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 04/07/2015 05:08 PM, Dave Young wrote:
> On 04/07/15 at 11:46am, Dave Young wrote:
>> [...]
>
> I have read the old discussion; the above approach was dropped because it
> could corrupt the filesystem. Apologies for commenting so late.
>
> But the current solution sounds bad to me because it uses old memory, which
> is not reliable.
>
> Thanks
> Dave
>
It seems we do not have a better solution for the dmar faults. But I believe
we can find a way to verify the iommu data that is located in the old memory.

Thanks
Zhenhua

2015-04-07 18:13:24

by Donald Dutile

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 04/06/2015 11:46 PM, Dave Young wrote:
> On 04/05/15 at 09:54am, Baoquan He wrote:
>> [...]
>
> That does not convince me; we should do it right from the beginning instead
> of introducing something wrong.
>
> I wonder why the old DMA cannot be remapped to a specific page in the kdump
> kernel so that it will not corrupt more memory. But I may have missed
> something; I will look for the old threads and catch up.
>
> Thanks
> Dave
>
The (only) issue is not corruption: once the iommu is re-configured, the
old, not-yet-stopped DMA engines will use IOVAs that generate dmar faults,
and those faults will be enabled when the iommu is re-configured (even to a
single/simple paging scheme) in the kexec kernel.

2015-04-08 03:34:52

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 04/07/15 at 05:55pm, Li, ZhenHua wrote:
> On 04/07/2015 05:08 PM, Dave Young wrote:
> > [...]
> It seems we do not have a better solution for the dmar faults. But I believe
> we can find a way to verify the iommu data that is located in the old memory.

That will be great, thanks.

So there are two things:
1) Make sure the old page tables are right; this is what we were talking
about.
2) Avoid writing to old memory. I suppose only DMA reads could corrupt the
filesystem, right? So how about creating a scratch page in 2nd-kernel memory
for any DMA writes, and only using the old page tables for DMA reads? See
the sketch below.
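
(A rough sketch of that idea; for_each_leaf_pte() is a hypothetical walk
helper, and the bit positions follow the VT-d second-level PTE layout:)

	#define VTD_PTE_WRITE (1ULL << 1)       /* bit 1 = write permission */
	#define VTD_PTE_ADDR  (~0xfffULL)       /* page-frame address field */

	static void redirect_dma_writes(u64 *pgtable_root, u64 scratch_phys)
	{
		u64 *pte;

		/* visit every present leaf entry of the copied page table */
		for_each_leaf_pte(pgtable_root, pte) {
			if (!(*pte & VTD_PTE_WRITE))
				continue;       /* read-only: keep old mapping */
			/* writable entries now land in the scratch page; note
			 * their reads get redirected too, which is the weak
			 * point of this idea */
			*pte = (*pte & ~VTD_PTE_ADDR) | scratch_phys;
		}
	}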

Thanks
Dave

2015-05-04 11:06:05

by Joerg Roedel

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On Fri, Apr 03, 2015 at 04:40:31PM +0800, Dave Young wrote:
> I have not read all the patches, but I have a question; I am not sure
> whether this has been answered before. Old memory is not reliable: what if
> the old memory gets corrupted before the panic? Is it safe to keep using it
> in the 2nd kernel? I worry that it will cause problems.

Yes, the old memory could be corrupted, and there are more failure cases
left which we have no way of handling yet (if iommu data structures are
in kdump backup areas).

The question is what to do if we find some of the old data structures
corrupted, and how far the checks should go. Should we also check the
page tables, for example? I think if some of the data structures for a
device are corrupted, it probably already failed in the old kernel, and
things won't get worse in the new one.

So checking is not strictly necessary in the first version of these
patches (unless we find a valid failure scenario). Once we have a good
plan for what to do if we find corruption, we can of course add checking.
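
(To make "checking" concrete: a minimal sanity test on an old root entry
could look roughly like the kernel-style sketch below. The reserved-bit
handling is abbreviated, and addr_is_ram() is a hypothetical stand-in for a
pfn_valid()-style helper:)

	struct root_entry { u64 lo; u64 hi; };

	static bool old_root_entry_ok(const struct root_entry *re)
	{
		u64 ctx_table = re->lo & ~0xfffULL;

		if (!(re->lo & 1ULL))        /* not present: nothing to reuse */
			return true;
		if (re->hi)                  /* reserved bits must be zero    */
			return false;
		if (!addr_is_ram(ctx_table)) /* pointer must land in real RAM */
			return false;
		return true;
	}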


Regards,

Joerg

2015-05-04 15:22:04

by Donald Dutile

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 05/04/2015 07:05 AM, Joerg Roedel wrote:
> [...]

Agreed. This is a significant improvement over what we (don't) have.

Corruption related to the IOMMU must occur within the host, and it must be
software corruption, b/c the IOMMU inherently protects itself by protecting
all of memory from errant DMAs. Therefore, if the only IOMMU corruptor is
in the host, it's likely the entire host kernel crash dump will either be
useless or corrupted by the security breach, at which point this is just
another scenario of a failed crash dump that will never be taken.

The kernel can't protect the mapping tables, which are the most likely area
to be corrupted, b/c it would (minimally) have to do so per-device (to avoid
locking and coherency issues), and it would require significant overhead to
keep/update a checksum-like scheme on (potentially) 4 levels of page tables.
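
(An illustration of that overhead; crc32() stands in for whatever integrity
function one would pick, and the shadow structure is hypothetical:)

	#include <stdint.h>
	#include <stddef.h>

	#define ENTRIES_PER_TABLE 512            /* 4KB table / 8-byte entries */

	uint32_t crc32(const void *buf, size_t len);     /* assumed available  */

	/* one checksum per 4KB table page of the walk path */
	struct table_shadow {
	        const uint64_t *table;
	        uint32_t        crc;
	};

	/* every map/unmap dirties at least the leaf table, so its whole 4KB
	 * page must be re-hashed under suitable locking; allocating or
	 * freeing a table additionally dirties the upper levels */
	static void note_pte_update(struct table_shadow *shadow)
	{
	        shadow->crc = crc32(shadow->table,
	                            ENTRIES_PER_TABLE * sizeof(uint64_t));
	}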

2015-05-05 06:10:32

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 05/04/15 at 01:05pm, Joerg Roedel wrote:
> On Fri, Apr 03, 2015 at 04:40:31PM +0800, Dave Young wrote:
> > [...]
>
> Yes, the old memory could be corrupted, and there are more failure cases
> left which we have no way of handling yet (if iommu data structures are
> in kdump backup areas).
>
> The question is what to do if we find some of the old data structures
> corrupted, and how far the checks should go. Should we also check the
> page tables, for example? I think if some of the data structures for a
> device are corrupted, it probably already failed in the old kernel, and
> things won't get worse in the new one.

I agree that we can do nothing about the already-corrupted old data, but I
worry about future corruption caused by using that data. I wonder if we can
mark all the oldmem as read-only so that we can lower the risk. Is that
reasonable?

Thanks
Dave

2015-05-05 16:10:07

by Joerg Roedel

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On Tue, May 05, 2015 at 02:09:31PM +0800, Dave Young wrote:
> I agree that we can do nothing about the already-corrupted old data, but I
> worry about future corruption caused by using that data. I wonder if we can
> mark all the oldmem as read-only so that we can lower the risk. Is that
> reasonable?

Do you mean marking it read-only for the devices? That will very likely
cause DMAR faults, re-introducing the problem this patch-set tries to
fix.


Joerg

2015-05-06 01:47:11

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 05/05/15 at 05:23pm, Joerg Roedel wrote:
> On Tue, May 05, 2015 at 02:09:31PM +0800, Dave Young wrote:
> > I agree that we can do nothing about the already-corrupted old data, but I
> > worry about future corruption caused by using that data. I wonder if we can
> > mark all the oldmem as read-only so that we can lower the risk. Is that
> > reasonable?
>
> Do you mean marking it read-only for the devices? That will very likely
> cause DMAR faults, re-introducing the problem this patch-set tries to
> fix.

I mean blocking all DMA writes to oldmem; I believe that will cause DMA
errors, but all other DMA read requests will continue to work. This would
avoid possible future corruption. It would solve at least half of the
problem, wouldn't it? A rough sketch of what I mean is below.
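
(Sketch only; the write-permission bit follows the VT-d second-level PTE
layout, and for_each_leaf_pte() is the same hypothetical walk helper as in
the earlier scratch-page sketch:)

	#define VTD_PTE_WRITE (1ULL << 1)       /* bit 1 = write permission */

	static void block_dma_writes_to_oldmem(u64 *pgtable_root)
	{
		u64 *pte;

		for_each_leaf_pte(pgtable_root, pte) {
			/* DMA reads still translate; DMA writes now fault */
			*pte &= ~VTD_PTE_WRITE;
		}
	}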

For the original problem, the key issue is that dmar faults cause the kdump
kernel to hang so that the vmcore cannot be saved. I do not know the reason
why it hangs; I think it is acceptable if the kdump kernel boots ok with
some DMA errors.

Thanks
Dave

2015-05-06 08:16:23

by Joerg Roedel

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On Wed, May 06, 2015 at 09:46:49AM +0800, Dave Young wrote:
> For the original problem, the key issue is that dmar faults cause the kdump
> kernel to hang so that the vmcore cannot be saved. I do not know the reason
> why it hangs; I think it is acceptable if the kdump kernel boots ok with
> some DMA errors.

It hangs because some devices can't handle the DMAR faults and the kdump
kernel can't initialize them and will hang itself. For that it doesn't
matter whether the fault was caused by a read or write request.


Joerg

2015-05-07 13:25:56

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 05/06/15 at 10:16am, Joerg Roedel wrote:
> On Wed, May 06, 2015 at 09:46:49AM +0800, Dave Young wrote:
> > For the original problem, the key issue is that dmar faults cause the kdump
> > kernel to hang so that the vmcore cannot be saved. I do not know the reason
> > why it hangs; I think it is acceptable if the kdump kernel boots ok with
> > some DMA errors.
>
> It hangs because some devices can't handle the DMAR faults and the kdump
> kernel can't initialize them and will hang itself. For that it doesn't
> matter whether the fault was caused by a read or write request.

Ok, thanks for the explanation. So that explains why the kdump kernel
sometimes boots ok with faults and sometimes hangs instead.

Dave

2015-05-07 13:56:13

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 05/04/15 at 01:05pm, Joerg Roedel wrote:
> On Fri, Apr 03, 2015 at 04:40:31PM +0800, Dave Young wrote:
> > [...]
>
> Yes, the old memory could be corrupted, and there are more failure cases
> left which we have no way of handling yet (if iommu data structures are
> in kdump backup areas).
>
> The question is what to do if we find some of the old data structures
> corrupted, and how far the checks should go. Should we also check the
> page tables, for example? I think if some of the data structures for a
> device are corrupted, it probably already failed in the old kernel, and
> things won't get worse in the new one.

Joerg, I cannot find your last reply, so let me just reply here with my
worries.

I said that the patchset could cause more problems; let me explain more:

Suppose the page table was corrupted, i.e. the original mapping
iova1 -> page 1 was accidentally changed to iova1 -> page 2 while the crash
was happening; future DMA will then read/write page 2 instead of page 1,
right?

So the behavior changes like this:
Originally, dmar faults happen, but the kdump kernel may boot ok with these
faults, and the vmcore can be saved.
With the patchset, dmar faults do not happen and DMA translation will be
handled, but a DMA write could corrupt unrelated memory.

This might be a corner case, but who knows; because the kernel panicked we
cannot assume the old page table is right.

It seems you all think it is safe, but let us understand each other first
and then go to a solution.

Today Zhenhua and I talked about the problem; I think both of us are now
clear about the issues. He just thinks it can be left as future work.

Thanks
Dave

2015-05-07 14:00:44

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 04/07/15 at 10:12am, Don Dutile wrote:
> On 04/06/2015 11:46 PM, Dave Young wrote:
> > [...]
> The (only) issue is not corruption: once the iommu is re-configured, the
> old, not-yet-stopped DMA engines will use IOVAs that generate dmar faults,
> and those faults will be enabled when the iommu is re-configured (even to a
> single/simple paging scheme) in the kexec kernel.
>

Don, so if the iommu is not reconfigured, then these faults will not happen?

Baoquan and I got confused today about iommu=off/intel_iommu=off:

intel_iommu_init()
{
	...

	dmar_table_init();

	/* disable active iommu translations */

	if (no_iommu || dmar_disabled)
		goto out_free_dmar;

	...
}

Any reason not to move the no_iommu check to the beginning of the
intel_iommu_init() function? Something like the sketch below:
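
(A sketch of the proposed reordering, based on the simplified pseudocode
above rather than the exact upstream function:)

	int __init intel_iommu_init(void)
	{
		/* bail out first: with iommu=off / intel_iommu=off none of
		 * the later setup (table parsing, disabling the old
		 * translations) would run at all */
		if (no_iommu || dmar_disabled)
			return -ENODEV;

		dmar_table_init();

		/* disable active iommu translations */

		/* ... rest as before ... */
	}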

Thanks
Dave

2015-05-07 14:25:24

by Donald Dutile

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 05/07/2015 10:00 AM, Dave Young wrote:
> On 04/07/15 at 10:12am, Don Dutile wrote:
>> [...]
>> The (only) issue is not corruption: once the iommu is re-configured, the
>> old, not-yet-stopped DMA engines will use IOVAs that generate dmar faults,
>> and those faults will be enabled when the iommu is re-configured (even to a
>> single/simple paging scheme) in the kexec kernel.
>>
>
> Don, so if the iommu is not reconfigured, then these faults will not happen?
>
Well, if the iommu is not reconfigured, and the crash isn't caused by an
IOMMU fault (some systems have firmware-first catch the IOMMU fault and
convert it into an NMI_IOCK), then the DMAs will continue into the old
kernel's memory space.

> Baoquan and I got confused today about iommu=off/intel_iommu=off:
>
> intel_iommu_init()
> {
> 	...
>
> 	dmar_table_init();
>
> 	/* disable active iommu translations */
>
> 	if (no_iommu || dmar_disabled)
> 		goto out_free_dmar;
>
> 	...
> }
>
> Any reason not to move the no_iommu check to the beginning of the
> intel_iommu_init() function?
>
What does that do/help?

> Thanks
> Dave
>

2015-05-08 01:21:43

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 05/07/15 at 10:25am, Don Dutile wrote:
> On 05/07/2015 10:00 AM, Dave Young wrote:
> > [...]
> Well, if the iommu is not reconfigured, and the crash isn't caused by an
> IOMMU fault (some systems have firmware-first catch the IOMMU fault and
> convert it into an NMI_IOCK), then the DMAs will continue into the old
> kernel's memory space.

So an NMI_IOCK is one thing that can cause the kernel to hang. I am still
not clear about what "re-configured" means, though. DMAR faults happen
originally; that is the old behavior. But we are removing the faults by
allowing DMA to continue into the old memory space.

>
> > [...]
> >
> > Any reason not to move the no_iommu check to the beginning of the
> > intel_iommu_init() function?
> >
> What does that do/help?

I just do not know why the earlier handling is necessary with iommu=off;
shouldn't we do nothing and return earlier?

Also, a guess: the dmar faults appear after iommu init, so I am not sure
whether the code that runs before the dmar_disabled check has some effect
that enables the fault messages.

Thanks
Dave

2015-05-11 01:31:56

by Donald Dutile

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 05/07/2015 10:00 AM, Dave Young wrote:
> On 04/07/15 at 10:12am, Don Dutile wrote:
>> [...]
>
> Baoquan and I got confused today about iommu=off/intel_iommu=off:
>
> intel_iommu_init()
> {
> 	...
>
> 	dmar_table_init();
>
> 	/* disable active iommu translations */
>
> 	if (no_iommu || dmar_disabled)
> 		goto out_free_dmar;
>
> 	...
> }
>
> Any reason not to move the no_iommu check to the beginning of the
> intel_iommu_init() function?
>
> Thanks
> Dave
Looks like you could.


2015-05-11 01:38:29

by Donald Dutile

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 05/07/2015 09:21 PM, Dave Young wrote:
> On 05/07/15 at 10:25am, Don Dutile wrote:
>> [...]
>> Well, if the iommu is not reconfigured, and the crash isn't caused by an
>> IOMMU fault (some systems have firmware-first catch the IOMMU fault and
>> convert it into an NMI_IOCK), then the DMAs will continue into the old
>> kernel's memory space.
>
> So an NMI_IOCK is one thing that can cause the kernel to hang. I am still
> not clear about what "re-configured" means, though. DMAR faults happen
> originally; that is the old behavior. But we are removing the faults by
> allowing DMA to continue into the old memory space.
>
A flood of faults occurs when the 2nd kernel (re-)configures the IOMMU: the
second kernel effectively clears/disables all DMA except RMRRs, so any
in-flight DMA from the 1st kernel floods the system with faults. It is this
flood of dmar faults that eventually wedges and/or crashes the system while
it is trying to take a kdump.

> [...]

2015-05-11 10:11:29

by Joerg Roedel

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On Thu, May 07, 2015 at 09:56:00PM +0800, Dave Young wrote:
> Joerg, I cannot find your last reply, so let me just reply here with my
> worries.
>
> I said that the patchset could cause more problems; let me explain more:
>
> Suppose the page table was corrupted, i.e. the original mapping
> iova1 -> page 1 was accidentally changed to iova1 -> page 2 while the crash
> was happening; future DMA will then read/write page 2 instead of page 1,
> right?

When the page table is corrupted, then it is a left-over from the old
kernel, and when the kdump kernel boots the situation can at least not get
worse. For the page tables it is also hard to detect wrong mappings (if
this were possible, the hardware could already do it), so any checks we
could do there are of limited use anyway.


Joerg

2015-05-12 05:57:23

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 05/11/15 at 12:11pm, Joerg Roedel wrote:
> On Thu, May 07, 2015 at 09:56:00PM +0800, Dave Young wrote:
> > Joerg, I cannot find your last reply, so let me just reply here with my
> > worries.
> >
> > I said that the patchset could cause more problems; let me explain more:
> >
> > Suppose the page table was corrupted, i.e. the original mapping
> > iova1 -> page 1 was accidentally changed to iova1 -> page 2 while the
> > crash was happening; future DMA will then read/write page 2 instead of
> > page 1, right?
>
> When the page table is corrupted, then it is a left-over from the old
> kernel, and when the kdump kernel boots the situation can at least not get
> worse. For the page tables it is also hard to detect wrong mappings (if
> this were possible, the hardware could already do it), so any checks we
> could do there are of limited use anyway.

Joerg, since both of you do not think it is a problem, I will not object to
it any more, though I still do not like reusing the old page tables. So
let's leave it as a future issue.

Thanks
Dave

2015-05-12 06:41:25

by Dave Young

Subject: Re: [PATCH v9 0/10] iommu/vt-d: Fix intel vt-d faults in kdump kernel

On 05/12/15 at 01:57pm, Dave Young wrote:
> On 05/11/15 at 12:11pm, Joerg Roedel wrote:
> > [...]
>
> Joerg, since both of you do not think it is a problem, I will not object to
> it any more, though I still do not like reusing the old page tables. So
> let's leave it as a future issue.
>
> Thanks
> Dave

(Correction to my previous mail: I wrote "will object" where I meant "will
not object"; it is fixed in the quote above.)