2018-04-20 00:49:43

by Wanpeng Li

Subject: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs

From: Wanpeng Li <[email protected]>

Our virtual machines make use of device assignment, configuring 12
NVMe disks for high I/O performance. Each NVMe device has 129 MSI-X
table entries:
Capabilities: [50] MSI-X: Enable+ Count=129 Masked-
        Vector table: BAR=0 offset=00002000
The Windows virtual machines fail to boot because they map every MSI-X
table entry that the NVMe hardware reports to the bus into the MSI
routing table; 12 * 129 = 1548 entries exceeds the current limit of
1024. This patch extends KVM_MAX_IRQ_ROUTES to 4096 for all archs; it
can be extended again in the future if needed.
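
The limit is enforced in the KVM_SET_GSI_ROUTING ioctl path. Roughly,
as a simplified sketch of the handler in virt/kvm/kvm_main.c (not the
literal code):

	case KVM_SET_GSI_ROUTING: {
		struct kvm_irq_routing routing;

		r = -EFAULT;
		if (copy_from_user(&routing, argp, sizeof(routing)))
			goto out;
		r = -EINVAL;
		if (!kvm_arch_can_set_irq_routing(kvm))
			goto out;
		/* 12 NVMe disks need 12 * 129 = 1548 routes, which
		 * trips this check while the limit is 1024 */
		if (routing.nr > KVM_MAX_IRQ_ROUTES)
			goto out;
		...
	}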

Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krčmář <[email protected]>
Cc: Tonny Lu <[email protected]>
Cc: Cornelia Huck <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
Signed-off-by: Tonny Lu <[email protected]>
---
v1 -> v2:
* extend MAX_IRQ_ROUTES to 4096 for all archs

include/linux/kvm_host.h | 6 ------
1 file changed, 6 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6930c63..0a5c299 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)

#ifdef CONFIG_HAVE_KVM_IRQ_ROUTING

-#ifdef CONFIG_S390
#define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
-#elif defined(CONFIG_ARM64)
-#define KVM_MAX_IRQ_ROUTES 4096
-#else
-#define KVM_MAX_IRQ_ROUTES 1024
-#endif

bool kvm_arch_can_set_irq_routing(struct kvm *kvm);
int kvm_set_irq_routing(struct kvm *kvm,
--
2.7.4



2018-04-20 07:17:00

by Cornelia Huck

Subject: Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs

On Thu, 19 Apr 2018 17:47:28 -0700
Wanpeng Li <[email protected]> wrote:

> From: Wanpeng Li <[email protected]>
>
> Our virtual machines make use of device assignment, configuring 12
> NVMe disks for high I/O performance. Each NVMe device has 129 MSI-X
> table entries:
> Capabilities: [50] MSI-X: Enable+ Count=129 Masked-
>         Vector table: BAR=0 offset=00002000
> The Windows virtual machines fail to boot because they map every MSI-X
> table entry that the NVMe hardware reports to the bus into the MSI
> routing table; 12 * 129 = 1548 entries exceeds the current limit of
> 1024. This patch extends KVM_MAX_IRQ_ROUTES to 4096 for all archs; it
> can be extended again in the future if needed.
>
> Cc: Paolo Bonzini <[email protected]>
> Cc: Radim Krčmář <[email protected]>
> Cc: Tonny Lu <[email protected]>
> Cc: Cornelia Huck <[email protected]>
> Signed-off-by: Wanpeng Li <[email protected]>
> Signed-off-by: Tonny Lu <[email protected]>
> ---
> v1 -> v2:
> * extend MAX_IRQ_ROUTES to 4096 for all archs
>
> include/linux/kvm_host.h | 6 ------
> 1 file changed, 6 deletions(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 6930c63..0a5c299 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1045,13 +1045,7 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
>
> #ifdef CONFIG_HAVE_KVM_IRQ_ROUTING
>
> -#ifdef CONFIG_S390
> #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...

What about /* might need extension/rework in the future */ instead of
the FIXME?
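
I.e. something like:

#define KVM_MAX_IRQ_ROUTES 4096 /* might need extension/rework in the future */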

As far as I understand, 4096 should cover most architectures and the
sane end of s390 configurations, but will not be enough at the scarier
end of s390. (I'm not sure how much it matters in practice.)

Do we want to make this a tuneable in the future? Do some kind of
dynamic allocation? Not sure whether it is worth the trouble.
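
(If we do want a tunable at some point, one obvious shape, purely a
hypothetical sketch with made-up names, would be a bounded module
parameter:

	/* hypothetical: kvm.max_irq_routes=8192 on the kernel command line */
	static unsigned int max_irq_routes = 4096;
	module_param(max_irq_routes, uint, 0444);

with the KVM_SET_GSI_ROUTING check comparing routing.nr against
max_irq_routes instead of the compile-time constant. That assumes
nothing sizes static arrays off KVM_MAX_IRQ_ROUTES, though.)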

> -#elif defined(CONFIG_ARM64)
> -#define KVM_MAX_IRQ_ROUTES 4096
> -#else
> -#define KVM_MAX_IRQ_ROUTES 1024
> -#endif
>
> bool kvm_arch_can_set_irq_routing(struct kvm *kvm);
> int kvm_set_irq_routing(struct kvm *kvm,


2018-04-20 13:53:18

by Wanpeng Li

Subject: Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs

2018-04-20 15:15 GMT+08:00 Cornelia Huck <[email protected]>:
> On Thu, 19 Apr 2018 17:47:28 -0700
> Wanpeng Li <[email protected]> wrote:
>
>> [...]
>> #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>
> What about /* might need extension/rework in the future */ instead of
> the FIXME?

Yeah, I guess the maintainers can help to fix it when applying. :)

>
> As far as I understand, 4096 should cover most architectures and the
> sane end of s390 configurations, but will not be enough at the scarier
> end of s390. (I'm not sure how much it matters in practice.)
>
> Do we want to make this a tuneable in the future? Do some kind of
> dynamic allocation? Not sure whether it is worth the trouble.

I think we should keep it as it is for now.

Regards,
Wanpeng Li

2018-04-20 14:23:28

by Cornelia Huck

Subject: Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs

On Fri, 20 Apr 2018 21:51:13 +0800
Wanpeng Li <[email protected]> wrote:

> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <[email protected]>:
> > On Thu, 19 Apr 2018 17:47:28 -0700
> > Wanpeng Li <[email protected]> wrote:
> >
> >> [...]
> >> #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
> >
> > What about /* might need extension/rework in the future */ instead of
> > the FIXME?
>
> Yeah, I guess the maintainers can help to fix it when applying. :)
>
> >
> > As far as I understand, 4096 should cover most architectures and the
> > sane end of s390 configurations, but will not be enough at the scarier
> > end of s390. (I'm not sure how much it matters in practice.)
> >
> > Do we want to make this a tuneable in the future? Do some kind of
> > dynamic allocation? Not sure whether it is worth the trouble.
>
> I think we should keep it as it is for now.

My main question here is how long this is enough... the number of
virtqueues per device is up to 1K from the initial 64, which makes it
possible to hit the 4K limit with fewer virtio devices than before (on
s390, each virtqueue uses a routing table entry). OTOH, we don't want
giant tables everywhere just to accommodate s390.
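
(To put numbers on it: at one routing entry per virtqueue, 1K virtqueues
per device means just 4 virtio devices can exhaust all 4096 entries,
whereas at 64 virtqueues per device it took 64 devices.)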

If the s390 maintainers tell me that nobody is doing the really insane
stuff, I'm happy as well :)

2018-04-21 00:44:34

by Wanpeng Li

Subject: Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs

2018-04-20 22:21 GMT+08:00 Cornelia Huck <[email protected]>:
> On Fri, 20 Apr 2018 21:51:13 +0800
> Wanpeng Li <[email protected]> wrote:
>
>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <[email protected]>:
>> > On Thu, 19 Apr 2018 17:47:28 -0700
>> > Wanpeng Li <[email protected]> wrote:
>> >
>> >> [...]
>> >> #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>> >
>> > What about /* might need extension/rework in the future */ instead of
>> > the FIXME?
>>
>> Yeah, I guess the maintainers can help to fix it when applying. :)
>>
>> >
>> > As far as I understand, 4096 should cover most architectures and the
>> > sane end of s390 configurations, but will not be enough at the scarier
>> > end of s390. (I'm not sure how much it matters in practice.)
>> >
>> > Do we want to make this a tuneable in the future? Do some kind of
>> > dynamic allocation? Not sure whether it is worth the trouble.
>>
>> I think we should keep it as it is for now.
>
> My main question here is how long this is enough... the number of
> virtqueues per device is up to 1K from the initial 64, which makes it
> possible to hit the 4K limit with fewer virtio devices than before (on
> s390, each virtqueue uses a routing table entry). OTOH, we don't want
> giant tables everywhere just to accommodate s390.

I suspect there is no real scenario that requires extending it further
for s390, since nobody has reported one.

> If the s390 maintainers tell me that nobody is doing the really insane
> stuff, I'm happy as well :)

Christian, any thoughts?

Regards,
Wanpeng Li

2018-04-23 11:52:48

by Christian Borntraeger

Subject: Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs



On 04/21/2018 02:38 AM, Wanpeng Li wrote:
> 2018-04-20 22:21 GMT+08:00 Cornelia Huck <[email protected]>:
>> On Fri, 20 Apr 2018 21:51:13 +0800
>> Wanpeng Li <[email protected]> wrote:
>>
>>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <[email protected]>:
>>>> On Thu, 19 Apr 2018 17:47:28 -0700
>>>> Wanpeng Li <[email protected]> wrote:
>>>>
>>>>> [...]
>>>>> #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>>>>
>>>> What about /* might need extension/rework in the future */ instead of
>>>> the FIXME?
>>>
>>> Yeah, I guess the maintainers can help to fix it when applying. :)
>>>
>>>>
>>>> As far as I understand, 4096 should cover most architectures and the
>>>> sane end of s390 configurations, but will not be enough at the scarier
>>>> end of s390. (I'm not sure how much it matters in practice.)
>>>>
>>>> Do we want to make this a tuneable in the future? Do some kind of
>>>> dynamic allocation? Not sure whether it is worth the trouble.
>>>
>>> I think we should keep it as it is for now.
>>
>> My main question here is how long this is enough... the number of
>> virtqueues per device is up to 1K from the initial 64, which makes it
>> possible to hit the 4K limit with fewer virtio devices than before (on
>> s390, each virtqueue uses a routing table entry). OTOH, we don't want
>> giant tables everywhere just to accommodate s390.
>
> I suspect there is no real scenario that requires extending it further
> for s390, since nobody has reported one.
>
>> If the s390 maintainers tell me that nobody is doing the really insane
>> stuff, I'm happy as well :)
>
> Christian, any thoughts?

For now this patch is a no-op for s390, so as long as nobody complains today we are good.
If it turns out to be "not enough", we can then add a configurable number or whatever.


2018-04-23 11:57:41

by Wanpeng Li

Subject: Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs

2018-04-23 19:50 GMT+08:00 Christian Borntraeger <[email protected]>:
>
>
> On 04/21/2018 02:38 AM, Wanpeng Li wrote:
>> 2018-04-20 22:21 GMT+08:00 Cornelia Huck <[email protected]>:
>>> On Fri, 20 Apr 2018 21:51:13 +0800
>>> Wanpeng Li <[email protected]> wrote:
>>>
>>>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <[email protected]>:
>>>>> On Thu, 19 Apr 2018 17:47:28 -0700
>>>>> Wanpeng Li <[email protected]> wrote:
>>>>>
>>>>>> [...]
>>>>>> #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
>>>>>
>>>>> What about /* might need extension/rework in the future */ instead of
>>>>> the FIXME?
>>>>
>>>> Yeah, I guess the maintainers can help to fix it when applying. :)
>>>>
>>>>>
>>>>> As far as I understand, 4096 should cover most architectures and the
>>>>> sane end of s390 configurations, but will not be enough at the scarier
>>>>> end of s390. (I'm not sure how much it matters in practice.)
>>>>>
>>>>> Do we want to make this a tuneable in the future? Do some kind of
>>>>> dynamic allocation? Not sure whether it is worth the trouble.
>>>>
>>>> I think we should keep it as it is for now.
>>>
>>> My main question here is how long this is enough... the number of
>>> virtqueues per device is up to 1K from the initial 64, which makes it
>>> possible to hit the 4K limit with fewer virtio devices than before (on
>>> s390, each virtqueue uses a routing table entry). OTOH, we don't want
>>> giant tables everywhere just to accommodate s390.
>>
>> I suspect there is no real scenario that requires extending it further
>> for s390, since nobody has reported one.
>>
>>> If the s390 maintainers tell me that nobody is doing the really insane
>>> stuff, I'm happy as well :)
>>
>> Christian, any thoughts?
>
> For now this patch is a no-op for s390, so as long as nobody complains today we are good.
> If it turns out to be "not enough", we can then add a configurable number or whatever.

Thanks Christian. Paolo, could you pick this one up with the comment
changed to "/* might need extension/rework in the future */" instead of
the FIXME, or do you need me to send out a new version? :)

Regards,
Wanpeng Li

2018-04-23 11:58:55

by Cornelia Huck

Subject: Re: [PATCH v2] KVM: Extend MAX_IRQ_ROUTES to 4096 for all archs

On Mon, 23 Apr 2018 13:50:48 +0200
Christian Borntraeger <[email protected]> wrote:

> On 04/21/2018 02:38 AM, Wanpeng Li wrote:
> > 2018-04-20 22:21 GMT+08:00 Cornelia Huck <[email protected]>:
> >> On Fri, 20 Apr 2018 21:51:13 +0800
> >> Wanpeng Li <[email protected]> wrote:
> >>
> >>> 2018-04-20 15:15 GMT+08:00 Cornelia Huck <[email protected]>:
> >>>> On Thu, 19 Apr 2018 17:47:28 -0700
> >>>> Wanpeng Li <[email protected]> wrote:
> >>>>
> >>>>> [...]
> >>>>> #define KVM_MAX_IRQ_ROUTES 4096 //FIXME: we can have more than that...
> >>>>
> >>>> What about /* might need extension/rework in the future */ instead of
> >>>> the FIXME?
> >>>
> >>> Yeah, I guess the maintainers can help to fix it when applying. :)
> >>>
> >>>>
> >>>> As far as I understand, 4096 should cover most architectures and the
> >>>> sane end of s390 configurations, but will not be enough at the scarier
> >>>> end of s390. (I'm not sure how much it matters in practice.)
> >>>>
> >>>> Do we want to make this a tuneable in the future? Do some kind of
> >>>> dynamic allocation? Not sure whether it is worth the trouble.
> >>>
> >>> I think we should keep it as it is for now.
> >>
> >> My main question here is how long this is enough... the number of
> >> virtqueues per device is up to 1K from the initial 64, which makes it
> >> possible to hit the 4K limit with fewer virtio devices than before (on
> >> s390, each virtqueue uses a routing table entry). OTOH, we don't want
> >> giant tables everywhere just to accommodate s390.
> >
> > I suspect there is no real scenario that requires extending it further
> > for s390, since nobody has reported one.
> >
> >> If the s390 maintainers tell me that nobody is doing the really insane
> >> stuff, I'm happy as well :)
> >
> > Christian, any thoughts?
>
> For now this patch is a no-op for s390, so as long as nobody complains today we are good.
> If it turns out to be "not enough", we can then add a configurable number or whatever.

OK, then let's deal with the problem once it shows up.

With the comment changed as suggested above,

Reviewed-by: Cornelia Huck <[email protected]>