On 2017-02-27 09:21, Thomas Gleixner wrote:
> On Mon, 27 Feb 2017, Sodagudi Prasad wrote:
>> So I am thinking that adding the following schedule_work() would notify
>> clients.
>
> And break the world and some more.
>
>> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
>> index 6b66959..5e4766b 100644
>> --- a/kernel/irq/manage.c
>> +++ b/kernel/irq/manage.c
>> @@ -207,6 +207,7 @@ int irq_do_set_affinity(struct irq_data *data, const
>> struct cpumask *mask,
>> case IRQ_SET_MASK_OK_DONE:
>> cpumask_copy(desc->irq_common_data.affinity, mask);
>> case IRQ_SET_MASK_OK_NOCOPY:
>> + schedule_work(&desc->affinity_notify->work);
>> irq_set_thread_affinity(desc);
>> ret = 0;
>
> You cannot do that unconditionally and just slap that schedule_work() call
> into the code. Aside of that schedule_work() would be invoked twice for all
> calls which come via irq_set_affinity_locked() ....
Hi Tglx,
Yes, I agree with you; schedule_work() would get invoked twice with the
previous change.
How about calling irq_set_affinity_locked() instead of
irq_do_set_affinity()?
diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
index 011f8c4..e8ce0db 100644
--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -18,8 +18,8 @@ static bool migrate_one_irq(struct irq_desc *desc)
{
struct irq_data *d = irq_desc_get_irq_data(desc);
const struct cpumask *affinity = d->common->affinity;
- struct irq_chip *c;
bool ret = false;
+ int r;
/*
* If this is a per-CPU interrupt, or the affinity does not
@@ -34,15 +34,10 @@ static bool migrate_one_irq(struct irq_desc *desc)
ret = true;
}
- c = irq_data_get_irq_chip(d);
- if (!c->irq_set_affinity) {
- pr_debug("IRQ%u: unable to set affinity\n", d->irq);
- } else {
- int r = irq_do_set_affinity(d, affinity, false);
- if (r)
- pr_warn_ratelimited("IRQ%u: set affinity
failed(%d).\n",
+ r = irq_set_affinity_locked(d, affinity, false);
+ if (r)
+ pr_warn_ratelimited("IRQ%u: set affinity failed(%d).\n",
d->irq, r);
- }
return ret;
-Thanks, Prasad
>
> Thanks,
>
> tglx
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
On Mon, 13 Mar 2017, Sodagudi Prasad wrote:
> On 2017-02-27 09:21, Thomas Gleixner wrote:
> > On Mon, 27 Feb 2017, Sodagudi Prasad wrote:
> > > So I am thinking that adding the following schedule_work() would notify clients.
> >
> > And break the world and some more.
> >
> > > diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> > > index 6b66959..5e4766b 100644
> > > --- a/kernel/irq/manage.c
> > > +++ b/kernel/irq/manage.c
> > > @@ -207,6 +207,7 @@ int irq_do_set_affinity(struct irq_data *data, const
> > > struct cpumask *mask,
> > > case IRQ_SET_MASK_OK_DONE:
> > > cpumask_copy(desc->irq_common_data.affinity, mask);
> > > case IRQ_SET_MASK_OK_NOCOPY:
> > > + schedule_work(&desc->affinity_notify->work);
> > > irq_set_thread_affinity(desc);
> > > ret = 0;
> >
> > You cannot do that unconditionally and just slap that schedule_work() call
> > into the code. Aside of that schedule_work() would be invoked twice for all
> > calls which come via irq_set_affinity_locked() ....
> Hi Tglx,
>
> Yes, I agree with you; schedule_work() would get invoked twice with the
> previous change.
>
> How about calling irq_set_affinity_locked() instead of irq_do_set_affinity()?
Is this a quiz?
Can you actually see the difference between these functions? There is a
damned good reason WHY this calls irq_do_set_affinity().
Thanks,
tglx
On 2017-03-13 13:19, Thomas Gleixner wrote:
> On Mon, 13 Mar 2017, Sodagudi Prasad wrote:
>> On 2017-02-27 09:21, Thomas Gleixner wrote:
>> > On Mon, 27 Feb 2017, Sodagudi Prasad wrote:
>> > > So I am thinking that adding the following schedule_work() would notify clients.
>> >
>> > And break the world and some more.
>> >
>> > > diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
>> > > index 6b66959..5e4766b 100644
>> > > --- a/kernel/irq/manage.c
>> > > +++ b/kernel/irq/manage.c
>> > > @@ -207,6 +207,7 @@ int irq_do_set_affinity(struct irq_data *data, const
>> > > struct cpumask *mask,
>> > > case IRQ_SET_MASK_OK_DONE:
>> > > cpumask_copy(desc->irq_common_data.affinity, mask);
>> > > case IRQ_SET_MASK_OK_NOCOPY:
>> > > + schedule_work(&desc->affinity_notify->work);
>> > > irq_set_thread_affinity(desc);
>> > > ret = 0;
>> >
>> > You cannot do that unconditionally and just slap that schedule_work() call
>> > into the code. Aside of that schedule_work() would be invoked twice for all
>> > calls which come via irq_set_affinity_locked() ....
>> Hi Tglx,
>>
>> Yes, I agree with you; schedule_work() would get invoked twice with the
>> previous change.
>>
>> How about calling irq_set_affinity_locked() instead of
>> irq_do_set_affinity()?
>
> Is this a quiz?
>
> Can you actually see the difference between these functions? There is a
> damned good reason WHY this calls irq_do_set_affinity().
The other option is adding an argument to irq_do_set_affinity() and
queueing the notify work when that new parameter is set. I have attached
a patch for the same.
I tested this change on an arm64 platform and observed that client
drivers are notified during CPU hotplug.
> Thanks,
>
> tglx
On Fri, 17 Mar 2017, Sodagudi Prasad wrote:
> On 2017-03-13 13:19, Thomas Gleixner wrote:
> > Can you actually see the difference between these functions? There is a
> > damned good reason WHY this calls irq_do_set_affinity().
>
> The other option is adding an argument to irq_do_set_affinity() and queueing
> the notify work when that new parameter is set. I have attached a patch for
> the same.
Documentation/process/submitting-patches.rst: Section #6
During CPU hotplug, IRQs are migrated away from the CPU
going offline, but client drivers are not notified of the
affinity change. So add a parameter to irq_do_set_affinity()
to check and notify client drivers during CPU hotplug.
Signed-off-by: Prasad Sodagudi <[email protected]>
---
kernel/irq/cpuhotplug.c | 2 +-
kernel/irq/internals.h | 2 +-
kernel/irq/manage.c | 9 ++++++---
3 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
index 011f8c4..e293d9b 100644
--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -38,7 +38,7 @@ static bool migrate_one_irq(struct irq_desc *desc)
if (!c->irq_set_affinity) {
pr_debug("IRQ%u: unable to set affinity\n", d->irq);
} else {
- int r = irq_do_set_affinity(d, affinity, false);
+ int r = irq_do_set_affinity(d, affinity, false, true);
if (r)
pr_warn_ratelimited("IRQ%u: set affinity failed(%d).\n",
d->irq, r);
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index bc226e7..6abde48 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -114,7 +114,7 @@ static inline void unregister_handler_proc(unsigned int irq,
extern void irq_set_thread_affinity(struct irq_desc *desc);
extern int irq_do_set_affinity(struct irq_data *data,
- const struct cpumask *dest, bool force);
+ const struct cpumask *dest, bool force, bool notify);
/* Inline functions for support of irq chips on slow busses */
static inline void chip_bus_lock(struct irq_desc *desc)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index a4afe5c..fea8c8e 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -197,7 +197,7 @@ static inline bool irq_move_pending(struct irq_data *data)
#endif
int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
- bool force)
+ bool force, bool notify)
{
struct irq_desc *desc = irq_data_to_desc(data);
struct irq_chip *chip = irq_data_get_irq_chip(data);
@@ -209,6 +209,9 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
case IRQ_SET_MASK_OK_DONE:
cpumask_copy(desc->irq_common_data.affinity, mask);
case IRQ_SET_MASK_OK_NOCOPY:
+ if (notify && desc->affinity_notify)
+ schedule_work(&desc->affinity_notify->work);
+
irq_set_thread_affinity(desc);
ret = 0;
}
@@ -227,7 +230,7 @@ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
return -EINVAL;
if (irq_can_move_pcntxt(data)) {
- ret = irq_do_set_affinity(data, mask, force);
+ ret = irq_do_set_affinity(data, mask, force, false);
} else {
irqd_set_move_pending(data);
irq_copy_pending(desc, mask);
@@ -375,7 +378,7 @@ static int setup_affinity(struct irq_desc *desc, struct cpumask *mask)
if (cpumask_intersects(mask, nodemask))
cpumask_and(mask, mask, nodemask);
}
- irq_do_set_affinity(&desc->irq_data, mask, false);
+ irq_do_set_affinity(&desc->irq_data, mask, false, true);
return 0;
}
#else
The last argument to irq_do_set_affinity() indicates whether the notify
work needs to be queued for this irq. That way we avoid queuing the
notification twice in the irq_set_affinity_locked() path.
Hi Prasad,
[auto build test ERROR on tip/irq/core]
[also build test ERROR on v4.11-rc3 next-20170322]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Prasad-Sodagudi/genirq-Notify-clients-whenever-there-is-change-in-affinity/20170323-094431
config: x86_64-randconfig-x015-201712 (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
# save the attached .config to linux build tree
make ARCH=x86_64
All errors (new ones prefixed by >>):
kernel/irq/migration.c: In function 'irq_move_masked_irq':
>> kernel/irq/migration.c:46:3: error: too few arguments to function 'irq_do_set_affinity'
irq_do_set_affinity(&desc->irq_data, desc->pending_mask, false);
^~~~~~~~~~~~~~~~~~~
In file included from kernel/irq/migration.c:5:0:
kernel/irq/internals.h:116:12: note: declared here
extern int irq_do_set_affinity(struct irq_data *data,
^~~~~~~~~~~~~~~~~~~
vim +/irq_do_set_affinity +46 kernel/irq/migration.c
c777ac55 Andrew Morton 2006-03-25 40 * Being paranoid i guess!
e7b946e9 Eric W. Biederman 2006-10-04 41 *
e7b946e9 Eric W. Biederman 2006-10-04 42 * For correct operation this depends on the caller
e7b946e9 Eric W. Biederman 2006-10-04 43 * masking the irqs.
c777ac55 Andrew Morton 2006-03-25 44 */
818b0f3b Jiang Liu 2012-03-30 45 if (cpumask_any_and(desc->pending_mask, cpu_online_mask) < nr_cpu_ids)
818b0f3b Jiang Liu 2012-03-30 @46 irq_do_set_affinity(&desc->irq_data, desc->pending_mask, false);
57b150cc Yinghai Lu 2009-04-27 47
7f7ace0c Mike Travis 2009-01-10 48 cpumask_clear(desc->pending_mask);
e7b946e9 Eric W. Biederman 2006-10-04 49 }
:::::: The code at line 46 was first introduced by commit
:::::: 818b0f3bfb236ae66cac3ff38e86b9e47f24b7aa genirq: Introduce irq_do_set_affinity() to reduce duplicated code
:::::: TO: Jiang Liu <[email protected]>
:::::: CC: Thomas Gleixner <[email protected]>
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
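The robot is flagging the one irq_do_set_affinity() caller the patch did not update, in kernel/irq/migration.c. Under the patch's scheme this pending-move path would presumably pass notify=false, since irq_set_affinity_locked(), which queued the pending move, already schedules the notification itself. A guess at the intended fixup, not part of the posted patch:

```diff
--- a/kernel/irq/migration.c
+++ b/kernel/irq/migration.c
@@ -43,7 +43,7 @@
 	if (cpumask_any_and(desc->pending_mask, cpu_online_mask) < nr_cpu_ids)
-		irq_do_set_affinity(&desc->irq_data, desc->pending_mask, false);
+		irq_do_set_affinity(&desc->irq_data, desc->pending_mask, false, false);
 
 	cpumask_clear(desc->pending_mask);
```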
Hi Prasad,
[auto build test WARNING on tip/irq/core]
[also build test WARNING on v4.11-rc3 next-20170322]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Prasad-Sodagudi/genirq-Notify-clients-whenever-there-is-change-in-affinity/20170323-094431
reproduce:
# apt-get install sparse
make ARCH=x86_64 allmodconfig
make C=1 CF=-D__CHECK_ENDIAN__
sparse warnings: (new ones prefixed by >>)
include/linux/compiler.h:264:8: sparse: attribute 'no_sanitize_address': unknown attribute
>> kernel/irq/migration.c:46:36: sparse: not enough arguments for function irq_do_set_affinity
kernel/irq/migration.c: In function 'irq_move_masked_irq':
kernel/irq/migration.c:46:3: error: too few arguments to function 'irq_do_set_affinity'
irq_do_set_affinity(&desc->irq_data, desc->pending_mask, false);
^~~~~~~~~~~~~~~~~~~
In file included from kernel/irq/migration.c:5:0:
kernel/irq/internals.h:116:12: note: declared here
extern int irq_do_set_affinity(struct irq_data *data,
^~~~~~~~~~~~~~~~~~~
vim +46 kernel/irq/migration.c
c777ac55 Andrew Morton 2006-03-25 30
239007b8 Thomas Gleixner 2009-11-17 31 assert_raw_spin_locked(&desc->lock);
501f2499 Bryan Holty 2006-03-25 32
c777ac55 Andrew Morton 2006-03-25 33 /*
c777ac55 Andrew Morton 2006-03-25 34 * If there was a valid mask to work with, please
c777ac55 Andrew Morton 2006-03-25 35 * do the disable, re-program, enable sequence.
c777ac55 Andrew Morton 2006-03-25 36 * This is *not* particularly important for level triggered
c777ac55 Andrew Morton 2006-03-25 37 * but in a edge trigger case, we might be setting rte
25985edc Lucas De Marchi 2011-03-30 38 * when an active trigger is coming in. This could
c777ac55 Andrew Morton 2006-03-25 39 * cause some ioapics to mal-function.
c777ac55 Andrew Morton 2006-03-25 40 * Being paranoid i guess!
e7b946e9 Eric W. Biederman 2006-10-04 41 *
e7b946e9 Eric W. Biederman 2006-10-04 42 * For correct operation this depends on the caller
e7b946e9 Eric W. Biederman 2006-10-04 43 * masking the irqs.
c777ac55 Andrew Morton 2006-03-25 44 */
818b0f3b Jiang Liu 2012-03-30 45 if (cpumask_any_and(desc->pending_mask, cpu_online_mask) < nr_cpu_ids)
818b0f3b Jiang Liu 2012-03-30 @46 irq_do_set_affinity(&desc->irq_data, desc->pending_mask, false);
57b150cc Yinghai Lu 2009-04-27 47
7f7ace0c Mike Travis 2009-01-10 48 cpumask_clear(desc->pending_mask);
e7b946e9 Eric W. Biederman 2006-10-04 49 }
e7b946e9 Eric W. Biederman 2006-10-04 50
a439520f Thomas Gleixner 2011-02-04 51 void irq_move_irq(struct irq_data *idata)
e7b946e9 Eric W. Biederman 2006-10-04 52 {
f1a06390 Thomas Gleixner 2011-01-28 53 bool masked;
e7b946e9 Eric W. Biederman 2006-10-04 54
:::::: The code at line 46 was first introduced by commit
:::::: 818b0f3bfb236ae66cac3ff38e86b9e47f24b7aa genirq: Introduce irq_do_set_affinity() to reduce duplicated code
:::::: TO: Jiang Liu <[email protected]>
:::::: CC: Thomas Gleixner <[email protected]>
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation