The xfs_icsb_modify_counters() function no longer needs the cpu variable
if we use this_cpu_ptr(), and we can get rid of get_cpu()/put_cpu().
cc: Christoph Hellwig <[email protected]>
Acked-by: Olaf Weber <[email protected]>
Signed-off-by: Christoph Lameter <[email protected]>
---
fs/xfs/xfs_mount.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
Index: linux-2.6/fs/xfs/xfs_mount.c
===================================================================
--- linux-2.6.orig/fs/xfs/xfs_mount.c 2009-05-28 15:03:50.000000000 -0500
+++ linux-2.6/fs/xfs/xfs_mount.c 2009-05-28 15:09:05.000000000 -0500
@@ -2320,12 +2320,12 @@ xfs_icsb_modify_counters(
{
xfs_icsb_cnts_t *icsbp;
long long lcounter; /* long counter for 64 bit fields */
- int cpu, ret = 0;
+ int ret = 0;
might_sleep();
again:
- cpu = get_cpu();
- icsbp = (xfs_icsb_cnts_t *)per_cpu_ptr(mp->m_sb_cnts, cpu);
+ preempt_disable();
+ icsbp = (xfs_icsb_cnts_t *)this_cpu_ptr(mp->m_sb_cnts);
/*
* if the counter is disabled, go to slow path
@@ -2369,11 +2369,11 @@ again:
break;
}
xfs_icsb_unlock_cntr(icsbp);
- put_cpu();
+ preempt_enable();
return 0;
slow_path:
- put_cpu();
+ preempt_enable();
/*
* serialise with a mutex so we don't burn lots of cpu on
@@ -2421,7 +2421,7 @@ slow_path:
balance_counter:
xfs_icsb_unlock_cntr(icsbp);
- put_cpu();
+ preempt_enable();
/*
* We may have multiple threads here if multiple per-cpu
--
On Fri, Jun 05, 2009 at 03:18:26PM -0400, [email protected] wrote:
> The xfs_icsb_modify_counters() function no longer needs the cpu variable
> if we use this_cpu_ptr() and we can get rid of get/put_cpu().
Looks good to me. While you're at it, you might also remove the
superfluous cast of the this_cpu_ptr() return value.
Reviewed-by: Christoph Hellwig <[email protected]>
Btw, any reason this_cpu_ptr() doesn't do the preempt_disable() itself
and have something paired to reverse it?
On Fri, 5 Jun 2009, Christoph Hellwig wrote:
> Looks good to me. While you're at it, you might also remove the
> superfluous cast of the this_cpu_ptr() return value.
Ok.
> Reviewed-by: Christoph Hellwig <[email protected]>
>
> Btw, any reason this_cpu_ptr() doesn't do the preempt_disable() itself
> and have something paired to reverse it?
That would break the symmetry with the atomic per-cpu ops introduced in the
same patchset. Putting preempt side effects and RMW operations together
would make things a bit complicated.
Also, if the caller manages preemption explicitly (as this piece of code
does), it may be better to have separate statements for clarity.