2006-01-25 11:26:23

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 0/6] RFC: use include/asm-generic/bitops.h

A large number of boilerplate bit operations written in C
are scattered around include/asm-*/bitops.h.
This patch series gathers them into include/asm-generic/bitops.h, and thereby:

- kills duplicated code and comments (about 4000 lines)
- uses better C-language equivalents
- helps with porting new architectures (right now include/asm-generic/bitops.h
is not referenced from anywhere); see the sketch below
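
For a new port, the intended shape is roughly the following (a hypothetical
sketch; the HAVE_ARCH_* override convention comes from patch 3/6, and
__builtin_clz() is just a gcc builtin standing in for a native instruction):

/* include/asm-newarch/bitops.h -- hypothetical new port */
#ifndef _ASM_NEWARCH_BITOPS_H
#define _ASM_NEWARCH_BITOPS_H

/* This imaginary CPU has a count-leading-zeros instruction, so it
 * overrides fls() and inherits the generic C versions of everything
 * else. */
#define HAVE_ARCH_FLS_BITOPS
static inline int fls(int x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}

/* Must come after the HAVE_ARCH_* defines: it supplies every op that
 * was not overridden above. */
#include <asm-generic/bitops.h>

#endif /* _ASM_NEWARCH_BITOPS_H */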


2006-01-25 11:28:56

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 1/6] {set,clear,test}_bit() related cleanup

While working on this patch set, I found several possible cleanups
on x86-64 and ia64.
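
Most of the hunks below share one pattern: x86_capability and cpu_vm_mask
are not plain unsigned long fields, so open-coded set_bit()/test_bit()
calls on their addresses only work by accident of layout. A rough sketch
of why the accessor form is safer (simplified stand-ins, not the real
definitions from <linux/cpumask.h>; set_bit()/clear_bit() are the usual
kernel primitives):

/* Sketch only: simplified stand-in for the kernel's cpumask_t. */
#define NR_CPUS		128
#define BITS_PER_LONG	(8 * sizeof(unsigned long))

typedef struct {
	unsigned long bits[(NR_CPUS + BITS_PER_LONG - 1) / BITS_PER_LONG];
} cpumask_t;

/* The accessors know the layout, so callers never smuggle a struct
 * pointer in where an unsigned long * is expected: */
#define cpu_set(cpu, dst)	set_bit((cpu), (dst).bits)
#define cpu_clear(cpu, dst)	clear_bit((cpu), (dst).bits)

/* By contrast, set_bit(cpu, &next->cpu_vm_mask) relies on cpumask_t
 * happening to look like an unsigned long, and breaks silently once
 * the mask grows past one word. */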

Signed-off-by: Akinobu Mita <[email protected]>
---
arch/ia64/kernel/mca.c | 3 ++-
arch/x86_64/kernel/mce.c | 3 +--
arch/x86_64/kernel/setup.c | 3 +--
arch/x86_64/pci/mmconfig.c | 4 ++--
include/asm-x86_64/mmu_context.h | 6 +++---
include/asm-x86_64/pgtable.h | 6 +++---
6 files changed, 12 insertions(+), 13 deletions(-)

Index: 2.6-git/arch/x86_64/kernel/mce.c
===================================================================
--- 2.6-git.orig/arch/x86_64/kernel/mce.c 2006-01-25 19:07:15.000000000 +0900
+++ 2.6-git/arch/x86_64/kernel/mce.c 2006-01-25 19:13:59.000000000 +0900
@@ -139,8 +139,7 @@

static int mce_available(struct cpuinfo_x86 *c)
{
- return test_bit(X86_FEATURE_MCE, &c->x86_capability) &&
- test_bit(X86_FEATURE_MCA, &c->x86_capability);
+ return cpu_has(c, X86_FEATURE_MCE) && cpu_has(c, X86_FEATURE_MCA);
}

static inline void mce_get_rip(struct mce *m, struct pt_regs *regs)
Index: 2.6-git/arch/x86_64/kernel/setup.c
===================================================================
--- 2.6-git.orig/arch/x86_64/kernel/setup.c 2006-01-25 19:07:15.000000000 +0900
+++ 2.6-git/arch/x86_64/kernel/setup.c 2006-01-25 19:13:59.000000000 +0900
@@ -1334,8 +1334,7 @@
{
int i;
for ( i = 0 ; i < 32*NCAPINTS ; i++ )
- if ( test_bit(i, &c->x86_capability) &&
- x86_cap_flags[i] != NULL )
+ if (cpu_has(c, i) && x86_cap_flags[i] != NULL )
seq_printf(m, " %s", x86_cap_flags[i]);
}

Index: 2.6-git/include/asm-x86_64/mmu_context.h
===================================================================
--- 2.6-git.orig/include/asm-x86_64/mmu_context.h 2006-01-25 19:07:15.000000000 +0900
+++ 2.6-git/include/asm-x86_64/mmu_context.h 2006-01-25 19:13:59.000000000 +0900
@@ -34,12 +34,12 @@
unsigned cpu = smp_processor_id();
if (likely(prev != next)) {
/* stop flush ipis for the previous mm */
- clear_bit(cpu, &prev->cpu_vm_mask);
+ cpu_clear(cpu, prev->cpu_vm_mask);
#ifdef CONFIG_SMP
write_pda(mmu_state, TLBSTATE_OK);
write_pda(active_mm, next);
#endif
- set_bit(cpu, &next->cpu_vm_mask);
+ cpu_set(cpu, next->cpu_vm_mask);
load_cr3(next->pgd);

if (unlikely(next->context.ldt != prev->context.ldt))
@@ -50,7 +50,7 @@
write_pda(mmu_state, TLBSTATE_OK);
if (read_pda(active_mm) != next)
out_of_line_bug();
- if(!test_and_set_bit(cpu, &next->cpu_vm_mask)) {
+ if(!cpu_test_and_set(cpu, next->cpu_vm_mask)) {
/* We were in lazy tlb mode and leave_mm disabled
* tlb flush IPI delivery. We must reload CR3
* to make sure to use no freed page tables.
Index: 2.6-git/arch/x86_64/pci/mmconfig.c
===================================================================
--- 2.6-git.orig/arch/x86_64/pci/mmconfig.c 2006-01-25 19:07:15.000000000 +0900
+++ 2.6-git/arch/x86_64/pci/mmconfig.c 2006-01-25 19:13:59.000000000 +0900
@@ -46,7 +46,7 @@
static char __iomem *pci_dev_base(unsigned int seg, unsigned int bus, unsigned int devfn)
{
char __iomem *addr;
- if (seg == 0 && bus == 0 && test_bit(PCI_SLOT(devfn), &fallback_slots))
+ if (seg == 0 && bus == 0 && test_bit(PCI_SLOT(devfn), fallback_slots))
return NULL;
addr = get_virt(seg, bus);
if (!addr)
@@ -134,7 +134,7 @@
continue;
addr = pci_dev_base(0, 0, PCI_DEVFN(i, 0));
if (addr == NULL|| readl(addr) != val1) {
- set_bit(i, &fallback_slots);
+ set_bit(i, fallback_slots);
}
}
}
Index: 2.6-git/include/asm-x86_64/pgtable.h
===================================================================
--- 2.6-git.orig/include/asm-x86_64/pgtable.h 2006-01-25 19:07:15.000000000 +0900
+++ 2.6-git/include/asm-x86_64/pgtable.h 2006-01-25 19:13:59.000000000 +0900
@@ -293,19 +293,19 @@
{
if (!pte_dirty(*ptep))
return 0;
- return test_and_clear_bit(_PAGE_BIT_DIRTY, ptep);
+ return test_and_clear_bit(_PAGE_BIT_DIRTY, &ptep->pte);
}

static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
{
if (!pte_young(*ptep))
return 0;
- return test_and_clear_bit(_PAGE_BIT_ACCESSED, ptep);
+ return test_and_clear_bit(_PAGE_BIT_ACCESSED, &ptep->pte);
}

static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
- clear_bit(_PAGE_BIT_RW, ptep);
+ clear_bit(_PAGE_BIT_RW, &ptep->pte);
}

/*
Index: 2.6-git/arch/ia64/kernel/mca.c
===================================================================
--- 2.6-git.orig/arch/ia64/kernel/mca.c 2006-01-25 19:07:14.000000000 +0900
+++ 2.6-git/arch/ia64/kernel/mca.c 2006-01-25 19:14:01.000000000 +0900
@@ -69,6 +69,7 @@
#include <linux/kernel.h>
#include <linux/smp.h>
#include <linux/workqueue.h>
+#include <linux/cpumask.h>

#include <asm/delay.h>
#include <asm/kdebug.h>
@@ -1430,7 +1431,7 @@
ti->cpu = cpu;
p->thread_info = ti;
p->state = TASK_UNINTERRUPTIBLE;
- __set_bit(cpu, &p->cpus_allowed);
+ cpu_set(cpu, p->cpus_allowed);
INIT_LIST_HEAD(&p->tasks);
p->parent = p->real_parent = p->group_leader = p;
INIT_LIST_HEAD(&p->children);

2006-01-25 11:30:32

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 2/6] use non atomic operations for minix_*_bit() and ext2_*_bit()

Bitmap functions for the minix filesystem and the ext2 filesystem do not
require atomic guarantees, except for ext2_set_bit_atomic() and
ext2_clear_bit_atomic().

They are nevertheless defined using atomic bit operations on several
architectures (h8300, ia64, mips, s390, sh, sh64, sparc, sparc64, v850,
and xtensa). This patch switches them to non-atomic bit operations.
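
The practical difference: test_and_set_bit() must make the
read-modify-write indivisible, while __test_and_set_bit() is a plain
load/or/store that relies on the caller's locking (the minix and ext2
code already serializes its bitmap updates). A stand-alone sketch of the
two semantics, with a gcc builtin standing in for the architecture's
real atomic primitive:

#include <stdio.h>

static unsigned long bitmap[1];

/* Atomic flavour: no other CPU can slip in between the load and the
 * store. */
static int my_test_and_set_bit(int nr, unsigned long *addr)
{
	unsigned long mask = 1UL << nr;

	return (__sync_fetch_and_or(addr, mask) & mask) != 0;
}

/* Non-atomic flavour: a plain read-modify-write; the caller must
 * serialize concurrent access itself. */
static int my__test_and_set_bit(int nr, unsigned long *addr)
{
	unsigned long mask = 1UL << nr;
	unsigned long old = *addr;

	*addr = old | mask;
	return (old & mask) != 0;
}

int main(void)
{
	printf("%d\n", my__test_and_set_bit(3, bitmap));	/* 0: was clear */
	printf("%d\n", my_test_and_set_bit(3, bitmap));		/* 1: already set */
	return 0;
}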

Signed-off-by: Akinobu Mita <[email protected]>
---
asm-h8300/bitops.h | 6 +++---
asm-ia64/bitops.h | 10 +++++-----
asm-mips/bitops.h | 6 +++---
asm-s390/bitops.h | 10 +++++-----
asm-sh/bitops.h | 16 +++++-----------
asm-sh64/bitops.h | 16 +++++-----------
asm-sparc/bitops.h | 6 +++---
asm-sparc64/bitops.h | 6 +++---
asm-v850/bitops.h | 10 +++++-----
asm-xtensa/bitops.h | 6 +++---
10 files changed, 40 insertions(+), 52 deletions(-)

Index: 2.6-git/include/asm-h8300/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-h8300/bitops.h 2006-01-25 19:07:14.000000000 +0900
+++ 2.6-git/include/asm-h8300/bitops.h 2006-01-25 19:14:01.000000000 +0900
@@ -397,9 +397,9 @@
}

/* Bitmap functions for the minix filesystem. */
-#define minix_test_and_set_bit(nr,addr) test_and_set_bit(nr,addr)
-#define minix_set_bit(nr,addr) set_bit(nr,addr)
-#define minix_test_and_clear_bit(nr,addr) test_and_clear_bit(nr,addr)
+#define minix_test_and_set_bit(nr,addr) __test_and_set_bit(nr,addr)
+#define minix_set_bit(nr,addr) __set_bit(nr,addr)
+#define minix_test_and_clear_bit(nr,addr) __test_and_clear_bit(nr,addr)
#define minix_test_bit(nr,addr) test_bit(nr,addr)
#define minix_find_first_zero_bit(addr,size) find_first_zero_bit(addr,size)

Index: 2.6-git/include/asm-ia64/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-ia64/bitops.h 2006-01-25 19:07:14.000000000 +0900
+++ 2.6-git/include/asm-ia64/bitops.h 2006-01-25 19:14:02.000000000 +0900
@@ -394,18 +394,18 @@

#define __clear_bit(nr, addr) clear_bit(nr, addr)

-#define ext2_set_bit test_and_set_bit
+#define ext2_set_bit __test_and_set_bit
#define ext2_set_bit_atomic(l,n,a) test_and_set_bit(n,a)
-#define ext2_clear_bit test_and_clear_bit
+#define ext2_clear_bit __test_and_clear_bit
#define ext2_clear_bit_atomic(l,n,a) test_and_clear_bit(n,a)
#define ext2_test_bit test_bit
#define ext2_find_first_zero_bit find_first_zero_bit
#define ext2_find_next_zero_bit find_next_zero_bit

/* Bitmap functions for the minix filesystem. */
-#define minix_test_and_set_bit(nr,addr) test_and_set_bit(nr,addr)
-#define minix_set_bit(nr,addr) set_bit(nr,addr)
-#define minix_test_and_clear_bit(nr,addr) test_and_clear_bit(nr,addr)
+#define minix_test_and_set_bit(nr,addr) __test_and_set_bit(nr,addr)
+#define minix_set_bit(nr,addr) __set_bit(nr,addr)
+#define minix_test_and_clear_bit(nr,addr) __test_and_clear_bit(nr,addr)
#define minix_test_bit(nr,addr) test_bit(nr,addr)
#define minix_find_first_zero_bit(addr,size) find_first_zero_bit(addr,size)

Index: 2.6-git/include/asm-mips/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-mips/bitops.h 2006-01-25 19:07:14.000000000 +0900
+++ 2.6-git/include/asm-mips/bitops.h 2006-01-25 19:14:05.000000000 +0900
@@ -956,9 +956,9 @@
* FIXME: These assume that Minix uses the native byte/bitorder.
* This limits the Minix filesystem's value for data exchange very much.
*/
-#define minix_test_and_set_bit(nr,addr) test_and_set_bit(nr,addr)
-#define minix_set_bit(nr,addr) set_bit(nr,addr)
-#define minix_test_and_clear_bit(nr,addr) test_and_clear_bit(nr,addr)
+#define minix_test_and_set_bit(nr,addr) __test_and_set_bit(nr,addr)
+#define minix_set_bit(nr,addr) __set_bit(nr,addr)
+#define minix_test_and_clear_bit(nr,addr) __test_and_clear_bit(nr,addr)
#define minix_test_bit(nr,addr) test_bit(nr,addr)
#define minix_find_first_zero_bit(addr,size) find_first_zero_bit(addr,size)

Index: 2.6-git/include/asm-s390/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-s390/bitops.h 2006-01-25 19:07:14.000000000 +0900
+++ 2.6-git/include/asm-s390/bitops.h 2006-01-25 19:14:05.000000000 +0900
@@ -871,11 +871,11 @@
*/

#define ext2_set_bit(nr, addr) \
- test_and_set_bit((nr)^(__BITOPS_WORDSIZE - 8), (unsigned long *)addr)
+ __test_and_set_bit((nr)^(__BITOPS_WORDSIZE - 8), (unsigned long *)addr)
#define ext2_set_bit_atomic(lock, nr, addr) \
test_and_set_bit((nr)^(__BITOPS_WORDSIZE - 8), (unsigned long *)addr)
#define ext2_clear_bit(nr, addr) \
- test_and_clear_bit((nr)^(__BITOPS_WORDSIZE - 8), (unsigned long *)addr)
+ __test_and_clear_bit((nr)^(__BITOPS_WORDSIZE - 8), (unsigned long *)addr)
#define ext2_clear_bit_atomic(lock, nr, addr) \
test_and_clear_bit((nr)^(__BITOPS_WORDSIZE - 8), (unsigned long *)addr)
#define ext2_test_bit(nr, addr) \
@@ -1014,11 +1014,11 @@
/* Bitmap functions for the minix filesystem. */
/* FIXME !!! */
#define minix_test_and_set_bit(nr,addr) \
- test_and_set_bit(nr,(unsigned long *)addr)
+ __test_and_set_bit(nr,(unsigned long *)addr)
#define minix_set_bit(nr,addr) \
- set_bit(nr,(unsigned long *)addr)
+ __set_bit(nr,(unsigned long *)addr)
#define minix_test_and_clear_bit(nr,addr) \
- test_and_clear_bit(nr,(unsigned long *)addr)
+ __test_and_clear_bit(nr,(unsigned long *)addr)
#define minix_test_bit(nr,addr) \
test_bit(nr,(unsigned long *)addr)
#define minix_find_first_zero_bit(addr,size) \
Index: 2.6-git/include/asm-sh/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-sh/bitops.h 2006-01-25 19:07:14.000000000 +0900
+++ 2.6-git/include/asm-sh/bitops.h 2006-01-25 19:14:06.000000000 +0900
@@ -339,8 +339,8 @@
}

#ifdef __LITTLE_ENDIAN__
-#define ext2_set_bit(nr, addr) test_and_set_bit((nr), (addr))
-#define ext2_clear_bit(nr, addr) test_and_clear_bit((nr), (addr))
+#define ext2_set_bit(nr, addr) __test_and_set_bit((nr), (addr))
+#define ext2_clear_bit(nr, addr) __test_and_clear_bit((nr), (addr))
#define ext2_test_bit(nr, addr) test_bit((nr), (addr))
#define ext2_find_first_zero_bit(addr, size) find_first_zero_bit((addr), (size))
#define ext2_find_next_zero_bit(addr, size, offset) \
@@ -349,30 +349,24 @@
static __inline__ int ext2_set_bit(int nr, volatile void * addr)
{
int mask, retval;
- unsigned long flags;
volatile unsigned char *ADDR = (unsigned char *) addr;

ADDR += nr >> 3;
mask = 1 << (nr & 0x07);
- local_irq_save(flags);
retval = (mask & *ADDR) != 0;
*ADDR |= mask;
- local_irq_restore(flags);
return retval;
}

static __inline__ int ext2_clear_bit(int nr, volatile void * addr)
{
int mask, retval;
- unsigned long flags;
volatile unsigned char *ADDR = (unsigned char *) addr;

ADDR += nr >> 3;
mask = 1 << (nr & 0x07);
- local_irq_save(flags);
retval = (mask & *ADDR) != 0;
*ADDR &= ~mask;
- local_irq_restore(flags);
return retval;
}

@@ -459,9 +453,9 @@
})

/* Bitmap functions for the minix filesystem. */
-#define minix_test_and_set_bit(nr,addr) test_and_set_bit(nr,addr)
-#define minix_set_bit(nr,addr) set_bit(nr,addr)
-#define minix_test_and_clear_bit(nr,addr) test_and_clear_bit(nr,addr)
+#define minix_test_and_set_bit(nr,addr) __test_and_set_bit(nr,addr)
+#define minix_set_bit(nr,addr) __set_bit(nr,addr)
+#define minix_test_and_clear_bit(nr,addr) __test_and_clear_bit(nr,addr)
#define minix_test_bit(nr,addr) test_bit(nr,addr)
#define minix_find_first_zero_bit(addr,size) find_first_zero_bit(addr,size)

Index: 2.6-git/include/asm-sh64/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-sh64/bitops.h 2006-01-25 19:07:14.000000000 +0900
+++ 2.6-git/include/asm-sh64/bitops.h 2006-01-25 19:14:07.000000000 +0900
@@ -382,8 +382,8 @@
#define hweight8(x) generic_hweight8(x)

#ifdef __LITTLE_ENDIAN__
-#define ext2_set_bit(nr, addr) test_and_set_bit((nr), (addr))
-#define ext2_clear_bit(nr, addr) test_and_clear_bit((nr), (addr))
+#define ext2_set_bit(nr, addr) __test_and_set_bit((nr), (addr))
+#define ext2_clear_bit(nr, addr) __test_and_clear_bit((nr), (addr))
#define ext2_test_bit(nr, addr) test_bit((nr), (addr))
#define ext2_find_first_zero_bit(addr, size) find_first_zero_bit((addr), (size))
#define ext2_find_next_zero_bit(addr, size, offset) \
@@ -392,30 +392,24 @@
static __inline__ int ext2_set_bit(int nr, volatile void * addr)
{
int mask, retval;
- unsigned long flags;
volatile unsigned char *ADDR = (unsigned char *) addr;

ADDR += nr >> 3;
mask = 1 << (nr & 0x07);
- local_irq_save(flags);
retval = (mask & *ADDR) != 0;
*ADDR |= mask;
- local_irq_restore(flags);
return retval;
}

static __inline__ int ext2_clear_bit(int nr, volatile void * addr)
{
int mask, retval;
- unsigned long flags;
volatile unsigned char *ADDR = (unsigned char *) addr;

ADDR += nr >> 3;
mask = 1 << (nr & 0x07);
- local_irq_save(flags);
retval = (mask & *ADDR) != 0;
*ADDR &= ~mask;
- local_irq_restore(flags);
return retval;
}

@@ -502,9 +496,9 @@
})

/* Bitmap functions for the minix filesystem. */
-#define minix_test_and_set_bit(nr,addr) test_and_set_bit(nr,addr)
-#define minix_set_bit(nr,addr) set_bit(nr,addr)
-#define minix_test_and_clear_bit(nr,addr) test_and_clear_bit(nr,addr)
+#define minix_test_and_set_bit(nr,addr) __test_and_set_bit(nr,addr)
+#define minix_set_bit(nr,addr) __set_bit(nr,addr)
+#define minix_test_and_clear_bit(nr,addr) __test_and_clear_bit(nr,addr)
#define minix_test_bit(nr,addr) test_bit(nr,addr)
#define minix_find_first_zero_bit(addr,size) find_first_zero_bit(addr,size)

Index: 2.6-git/include/asm-sparc/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-sparc/bitops.h 2006-01-25 19:07:14.000000000 +0900
+++ 2.6-git/include/asm-sparc/bitops.h 2006-01-25 19:14:08.000000000 +0900
@@ -523,11 +523,11 @@

/* Bitmap functions for the minix filesystem. */
#define minix_test_and_set_bit(nr,addr) \
- test_and_set_bit((nr),(unsigned long *)(addr))
+ __test_and_set_bit((nr),(unsigned long *)(addr))
#define minix_set_bit(nr,addr) \
- set_bit((nr),(unsigned long *)(addr))
+ __set_bit((nr),(unsigned long *)(addr))
#define minix_test_and_clear_bit(nr,addr) \
- test_and_clear_bit((nr),(unsigned long *)(addr))
+ __test_and_clear_bit((nr),(unsigned long *)(addr))
#define minix_test_bit(nr,addr) \
test_bit((nr),(unsigned long *)(addr))
#define minix_find_first_zero_bit(addr,size) \
Index: 2.6-git/include/asm-sparc64/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-sparc64/bitops.h 2006-01-25 19:07:14.000000000 +0900
+++ 2.6-git/include/asm-sparc64/bitops.h 2006-01-25 19:14:08.000000000 +0900
@@ -280,11 +280,11 @@

/* Bitmap functions for the minix filesystem. */
#define minix_test_and_set_bit(nr,addr) \
- test_and_set_bit((nr),(unsigned long *)(addr))
+ __test_and_set_bit((nr),(unsigned long *)(addr))
#define minix_set_bit(nr,addr) \
- set_bit((nr),(unsigned long *)(addr))
+ __set_bit((nr),(unsigned long *)(addr))
#define minix_test_and_clear_bit(nr,addr) \
- test_and_clear_bit((nr),(unsigned long *)(addr))
+ __test_and_clear_bit((nr),(unsigned long *)(addr))
#define minix_test_bit(nr,addr) \
test_bit((nr),(unsigned long *)(addr))
#define minix_find_first_zero_bit(addr,size) \
Index: 2.6-git/include/asm-v850/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-v850/bitops.h 2006-01-25 19:07:14.000000000 +0900
+++ 2.6-git/include/asm-v850/bitops.h 2006-01-25 19:14:08.000000000 +0900
@@ -336,18 +336,18 @@
#define hweight16(x) generic_hweight16 (x)
#define hweight8(x) generic_hweight8 (x)

-#define ext2_set_bit test_and_set_bit
+#define ext2_set_bit __test_and_set_bit
#define ext2_set_bit_atomic(l,n,a) test_and_set_bit(n,a)
-#define ext2_clear_bit test_and_clear_bit
+#define ext2_clear_bit __test_and_clear_bit
#define ext2_clear_bit_atomic(l,n,a) test_and_clear_bit(n,a)
#define ext2_test_bit test_bit
#define ext2_find_first_zero_bit find_first_zero_bit
#define ext2_find_next_zero_bit find_next_zero_bit

/* Bitmap functions for the minix filesystem. */
-#define minix_test_and_set_bit test_and_set_bit
-#define minix_set_bit set_bit
-#define minix_test_and_clear_bit test_and_clear_bit
+#define minix_test_and_set_bit __test_and_set_bit
+#define minix_set_bit __set_bit
+#define minix_test_and_clear_bit __test_and_clear_bit
#define minix_test_bit test_bit
#define minix_find_first_zero_bit find_first_zero_bit

Index: 2.6-git/include/asm-xtensa/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-xtensa/bitops.h 2006-01-25 19:07:14.000000000 +0900
+++ 2.6-git/include/asm-xtensa/bitops.h 2006-01-25 19:14:08.000000000 +0900
@@ -436,9 +436,9 @@

/* Bitmap functions for the minix filesystem. */

-#define minix_test_and_set_bit(nr,addr) test_and_set_bit(nr,addr)
-#define minix_set_bit(nr,addr) set_bit(nr,addr)
-#define minix_test_and_clear_bit(nr,addr) test_and_clear_bit(nr,addr)
+#define minix_test_and_set_bit(nr,addr) __test_and_set_bit(nr,addr)
+#define minix_set_bit(nr,addr) __set_bit(nr,addr)
+#define minix_test_and_clear_bit(nr,addr) __test_and_clear_bit(nr,addr)
#define minix_test_bit(nr,addr) test_bit(nr,addr)
#define minix_find_first_zero_bit(addr,size) find_first_zero_bit(addr,size)

2006-01-25 11:32:08

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

o generic {,test_and_}{set,clear,change}_bit() (atomic bitops)

This patch introduces the C-language equivalents of the functions below:
void set_bit(int nr, volatile unsigned long *addr);
void clear_bit(int nr, volatile unsigned long *addr);
void change_bit(int nr, volatile unsigned long *addr);
int test_and_set_bit(int nr, volatile unsigned long *addr);
int test_and_clear_bit(int nr, volatile unsigned long *addr);
int test_and_change_bit(int nr, volatile unsigned long *addr);

HAVE_ARCH_ATOMIC_BITOPS is defined when the architecture has its own
version of these functions.

This code is largely copied from:
include/asm-powerpc/bitops.h
include/asm-parisc/bitops.h
include/asm-parisc/atomic.h

o generic __{,test_and_}{set,clear,change}_bit() and test_bit()

This patch introduces the C-language equivalents of the functions below:
void __set_bit(int nr, volatile unsigned long *addr);
void __clear_bit(int nr, volatile unsigned long *addr);
void __change_bit(int nr, volatile unsigned long *addr);
int __test_and_set_bit(int nr, volatile unsigned long *addr);
int __test_and_clear_bit(int nr, volatile unsigned long *addr);
int __test_and_change_bit(int nr, volatile unsigned long *addr);
int test_bit(int nr, const volatile unsigned long *addr);

HAVE_ARCH_NON_ATOMIC_BITOPS is defined when the architecture has its own
version of these functions.

This code is largely copied from:
asm-powerpc/bitops.h

o generic __ffs()

This patch introduces the C-language equivalent of the function:
unsigned long __ffs(unsigned long word);

HAVE_ARCH___FFS_BITOPS is defined when the architecture has its own
version of this function.

This code is largely copied from:
include/asm-sparc64/bitops.h

o generic ffz()

This patch introduces the C-language equivalent of the function:
unsigned long ffz(unsigned long word);

HAVE_ARCH_FFZ_BITOPS is defined when the architecture has its own
version of this function.

This code is largely copied from:
include/asm-sparc64/bitops.h

o generic fls()

This patch introduces the C-language equivalent of the function:
int fls(int x);

HAVE_ARCH_FLS_BITOPS is defined when the architecture has its own
version of this function.

This code is largely copied from:
include/linux/bitops.h

o generic fls64()

This patch introduces the C-language equivalent of the function:
int fls64(__u64 x);

HAVE_ARCH_FLS64_BITOPS is defined when the architecture has its own
version of this function.

This code is largely copied from:
include/linux/bitops.h

o generic find_{next,first}{,_zero}_bit()

This patch introduces the C-language equivalents of the functions below:

unsigned long find_next_bit(const unsigned long *addr, unsigned long size,
unsigned long offset);
unsigned long find_next_zero_bit(const unsigned long *addr, unsigned long size,
unsigned long offset);
unsigned long find_first_zero_bit(const unsigned long *addr,
unsigned long size);
unsigned long find_first_bit(const unsigned long *addr, unsigned long size);

HAVE_ARCH_FIND_BITOPS is defined when the architecture has its own
version of these functions.

This code is largely copied from:
arch/powerpc/lib/bitops.c


o generic sched_find_first_bit()

This patch introduces the C-language equivalent of the function:
int sched_find_first_bit(const unsigned long *b);

HAVE_ARCH_SCHED_BITOPS is defined when the architecture has its own
version of this function.

This code is largely copied from:
include/asm-powerpc/bitops.h

o generic ffs()

This patch introduces the C-language equivalent of the function:
int ffs(int x);

HAVE_ARCH_FFS_BITOPS is defined when the architecture has its own
version of this function.

This code is largely copied from:
include/linux/bitops.h

o generic hweight{32,16,8}()

This patch introduces the C-language equivalents of the functions below:
unsigned int hweight32(unsigned int w);
unsigned int hweight16(unsigned int w);
unsigned int hweight8(unsigned int w);

HAVE_ARCH_HWEIGHT_BITOPS is defined when the architecture has its own
version of these functions.

This code is largely copied from:
include/linux/bitops.h

o generic hweight64()

This patch introduces the C-language equivalent of the function:
unsigned long hweight64(__u64 w);

HAVE_ARCH_HWEIGHT64_BITOPS is defined when the architecture has its own
version of this function.

This code is largely copied from:
include/linux/bitops.h

o generic ext2_{set,clear,test,find_first_zero,find_next_zero}_bit()

This patch introduces the C-language equivalents of the functions below:

int ext2_set_bit(int nr, volatile unsigned long *addr);
int ext2_clear_bit(int nr, volatile unsigned long *addr);
int ext2_test_bit(int nr, const volatile unsigned long *addr);
unsigned long ext2_find_first_zero_bit(const unsigned long *addr,
unsigned long size);
unsigned long ext2_find_next_zero_bit(const unsigned long *addr,
unsigned long size, unsigned long offset);

HAVE_ARCH_EXT2_NON_ATOMIC_BITOPS is defined when the architecture has its own
version of these functions.

This code is largely copied from:
include/asm-powerpc/bitops.h
include/asm-parisc/bitops.h

o generic ext2_{set,clear}_bit_atomic()

This patch introduces the C-language equivalents of the functions below:
int ext2_set_bit_atomic(spinlock_t *lock, int nr, volatile unsigned long *addr);
int ext2_clear_bit_atomic(spinlock_t *lock, int nr, volatile unsigned long *addr);

HAVE_ARCH_EXT2_ATOMIC_BITOPS is defined when the architecture has its own
version of these functions.

This code is largely copied from:
include/asm-sparc/bitops.h

o generic minix_{test_and_set,set,test_and_clear,test,find_first_zero}_bit()

This patch introduces the C-language equivalents of the functions below:

int minix_test_and_set_bit(int nr, volatile unsigned long *addr);
int minix_set_bit(int nr, volatile unsigned long *addr);
int minix_test_and_clear_bit(int nr, volatile unsigned long *addr);
int minix_test_bit(int nr, const volatile unsigned long *addr);
unsigned long minix_find_first_zero_bit(const unsigned long *addr,
unsigned long size);

HAVE_ARCH_MINIX_BITOPS is defined when the architecture has its own
version of these functions.

This code is largely copied from:
include/asm-sparc/bitops.h

Signed-off-by: Akinobu Mita <[email protected]>
---
bitops.h | 677 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----
1 files changed, 641 insertions(+), 36 deletions(-)

Index: work/include/asm-generic/bitops.h
===================================================================
--- work.orig/include/asm-generic/bitops.h 2006-01-25 19:14:27.000000000 +0900
+++ work/include/asm-generic/bitops.h 2006-01-25 19:32:48.000000000 +0900
@@ -1,81 +1,686 @@
#ifndef _ASM_GENERIC_BITOPS_H_
#define _ASM_GENERIC_BITOPS_H_

+#include <asm/types.h>
+
+#define BITOP_MASK(nr) (1UL << ((nr) % BITS_PER_LONG))
+#define BITOP_WORD(nr) ((nr) / BITS_PER_LONG)
+#define BITOP_LE_SWIZZLE ((BITS_PER_LONG-1) & ~0x7)
+
+#ifndef HAVE_ARCH_ATOMIC_BITOPS
+
+#ifdef CONFIG_SMP
+#include <asm/spinlock.h>
+#include <asm/cache.h> /* we use L1_CACHE_BYTES */
+
+/* Use an array of spinlocks for our atomic_ts.
+ * Hash function to index into a different SPINLOCK.
+ * Since "a" is usually an address, use one spinlock per cacheline.
+ */
+# define ATOMIC_HASH_SIZE 4
+# define ATOMIC_HASH(a) (&(__atomic_hash[ (((unsigned long) a)/L1_CACHE_BYTES) & (ATOMIC_HASH_SIZE-1) ]))
+
+extern raw_spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned;
+
+/* Can't use raw_spin_lock_irq because of #include problems, so
+ * this is the substitute */
+#define _atomic_spin_lock_irqsave(l,f) do { \
+ raw_spinlock_t *s = ATOMIC_HASH(l); \
+ local_irq_save(f); \
+ __raw_spin_lock(s); \
+} while(0)
+
+#define _atomic_spin_unlock_irqrestore(l,f) do { \
+ raw_spinlock_t *s = ATOMIC_HASH(l); \
+ __raw_spin_unlock(s); \
+ local_irq_restore(f); \
+} while(0)
+
+
+#else
+# define _atomic_spin_lock_irqsave(l,f) do { local_irq_save(f); } while (0)
+# define _atomic_spin_unlock_irqrestore(l,f) do { local_irq_restore(f); } while (0)
+#endif
+
/*
* For the benefit of those who are trying to port Linux to another
* architecture, here are some C-language equivalents. You should
* recode these in the native assembly language, if at all possible.
- * To guarantee atomicity, these routines call cli() and sti() to
- * disable interrupts while they operate. (You have to provide inline
- * routines to cli() and sti().)
*
- * Also note, these routines assume that you have 32 bit longs.
- * You will have to change this if you are trying to port Linux to the
- * Alpha architecture or to a Cray. :-)
- *
* C language equivalents written by Theodore Ts'o, 9/26/92
*/

-extern __inline__ int set_bit(int nr,long * addr)
+static __inline__ void set_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long flags;
+
+ _atomic_spin_lock_irqsave(p, flags);
+ *p |= mask;
+ _atomic_spin_unlock_irqrestore(p, flags);
+}
+
+static __inline__ void clear_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long flags;
+
+ _atomic_spin_lock_irqsave(p, flags);
+ *p &= ~mask;
+ _atomic_spin_unlock_irqrestore(p, flags);
+}
+
+static __inline__ void change_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long flags;
+
+ _atomic_spin_lock_irqsave(p, flags);
+ *p ^= mask;
+ _atomic_spin_unlock_irqrestore(p, flags);
+}
+
+static __inline__ int test_and_set_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long old;
+ unsigned long flags;
+
+ _atomic_spin_lock_irqsave(p, flags);
+ old = *p;
+ *p = old | mask;
+ _atomic_spin_unlock_irqrestore(p, flags);
+
+ return (old & mask) != 0;
+}
+
+static __inline__ int test_and_clear_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long old;
+ unsigned long flags;
+
+ _atomic_spin_lock_irqsave(p, flags);
+ old = *p;
+ *p = old & ~mask;
+ _atomic_spin_unlock_irqrestore(p, flags);
+
+ return (old & mask) != 0;
+}
+
+static __inline__ int test_and_change_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long old;
+ unsigned long flags;
+
+ _atomic_spin_lock_irqsave(p, flags);
+ old = *p;
+ *p = old ^ mask;
+ _atomic_spin_unlock_irqrestore(p, flags);
+
+ return (old & mask) != 0;
+}
+
+#endif /* HAVE_ARCH_ATOMIC_BITOPS */
+
+#ifndef HAVE_ARCH_NON_ATOMIC_BITOPS
+
+static __inline__ void __set_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+
+ *p |= mask;
+}
+
+static __inline__ void __clear_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+
+ *p &= ~mask;
+}
+
+static __inline__ void __change_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+
+ *p ^= mask;
+}
+
+static __inline__ int __test_and_set_bit(int nr, volatile unsigned long *addr)
{
- int mask, retval;
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long old = *p;

- addr += nr >> 5;
- mask = 1 << (nr & 0x1f);
- cli();
- retval = (mask & *addr) != 0;
- *addr |= mask;
- sti();
- return retval;
+ *p = old | mask;
+ return (old & mask) != 0;
}

-extern __inline__ int clear_bit(int nr, long * addr)
+static __inline__ int __test_and_clear_bit(int nr, volatile unsigned long *addr)
{
- int mask, retval;
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long old = *p;

- addr += nr >> 5;
- mask = 1 << (nr & 0x1f);
- cli();
- retval = (mask & *addr) != 0;
- *addr &= ~mask;
- sti();
- return retval;
+ *p = old & ~mask;
+ return (old & mask) != 0;
}

-extern __inline__ int test_bit(int nr, const unsigned long * addr)
+static __inline__ int __test_and_change_bit(int nr,
+ volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long old = *p;
+
+ *p = old ^ mask;
+ return (old & mask) != 0;
+}
+
+static __inline__ int test_bit(int nr, __const__ volatile unsigned long *addr)
+{
+ return 1UL & (addr[BITOP_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
+}
+
+#endif /* HAVE_ARCH_NON_ATOMIC_BITOPS */
+
+#ifndef HAVE_ARCH___FFS_BITOPS
+
+/**
+ * __ffs - find first bit in word.
+ * @word: The word to search
+ *
+ * Returns 0..BITS_PER_LONG-1
+ * Undefined if no bit exists, so code should check against 0 first.
+ */
+static inline unsigned long __ffs(unsigned long word)
{
- int mask;
+ int b = 0, s;

- addr += nr >> 5;
- mask = 1 << (nr & 0x1f);
- return ((mask & *addr) != 0);
+#if BITS_PER_LONG == 32
+ s = 16; if (word << 16 != 0) s = 0; b += s; word >>= s;
+ s = 8; if (word << 24 != 0) s = 0; b += s; word >>= s;
+ s = 4; if (word << 28 != 0) s = 0; b += s; word >>= s;
+ s = 2; if (word << 30 != 0) s = 0; b += s; word >>= s;
+ s = 1; if (word << 31 != 0) s = 0; b += s;
+
+ return b;
+#elif BITS_PER_LONG == 64
+ s = 32; if (word << 32 != 0) s = 0; b += s; word >>= s;
+ s = 16; if (word << 48 != 0) s = 0; b += s; word >>= s;
+ s = 8; if (word << 56 != 0) s = 0; b += s; word >>= s;
+ s = 4; if (word << 60 != 0) s = 0; b += s; word >>= s;
+ s = 2; if (word << 62 != 0) s = 0; b += s; word >>= s;
+ s = 1; if (word << 63 != 0) s = 0; b += s;
+
+ return b;
+#else
+#error BITS_PER_LONG not defined
+#endif
}

+#endif /* HAVE_ARCH___FFS_BITOPS */
+
+#ifndef HAVE_ARCH_FFZ_BITOPS
+
+/* Undefined if no bit is zero. */
+#define ffz(x) __ffs(~x)
+
+#endif /* HAVE_ARCH_FFZ_BITOPS */
+
+#ifndef HAVE_ARCH_FLS_BITOPS
+
/*
* fls: find last bit set.
*/

-#define fls(x) generic_fls(x)
-#define fls64(x) generic_fls64(x)
+static __inline__ int fls(int x)
+{
+ int r = 32;
+
+ if (!x)
+ return 0;
+ if (!(x & 0xffff0000u)) {
+ x <<= 16;
+ r -= 16;
+ }
+ if (!(x & 0xff000000u)) {
+ x <<= 8;
+ r -= 8;
+ }
+ if (!(x & 0xf0000000u)) {
+ x <<= 4;
+ r -= 4;
+ }
+ if (!(x & 0xc0000000u)) {
+ x <<= 2;
+ r -= 2;
+ }
+ if (!(x & 0x80000000u)) {
+ x <<= 1;
+ r -= 1;
+ }
+ return r;
+}
+
+#endif /* HAVE_ARCH_FLS_BITOPS */
+
+#ifndef HAVE_ARCH_FLS64_BITOPS
+
+static inline int fls64(__u64 x)
+{
+ __u32 h = x >> 32;
+ if (h)
+ return fls(h) + 32;
+ return fls(x);
+}
+
+#endif /* HAVE_ARCH_FLS64_BITOPS */
+
+#ifndef HAVE_ARCH_FIND_BITOPS
+
+/**
+ * find_next_bit - find the next set bit in a memory region
+ * @addr: The address to base the search on
+ * @offset: The bitnumber to start searching at
+ * @size: The maximum size to search
+ */
+static inline unsigned long find_next_bit(const unsigned long *addr,
+ unsigned long size, unsigned long offset)
+{
+ const unsigned long *p = addr + BITOP_WORD(offset);
+ unsigned long result = offset & ~(BITS_PER_LONG-1);
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset %= BITS_PER_LONG;
+ if (offset) {
+ tmp = *(p++);
+ tmp &= (~0UL << offset);
+ if (size < BITS_PER_LONG)
+ goto found_first;
+ if (tmp)
+ goto found_middle;
+ size -= BITS_PER_LONG;
+ result += BITS_PER_LONG;
+ }
+ while (size & ~(BITS_PER_LONG-1)) {
+ if ((tmp = *(p++)))
+ goto found_middle;
+ result += BITS_PER_LONG;
+ size -= BITS_PER_LONG;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+
+found_first:
+ tmp &= (~0UL >> (BITS_PER_LONG - size));
+ if (tmp == 0UL) /* Are any bits set? */
+ return result + size; /* Nope. */
+found_middle:
+ return result + __ffs(tmp);
+}
+
+/*
+ * This implementation of find_{first,next}_zero_bit was stolen from
+ * Linus' asm-alpha/bitops.h.
+ */
+static inline unsigned long find_next_zero_bit(const unsigned long *addr,
+ unsigned long size, unsigned long offset)
+{
+ const unsigned long *p = addr + BITOP_WORD(offset);
+ unsigned long result = offset & ~(BITS_PER_LONG-1);
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset %= BITS_PER_LONG;
+ if (offset) {
+ tmp = *(p++);
+ tmp |= ~0UL >> (BITS_PER_LONG - offset);
+ if (size < BITS_PER_LONG)
+ goto found_first;
+ if (~tmp)
+ goto found_middle;
+ size -= BITS_PER_LONG;
+ result += BITS_PER_LONG;
+ }
+ while (size & ~(BITS_PER_LONG-1)) {
+ if (~(tmp = *(p++)))
+ goto found_middle;
+ result += BITS_PER_LONG;
+ size -= BITS_PER_LONG;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+
+found_first:
+ tmp |= ~0UL << size;
+ if (tmp == ~0UL) /* Are any bits zero? */
+ return result + size; /* Nope. */
+found_middle:
+ return result + ffz(tmp);
+}
+
+#define find_first_zero_bit(addr, size) find_next_zero_bit((addr), (size), 0)
+#define find_first_bit(addr, size) find_next_bit((addr), (size), 0)
+
+#endif /* HAVE_ARCH_FIND_BITOPS */

#ifdef __KERNEL__

+#ifndef HAVE_ARCH_SCHED_BITOPS
+
+#include <linux/compiler.h> /* unlikely() */
+
+/*
+ * Every architecture must define this function. It's the fastest
+ * way of searching a 140-bit bitmap where the first 100 bits are
+ * unlikely to be set. It's guaranteed that at least one of the 140
+ * bits is cleared.
+ */
+static inline int sched_find_first_bit(const unsigned long *b)
+{
+#if BITS_PER_LONG == 64
+ if (unlikely(b[0]))
+ return __ffs(b[0]);
+ if (unlikely(b[1]))
+ return __ffs(b[1]) + 64;
+ return __ffs(b[2]) + 128;
+#elif BITS_PER_LONG == 32
+ if (unlikely(b[0]))
+ return __ffs(b[0]);
+ if (unlikely(b[1]))
+ return __ffs(b[1]) + 32;
+ if (unlikely(b[2]))
+ return __ffs(b[2]) + 64;
+ if (b[3])
+ return __ffs(b[3]) + 96;
+ return __ffs(b[4]) + 128;
+#else
+#error BITS_PER_LONG not defined
+#endif
+}
+
+#endif /* HAVE_ARCH_SCHED_BITOPS */
+
+#ifndef HAVE_ARCH_FFS_BITOPS
+
/*
* ffs: find first bit set. This is defined the same way as
* the libc and compiler builtin ffs routines, therefore
* differs in spirit from the above ffz (man ffs).
*/

-#define ffs(x) generic_ffs(x)
+static inline int ffs(int x)
+{
+ int r = 1;
+
+ if (!x)
+ return 0;
+ if (!(x & 0xffff)) {
+ x >>= 16;
+ r += 16;
+ }
+ if (!(x & 0xff)) {
+ x >>= 8;
+ r += 8;
+ }
+ if (!(x & 0xf)) {
+ x >>= 4;
+ r += 4;
+ }
+ if (!(x & 3)) {
+ x >>= 2;
+ r += 2;
+ }
+ if (!(x & 1)) {
+ x >>= 1;
+ r += 1;
+ }
+ return r;
+}
+
+#endif /* HAVE_ARCH_FFS_BITOPS */
+
+
+#ifndef HAVE_ARCH_HWEIGHT_BITOPS

/*
* hweightN: returns the hamming weight (i.e. the number
* of bits set) of a N-bit word
*/

-#define hweight32(x) generic_hweight32(x)
-#define hweight16(x) generic_hweight16(x)
-#define hweight8(x) generic_hweight8(x)
+static inline unsigned int hweight32(unsigned int w)
+{
+ unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
+ res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
+ res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
+ res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
+ return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
+}
+
+static inline unsigned int hweight16(unsigned int w)
+{
+ unsigned int res = (w & 0x5555) + ((w >> 1) & 0x5555);
+ res = (res & 0x3333) + ((res >> 2) & 0x3333);
+ res = (res & 0x0F0F) + ((res >> 4) & 0x0F0F);
+ return (res & 0x00FF) + ((res >> 8) & 0x00FF);
+}
+
+static inline unsigned int hweight8(unsigned int w)
+{
+ unsigned int res = (w & 0x55) + ((w >> 1) & 0x55);
+ res = (res & 0x33) + ((res >> 2) & 0x33);
+ return (res & 0x0F) + ((res >> 4) & 0x0F);
+}
+
+#endif /* HAVE_ARCH_HWEIGHT_BITOPS */
+
+#ifndef HAVE_ARCH_HWEIGHT64_BITOPS
+
+static inline unsigned long hweight64(__u64 w)
+{
+#if BITS_PER_LONG < 64
+ return hweight32((unsigned int)(w >> 32)) + hweight32((unsigned int)w);
+#else
+ u64 res;
+ res = (w & 0x5555555555555555ul) + ((w >> 1) & 0x5555555555555555ul);
+ res = (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
+ res = (res & 0x0F0F0F0F0F0F0F0Ful) + ((res >> 4) & 0x0F0F0F0F0F0F0F0Ful);
+ res = (res & 0x00FF00FF00FF00FFul) + ((res >> 8) & 0x00FF00FF00FF00FFul);
+ res = (res & 0x0000FFFF0000FFFFul) + ((res >> 16) & 0x0000FFFF0000FFFFul);
+ return (res & 0x00000000FFFFFFFFul) + ((res >> 32) & 0x00000000FFFFFFFFul);
+#endif
+}
+
+#endif /* HAVE_ARCH_HWEIGHT64_BITOPS */
+
+#ifndef HAVE_ARCH_EXT2_NON_ATOMIC_BITOPS
+
+#include <asm/byteorder.h>
+
+#if defined(__LITTLE_ENDIAN)
+
+static __inline__ int generic_test_le_bit(unsigned long nr,
+ __const__ unsigned long *addr)
+{
+ __const__ unsigned char *tmp = (__const__ unsigned char *) addr;
+ return (tmp[nr >> 3] >> (nr & 7)) & 1;
+}
+
+#define generic___set_le_bit(nr, addr) __set_bit(nr, addr)
+#define generic___clear_le_bit(nr, addr) __clear_bit(nr, addr)
+
+#define generic_test_and_set_le_bit(nr, addr) test_and_set_bit(nr, addr)
+#define generic_test_and_clear_le_bit(nr, addr) test_and_clear_bit(nr, addr)
+
+#define generic___test_and_set_le_bit(nr, addr) __test_and_set_bit(nr, addr)
+#define generic___test_and_clear_le_bit(nr, addr) __test_and_clear_bit(nr, addr)
+
+#define generic_find_next_zero_le_bit(addr, size, offset) find_next_zero_bit(addr, size, offset)
+
+#elif defined(__BIG_ENDIAN)
+
+static __inline__ int generic_test_le_bit(unsigned long nr,
+ __const__ unsigned long *addr)
+{
+ __const__ unsigned char *tmp = (__const__ unsigned char *) addr;
+ return (tmp[nr >> 3] >> (nr & 7)) & 1;
+}
+
+#define generic___set_le_bit(nr, addr) \
+ __set_bit((nr) ^ BITOP_LE_SWIZZLE, (addr))
+#define generic___clear_le_bit(nr, addr) \
+ __clear_bit((nr) ^ BITOP_LE_SWIZZLE, (addr))
+
+#define generic_test_and_set_le_bit(nr, addr) \
+ test_and_set_bit((nr) ^ BITOP_LE_SWIZZLE, (addr))
+#define generic_test_and_clear_le_bit(nr, addr) \
+ test_and_clear_bit((nr) ^ BITOP_LE_SWIZZLE, (addr))
+
+#define generic___test_and_set_le_bit(nr, addr) \
+ __test_and_set_bit((nr) ^ BITOP_LE_SWIZZLE, (addr))
+#define generic___test_and_clear_le_bit(nr, addr) \
+ __test_and_clear_bit((nr) ^ BITOP_LE_SWIZZLE, (addr))
+
+/* include/linux/byteorder does not support "unsigned long" type */
+static inline unsigned long ext2_swabp(const unsigned long * x)
+{
+#if BITS_PER_LONG == 64
+ return (unsigned long) __swab64p((u64 *) x);
+#elif BITS_PER_LONG == 32
+ return (unsigned long) __swab32p((u32 *) x);
+#else
+#error BITS_PER_LONG not defined
+#endif
+}
+
+/* include/linux/byteorder doesn't support "unsigned long" type */
+static inline unsigned long ext2_swab(const unsigned long y)
+{
+#if BITS_PER_LONG == 64
+ return (unsigned long) __swab64((u64) y);
+#elif BITS_PER_LONG == 32
+ return (unsigned long) __swab32((u32) y);
+#else
+#error BITS_PER_LONG not defined
+#endif
+}
+
+static __inline__ unsigned long generic_find_next_zero_le_bit(const unsigned long *addr,
+ unsigned long size, unsigned long offset)
+{
+ const unsigned long *p = addr + BITOP_WORD(offset);
+ unsigned long result = offset & ~(BITS_PER_LONG - 1);
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= (BITS_PER_LONG - 1UL);
+ if (offset) {
+ tmp = ext2_swabp(p++);
+ tmp |= (~0UL >> (BITS_PER_LONG - offset));
+ if (size < BITS_PER_LONG)
+ goto found_first;
+ if (~tmp)
+ goto found_middle;
+ size -= BITS_PER_LONG;
+ result += BITS_PER_LONG;
+ }
+
+ while (size & ~(BITS_PER_LONG - 1)) {
+ if (~(tmp = *(p++)))
+ goto found_middle_swap;
+ result += BITS_PER_LONG;
+ size -= BITS_PER_LONG;
+ }
+ if (!size)
+ return result;
+ tmp = ext2_swabp(p);
+found_first:
+ tmp |= ~0UL << size;
+ if (tmp == ~0UL) /* Are any bits zero? */
+ return result + size; /* Nope. Skip ffz */
+found_middle:
+ return result + ffz(tmp);
+
+found_middle_swap:
+ return result + ffz(ext2_swab(tmp));
+}
+#else
+#error "Please fix <asm/byteorder.h>"
+#endif
+
+#define generic_find_first_zero_le_bit(addr, size) \
+ generic_find_next_zero_le_bit((addr), (size), 0)
+
+#define ext2_set_bit(nr,addr) \
+ generic___test_and_set_le_bit((nr),(unsigned long *)(addr))
+#define ext2_clear_bit(nr,addr) \
+ generic___test_and_clear_le_bit((nr),(unsigned long *)(addr))
+
+#define ext2_test_bit(nr,addr) \
+ generic_test_le_bit((nr),(unsigned long *)(addr))
+#define ext2_find_first_zero_bit(addr, size) \
+ generic_find_first_zero_le_bit((unsigned long *)(addr), (size))
+#define ext2_find_next_zero_bit(addr, size, off) \
+ generic_find_next_zero_le_bit((unsigned long *)(addr), (size), (off))
+
+#endif /* HAVE_ARCH_EXT2_NON_ATOMIC_BITOPS */
+
+#ifndef HAVE_ARCH_EXT2_ATOMIC_BITOPS
+
+#define ext2_set_bit_atomic(lock, nr, addr) \
+ ({ \
+ int ret; \
+ spin_lock(lock); \
+ ret = ext2_set_bit((nr), (unsigned long *)(addr)); \
+ spin_unlock(lock); \
+ ret; \
+ })
+
+#define ext2_clear_bit_atomic(lock, nr, addr) \
+ ({ \
+ int ret; \
+ spin_lock(lock); \
+ ret = ext2_clear_bit((nr), (unsigned long *)(addr)); \
+ spin_unlock(lock); \
+ ret; \
+ })
+
+#endif /* HAVE_ARCH_EXT2_ATOMIC_BITOPS */
+
+#ifndef HAVE_ARCH_MINIX_BITOPS
+
+#define minix_test_and_set_bit(nr,addr) \
+ __test_and_set_bit((nr),(unsigned long *)(addr))
+#define minix_set_bit(nr,addr) \
+ __set_bit((nr),(unsigned long *)(addr))
+#define minix_test_and_clear_bit(nr,addr) \
+ __test_and_clear_bit((nr),(unsigned long *)(addr))
+#define minix_test_bit(nr,addr) \
+ test_bit((nr),(unsigned long *)(addr))
+#define minix_find_first_zero_bit(addr,size) \
+ find_first_zero_bit((unsigned long *)(addr),(size))
+
+#endif /* HAVE_ARCH_MINIX_BITOPS */

#endif /* __KERNEL__ */

2006-01-25 11:34:44

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 5/6] fix warning on test_ti_thread_flag()

If the architecture has
- BITS_PER_LONG == 64
- a struct thread_info.flags that is only 32 bits
- a test_bit() whose second argument used to be void *

then the compiler prints a warning for test_ti_thread_flag() in
include/linux/thread_info.h, because the generic test_bit() takes an
unsigned long *.
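
The warning is easy to reproduce outside the kernel (a hypothetical
stand-alone reduction; the real prototype lives in each arch's
asm/bitops.h):

/* Stand-alone reduction of the warning -- not kernel code. */
typedef unsigned int __u32;

struct thread_info {
	__u32 flags;		/* 32 bits even where long is 64 bits */
};

/* The generic prototype: bitmaps are arrays of unsigned long. */
static inline int test_bit(int nr, const volatile unsigned long *addr)
{
	return 1UL & (addr[nr / 64] >> (nr % 64));	/* 64-bit arch */
}

static inline int test_ti_thread_flag(struct thread_info *ti, int flag)
{
	/* warning: passing '__u32 *' where 'unsigned long *' is expected */
	return test_bit(flag, &ti->flags);
}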

Signed-off-by: Akinobu Mita <[email protected]>
---
thread_info.h | 2 +-
1 files changed, 1 insertion(+), 1 deletion(-)

Index: 2.6-git/include/linux/thread_info.h
===================================================================
--- 2.6-git.orig/include/linux/thread_info.h 2006-01-25 19:07:12.000000000 +0900
+++ 2.6-git/include/linux/thread_info.h 2006-01-25 19:14:26.000000000 +0900
@@ -49,7 +49,7 @@

static inline int test_ti_thread_flag(struct thread_info *ti, int flag)
{
- return test_bit(flag,&ti->flags);
+ return test_bit(flag, (void *)&ti->flags);
}

#define set_thread_flag(flag) \

2006-01-25 11:35:48

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 6/6] remove unused generic bitops in include/linux/bitops.h

generic_{ffs,fls,fls64,hweight{64,32,16,8}}() have been moved into
include/asm-generic/bitops.h, so they are no longer used by any architecture.

Signed-off-by: Akinobu Mita <[email protected]>
---

bitops.h | 124 ---------------------------------------------------------------
1 files changed, 1 insertion(+), 123 deletions(-)

Index: 2.6-git/include/linux/bitops.h
===================================================================
--- 2.6-git.orig/include/linux/bitops.h 2006-01-25 19:07:12.000000000 +0900
+++ 2.6-git/include/linux/bitops.h 2006-01-25 19:14:27.000000000 +0900
@@ -3,88 +3,11 @@
#include <asm/types.h>

/*
- * ffs: find first bit set. This is defined the same way as
- * the libc and compiler builtin ffs routines, therefore
- * differs in spirit from the above ffz (man ffs).
- */
-
-static inline int generic_ffs(int x)
-{
- int r = 1;
-
- if (!x)
- return 0;
- if (!(x & 0xffff)) {
- x >>= 16;
- r += 16;
- }
- if (!(x & 0xff)) {
- x >>= 8;
- r += 8;
- }
- if (!(x & 0xf)) {
- x >>= 4;
- r += 4;
- }
- if (!(x & 3)) {
- x >>= 2;
- r += 2;
- }
- if (!(x & 1)) {
- x >>= 1;
- r += 1;
- }
- return r;
-}
-
-/*
- * fls: find last bit set.
- */
-
-static __inline__ int generic_fls(int x)
-{
- int r = 32;
-
- if (!x)
- return 0;
- if (!(x & 0xffff0000u)) {
- x <<= 16;
- r -= 16;
- }
- if (!(x & 0xff000000u)) {
- x <<= 8;
- r -= 8;
- }
- if (!(x & 0xf0000000u)) {
- x <<= 4;
- r -= 4;
- }
- if (!(x & 0xc0000000u)) {
- x <<= 2;
- r -= 2;
- }
- if (!(x & 0x80000000u)) {
- x <<= 1;
- r -= 1;
- }
- return r;
-}
-
-/*
* Include this here because some architectures need generic_ffs/fls in
* scope
*/
#include <asm/bitops.h>

-
-static inline int generic_fls64(__u64 x)
-{
- __u32 h = x >> 32;
- if (h)
- return fls(h) + 32;
- return fls(x);
-}
-
static __inline__ int get_bitmask_order(unsigned int count)
{
int order;
@@ -103,54 +26,9 @@
return order;
}

-/*
- * hweightN: returns the hamming weight (i.e. the number
- * of bits set) of a N-bit word
- */
-
-static inline unsigned int generic_hweight32(unsigned int w)
-{
- unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
- res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
- res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
- res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
- return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
-}
-
-static inline unsigned int generic_hweight16(unsigned int w)
-{
- unsigned int res = (w & 0x5555) + ((w >> 1) & 0x5555);
- res = (res & 0x3333) + ((res >> 2) & 0x3333);
- res = (res & 0x0F0F) + ((res >> 4) & 0x0F0F);
- return (res & 0x00FF) + ((res >> 8) & 0x00FF);
-}
-
-static inline unsigned int generic_hweight8(unsigned int w)
-{
- unsigned int res = (w & 0x55) + ((w >> 1) & 0x55);
- res = (res & 0x33) + ((res >> 2) & 0x33);
- return (res & 0x0F) + ((res >> 4) & 0x0F);
-}
-
-static inline unsigned long generic_hweight64(__u64 w)
-{
-#if BITS_PER_LONG < 64
- return generic_hweight32((unsigned int)(w >> 32)) +
- generic_hweight32((unsigned int)w);
-#else
- u64 res;
- res = (w & 0x5555555555555555ul) + ((w >> 1) & 0x5555555555555555ul);
- res = (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
- res = (res & 0x0F0F0F0F0F0F0F0Ful) + ((res >> 4) & 0x0F0F0F0F0F0F0F0Ful);
- res = (res & 0x00FF00FF00FF00FFul) + ((res >> 8) & 0x00FF00FF00FF00FFul);
- res = (res & 0x0000FFFF0000FFFFul) + ((res >> 16) & 0x0000FFFF0000FFFFul);
- return (res & 0x00000000FFFFFFFFul) + ((res >> 32) & 0x00000000FFFFFFFFul);
-#endif
-}
-
static inline unsigned long hweight_long(unsigned long w)
{
- return sizeof(w) == 4 ? generic_hweight32(w) : generic_hweight64(w);
+ return sizeof(w) == 4 ? hweight32(w) : hweight64(w);
}

/*

2006-01-25 11:54:48

by Keith Owens

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

Akinobu Mita (on Wed, 25 Jan 2006 20:32:06 +0900) wrote:
>o generic {,test_and_}{set,clear,change}_bit() (atomic bitops)
...
>+static __inline__ void set_bit(int nr, volatile unsigned long *addr)
>+{
>+ unsigned long mask = BITOP_MASK(nr);
>+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
>+ unsigned long flags;
>+
>+ _atomic_spin_lock_irqsave(p, flags);
>+ *p |= mask;
>+ _atomic_spin_unlock_irqrestore(p, flags);
>+}

Be very, very careful about using these generic *_bit() routines if the
architecture supports non-maskable interrupts.

NMI events can occur at any time, including when interrupts have been
disabled by *_irqsave(). So you can get NMI events occurring while a
*_bit function is holding a spin lock. If the NMI handler also wants
to do bit manipulation (and they do) then you can get a deadlock
between the original caller of *_bit() and the NMI handler.

Doing any work that requires spinlocks in an NMI handler is just asking
for deadlock problems. The generic *_bit() routines add a hidden
spinlock behind what was previously a safe operation. I would even say
that any arch that supports any type of NMI event _must_ define its own
bit routines that do not rely on your _atomic_spin_lock_irqsave() and
its hash of spinlocks.
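
To spell the sequence out against the generic implementation from patch
3/6 (the NMI handler here is hypothetical):

/*
 * set_bit(nr, p)
 *   _atomic_spin_lock_irqsave(p, flags)
 *     local_irq_save(flags)            -- masks IRQs, but NOT NMIs
 *     __raw_spin_lock(ATOMIC_HASH(p))  -- hashed lock now held
 *       ... NMI fires here ...
 *       nmi_handler()
 *         set_bit(nr2, q)
 *           __raw_spin_lock(ATOMIC_HASH(q))  -- spins forever when
 *                                               ATOMIC_HASH(q) == ATOMIC_HASH(p)
 *   *p |= mask                         -- never reached: deadlock
 */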

2006-01-25 12:10:26

by Andi Kleen

[permalink] [raw]
Subject: Re: [PATCH 1/6] {set,clear,test}_bit() related cleanup

On Wednesday 25 January 2006 12:28, Akinobu Mita wrote:
> While working on these patch set, I found several possible cleanup
> on x86-64 and ia64.

Please don't send emails with such a big cc list [dropped most of them]

I applied the x86-64 bits. Thanks. Please send the ia64 bits separately
to the IA64 maintainer.

-Andi

2006-01-25 12:32:41

by Geert Uytterhoeven

[permalink] [raw]
Subject: Re: [PATCH 5/6] fix warning on test_ti_thread_flag()

On Wed, 25 Jan 2006, Akinobu Mita wrote:
> If the architecture has
> - BITS_PER_LONG == 64
> - a struct thread_info.flags that is only 32 bits
> - a test_bit() whose second argument used to be void *
>
> then the compiler prints a warning for test_ti_thread_flag() in
> include/linux/thread_info.h, because the generic test_bit() takes an
> unsigned long *
>
> Signed-off-by: Akinobu Mita <[email protected]>
> ---
> thread_info.h | 2 +-
> 1 files changed, 1 insertion(+), 1 deletion(-)
>
> Index: 2.6-git/include/linux/thread_info.h
> ===================================================================
> --- 2.6-git.orig/include/linux/thread_info.h 2006-01-25 19:07:12.000000000 +0900
> +++ 2.6-git/include/linux/thread_info.h 2006-01-25 19:14:26.000000000 +0900
> @@ -49,7 +49,7 @@
>
> static inline int test_ti_thread_flag(struct thread_info *ti, int flag)
> {
> - return test_bit(flag,&ti->flags);
> + return test_bit(flag, (void *)&ti->flags);
> }

This is not safe. The bitops are defined to work on unsigned long only, so
flags should be changed to unsigned long instead, or you should use a
temporary.

Affected platforms:
- alpha: flags is unsigned int
- ia64, sh, x86_64: flags is __u32

The only affected 64-bit platforms are little endian, so it will silently
work after your change, though...

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [email protected]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds

2006-01-25 17:13:33

by Chen, Kenneth W

[permalink] [raw]
Subject: RE: [PATCH 5/6] fix warning on test_ti_thread_flag()

Geert Uytterhoeven wrote on Wednesday, January 25, 2006 4:29 AM
> On Wed, 25 Jan 2006, Akinobu Mita wrote:
> > If the architecture has
> > - BITS_PER_LONG == 64
> > - a struct thread_info.flags that is only 32 bits
> > - a test_bit() whose second argument used to be void *
> >
> > then the compiler prints a warning for test_ti_thread_flag() in
> > include/linux/thread_info.h, because the generic test_bit() takes an
> > unsigned long *
> >
> > Signed-off-by: Akinobu Mita <[email protected]>
> > ---
> > thread_info.h | 2 +-
> > 1 files changed, 1 insertion(+), 1 deletion(-)
> >
> > Index: 2.6-git/include/linux/thread_info.h
> > ===================================================================
> > --- 2.6-git.orig/include/linux/thread_info.h 2006-01-25 19:07:12.000000000 +0900
> > +++ 2.6-git/include/linux/thread_info.h 2006-01-25 19:14:26.000000000 +0900
> > @@ -49,7 +49,7 @@
> >
> > static inline int test_ti_thread_flag(struct thread_info *ti, int flag)
> > {
> > - return test_bit(flag,&ti->flags);
> > + return test_bit(flag, (void *)&ti->flags);
> > }
>
> This is not safe. The bitops are defined to work on unsigned long only, so
> flags should be changed to unsigned long instead, or you should use a
> temporary.
>
> Affected platforms:
> - alpha: flags is unsigned int
> - ia64, sh, x86_64: flags is __u32
>
> The only affected 64-bit platforms are little endian, so it will silently
> work after your change, though...

I thought test_bit can operate on arrays beyond a single unsigned long.
It's perfectly legitimate to do test_bit(999, bit_array) as
long as bit_array is indeed big enough to hold bit 999. It
is the responsibility of the caller to make sure that the
underlying array is big enough for the bit that is being tested.
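
That matches how multi-word bitmaps are declared throughout the tree; a
hypothetical example (kernel context, using the usual BITS_PER_LONG and
test_bit()):

/* A bitmap of 1000 bits is just an array of unsigned long. */
#define NBITS 1000
static unsigned long bit_array[(NBITS + BITS_PER_LONG - 1) / BITS_PER_LONG];

static int check_last_bit(void)
{
	/* test_bit() indexes word 999 / BITS_PER_LONG and tests bit
	 * 999 % BITS_PER_LONG inside it. */
	return test_bit(999, bit_array);
}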

I don't think you need to change the flags size.

- Ken

2006-01-25 17:24:07

by Geert Uytterhoeven

[permalink] [raw]
Subject: RE: [PATCH 5/6] fix warning on test_ti_thread_flag()

On Wed, 25 Jan 2006, Chen, Kenneth W wrote:
> Geert Uytterhoeven wrote on Wednesday, January 25, 2006 4:29 AM
> > On Wed, 25 Jan 2006, Akinobu Mita wrote:
> > > If the architecture has
> > > - BITS_PER_LONG == 64
> > > - a struct thread_info.flags that is only 32 bits
> > > - a test_bit() whose second argument used to be void *
> > >
> > > then the compiler prints a warning for test_ti_thread_flag() in
> > > include/linux/thread_info.h, because the generic test_bit() takes an
> > > unsigned long *
> > >
> > > Signed-off-by: Akinobu Mita <[email protected]>
> > > ---
> > > thread_info.h | 2 +-
> > > 1 files changed, 1 insertion(+), 1 deletion(-)
> > >
> > > Index: 2.6-git/include/linux/thread_info.h
> > > ===================================================================
> > > --- 2.6-git.orig/include/linux/thread_info.h 2006-01-25 19:07:12.000000000 +0900
> > > +++ 2.6-git/include/linux/thread_info.h 2006-01-25 19:14:26.000000000 +0900
> > > @@ -49,7 +49,7 @@
> > >
> > > static inline int test_ti_thread_flag(struct thread_info *ti, int flag)
> > > {
> > > - return test_bit(flag,&ti->flags);
> > > + return test_bit(flag, (void *)&ti->flags);
> > > }
> >
> > This is not safe. The bitops are defined to work on unsigned long only, so
> > flags should be changed to unsigned long instead, or you should use a
> > temporary.
> >
> > Affected platforms:
> > - alpha: flags is unsigned int
> > - ia64, sh, x86_64: flags is __u32
> >
> > The only affected 64-bit platforms are little endian, so it will silently
> > work after your change, though...
>
> I thought test_bit can operate on arrays beyond a single unsigned long.
> It's perfectly legitimate to do test_bit(999, bit_array) as
> long as bit_array is indeed big enough to hold bit 999. It
> is the responsibility of the caller to make sure that the
> underlying array is big enough for the bit that is being tested.

Yes, it can operate on arrays of unsigned long.

> I don't think you need to change the flags size.

Passing a pointer to a 32-bit entity to a function that takes a pointer to a
64-bit entity is a classical endianness bug. So it's better to change it,
before people copy the code to a big endian platform.
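
The failure mode is concrete: on a 64-bit big-endian machine, bit 0 of a
32-bit field is not bit 0 of the unsigned long that test_bit() loads. A
stand-alone sketch of the bad load (not kernel code; the out-of-bounds
read is precisely the bug being illustrated):

#include <stdio.h>

int main(void)
{
	struct {
		unsigned int flags;	/* bit 0 set in a 32-bit field */
		unsigned int neighbour;
	} ti = { .flags = 1, .neighbour = 0 };
	unsigned long word;

	/* test_bit(0, (void *)&ti.flags) boils down to this 8-byte load: */
	word = *(unsigned long *)&ti.flags;

	/* Little endian: bit 0 of 'word' is bit 0 of 'flags' -- works.
	 * Big endian: the flags bytes land in the high half of 'word',
	 * so bit 0 of 'flags' becomes bit 32, and test_bit(0, ...)
	 * actually examines 'neighbour'. */
	printf("bit 0 seen as: %lu\n", word & 1);
	return 0;
}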

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [email protected]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds
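
The bug Geert describes is easy to demonstrate in user space. A minimal
sketch (not from the thread; assumes a 64-bit unsigned long, with the
union standing in for the whole-long load that test_bit() performs):

#include <stdio.h>

int main(void)
{
        union {
                unsigned long l;        /* the long test_bit() would load */
                unsigned int w[2];      /* the 32-bit flags word + neighbour */
        } u = { .w = { 1, 0 } };        /* bit 0 set in the 32-bit word w[0] */

        printf("bit 0 of the long: %lu\n", u.l & 1);
        /* little endian: prints 1 - works by accident
         * big endian:    prints 0 - the set bit became bit 32 */
        return 0;
}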

2006-01-25 20:03:29

by Chen, Kenneth W

[permalink] [raw]
Subject: RE: [PATCH 5/6] fix warning on test_ti_thread_flag()

Geert Uytterhoeven wrote on Wednesday, January 25, 2006 9:19 AM
> > I don't think you need to change the flags size.
>
> Passing a pointer to a 32-bit entity to a function that takes a
> pointer to a 64-bit entity is a classical endianness bug. So it's
> better to change it, before people copy the code to a big endian
> platform.

Well, x86-64 and linux-ia64 both use little endian. I don't
understand why you are barking at us about a big-endian issue.

- Ken


Side-note: cc list trimmed.


2006-01-25 20:03:10

by Russell King

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Wed, Jan 25, 2006 at 08:32:06PM +0900, Akinobu Mita wrote:
> +#ifndef HAVE_ARCH___FFS_BITOPS
> +
> +/**
> + * __ffs - find first bit in word.
> + * @word: The word to search
> + *
> + * Returns 0..BITS_PER_LONG-1
> + * Undefined if no bit exists, so code should check against 0 first.
> + */
> +static inline unsigned long __ffs(unsigned long word)
> {
> - int mask;
> + int b = 0, s;
>
> - addr += nr >> 5;
> - mask = 1 << (nr & 0x1f);
> - return ((mask & *addr) != 0);
> +#if BITS_PER_LONG == 32
> + s = 16; if (word << 16 != 0) s = 0; b += s; word >>= s;
> + s = 8; if (word << 24 != 0) s = 0; b += s; word >>= s;
> + s = 4; if (word << 28 != 0) s = 0; b += s; word >>= s;
> + s = 2; if (word << 30 != 0) s = 0; b += s; word >>= s;
> + s = 1; if (word << 31 != 0) s = 0; b += s;
> +
> + return b;
> +#elif BITS_PER_LONG == 64
> + s = 32; if (word << 32 != 0) s = 0; b += s; word >>= s;
> + s = 16; if (word << 48 != 0) s = 0; b += s; word >>= s;
> + s = 8; if (word << 56 != 0) s = 0; b += s; word >>= s;
> + s = 4; if (word << 60 != 0) s = 0; b += s; word >>= s;
> + s = 2; if (word << 62 != 0) s = 0; b += s; word >>= s;
> + s = 1; if (word << 63 != 0) s = 0; b += s;
> +
> + return b;
> +#else
> +#error BITS_PER_LONG not defined
> +#endif

This code generates more expensive shifts than our (ARM's) existing C
version. This is a backward step.

Basically, shifts which depend on a variable are more expensive than
constant-based shifts.

I've not really looked at the rest because I haven't figured out which
bits will be used on ARM and which won't - which I think is another
problem with this patch set. I'll look again later tonight.

--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 Serial core

2006-01-25 20:59:00

by Grant Grundler

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Wed, Jan 25, 2006 at 08:02:50PM +0000, Russell King wrote:
> I've not really looked at the rest because I haven't figured out which
> bits will be used on ARM and which won't - which I think is another
> problem with this patch set. I'll look again later tonight.

Russell,
I have the same problem. This file is 920 lines long and contains
7 distinct changes according to the (well-written) notes.

Akinobu,
I appreciate your work - but could this particular piece be
split up into 7 chunks?

That would make checking the behavior of something like
HAVE_ARCH_FFZ_BITOPS easier for each arch.

thanks,
grant

2006-01-25 22:29:35

by Paul Mackerras

[permalink] [raw]
Subject: Re: [PATCH 5/6] fix warning on test_ti_thread_flag()

Akinobu Mita writes:

> If the arechitecture is
> - BITS_PER_LONG == 64
> - struct thread_info.flag 32 is bits
> - second argument of test_bit() was void *
>
> Then compiler print error message on test_ti_thread_flags()
> in include/linux/thread_info.h

And correctly so. The correct fix is to make thread_info.flags an
unsigned long. This patch is NAKed.

Paul.

2006-01-25 23:24:40

by Ian molton

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

Russell King wrote:

> This code generates more expensive shifts than our (ARM's) existing C
> version. This is a backward step.
>
> Basically, shifts which depend on a variable are more expensive than
> constant-based shifts.

arm26 will have the same problem here.

2006-01-26 00:06:15

by David Miller

[permalink] [raw]
Subject: Re: [PATCH 5/6] fix warning on test_ti_thread_flag()

From: Paul Mackerras <[email protected]>
Date: Thu, 26 Jan 2006 09:28:02 +1100

> Akinobu Mita writes:
>
> > If the architecture has
> > - BITS_PER_LONG == 64
> > - a 32-bit struct thread_info.flags
> > - a test_bit() whose second argument was void *
> >
> > Then the compiler prints an error message on test_ti_thread_flag()
> > in include/linux/thread_info.h
>
> And correctly so. The correct fix is to make thread_info.flags an
> unsigned long. This patch is NAKed.

I agree.

2006-01-26 00:08:21

by Richard Henderson

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Wed, Jan 25, 2006 at 08:02:50PM +0000, Russell King wrote:
> > + s = 16; if (word << 16 != 0) s = 0; b += s; word >>= s;
> > + s = 8; if (word << 24 != 0) s = 0; b += s; word >>= s;
> > + s = 4; if (word << 28 != 0) s = 0; b += s; word >>= s;
...
> Basically, shifts which depend on a variable are more expensive than
> constant-based shifts.

Actually, they're all constant shifts. Just written stupidly.


r~

2006-01-26 02:13:19

by Akinobu Mita

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Wed, Jan 25, 2006 at 10:54:43PM +1100, Keith Owens wrote:
> Be very, very careful about using these generic *_bit() routines if the
> architecture supports non-maskable interrupts.
>
> NMI events can occur at any time, including when interrupts have been
> disabled by *_irqsave(). So you can get NMI events occurring while a
> *_bit function is holding a spin lock. If the NMI handler also wants
> to do bit manipulation (and they do) then you can get a deadlock
> between the original caller of *_bit() and the NMI handler.
>
> Doing any work that requires spinlocks in an NMI handler is just asking
> for deadlock problems. The generic *_bit() routines add a hidden
> spinlock behind what was previously a safe operation. I would even say
> that any arch that supports any type of NMI event _must_ define its own
> bit routines that do not rely on your _atomic_spin_lock_irqsave() and
> its hash of spinlocks.

At least cris and parisc are using similar *_bit functions on SMP.
I will add your advice as a comment.

--- ./include/asm-generic/bitops.h.orig 2006-01-26 10:56:00.000000000 +0900
+++ ./include/asm-generic/bitops.h 2006-01-26 11:01:28.000000000 +0900
@@ -50,6 +50,16 @@ extern raw_spinlock_t __atomic_hash[ATOM
* C language equivalents written by Theodore Ts'o, 9/26/92
*/

+/*
+ * NMI events can occur at any time, including when interrupts have been
+ * disabled by *_irqsave(). So you can get NMI events occurring while a
+ * *_bit function is holding a spin lock. If the NMI handler also wants
+ * to do bit manipulation (and they do) then you can get a deadlock
+ * between the original caller of *_bit() and the NMI handler.
+ *
+ * by Keith Owens
+ */
+
static __inline__ void set_bit(int nr, volatile unsigned long *addr)
{
unsigned long mask = BITOP_MASK(nr);
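
Keith's deadlock can be sketched in user space with a hypothetical
analogue (not kernel code): a pthread mutex stands in for one
__atomic_hash[] spinlock, and trylock is used so that, on typical
implementations, the program reports the would-be deadlock instead of
hanging. Build with -lpthread.

#include <pthread.h>
#include <stdio.h>

/* stand-in for one __atomic_hash[] spinlock */
static pthread_mutex_t atomic_hash_lock = PTHREAD_MUTEX_INITIALIZER;

/* the "NMI handler" wants to do bit manipulation too */
static void nmi_handler(void)
{
        /* In the kernel this would be test_and_set_bit() spinning
         * forever on a lock its own CPU already holds. */
        if (pthread_mutex_trylock(&atomic_hash_lock) != 0)
                printf("NMI would deadlock: lock already held\n");
        else
                pthread_mutex_unlock(&atomic_hash_lock);
}

int main(void)
{
        pthread_mutex_lock(&atomic_hash_lock);  /* set_bit() took the lock... */
        nmi_handler();                          /* ...and the NMI arrives now */
        pthread_mutex_unlock(&atomic_hash_lock);
        return 0;
}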

2006-01-26 02:19:46

by Akinobu Mita

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

[dropped most of cc lists]

While looking at the atomic *_bit() functions on cris,
I found an unnecessary local_irq_restore() call.

Signed-off-by: Akinobu Mita <[email protected]>

--- include/asm-cris/bitops.h.orig 2006-01-26 11:13:40.000000000 +0900
+++ include/asm-cris/bitops.h 2006-01-26 11:14:20.000000000 +0900
@@ -101,7 +101,6 @@ static inline int test_and_set_bit(int n
retval = (mask & *adr) != 0;
*adr |= mask;
cris_atomic_restore(addr, flags);
- local_irq_restore(flags);
return retval;
}

2006-01-26 03:27:13

by Akinobu Mita

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Wed, Jan 25, 2006 at 12:59:07PM -0800, Grant Grundler wrote:
> Akinobu,
> I appreciate your work - but could this particular piece be
> split up into 7 chunks?

I have 12 patches.

2006-01-26 03:29:14

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 1/12] generic *_bit()

This patch introduces the C-language equivalents of the functions below:

- atomic operation:
void set_bit(int nr, volatile unsigned long *addr);
void clear_bit(int nr, volatile unsigned long *addr);
void change_bit(int nr, volatile unsigned long *addr);
int test_and_set_bit(int nr, volatile unsigned long *addr);
int test_and_clear_bit(int nr, volatile unsigned long *addr);
int test_and_change_bit(int nr, volatile unsigned long *addr);

- non-atomic operation:
void __set_bit(int nr, volatile unsigned long *addr);
void __clear_bit(int nr, volatile unsigned long *addr);
void __change_bit(int nr, volatile unsigned long *addr);
int __test_and_set_bit(int nr, volatile unsigned long *addr);
int __test_and_clear_bit(int nr, volatile unsigned long *addr);
int __test_and_change_bit(int nr, volatile unsigned long *addr);
int test_bit(int nr, const volatile unsigned long *addr);

HAVE_ARCH_ATOMIC_BITOPS is defined when the architecture has its own
{,test_and_}{set,clear,change}_bit()

HAVE_ARCH_NON_ATOMIC_BITOPS is defined when the architecture has its own
__{,test_and_}{set,clear,change}_bit() and test_bit()

This code is largely copied from:
include/asm-powerpc/bitops.h
include/asm-parisc/bitops.h
include/asm-parisc/atomic.h


Index: 2.6-git/include/asm-generic/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:07:14.000000000 +0900
+++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:08.000000000 +0900
@@ -1,56 +1,198 @@
#ifndef _ASM_GENERIC_BITOPS_H_
#define _ASM_GENERIC_BITOPS_H_

+#include <asm/types.h>
+
+#define BITOP_MASK(nr) (1UL << ((nr) % BITS_PER_LONG))
+#define BITOP_WORD(nr) ((nr) / BITS_PER_LONG)
+
+#ifndef HAVE_ARCH_ATOMIC_BITOPS
+
+#ifdef CONFIG_SMP
+#include <asm/spinlock.h>
+#include <asm/cache.h> /* we use L1_CACHE_BYTES */
+
+/* Use an array of spinlocks for our atomic_ts.
+ * Hash function to index into a different SPINLOCK.
+ * Since "a" is usually an address, use one spinlock per cacheline.
+ */
+# define ATOMIC_HASH_SIZE 4
+# define ATOMIC_HASH(a) (&(__atomic_hash[ (((unsigned long) a)/L1_CACHE_BYTES) & (ATOMIC_HASH_SIZE-1) ]))
+
+extern raw_spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned;
+
+/* Can't use raw_spin_lock_irq because of #include problems, so
+ * this is the substitute */
+#define _atomic_spin_lock_irqsave(l,f) do { \
+ raw_spinlock_t *s = ATOMIC_HASH(l); \
+ local_irq_save(f); \
+ __raw_spin_lock(s); \
+} while(0)
+
+#define _atomic_spin_unlock_irqrestore(l,f) do { \
+ raw_spinlock_t *s = ATOMIC_HASH(l); \
+ __raw_spin_unlock(s); \
+ local_irq_restore(f); \
+} while(0)
+
+
+#else
+# define _atomic_spin_lock_irqsave(l,f) do { local_irq_save(f); } while (0)
+# define _atomic_spin_unlock_irqrestore(l,f) do { local_irq_restore(f); } while (0)
+#endif
+
/*
* For the benefit of those who are trying to port Linux to another
* architecture, here are some C-language equivalents. You should
* recode these in the native assembly language, if at all possible.
- * To guarantee atomicity, these routines call cli() and sti() to
- * disable interrupts while they operate. (You have to provide inline
- * routines to cli() and sti().)
*
- * Also note, these routines assume that you have 32 bit longs.
- * You will have to change this if you are trying to port Linux to the
- * Alpha architecture or to a Cray. :-)
- *
* C language equivalents written by Theodore Ts'o, 9/26/92
*/

-extern __inline__ int set_bit(int nr,long * addr)
+static __inline__ void set_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long flags;
+
+ _atomic_spin_lock_irqsave(p, flags);
+ *p |= mask;
+ _atomic_spin_unlock_irqrestore(p, flags);
+}
+
+static __inline__ void clear_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long flags;
+
+ _atomic_spin_lock_irqsave(p, flags);
+ *p &= ~mask;
+ _atomic_spin_unlock_irqrestore(p, flags);
+}
+
+static __inline__ void change_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long flags;
+
+ _atomic_spin_lock_irqsave(p, flags);
+ *p ^= mask;
+ _atomic_spin_unlock_irqrestore(p, flags);
+}
+
+static __inline__ int test_and_set_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long old;
+ unsigned long flags;
+
+ _atomic_spin_lock_irqsave(p, flags);
+ old = *p;
+ *p = old | mask;
+ _atomic_spin_unlock_irqrestore(p, flags);
+
+ return (old & mask) != 0;
+}
+
+static __inline__ int test_and_clear_bit(int nr, volatile unsigned long *addr)
{
- int mask, retval;
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long old;
+ unsigned long flags;
+
+ _atomic_spin_lock_irqsave(p, flags);
+ old = *p;
+ *p = old & ~mask;
+ _atomic_spin_unlock_irqrestore(p, flags);

- addr += nr >> 5;
- mask = 1 << (nr & 0x1f);
- cli();
- retval = (mask & *addr) != 0;
- *addr |= mask;
- sti();
- return retval;
+ return (old & mask) != 0;
}

-extern __inline__ int clear_bit(int nr, long * addr)
+static __inline__ int test_and_change_bit(int nr, volatile unsigned long *addr)
{
- int mask, retval;
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long old;
+ unsigned long flags;
+
+ _atomic_spin_lock_irqsave(p, flags);
+ old = *p;
+ *p = old ^ mask;
+ _atomic_spin_unlock_irqrestore(p, flags);

- addr += nr >> 5;
- mask = 1 << (nr & 0x1f);
- cli();
- retval = (mask & *addr) != 0;
- *addr &= ~mask;
- sti();
- return retval;
+ return (old & mask) != 0;
}

-extern __inline__ int test_bit(int nr, const unsigned long * addr)
+#endif /* HAVE_ARCH_ATOMIC_BITOPS */
+
+#ifndef HAVE_ARCH_NON_ATOMIC_BITOPS
+
+static __inline__ void __set_bit(int nr, volatile unsigned long *addr)
{
- int mask;
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);

- addr += nr >> 5;
- mask = 1 << (nr & 0x1f);
- return ((mask & *addr) != 0);
+ *p |= mask;
}

+static __inline__ void __clear_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+
+ *p &= ~mask;
+}
+
+static __inline__ void __change_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+
+ *p ^= mask;
+}
+
+static __inline__ int __test_and_set_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long old = *p;
+
+ *p = old | mask;
+ return (old & mask) != 0;
+}
+
+static __inline__ int __test_and_clear_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long old = *p;
+
+ *p = old & ~mask;
+ return (old & mask) != 0;
+}
+
+static __inline__ int __test_and_change_bit(int nr,
+ volatile unsigned long *addr)
+{
+ unsigned long mask = BITOP_MASK(nr);
+ unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
+ unsigned long old = *p;
+
+ *p = old ^ mask;
+ return (old & mask) != 0;
+}
+
+static __inline__ int test_bit(int nr, __const__ volatile unsigned long *addr)
+{
+ return 1UL & (addr[BITOP_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
+}
+
+#endif /* HAVE_ARCH_NON_ATOMIC_BITOPS */
+
/*
* fls: find last bit set.
*/
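
For porters, the word/mask arithmetic above is easy to check in user
space. A throwaway sketch (not from the patch; it re-declares the two
macros locally rather than including the header):

#include <stdio.h>

#define BITS_PER_LONG   (8 * sizeof(unsigned long))
#define BITOP_MASK(nr)  (1UL << ((nr) % BITS_PER_LONG))
#define BITOP_WORD(nr)  ((nr) / BITS_PER_LONG)

int main(void)
{
        unsigned long map[4] = { 0 };   /* room for 4 * BITS_PER_LONG bits */
        unsigned long nr = 70;          /* crosses a word boundary on 64-bit */

        map[BITOP_WORD(nr)] |= BITOP_MASK(nr);  /* what __set_bit() does */
        printf("bit %lu -> word %lu, offset %lu, test_bit -> %lu\n",
               nr, (unsigned long)BITOP_WORD(nr),
               (unsigned long)(nr % BITS_PER_LONG),
               1UL & (map[BITOP_WORD(nr)] >> (nr % BITS_PER_LONG)));
        return 0;
}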

2006-01-26 03:30:45

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 2/12] generic __ffs()

This patch introduces the C-language equivalent of the function:
unsigned long __ffs(unsigned long word);

HAVE_ARCH___FFS_BITOPS is defined when the architecture has its own
version of this function.

This code is largely copied from:
include/asm-sparc64/bitops.h


Index: 2.6-git/include/asm-generic/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:08.000000000 +0900
+++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:09.000000000 +0900
@@ -193,6 +193,43 @@

#endif /* HAVE_ARCH_NON_ATOMIC_BITOPS */

+#ifndef HAVE_ARCH___FFS_BITOPS
+
+/**
+ * __ffs - find first bit in word.
+ * @word: The word to search
+ *
+ * Returns 0..BITS_PER_LONG-1
+ * Undefined if no bit exists, so code should check against 0 first.
+ */
+static inline unsigned long __ffs(unsigned long word)
+{
+ int b = 0, s;
+
+#if BITS_PER_LONG == 32
+ s = 16; if (word << 16 != 0) s = 0; b += s; word >>= s;
+ s = 8; if (word << 24 != 0) s = 0; b += s; word >>= s;
+ s = 4; if (word << 28 != 0) s = 0; b += s; word >>= s;
+ s = 2; if (word << 30 != 0) s = 0; b += s; word >>= s;
+ s = 1; if (word << 31 != 0) s = 0; b += s;
+
+ return b;
+#elif BITS_PER_LONG == 64
+ s = 32; if (word << 32 != 0) s = 0; b += s; word >>= s;
+ s = 16; if (word << 48 != 0) s = 0; b += s; word >>= s;
+ s = 8; if (word << 56 != 0) s = 0; b += s; word >>= s;
+ s = 4; if (word << 60 != 0) s = 0; b += s; word >>= s;
+ s = 2; if (word << 62 != 0) s = 0; b += s; word >>= s;
+ s = 1; if (word << 63 != 0) s = 0; b += s;
+
+ return b;
+#else
+#error BITS_PER_LONG not defined
+#endif
+}
+
+#endif /* HAVE_ARCH___FFS_BITOPS */
+
/*
* fls: find last bit set.
*/
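
The cascade is easy to sanity-check in user space. A throwaway harness
(not from the patch; assumes BITS_PER_LONG == 64 and a GCC-style
__builtin_ctzl; my_ffs is a local copy of the 64-bit branch above):

#include <assert.h>
#include <stdio.h>

static unsigned long my_ffs(unsigned long word)
{
        int b = 0, s;

        s = 32; if (word << 32 != 0) s = 0; b += s; word >>= s;
        s = 16; if (word << 48 != 0) s = 0; b += s; word >>= s;
        s = 8;  if (word << 56 != 0) s = 0; b += s; word >>= s;
        s = 4;  if (word << 60 != 0) s = 0; b += s; word >>= s;
        s = 2;  if (word << 62 != 0) s = 0; b += s; word >>= s;
        s = 1;  if (word << 63 != 0) s = 0; b += s;
        return b;
}

int main(void)
{
        unsigned long i;

        for (i = 0; i < 64; i++)
                assert(my_ffs(1UL << i) == (unsigned long)__builtin_ctzl(1UL << i));
        printf("__ffs cascade matches __builtin_ctzl for all single bits\n");
        return 0;
}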

2006-01-26 03:31:51

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 3/12] generic ffz()

This patch introduces the C-language equivalent of the function:
unsigned long ffz(unsigned long word);

HAVE_ARCH_FFZ_BITOPS is defined when the architecture has its own
version of this function.

This code is largely copied from:
include/asm-sparc64/bitops.h


Index: 2.6-git/include/asm-generic/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:09.000000000 +0900
+++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:10.000000000 +0900
@@ -230,6 +230,13 @@

#endif /* HAVE_ARCH___FFS_BITOPS */

+#ifndef HAVE_ARCH_FFZ_BITOPS
+
+/* Undefined if no bit is zero. */
+#define ffz(x) __ffs(~x)
+
+#endif /* HAVE_ARCH_FFZ_BITOPS */
+
/*
* fls: find last bit set.
*/

2006-01-26 03:32:31

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 4/12] generic fls() and fls64()

This patch introduces the C-language equivalents of the functions:
int fls(int x);
int fls64(__u64 x);

HAVE_ARCH_FLS_BITOPS is defined when the architecture has its own
fls().
HAVE_ARCH_FLS64_BITOPS is defined when the architecture has its own
fls64().

This code is largely copied from:
include/linux/bitops.h


Index: 2.6-git/include/asm-generic/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:10.000000000 +0900
+++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:10.000000000 +0900
@@ -237,12 +237,54 @@

#endif /* HAVE_ARCH_FFZ_BITOPS */

+#ifndef HAVE_ARCH_FLS_BITOPS
+
/*
* fls: find last bit set.
*/

-#define fls(x) generic_fls(x)
-#define fls64(x) generic_fls64(x)
+static __inline__ int fls(int x)
+{
+ int r = 32;
+
+ if (!x)
+ return 0;
+ if (!(x & 0xffff0000u)) {
+ x <<= 16;
+ r -= 16;
+ }
+ if (!(x & 0xff000000u)) {
+ x <<= 8;
+ r -= 8;
+ }
+ if (!(x & 0xf0000000u)) {
+ x <<= 4;
+ r -= 4;
+ }
+ if (!(x & 0xc0000000u)) {
+ x <<= 2;
+ r -= 2;
+ }
+ if (!(x & 0x80000000u)) {
+ x <<= 1;
+ r -= 1;
+ }
+ return r;
+}
+
+#endif /* HAVE_ARCH_FLS_BITOPS */
+
+#ifndef HAVE_ARCH_FLS64_BITOPS
+
+static inline int fls64(__u64 x)
+{
+ __u32 h = x >> 32;
+ if (h)
+ return fls(h) + 32;
+ return fls(x);
+}
+
+#endif /* HAVE_ARCH_FLS64_BITOPS */

#ifdef __KERNEL__
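
Note that fls() takes an int, so the 64-bit path must hand it the high
word h rather than the truncated x. A quick harness mirroring the code
above (my_fls and my_fls64 are local stand-in names, not from the patch):

#include <assert.h>
#include <stdio.h>

static int my_fls(int x)
{
        int r = 32;

        if (!x)
                return 0;
        if (!(x & 0xffff0000u)) { x <<= 16; r -= 16; }
        if (!(x & 0xff000000u)) { x <<= 8;  r -= 8;  }
        if (!(x & 0xf0000000u)) { x <<= 4;  r -= 4;  }
        if (!(x & 0xc0000000u)) { x <<= 2;  r -= 2;  }
        if (!(x & 0x80000000u)) { x <<= 1;  r -= 1;  }
        return r;
}

static int my_fls64(unsigned long long x)
{
        unsigned int h = x >> 32;

        return h ? my_fls(h) + 32 : my_fls(x);
}

int main(void)
{
        assert(my_fls(0) == 0);                 /* fls(0) is defined as 0 */
        assert(my_fls(1) == 1);                 /* unlike __ffs, fls is 1-based */
        assert(my_fls(0x80000000u) == 32);
        assert(my_fls64(1ULL << 40) == 41);     /* exercises the high-half path */
        printf("fls/fls64 behave as documented\n");
        return 0;
}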

2006-01-26 03:33:16

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 5/12] generic find_{next,first}{,_zero}_bit()

This patch introduces the C-language equivalents of the functions below:

unsigned long find_next_bit(const unsigned long *addr, unsigned long size,
                            unsigned long offset);
unsigned long find_next_zero_bit(const unsigned long *addr, unsigned long size,
                                 unsigned long offset);
unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size);
unsigned long find_first_bit(const unsigned long *addr, unsigned long size);

HAVE_ARCH_FIND_BITOPS is defined when the architecture has its own
version of these functions.

This code is largely copied from:
arch/powerpc/lib/bitops.c


Index: 2.6-git/include/asm-generic/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:10.000000000 +0900
+++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:10.000000000 +0900
@@ -286,6 +286,101 @@

#endif /* HAVE_ARCH_FLS64_BITOPS */

+#ifndef HAVE_ARCH_FIND_BITOPS
+
+/**
+ * find_next_bit - find the next set bit in a memory region
+ * @addr: The address to base the search on
+ * @offset: The bitnumber to start searching at
+ * @size: The maximum size to search
+ */
+static inline unsigned long find_next_bit(const unsigned long *addr,
+ unsigned long size, unsigned long offset)
+{
+ const unsigned long *p = addr + BITOP_WORD(offset);
+ unsigned long result = offset & ~(BITS_PER_LONG-1);
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset %= BITS_PER_LONG;
+ if (offset) {
+ tmp = *(p++);
+ tmp &= (~0UL << offset);
+ if (size < BITS_PER_LONG)
+ goto found_first;
+ if (tmp)
+ goto found_middle;
+ size -= BITS_PER_LONG;
+ result += BITS_PER_LONG;
+ }
+ while (size & ~(BITS_PER_LONG-1)) {
+ if ((tmp = *(p++)))
+ goto found_middle;
+ result += BITS_PER_LONG;
+ size -= BITS_PER_LONG;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+
+found_first:
+ tmp &= (~0UL >> (BITS_PER_LONG - size));
+ if (tmp == 0UL) /* Are any bits set? */
+ return result + size; /* Nope. */
+found_middle:
+ return result + __ffs(tmp);
+}
+
+/*
+ * This implementation of find_{first,next}_zero_bit was stolen from
+ * Linus' asm-alpha/bitops.h.
+ */
+static inline unsigned long find_next_zero_bit(const unsigned long *addr,
+ unsigned long size, unsigned long offset)
+{
+ const unsigned long *p = addr + BITOP_WORD(offset);
+ unsigned long result = offset & ~(BITS_PER_LONG-1);
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset %= BITS_PER_LONG;
+ if (offset) {
+ tmp = *(p++);
+ tmp |= ~0UL >> (BITS_PER_LONG - offset);
+ if (size < BITS_PER_LONG)
+ goto found_first;
+ if (~tmp)
+ goto found_middle;
+ size -= BITS_PER_LONG;
+ result += BITS_PER_LONG;
+ }
+ while (size & ~(BITS_PER_LONG-1)) {
+ if (~(tmp = *(p++)))
+ goto found_middle;
+ result += BITS_PER_LONG;
+ size -= BITS_PER_LONG;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+
+found_first:
+ tmp |= ~0UL << size;
+ if (tmp == ~0UL) /* Are any bits zero? */
+ return result + size; /* Nope. */
+found_middle:
+ return result + ffz(tmp);
+}
+
+#define find_first_zero_bit(addr, size) find_next_zero_bit((addr), (size), 0)
+#define find_first_bit(addr, size) find_next_bit((addr), (size), 0)
+
+#endif /* HAVE_ARCH_FIND_BITOPS */
+
#ifdef __KERNEL__

/*

2006-01-26 03:34:10

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 6/12] generic sched_find_first_bit()

This patch introduces the C-language equivalent of the function:
int sched_find_first_bit(const unsigned long *b);

HAVE_ARCH_SCHED_BITOPS is defined when the architecture has its own
version of this function.

This code is largely copied from:
include/asm-powerpc/bitops.h

Index: 2.6-git/include/asm-generic/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:10.000000000 +0900
+++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:10.000000000 +0900
@@ -383,6 +383,41 @@

#ifdef __KERNEL__

+#ifndef HAVE_ARCH_SCHED_BITOPS
+
+#include <linux/compiler.h> /* unlikely() */
+
+/*
+ * Every architecture must define this function. It's the fastest
+ * way of searching a 140-bit bitmap where the first 100 bits are
+ * unlikely to be set. It's guaranteed that at least one of the 140
+ * bits is cleared.
+ */
+static inline int sched_find_first_bit(const unsigned long *b)
+{
+#if BITS_PER_LONG == 64
+ if (unlikely(b[0]))
+ return __ffs(b[0]);
+ if (unlikely(b[1]))
+ return __ffs(b[1]) + 64;
+ return __ffs(b[2]) + 128;
+#elif BITS_PER_LONG == 32
+ if (unlikely(b[0]))
+ return __ffs(b[0]);
+ if (unlikely(b[1]))
+ return __ffs(b[1]) + 32;
+ if (unlikely(b[2]))
+ return __ffs(b[2]) + 64;
+ if (b[3])
+ return __ffs(b[3]) + 96;
+ return __ffs(b[4]) + 128;
+#else
+#error BITS_PER_LONG not defined
+#endif
+}
+
+#endif /* HAVE_ARCH_SCHED_BITOPS */
+
/*
* ffs: find first bit set. This is defined the same way as
* the libc and compiler builtin ffs routines, therefore
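
A quick user-space illustration of how the scheduler consumes this
(sched_find_first_bit64 is a local copy of the 64-bit branch above,
with __ffs() replaced by the GCC builtin so it compiles standalone):

#include <stdio.h>

static int sched_find_first_bit64(const unsigned long *b)
{
        if (b[0])
                return __builtin_ctzl(b[0]);
        if (b[1])
                return __builtin_ctzl(b[1]) + 64;
        return __builtin_ctzl(b[2]) + 128;
}

int main(void)
{
        unsigned long prio_bitmap[3] = { 0, 0, 0 };     /* 140-bit map, 3 words */
        int prio = 138;

        prio_bitmap[prio / 64] |= 1UL << (prio % 64);
        printf("first set priority: %d\n", sched_find_first_bit64(prio_bitmap));
        return 0;
}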

2006-01-26 03:35:12

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 7/12] generic ffs()

This patch introduces the C-language equivalent of the function:
int ffs(int x);

HAVE_ARCH_FFS_BITOPS is defined when the architecture has its own
version of this function.

This code is largely copied from:
include/linux/bitops.h


Index: 2.6-git/include/asm-generic/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:10.000000000 +0900
+++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:10.000000000 +0900
@@ -418,13 +418,45 @@

#endif /* HAVE_ARCH_SCHED_BITOPS */

+#ifndef HAVE_ARCH_FFS_BITOPS
+
/*
* ffs: find first bit set. This is defined the same way as
* the libc and compiler builtin ffs routines, therefore
* differs in spirit from the above ffz (man ffs).
*/

-#define ffs(x) generic_ffs(x)
+static inline int ffs(int x)
+{
+ int r = 1;
+
+ if (!x)
+ return 0;
+ if (!(x & 0xffff)) {
+ x >>= 16;
+ r += 16;
+ }
+ if (!(x & 0xff)) {
+ x >>= 8;
+ r += 8;
+ }
+ if (!(x & 0xf)) {
+ x >>= 4;
+ r += 4;
+ }
+ if (!(x & 3)) {
+ x >>= 2;
+ r += 2;
+ }
+ if (!(x & 1)) {
+ x >>= 1;
+ r += 1;
+ }
+ return r;
+}
+
+#endif /* HAVE_ARCH_FFS_BITOPS */
+

/*
* hweightN: returns the hamming weight (i.e. the number
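
The off-by-one between the two conventions is worth spelling out; a tiny
check using the GCC builtins as stand-ins for ffs() and __ffs():

#include <assert.h>

int main(void)
{
        /* libc-style ffs() is 1-based and defines ffs(0) == 0;
         * the kernel's __ffs() is 0-based and undefined for 0. */
        assert(__builtin_ffs(0) == 0);
        assert(__builtin_ffs(1) == 1);          /* bit 0 reported as 1 */
        assert(__builtin_ffs(0x40) == 7);       /* bit 6 reported as 7 */
        assert(__builtin_ctz(0x40) == 6);       /* the __ffs()-style answer */
        return 0;
}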

2006-01-26 03:36:10

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 8/12] generic hweight{32,16,8}()

This patch introduces the C-language equivalents of the functions below:
unsigned int hweight32(unsigned int w);
unsigned int hweight16(unsigned int w);
unsigned int hweight8(unsigned int w);

HAVE_ARCH_HWEIGHT_BITOPS is defined when the architecture has its own
version of these functions.

This code is largely copied from:
include/linux/bitops.h

Index: 2.6-git/include/asm-generic/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:10.000000000 +0900
+++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:11.000000000 +0900
@@ -458,14 +458,38 @@
#endif /* HAVE_ARCH_FFS_BITOPS */


+#ifndef HAVE_ARCH_HWEIGHT_BITOPS
+
/*
* hweightN: returns the hamming weight (i.e. the number
* of bits set) of a N-bit word
*/

-#define hweight32(x) generic_hweight32(x)
-#define hweight16(x) generic_hweight16(x)
-#define hweight8(x) generic_hweight8(x)
+static inline unsigned int hweight32(unsigned int w)
+{
+ unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
+ res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
+ res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
+ res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
+ return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
+}
+
+static inline unsigned int hweight16(unsigned int w)
+{
+ unsigned int res = (w & 0x5555) + ((w >> 1) & 0x5555);
+ res = (res & 0x3333) + ((res >> 2) & 0x3333);
+ res = (res & 0x0F0F) + ((res >> 4) & 0x0F0F);
+ return (res & 0x00FF) + ((res >> 8) & 0x00FF);
+}
+
+static inline unsigned int hweight8(unsigned int w)
+{
+ unsigned int res = (w & 0x55) + ((w >> 1) & 0x55);
+ res = (res & 0x33) + ((res >> 2) & 0x33);
+ return (res & 0x0F) + ((res >> 4) & 0x0F);
+}
+
+#endif /* HAVE_ARCH_HWEIGHT_BITOPS */

#endif /* __KERNEL__ */
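
The mask-and-add ladder is easy to verify in user space; a quick harness
(not from the patch; assumes __builtin_popcount, and my_hweight32 is a
local copy of the code above, exercised over a pseudo-random sample):

#include <assert.h>
#include <stdio.h>

static unsigned int my_hweight32(unsigned int w)
{
        unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
        res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
        res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
        res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
        return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
}

int main(void)
{
        unsigned int w, i;

        for (i = 0, w = 1; i < 1000000; i++, w = w * 1664525 + 1013904223)
                assert(my_hweight32(w) == (unsigned int)__builtin_popcount(w));
        printf("hweight32 matches __builtin_popcount\n");
        return 0;
}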

2006-01-26 03:36:48

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 9/12] generic hweight64()

This patch introduces the C-language equivalent of the function:
unsigned long hweight64(__u64 w);

HAVE_ARCH_HWEIGHT64_BITOPS is defined when the architecture has its own
version of this function.

This code is largely copied from:
include/linux/bitops.h

Index: 2.6-git/include/asm-generic/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:11.000000000 +0900
+++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:11.000000000 +0900
@@ -491,6 +491,25 @@

#endif /* HAVE_ARCH_HWEIGHT_BITOPS */

+#ifndef HAVE_ARCH_HWEIGHT64_BITOPS
+
+static inline unsigned long hweight64(__u64 w)
+{
+#if BITS_PER_LONG < 64
+ return hweight32((unsigned int)(w >> 32)) + hweight32((unsigned int)w);
+#else
+ u64 res;
+ res = (w & 0x5555555555555555ul) + ((w >> 1) & 0x5555555555555555ul);
+ res = (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
+ res = (res & 0x0F0F0F0F0F0F0F0Ful) + ((res >> 4) & 0x0F0F0F0F0F0F0F0Ful);
+ res = (res & 0x00FF00FF00FF00FFul) + ((res >> 8) & 0x00FF00FF00FF00FFul);
+ res = (res & 0x0000FFFF0000FFFFul) + ((res >> 16) & 0x0000FFFF0000FFFFul);
+ return (res & 0x00000000FFFFFFFFul) + ((res >> 32) & 0x00000000FFFFFFFFul);
+#endif
+}
+
+#endif /* HAVE_ARCH_HWEIGHT64_BITOPS */
+
#endif /* __KERNEL__ */

#endif /* _ASM_GENERIC_BITOPS_H */

2006-01-26 03:38:04

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 10/12] generic ext2_{set,clear,test,find_first_zero,find_next_zero}_bit()

This patch introduces the C-language equivalents of the functions below:

int ext2_set_bit(int nr, volatile unsigned long *addr);
int ext2_clear_bit(int nr, volatile unsigned long *addr);
int ext2_test_bit(int nr, const volatile unsigned long *addr);
unsigned long ext2_find_first_zero_bit(const unsigned long *addr,
                                       unsigned long size);
unsigned long ext2_find_next_zero_bit(const unsigned long *addr,
                                      unsigned long size,
                                      unsigned long offset);

HAVE_ARCH_EXT2_NON_ATOMIC_BITOPS is defined when the architecture has its
own version of these functions.

This code is largely copied from:
include/asm-powerpc/bitops.h
include/asm-parisc/bitops.h

Index: 2.6-git/include/asm-generic/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:11.000000000 +0900
+++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:11.000000000 +0900
@@ -5,6 +5,7 @@

#define BITOP_MASK(nr) (1UL << ((nr) % BITS_PER_LONG))
#define BITOP_WORD(nr) ((nr) / BITS_PER_LONG)
+#define BITOP_LE_SWIZZLE ((BITS_PER_LONG-1) & ~0x7)

#ifndef HAVE_ARCH_ATOMIC_BITOPS

@@ -510,6 +511,140 @@

#endif /* HAVE_ARCH_HWEIGHT64_BITOPS */

+#ifndef HAVE_ARCH_EXT2_NON_ATOMIC_BITOPS
+
+#include <asm/byteorder.h>
+
+#if defined(__LITTLE_ENDIAN)
+
+static __inline__ int generic_test_le_bit(unsigned long nr,
+ __const__ unsigned long *addr)
+{
+ __const__ unsigned char *tmp = (__const__ unsigned char *) addr;
+ return (tmp[nr >> 3] >> (nr & 7)) & 1;
+}
+
+#define generic___set_le_bit(nr, addr) __set_bit(nr, addr)
+#define generic___clear_le_bit(nr, addr) __clear_bit(nr, addr)
+
+#define generic_test_and_set_le_bit(nr, addr) test_and_set_bit(nr, addr)
+#define generic_test_and_clear_le_bit(nr, addr) test_and_clear_bit(nr, addr)
+
+#define generic___test_and_set_le_bit(nr, addr) __test_and_set_bit(nr, addr)
+#define generic___test_and_clear_le_bit(nr, addr) __test_and_clear_bit(nr, addr)
+
+#define generic_find_next_zero_le_bit(addr, size, offset) find_next_zero_bit(addr, size, offset)
+
+#elif defined(__BIG_ENDIAN)
+
+static __inline__ int generic_test_le_bit(unsigned long nr,
+ __const__ unsigned long *addr)
+{
+ __const__ unsigned char *tmp = (__const__ unsigned char *) addr;
+ return (tmp[nr >> 3] >> (nr & 7)) & 1;
+}
+
+#define generic___set_le_bit(nr, addr) \
+ __set_bit((nr) ^ BITOP_LE_SWIZZLE, (addr))
+#define generic___clear_le_bit(nr, addr) \
+ __clear_bit((nr) ^ BITOP_LE_SWIZZLE, (addr))
+
+#define generic_test_and_set_le_bit(nr, addr) \
+ test_and_set_bit((nr) ^ BITOP_LE_SWIZZLE, (addr))
+#define generic_test_and_clear_le_bit(nr, addr) \
+ test_and_clear_bit((nr) ^ BITOP_LE_SWIZZLE, (addr))
+
+#define generic___test_and_set_le_bit(nr, addr) \
+ __test_and_set_bit((nr) ^ BITOP_LE_SWIZZLE, (addr))
+#define generic___test_and_clear_le_bit(nr, addr) \
+ __test_and_clear_bit((nr) ^ BITOP_LE_SWIZZLE, (addr))
+
+/* include/linux/byteorder does not support "unsigned long" type */
+static inline unsigned long ext2_swabp(const unsigned long * x)
+{
+#if BITS_PER_LONG == 64
+ return (unsigned long) __swab64p((u64 *) x);
+#elif BITS_PER_LONG == 32
+ return (unsigned long) __swab32p((u32 *) x);
+#else
+#error BITS_PER_LONG not defined
+#endif
+}
+
+/* include/linux/byteorder doesn't support "unsigned long" type */
+static inline unsigned long ext2_swab(const unsigned long y)
+{
+#if BITS_PER_LONG == 64
+ return (unsigned long) __swab64((u64) y);
+#elif BITS_PER_LONG == 32
+ return (unsigned long) __swab32((u32) y);
+#else
+#error BITS_PER_LONG not defined
+#endif
+}
+
+static __inline__ unsigned long generic_find_next_zero_le_bit(const unsigned long *addr,
+ unsigned long size, unsigned long offset)
+{
+ const unsigned long *p = addr + BITOP_WORD(offset);
+ unsigned long result = offset & ~(BITS_PER_LONG - 1);
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= (BITS_PER_LONG - 1UL);
+ if (offset) {
+ tmp = ext2_swabp(p++);
+ tmp |= (~0UL >> (BITS_PER_LONG - offset));
+ if (size < BITS_PER_LONG)
+ goto found_first;
+ if (~tmp)
+ goto found_middle;
+ size -= BITS_PER_LONG;
+ result += BITS_PER_LONG;
+ }
+
+ while (size & ~(BITS_PER_LONG - 1)) {
+ if (~(tmp = *(p++)))
+ goto found_middle_swap;
+ result += BITS_PER_LONG;
+ size -= BITS_PER_LONG;
+ }
+ if (!size)
+ return result;
+ tmp = ext2_swabp(p);
+found_first:
+ tmp |= ~0UL << size;
+ if (tmp == ~0UL) /* Are any bits zero? */
+ return result + size; /* Nope. Skip ffz */
+found_middle:
+ return result + ffz(tmp);
+
+found_middle_swap:
+ return result + ffz(ext2_swab(tmp));
+}
+#else
+#error "Please fix <asm/byteorder.h>"
+#endif
+
+#define generic_find_first_zero_le_bit(addr, size) \
+ generic_find_next_zero_le_bit((addr), (size), 0)
+
+#define ext2_set_bit(nr,addr) \
+ generic___test_and_set_le_bit((nr),(unsigned long *)(addr))
+#define ext2_clear_bit(nr,addr) \
+ generic___test_and_clear_le_bit((nr),(unsigned long *)(addr))
+
+#define ext2_test_bit(nr,addr) \
+ generic_test_le_bit((nr),(unsigned long *)(addr))
+#define ext2_find_first_zero_bit(addr, size) \
+ generic_find_first_zero_le_bit((unsigned long *)(addr), (size))
+#define ext2_find_next_zero_bit(addr, size, off) \
+ generic_find_next_zero_le_bit((unsigned long *)(addr), (size), (off))
+
+#endif /* HAVE_ARCH_EXT2_NON_ATOMIC_BITOPS */
+
#endif /* __KERNEL__ */

#endif /* _ASM_GENERIC_BITOPS_H */
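
The BITOP_LE_SWIZZLE trick is the heart of the big-endian branch:
XOR-ing the bit number mirrors the byte index inside the word, so
little-endian bit n of an ext2 bitmap lands on the right native bit.
A small demo of the mapping (illustrative only; assumes a 64-bit word,
where the swizzle constant is 56):

#include <stdio.h>

int main(void)
{
        int swizzle = (64 - 1) & ~0x7;  /* BITOP_LE_SWIZZLE on 64-bit: 56 */
        int nr;

        for (nr = 0; nr < 64; nr += 8)
                printf("le bit %2d -> native bit %2d (byte %d -> byte %d)\n",
                       nr, nr ^ swizzle, nr / 8, (nr ^ swizzle) / 8);
        return 0;
}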

2006-01-26 03:38:38

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 11/12] generic ext2_{set,clear}_bit_atomic()

This patch introduces the C-language equivalents of the functions below:
int ext2_set_bit_atomic(spinlock_t *lock, int nr,
                        volatile unsigned long *addr);
int ext2_clear_bit_atomic(spinlock_t *lock, int nr,
                          volatile unsigned long *addr);

HAVE_ARCH_EXT2_ATOMIC_BITOPS is defined when the architecture has its own
version of these functions.

This code is largely copied from:
include/asm-sparc/bitops.h

Index: 2.6-git/include/asm-generic/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:11.000000000 +0900
+++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:12.000000000 +0900
@@ -645,6 +645,28 @@

#endif /* HAVE_ARCH_EXT2_NON_ATOMIC_BITOPS */

+#ifndef HAVE_ARCH_EXT2_ATOMIC_BITOPS
+
+#define ext2_set_bit_atomic(lock, nr, addr) \
+ ({ \
+ int ret; \
+ spin_lock(lock); \
+ ret = ext2_set_bit((nr), (unsigned long *)(addr)); \
+ spin_unlock(lock); \
+ ret; \
+ })
+
+#define ext2_clear_bit_atomic(lock, nr, addr) \
+ ({ \
+ int ret; \
+ spin_lock(lock); \
+ ret = ext2_clear_bit((nr), (unsigned long *)(addr)); \
+ spin_unlock(lock); \
+ ret; \
+ })
+
+#endif /* HAVE_ARCH_EXT2_ATOMIC_BITOPS */
+
#endif /* __KERNEL__ */

#endif /* _ASM_GENERIC_BITOPS_H */

2006-01-26 03:39:09

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH 12/12] generic minix_{test_and_set,set,test_and_clear,test,find_first_zero}_bit()

This patch introduces the C-language equivalents of the functions below:

int minix_test_and_set_bit(int nr, volatile unsigned long *addr);
void minix_set_bit(int nr, volatile unsigned long *addr);
int minix_test_and_clear_bit(int nr, volatile unsigned long *addr);
int minix_test_bit(int nr, const volatile unsigned long *addr);
unsigned long minix_find_first_zero_bit(const unsigned long *addr,
                                        unsigned long size);

HAVE_ARCH_MINIX_BITOPS is defined when the architecture has its own
version of these functions.

This code is largely copied from:
include/asm-sparc/bitops.h

Index: 2.6-git/include/asm-generic/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:12.000000000 +0900
+++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:12.000000000 +0900
@@ -667,6 +667,21 @@

#endif /* HAVE_ARCH_EXT2_ATOMIC_BITOPS */

+#ifndef HAVE_ARCH_MINIX_BITOPS
+
+#define minix_test_and_set_bit(nr,addr) \
+ __test_and_set_bit((nr),(unsigned long *)(addr))
+#define minix_set_bit(nr,addr) \
+ __set_bit((nr),(unsigned long *)(addr))
+#define minix_test_and_clear_bit(nr,addr) \
+ __test_and_clear_bit((nr),(unsigned long *)(addr))
+#define minix_test_bit(nr,addr) \
+ test_bit((nr),(unsigned long *)(addr))
+#define minix_find_first_zero_bit(addr,size) \
+ find_first_zero_bit((unsigned long *)(addr),(size))
+
+#endif /* HAVE_ARCH_MINIX_BITOPS */
+
#endif /* __KERNEL__ */

#endif /* _ASM_GENERIC_BITOPS_H */

2006-01-26 03:51:47

by Akinobu Mita

[permalink] [raw]
Subject: Re: [PATCH 5/6] fix warning on test_ti_thread_flag()

On Wed, Jan 25, 2006 at 12:02:21PM -0800, Chen, Kenneth W wrote:
> Geert Uytterhoeven wrote on Wednesday, January 25, 2006 9:19 AM
> > > I don't think you need to change the flags size.
> >
> > Passing a pointer to a 32-bit entity to a function that takes a
> > pointer to a 64-bit entity is a classical endianness bug. So it's
> > better to change it, before people copy the code to a big endian
> > platform.
>
> Well, x86-64 and linux-ia64 both use little endian. I don't
> understand why you are barking at us about a big-endian issue.
>

I can fix this without changing the flags size for those architectures.

1. Introduce *_le_bit() bit operations which take void *addr
   (I already have these functions in the scope of
   HAVE_ARCH_EXT2_NON_ATOMIC_BITOPS in my patch)

2. Change flags to __u8 flags[4] or __u8 flags[8] for each architecture.

3. Use *_le_bit() in include/linux/thread_info.h

2006-01-26 04:12:50

by Paul Mackerras

[permalink] [raw]
Subject: Re: [PATCH 5/6] fix warning on test_ti_thread_flag()

Akinobu Mita writes:

> I can fix this without changing the flags size for those architectures.
>
> 1. Introduce *_le_bit() bit operations which take void *addr
>    (I already have these functions in the scope of
>    HAVE_ARCH_EXT2_NON_ATOMIC_BITOPS in my patch)
>
> 2. Change flags to __u8 flags[4] or __u8 flags[8] for each architecture.
>
> 3. Use *_le_bit() in include/linux/thread_info.h

Please don't do this, you'll break the powerpc assembly code that
tests bits in thread_info()->flags.

Paul.

2006-01-26 04:34:32

by Edgar Toernig

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

Richard Henderson wrote:
>
> On Wed, Jan 25, 2006 at 08:02:50PM +0000, Russell King wrote:
> > > + s = 16; if (word << 16 != 0) s = 0; b += s; word >>= s;
> > > + s = 8; if (word << 24 != 0) s = 0; b += s; word >>= s;
> > > + s = 4; if (word << 28 != 0) s = 0; b += s; word >>= s;
> ...
> > Basically, shifts which depend on a variable are more expensive than
> > constant-based shifts.
>
> Actually, they're all constant shifts. Just written stupidly.

Why shift at all?

int ffs(u32 word)
{
int bit = 0;

word &= -word; // only keep the lsb.

if (word & 0xffff0000) bit |= 16;
if (word & 0xff00ff00) bit |= 8;
if (word & 0xf0f0f0f0) bit |= 4;
if (word & 0xcccccccc) bit |= 2;
if (word & 0xaaaaaaaa) bit |= 1;

return bit;
}

Ciao, ET.
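
Edgar's version is __ffs()-like: it returns the 0-based index and is
only meaningful for a non-zero word. A quick check against the GCC
builtin (toernig_ffs is his code, retyped for user space):

#include <assert.h>

static int toernig_ffs(unsigned int word)
{
        int bit = 0;

        word &= -word;                  /* only keep the lsb */
        if (word & 0xffff0000) bit |= 16;
        if (word & 0xff00ff00) bit |= 8;
        if (word & 0xf0f0f0f0) bit |= 4;
        if (word & 0xcccccccc) bit |= 2;
        if (word & 0xaaaaaaaa) bit |= 1;
        return bit;
}

int main(void)
{
        unsigned int i;

        for (i = 0; i < 32; i++)
                assert(toernig_ffs(1u << i) == (int)__builtin_ctz(1u << i));
        return 0;
}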

2006-01-26 07:05:49

by Balbir Singh

[permalink] [raw]
Subject: Re: [PATCH 9/12] generic hweight64()

On 1/26/06, Akinobu Mita <[email protected]> wrote:
> This patch introduces the C-language equivalent of the function:
> unsigned long hweight64(__u64 w);
>
> HAVE_ARCH_HWEIGHT64_BITOPS is defined when the architecture has its own
> version of this function.
>
> This code is largely copied from:
> include/linux/bitops.h
>
> Index: 2.6-git/include/asm-generic/bitops.h
> ===================================================================
> --- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:11.000000000 +0900
> +++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:11.000000000 +0900
> @@ -491,6 +491,25 @@
>
> #endif /* HAVE_ARCH_HWEIGHT_BITOPS */
>
> +#ifndef HAVE_ARCH_HWEIGHT64_BITOPS
> +
> +static inline unsigned long hweight64(__u64 w)
> +{
> +#if BITS_PER_LONG < 64
> + return hweight32((unsigned int)(w >> 32)) + hweight32((unsigned int)w);
> +#else
> + u64 res;
> + res = (w & 0x5555555555555555ul) + ((w >> 1) & 0x5555555555555555ul);

This can be replaced with

res = (w-((w >> 1) & 0x5555555555555555ul));

> + res = (res & 0x3333333333333333ul) + ((res >> 2) & 0x3333333333333333ul);
> + res = (res & 0x0F0F0F0F0F0F0F0Ful) + ((res >> 4) & 0x0F0F0F0F0F0F0F0Ful);

res = (res+(res>>4))&0x0F0F0F0F0F0F0F0Ful;

> + res = (res & 0x00FF00FF00FF00FFul) + ((res >> 8) & 0x00FF00FF00FF00FFul);
> + res = (res & 0x0000FFFF0000FFFFul) + ((res >> 16) & 0x0000FFFF0000FFFFul);
> + return (res & 0x00000000FFFFFFFFul) + ((res >> 32) & 0x00000000FFFFFFFFul);
> +#endif
> +}
> +
> +#endif /* HAVE_ARCH_HWEIGHT64_BITOPS */
> +
> #endif /* __KERNEL__ */
>
> #endif /* _ASM_GENERIC_BITOPS_H */
> -

Please see Don Knuth's MMIXWare for more credits and improvements to this
algorithm.

Balbir

2006-01-26 07:12:11

by Balbir Singh

[permalink] [raw]
Subject: Re: [PATCH 8/12] generic hweight{32,16,8}()

On 1/26/06, Akinobu Mita <[email protected]> wrote:
> This patch introduces the C-language equivalents of the functions below:
> unsigned int hweight32(unsigned int w);
> unsigned int hweight16(unsigned int w);
> unsigned int hweight8(unsigned int w);
>
> HAVE_ARCH_HWEIGHT_BITOPS is defined when the architecture has its own
> version of these functions.
>
> This code is largely copied from:
> include/linux/bitops.h
>
> Index: 2.6-git/include/asm-generic/bitops.h
> ===================================================================
> --- 2.6-git.orig/include/asm-generic/bitops.h 2006-01-25 19:14:10.000000000 +0900
> +++ 2.6-git/include/asm-generic/bitops.h 2006-01-25 19:14:11.000000000 +0900
> @@ -458,14 +458,38 @@
> #endif /* HAVE_ARCH_FFS_BITOPS */
>
>
> +#ifndef HAVE_ARCH_HWEIGHT_BITOPS
> +
> /*
> * hweightN: returns the hamming weight (i.e. the number
> * of bits set) of a N-bit word
> */
>
> -#define hweight32(x) generic_hweight32(x)
> -#define hweight16(x) generic_hweight16(x)
> -#define hweight8(x) generic_hweight8(x)
> +static inline unsigned int hweight32(unsigned int w)
> +{
> + unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
> + res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
> + res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
> + res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
> + return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
> +}
> +

This can be replaced with

register int res=w;
res=res-((res>>1)&0x55555555);
res=(res&0x33333333)+((res>>2)&0x33333333);
res=(res+(res>>4))&0x0f0f0f0f;
res=res+(res>>8);
return (res+(res>>16)) & 0xff;

Similar optimizations can be applied to the routines below. Please see
http://www-cs-faculty.stanford.edu/~knuth/mmixware.html errata and the code
in mmix-arith.w for the complete set of optimizations and credits.

http://www.jjj.de/fxt/fxtbook.pdf is another inspirational source for
such algorithms.

> +static inline unsigned int hweight16(unsigned int w)
> +{
> + unsigned int res = (w & 0x5555) + ((w >> 1) & 0x5555);
> + res = (res & 0x3333) + ((res >> 2) & 0x3333);
> + res = (res & 0x0F0F) + ((res >> 4) & 0x0F0F);
> + return (res & 0x00FF) + ((res >> 8) & 0x00FF);
> +}
> +
> +static inline unsigned int hweight8(unsigned int w)
> +{
> + unsigned int res = (w & 0x55) + ((w >> 1) & 0x55);
> + res = (res & 0x33) + ((res >> 2) & 0x33);
> + return (res & 0x0F) + ((res >> 4) & 0x0F);
> +}
> +
> +#endif /* HAVE_ARCH_HWEIGHT_BITOPS */
>
> #endif /* __KERNEL__ */

Regards,
Balbir
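
Balbir's variant is easy to verify against a builtin; a quick harness
(knuth_hweight32 is the suggested form assembled into a function, and
the check runs over a pseudo-random sample via __builtin_popcount):

#include <assert.h>
#include <stdio.h>

static unsigned int knuth_hweight32(unsigned int w)
{
        unsigned int res = w - ((w >> 1) & 0x55555555);

        res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
        res = (res + (res >> 4)) & 0x0f0f0f0f;
        res = res + (res >> 8);
        return (res + (res >> 16)) & 0xff;
}

int main(void)
{
        unsigned int w, i;

        for (i = 0, w = 1; i < 1000000; i++, w = w * 1664525 + 1013904223)
                assert(knuth_hweight32(w) == (unsigned int)__builtin_popcount(w));
        printf("optimized form agrees with popcount\n");
        return 0;
}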

2006-01-26 08:22:01

by Michael Tokarev

[permalink] [raw]
Subject: Re: [PATCH 3/12] generic ffz()

Akinobu Mita wrote:
> This patch introduces the C-language equivalent of the function:
> unsigned long ffz(unsigned long word);
[]
> +#define ffz(x) __ffs(~x)

please consider using
#define ffz(x) __ffs(~(x))

instead -- note the extra ()-pair

/mjt
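
The missing parentheses are not cosmetic: the macro argument is
substituted textually, so an expression argument changes meaning. A
small demonstration (ffz_bad, ffz_good and my_ffs are hypothetical
local names; my_ffs uses the GCC builtin):

#include <assert.h>

static int my_ffs(unsigned int w) { return __builtin_ctz(w); }

#define ffz_bad(x)  my_ffs(~x)          /* ~ applies to "a" only below */
#define ffz_good(x) my_ffs(~(x))

int main(void)
{
        unsigned int a = 0x0f, b = 0xf0;

        /* ffz_bad(a | b)  -> my_ffs(~a | b)   == my_ffs(0xfffffff0) == 4
         * ffz_good(a | b) -> my_ffs(~(a | b)) == my_ffs(0xffffff00) == 8 */
        assert(ffz_bad(a | b) != ffz_good(a | b));
        return 0;
}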

2006-01-26 08:55:59

by Russell King

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Wed, Jan 25, 2006 at 04:06:18PM -0800, Richard Henderson wrote:
> On Wed, Jan 25, 2006 at 08:02:50PM +0000, Russell King wrote:
> > > + s = 16; if (word << 16 != 0) s = 0; b += s; word >>= s;
> > > + s = 8; if (word << 24 != 0) s = 0; b += s; word >>= s;
> > > + s = 4; if (word << 28 != 0) s = 0; b += s; word >>= s;
> ...
> > Basically, shifts which depend on a variable are more expensive than
> > constant-based shifts.
>
> Actually, they're all constant shifts. Just written stupidly.

Unfortunately that's not correct. You do not appear to have checked
the compiler output like I did - this code does _not_ generate
constant shifts.

--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 Serial core

2006-01-26 10:04:42

by Rutger Nijlunsing

[permalink] [raw]
Subject: Re: [PATCH 8/12] generic hweight{32,16,8}()

[snip]
> > /*
> > * hweightN: returns the hamming weight (i.e. the number
> > * of bits set) of a N-bit word
> > */
> >
> > -#define hweight32(x) generic_hweight32(x)
> > -#define hweight16(x) generic_hweight16(x)
> > -#define hweight8(x) generic_hweight8(x)
> > +static inline unsigned int hweight32(unsigned int w)
> > +{
> > + unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
> > + res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
> > + res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
> > + res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
> > + return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
> > +}
> > +
>
> This can be replaced with
>
> register int res=w;
> res=res-((res>>1)&0x55555555);
> res=(res&0x33333333)+((res>>2)&0x33333333);
> res=(res+(res>>4))&0x0f0f0f0f;
> res=res+(res>>8);
> return (res+(res>>16)) & 0xff;
>
> Similar optimizations can be applied to the routines below. Please see
> http://www-cs-faculty.stanford.edu/~knuth/mmixware.html errata and the code
> in mmix-arith.w for the complete set of optimizations and credits.
>
> http://www.jjj.de/fxt/fxtbook.pdf is another inspirational source for
> such algorithms.

Ah, the joys of bit twiddling!

http://graphics.stanford.edu/~seander/bithacks.html
...has some more.

--
Rutger Nijlunsing ---------------------------------- eludias ed dse.nl
never attribute to a conspiracy which can be explained by incompetence
----------------------------------------------------------------------

2006-01-26 16:09:20

by Grant Grundler

[permalink] [raw]
Subject: Re: [parisc-linux] Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Thu, Jan 26, 2006 at 08:55:41AM +0000, Russell King wrote:
> Unfortunately that's not correct. You do not appear to have checked
> the compiler output like I did - this code does _not_ generate
> constant shifts.

Russell,
By "written stupidly", I thought Richard meant they could have
used constants instead of "s". e.g.:
if (word << 16 == 0) { b += 16; word >>= 16; }
if (word << 24 == 0) { b += 8; word >>= 8; }
if (word << 28 == 0) { b += 4; word >>= 4; }

But I prefer what Edgar Toernig suggested.

grant

2006-01-26 16:15:13

by Pavel Machek

[permalink] [raw]
Subject: Re: [PATCH 1/6] {set,clear,test}_bit() related cleanup

Hi!

> While working on these patch set, I found several possible cleanup
> on x86-64 and ia64.

It is probably not your fault, but...

> Index: 2.6-git/include/asm-x86_64/mmu_context.h
> ===================================================================
> --- 2.6-git.orig/include/asm-x86_64/mmu_context.h 2006-01-25 19:07:15.000000000 +0900
> +++ 2.6-git/include/asm-x86_64/mmu_context.h 2006-01-25 19:13:59.000000000 +0900
> @@ -34,12 +34,12 @@
> unsigned cpu = smp_processor_id();
> if (likely(prev != next)) {
> /* stop flush ipis for the previous mm */
> - clear_bit(cpu, &prev->cpu_vm_mask);
> + cpu_clear(cpu, prev->cpu_vm_mask);
> #ifdef CONFIG_SMP
> write_pda(mmu_state, TLBSTATE_OK);
> write_pda(active_mm, next);
> #endif
> - set_bit(cpu, &next->cpu_vm_mask);
> + cpu_set(cpu, next->cpu_vm_mask);
> load_cr3(next->pgd);
>
> if (unlikely(next->context.ldt != prev->context.ldt))

cpu_set sounds *very* ambiguous. We have a thing called cpusets, for
example. I'd not guess that it is set_bit in cpu endianness (is it?).

Pavel
--
Thanks, Sharp!

2006-01-26 16:30:49

by Nicolas Pitre

[permalink] [raw]
Subject: Re: [parisc-linux] Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Thu, 26 Jan 2006, Grant Grundler wrote:

> On Thu, Jan 26, 2006 at 08:55:41AM +0000, Russell King wrote:
> > Unfortunately that's not correct. You do not appear to have checked
> > the compiler output like I did - this code does _not_ generate
> > constant shifts.
>
> Russell,
> By "written stupidly", I thought Richard meant they could have
> used constants instead of "s". e.g.:
> if (word << 16 == 0) { b += 16; word >>= 16; }
> if (word << 24 == 0) { b += 8; word >>= 8; }
> if (word << 28 == 0) { b += 4; word >>= 4; }
>
> But I prefer what Edgar Toernig suggested.

It is just as bad on ARM since it requires large constants that cannot
be expressed with immediate literal values. The constant shift
approach is really the best on ARM.


Nicolas

2006-01-26 16:40:41

by Russell King

[permalink] [raw]
Subject: Re: [parisc-linux] Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Thu, Jan 26, 2006 at 09:18:49AM -0700, Grant Grundler wrote:
> On Thu, Jan 26, 2006 at 08:55:41AM +0000, Russell King wrote:
> > Unfortunately that's not correct. You do not appear to have checked
> > the compiler output like I did - this code does _not_ generate
> > constant shifts.
>
> Russell,
> By "written stupidly", I thought Richard meant they could have
> used constants instead of "s". e.g.:
> if (word << 16 == 0) { b += 16; word >>= 16; }
> if (word << 24 == 0) { b += 8; word >>= 8; }
> if (word << 28 == 0) { b += 4; word >>= 4; }
>
> But I prefer what Edgar Toernig suggested.

Ok, I can see I'm going to lose this, but what the hell.

Firstly though, an out of line function call on ARM clobbers six out
of 11 CPU registers.

Let's compare the implementations, which are:

int toernig_ffs(unsigned long word)
{
int bit = 0;
word &= -word; // only keep the lsb.
if (word & 0xffff0000) bit |= 16;
if (word & 0xff00ff00) bit |= 8;
if (word & 0xf0f0f0f0) bit |= 4;
if (word & 0xcccccccc) bit |= 2;
if (word & 0xaaaaaaaa) bit |= 1;
return bit;
}

toernig_ffs:
rsb r3, r0, #0
and r0, r0, r3
mov r3, r0, lsr #16
bic r2, r0, #16711680
str lr, [sp, #-4]!
mov r3, r3, asl #16
ldr lr, .L7
ldr r1, .L7+4
ldr ip, .L7+8
cmp r3, #0
bic r2, r2, #255
and lr, r0, lr
and r1, r0, r1
and ip, r0, ip
movne r0, #16
moveq r0, #0
cmp r2, #0
orrne r0, r0, #8
cmp r1, #0
orrne r0, r0, #4
cmp ip, #0
orrne r0, r0, #2
cmp lr, #0
orrne r0, r0, #1
ldr pc, [sp], #4
.L8:
.align 2
.L7:
.word -1431655766
.word -252645136
.word -858993460

25 instructions. 3 words of additional data. 5 registers. 0 register-based
shifts.

I feel that this is far too expensive to sanely inline - at least three
words of additional data for a single use in a function, and a register
usage comparable to that of an out-of-line function.

int mita_ffs(unsigned long word)
{
int b = 0, s;
s = 16; if (word << 16 != 0) s = 0; b += s; word >>= s;
s = 8; if (word << 24 != 0) s = 0; b += s; word >>= s;
s = 4; if (word << 28 != 0) s = 0; b += s; word >>= s;
s = 2; if (word << 30 != 0) s = 0; b += s; word >>= s;
s = 1; if (word << 31 != 0) s = 0; b += s;
return b;
}

mita_ffs:
movs r1, r0, asl #16
moveq r2, #16
movne r2, #0
mov r0, r0, lsr r2 @ register-based shift
mov r3, r2
movs r2, r0, asl #24
moveq r2, #8
movne r2, #0
mov r0, r0, lsr r2 @ register-based shift
movs r1, r0, asl #28
add r3, r3, r2
moveq r2, #4
movne r2, #0
mov r0, r0, lsr r2 @ register-based shift
movs r1, r0, asl #30
add r3, r3, r2
moveq r2, #2
movne r2, #0
mov r0, r0, lsr r2 @ register-based shift
tst r0, #1
add r3, r3, r2
moveq r2, #1
movne r2, #0
add r3, r3, r2
mov r0, r3
mov pc, lr

26 instructions. 4 registers used. 4 unconditional register-based
shifts (expensive).

Better, but it uses inefficient register-based shifts (which can take
twice as many cycles as constant shifts, depending on the CPU). It still
has high CPU register usage, though. Could possibly be a candidate for
inlining.

int arm_ffs(unsigned long word)
{
int k = 31;
if (word & 0x0000ffff) { k -= 16; word <<= 16; }
if (word & 0x00ff0000) { k -= 8; word <<= 8; }
if (word & 0x0f000000) { k -= 4; word <<= 4; }
if (word & 0x30000000) { k -= 2; word <<= 2; }
if (word & 0x40000000) { k -= 1; }
return k;
}

arm_ffs:
mov r3, r0, asl #16
mov r3, r3, lsr #16
cmp r3, #0
movne r0, r0, asl #16
mov r3, #31
movne r3, #15
tst r0, #16711680
movne r0, r0, asl #8
subne r3, r3, #8
tst r0, #251658240
movne r0, r0, asl #4
subne r3, r3, #4
tst r0, #805306368
movne r0, r0, asl #2
subne r3, r3, #2
tst r0, #1073741824
subne r3, r3, #1
mov r0, r3
mov pc, lr

19 instructions. 2 registers. 0 register-based shifts. More reasonable
for inlining.

Clearly the smallest of the lot, with the lowest register pressure,
making it the best candidate whether we inline it or not.

--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 Serial core

2006-01-26 16:47:57

by Russell King

[permalink] [raw]
Subject: Re: [PATCH 1/6] {set,clear,test}_bit() related cleanup

On Thu, Jan 26, 2006 at 05:14:27PM +0100, Pavel Machek wrote:
> > Index: 2.6-git/include/asm-x86_64/mmu_context.h
> > ===================================================================
> > --- 2.6-git.orig/include/asm-x86_64/mmu_context.h 2006-01-25 19:07:15.000000000 +0900
> > +++ 2.6-git/include/asm-x86_64/mmu_context.h 2006-01-25 19:13:59.000000000 +0900
> > @@ -34,12 +34,12 @@
> > unsigned cpu = smp_processor_id();
> > if (likely(prev != next)) {
> > /* stop flush ipis for the previous mm */
> > - clear_bit(cpu, &prev->cpu_vm_mask);
> > + cpu_clear(cpu, prev->cpu_vm_mask);
> > #ifdef CONFIG_SMP
> > write_pda(mmu_state, TLBSTATE_OK);
> > write_pda(active_mm, next);
> > #endif
> > - set_bit(cpu, &next->cpu_vm_mask);
> > + cpu_set(cpu, next->cpu_vm_mask);
> > load_cr3(next->pgd);
> >
> > if (unlikely(next->context.ldt != prev->context.ldt))
>
> cpu_set sounds *very* ambiguous. We have a thing called cpusets, for
> example. I'd not guess that it is set_bit in cpu endianness (is it?).

That's a problem for the cpusets folk - cpu_set predates them by a
fair time - it's part of the cpumask API. See include/linux/cpumask.h

Also, since cpu_vm_mask is a cpumask_t, the above change to me looks
like a bug fix in its own right.

--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 Serial core

2006-01-26 17:31:58

by Richard Henderson

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Thu, Jan 26, 2006 at 05:34:12AM +0100, Edgar Toernig wrote:
> Why shift at all?

Becuase that *is* a valid architecture tuning knob. Most risc
machines can't AND with arbitrary constants like that, and loading
the constant might bulk things up more than just using the shift.


r~

2006-01-26 18:57:54

by Bryan O'Sullivan

[permalink] [raw]
Subject: Re: [PATCH 8/12] generic hweight{32,16,8}()

On Thu, 2006-01-26 at 12:36 +0900, Akinobu Mita wrote:

> HAVE_ARCH_HWEIGHT_BITOPS is defined when the architecture has its own
> version of these functions.

All of this HAVE_ARCH_xxx stuff gave Linus heartburn a few weeks ago,
and you're massively increasing its proliferation.

How about putting each class of bitop into its own header file in
asm-generic, and getting the arches that need each one to include the
specific files it needs in its own bitops.h header?

For example, the hweight stuff would go into
asm-generic/bitops-hweight.h, and then asm-foo/bitops.h would just use

#include <asm-generic/bitops-hweight.h>

or else define its own if it didn't need the generic versions.

<b

2006-01-26 19:16:03

by Paul Jackson

[permalink] [raw]
Subject: Re: [PATCH 1/6] {set,clear,test}_bit() related cleanup

Pavel wrote:
> cpu_set sounds *very* ambiguous. We have a thing called cpusets,

Hmmm ... you're right. I've worked for quite some time on both
of these, and hadn't noticed this similarity before.

Oh well. Such is the nature of naming things. Sometimes nice
names resemble other nice names in unexpected ways.

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <[email protected]> 1.925.600.0401

2006-01-26 22:55:17

by Grant Grundler

[permalink] [raw]
Subject: Re: [parisc-linux] Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Thu, Jan 26, 2006 at 04:40:21PM +0000, Russell King wrote:
> Ok, I can see I'm going to lose this, but what the hell.

Well, we agree. As Richard Henderson just pointed out, parisc
is among those that can't load large immediate values either.

> Let's compare the implementations, which are:
...
> int arm_ffs(unsigned long word)
> {
> int k = 31;
> if (word & 0x0000ffff) { k -= 16; word <<= 16; }
> if (word & 0x00ff0000) { k -= 8; word <<= 8; }
> if (word & 0x0f000000) { k -= 4; word <<= 4; }
> if (word & 0x30000000) { k -= 2; word <<= 2; }
> if (word & 0x40000000) { k -= 1; }
> return k;
> }

Of those suggested, arm_ffs() is closest to what parisc
currently has in assembly (see include/asm-parisc/bitops.h:__ffs()).
But given how unobvious parisc instruction nullification is,
the rough equivalent in "C" (untested!) would look something like:

unsigned int k = 31;
if (word & 0x0000ffff) { k -= 16;} else { word >>= 16; }
if (word & 0x000000ff) { k -= 8;} else { word >>= 8; }
if (word & 0x0000000f) { k -= 4;} else { word >>= 4; }
if (word & 0x00000003) { k -= 2;} else { word >>= 2; }
if (word & 0x00000001) { k -= 1;}
return k;

I doubt that's better for ARM, but am curious how it compares.
Do you have time to try it?
If not, no worries.


> 19 instructions. 2 registers. 0 register-based shifts. More reasonable
> for inlining.

Yeah, about the same for parisc.

> Clearly the smallest of the lot, with the lowest register pressure,
> making it the best candidate whether we inline it or not.

Agreed. But I expect parisc will have to continue using its asm
sequence and ignore the generic version. AFAIK, the compiler isn't that
good with instruction nullification and I have other issues I'd
rather work on.

cheers,
grant

2006-01-26 23:04:17

by Russell King

[permalink] [raw]
Subject: Re: [parisc-linux] Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Thu, Jan 26, 2006 at 04:04:43PM -0700, Grant Grundler wrote:
> On Thu, Jan 26, 2006 at 04:40:21PM +0000, Russell King wrote:
> > Ok, I can see I'm going to lose this, but what the hell.
>
> Well, we agree. As Richard Henderson just pointed out, parisc
> is among those that can't load large immediate values either.
>
> > Let's compare the implementations, which are:
> ...
> > int arm_ffs(unsigned long word)
> > {
> > int k = 31;
> > if (word & 0x0000ffff) { k -= 16; word <<= 16; }
> > if (word & 0x00ff0000) { k -= 8; word <<= 8; }
> > if (word & 0x0f000000) { k -= 4; word <<= 4; }
> > if (word & 0x30000000) { k -= 2; word <<= 2; }
> > if (word & 0x40000000) { k -= 1; }
> > return k;
> > }
>
> Of those suggested, arm_ffs() is closest to what parisc
> currently has in assembly (see include/asm-parisc/bitops.h:__ffs()).
> But given how unobvious parisc instruction nullification is,
> the rough equivalent in "C" (untested!) would look something like:
>
> unsigned int k = 31;
> if (word & 0x0000ffff) { k -= 16;} else { word >>= 16; }
> if (word & 0x000000ff) { k -= 8;} else { word >>= 8; }
> if (word & 0x0000000f) { k -= 4;} else { word >>= 4; }
> if (word & 0x00000003) { k -= 2;} else { word >>= 2; }
> if (word & 0x00000001) { k -= 1;}
> return k;
>
> I doubt that's better for ARM, but am curious how it compares.
> Do you have time to try it?

This is essentially the same as arm_ffs():

grundler_ffs:
mov r3, r0, asl #16
mov r3, r3, lsr #16
cmp r3, #0
moveq r0, r0, lsr #16
mov r3, #31
movne r3, #15
tst r0, #255
moveq r0, r0, lsr #8
subne r3, r3, #8
tst r0, #15
moveq r0, r0, lsr #4
subne r3, r3, #4
tst r0, #3
moveq r0, r0, lsr #2
subne r3, r3, #2
tst r0, #1
subne r3, r3, #1
mov r0, r3
mov pc, lr

except that the shifts, immediate values and the sense of some of the
conditional instructions have changed. Therefore, the parisc rough
equivalent looks like it would be suitable for ARM as well.

> > Clearly the smallest of the lot, with the lowest register pressure,
> > making it the best candidate whether we inline it or not.
>
> Agreed. But I expect parisc will have to continue using its asm
> sequence and ignore the generic version. AFAIK, the compiler isn't that
> good with instruction nullification and I have other issues I'd
> rather work on.

Me too - already solved this problem once. However, I'd rather not
needlessly take a step backwards in the name of generic bitops.

--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 Serial core

2006-01-27 00:31:12

by John David Anglin

[permalink] [raw]
Subject: Re: [parisc-linux] Re: [PATCH 3/6] C-language equivalents of

> Yeah, about the same for parisc.
>
> > Clearly the smallest of the lot, with the lowest register pressure,
> > making it the best candidate whether we inline it or not.
>
> Agreed. But I expect parisc will have to continue using its asm
> sequence and ignore the generic version. AFAIK, the compiler isn't that
> good with instruction nullification and I have other issues I'd
> rather work on.

I looked at the assembler code generated on parisc with gcc 4.1.0
(prerelease). The Toernig code is definitely inferior. The Mita sequence
is four instructions longer than the ARM sequence, but it didn't have any
branches; the ARM sequence has four. Thus, it's not clear to me which
would perform better in the real world. There were no nullified instructions
generated for any of the sequences. However, neither is as good as the
handcrafted asm sequence currently being used.

Dave
--
J. David Anglin [email protected]
National Research Council of Canada (613) 990-0752 (FAX: 952-6602)

2006-01-27 04:42:59

by Akinobu Mita

[permalink] [raw]
Subject: Re: [PATCH 8/12] generic hweight{32,16,8}()

On Thu, Jan 26, 2006 at 10:57:47AM -0800, Bryan O'Sullivan wrote:

> How about putting each class of bitop into its own header file in
> asm-generic, and getting the arches that need each one to include the
> specific files it needs in its own bitops.h header?
>

I think it's better than adding many HAVE_ARCH_*_BITOPS.
I will have 14 new headers. So I want to make a new directory
include/asm-generic/bitops/:

include/asm-generic/bitops/atomic.h
include/asm-generic/bitops/nonatomic.h
include/asm-generic/bitops/__ffs.h
include/asm-generic/bitops/ffz.h
include/asm-generic/bitops/fls.h
include/asm-generic/bitops/fls64.h
include/asm-generic/bitops/find.h
include/asm-generic/bitops/ffs.h
include/asm-generic/bitops/sched-ffs.h
include/asm-generic/bitops/hweight.h
include/asm-generic/bitops/hweight64.h
include/asm-generic/bitops/ext2-atomic.h
include/asm-generic/bitops/ext2-nonatomic.h
include/asm-generic/bitops/minix.h
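
For illustration, a rough sketch (mine, not from the patches) of how an
architecture's own bitops.h could then be assembled from those pieces.
The arch is hypothetical; an arch with a native implementation of any
class simply omits that include and supplies its own:

#ifndef _ASM_FOO_BITOPS_H
#define _ASM_FOO_BITOPS_H

#include <asm-generic/bitops/atomic.h>      /* set_bit() and friends */
#include <asm-generic/bitops/nonatomic.h>   /* __set_bit() and friends */
#include <asm-generic/bitops/__ffs.h>
#include <asm-generic/bitops/ffz.h>
#include <asm-generic/bitops/fls.h>
#include <asm-generic/bitops/hweight.h>
#include <asm-generic/bitops/find.h>
/* ... the remaining classes, or the arch's own versions ... */

#endif /* _ASM_FOO_BITOPS_H */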

2006-01-27 04:55:19

by Akinobu Mita

[permalink] [raw]
Subject: Re: [PATCH 8/12] generic hweight{32,16,8}()

On Thu, Jan 26, 2006 at 12:42:09PM +0530, Balbir Singh wrote:

> > +static inline unsigned int hweight32(unsigned int w)
> > +{
> > + unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
> > + res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
> > + res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
> > + res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
> > + return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
> > +}
> > +
>
> This can be replaced with
>
> register int res=w;
> res=res-((res>>1)&0x55555555);
> res=(res&0x33333333)+((res>>2)&0x33333333);
> res=(res+(res>>4))&0x0f0f0f0f;
> res=res+(res>>8);
> return (res+(res>>16)) & 0xff;

Probably you are right.
Unfortunately, it is difficult for me to prove that they are equivalent.

Anyway those hweight*() functions are copied from include/linux/bitops.h:
generic_hweight*(). So you can optimize these functions.
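
For what it's worth, the equivalence is easy to check by brute force in
userspace. A standalone test (mine, not part of the patch set) that
walks all 2^32 inputs:

#include <assert.h>
#include <stdint.h>

static unsigned int hweight32_masked(uint32_t w)  /* patch version */
{
        unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
        res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
        res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
        res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
        return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
}

static unsigned int hweight32_knuth(uint32_t w)   /* Balbir's version */
{
        uint32_t res = w - ((w >> 1) & 0x55555555);
        res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
        res = (res + (res >> 4)) & 0x0f0f0f0f;
        res = res + (res >> 8);
        return (res + (res >> 16)) & 0xff;
}

int main(void)
{
        uint32_t w = 0;

        do
                assert(hweight32_masked(w) == hweight32_knuth(w));
        while (++w != 0);  /* wraps to 0 after all 2^32 values */
        return 0;
}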

2006-01-27 05:23:49

by Bryan O'Sullivan

[permalink] [raw]
Subject: Re: [PATCH 8/12] generic hweight{32,16,8}()

On Fri, 2006-01-27 at 13:43 +0900, Akinobu Mita wrote:

> I think it's better than adding many HAVE_ARCH_*_BITOPS.
> I will have 14 new headers. So I want to make a new directory
> include/asm-generic/bitops/:

While you're thrashing all that stuff, have you thought about adding
generic support for the atomic_*_mask functions? Only eight of almost
30 arches actually implement them, which makes them worthless for
portable drivers. The same approach you're using now with other bitops
will work equally well, or be just as broken, depending on the arch in
question :-)
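
As a rough sketch (untested, and only illustrative), generic versions
built on the same hashed-spinlock helpers quoted elsewhere in this
thread might look like:

static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
{
        unsigned long flags;

        _atomic_spin_lock_irqsave(v, flags);
        v->counter |= mask;
        _atomic_spin_unlock_irqrestore(v, flags);
}

static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
{
        unsigned long flags;

        _atomic_spin_lock_irqsave(v, flags);
        v->counter &= ~mask;
        _atomic_spin_unlock_irqrestore(v, flags);
}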

<b

2006-01-27 05:40:30

by Balbir Singh

[permalink] [raw]
Subject: Re: [PATCH 8/12] generic hweight{32,16,8}()

On 1/27/06, Akinobu Mita <[email protected]> wrote:
> On Thu, Jan 26, 2006 at 12:42:09PM +0530, Balbir Singh wrote:
>
> > > +static inline unsigned int hweight32(unsigned int w)
> > > +{
> > > + unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
> > > + res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
> > > + res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
> > > + res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
> > > + return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
> > > +}
> > > +
> >
> > This can be replaced with
> >
> > register int res=w;
> > res=res-((res>>1)&0x55555555);
> > res=(res&0x33333333)+((res>>2)&0x33333333);
> > res=(res+(res>>4))&0x0f0f0f0f;
> > res=res+(res>>8);
> > return (res+(res>>16)) & 0xff;
>
> Probably you are right.
> Unfortunately, it is difficult for me to prove that they are equivalent.
>

Well, a proof is not difficult. This is a well-tested, proven piece of
code published by Don Knuth. If you need a proof, I can provide one.

> Anyway those hweight*() functions are copied from include/linux/bitops.h:
> generic_hweight*(). So you can optimize these functions.
>

You are right, even those functions can be optimized.

Balbir

2006-01-27 06:39:05

by Akinobu Mita

[permalink] [raw]
Subject: [PATCH] parisc: add ()-pair in __ffs()

Found by Michael Tokarev

Add missing ()-pair in ffz() macro.

Signed-off-by: Akinobu Mita <[email protected]>

Index: 2.6-git/include/asm-parisc/bitops.h
===================================================================
--- 2.6-git.orig/include/asm-parisc/bitops.h 2006-01-26 18:33:40.000000000 +0900
+++ 2.6-git/include/asm-parisc/bitops.h 2006-01-26 19:32:07.000000000 +0900
@@ -220,7 +220,7 @@
}

/* Undefined if no bit is zero. */
-#define ffz(x) __ffs(~x)
+#define ffz(x) __ffs(~(x))

/*
* ffs: find first bit set. returns 1 to BITS_PER_LONG or 0 (if none set)
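
The parentheses matter because macro arguments are substituted
textually. A made-up example of the breakage the old form allows:

#define ffz_old(x) __ffs(~x)    /* before the patch */
#define ffz_new(x) __ffs(~(x))  /* after the patch  */

/* ffz_old(a ^ b) expands to __ffs(~a ^ b)   -- complements only 'a'  */
/* ffz_new(a ^ b) expands to __ffs(~(a ^ b)) -- the intended operand  */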

2006-01-27 06:40:29

by Akinobu Mita

[permalink] [raw]
Subject: Re: [PATCH 8/12] generic hweight{32,16,8}()

On Fri, Jan 27, 2006 at 11:10:29AM +0530, Balbir Singh wrote:
> On 1/27/06, Akinobu Mita <[email protected]> wrote:
> > On Thu, Jan 26, 2006 at 12:42:09PM +0530, Balbir Singh wrote:
> >
> > > > +static inline unsigned int hweight32(unsigned int w)
> > > > +{
> > > > + unsigned int res = (w & 0x55555555) + ((w >> 1) & 0x55555555);
> > > > + res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
> > > > + res = (res & 0x0F0F0F0F) + ((res >> 4) & 0x0F0F0F0F);
> > > > + res = (res & 0x00FF00FF) + ((res >> 8) & 0x00FF00FF);
> > > > + return (res & 0x0000FFFF) + ((res >> 16) & 0x0000FFFF);
> > > > +}
> > > > +
> > >
> > > This can be replaced with
> > >
> > > register int res=w;
> > > res=res-((res>>1)&0x55555555);
> > > res=(res&0x33333333)+((res>>2)&0x33333333);
> > > res=(res+(res>>4))&0x0f0f0f0f;
> > > res=res+(res>>8);
> > > return (res+(res>>16)) & 0xff;
> >
> > Probably you are right.
> > Unfortunately, it is difficult for me to prove that they are equivalent.
> >
>
> Well, a proof is not difficult. This is a well-tested, proven piece of
> code published by Don Knuth. If you need a proof, I can provide one.

Thanks, I'd like one.

2006-01-27 12:52:01

by Hirokazu Takata

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

Hello Mita-san, and folks,

From: [email protected] (Akinobu Mita)
Subject: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h
Date: Wed, 25 Jan 2006 20:32:06 +0900
> o generic {,test_and_}{set,clear,change}_bit() (atomic bitops)
>
> This patch introduces the C-language equivalents of the functions below:
> void set_bit(int nr, volatile unsigned long *addr);
> void clear_bit(int nr, volatile unsigned long *addr);
...
> int test_and_change_bit(int nr, volatile unsigned long *addr);
>
> HAVE_ARCH_ATOMIC_BITOPS is defined when the architecture has its own
> version of these functions.
>
> This code largely copied from:
> include/asm-powerpc/bitops.h
> include/asm-parisc/bitops.h
> include/asm-parisc/atomic.h

Could you tell me more about the new generic {set,clear,test}_bit()
routines?

Why did you copy these routines from parisc and employ them
as generic ones?
I'm not sure whether these generic {set,clear,test}_bit() routines
are really generic or not.

> +/* Can't use raw_spin_lock_irq because of #include problems, so
> + * this is the substitute */
> +#define _atomic_spin_lock_irqsave(l,f) do { \
> + raw_spinlock_t *s = ATOMIC_HASH(l); \
> + local_irq_save(f); \
> + __raw_spin_lock(s); \
> +} while(0)
> +
> +#define _atomic_spin_unlock_irqrestore(l,f) do { \
> + raw_spinlock_t *s = ATOMIC_HASH(l); \
> + __raw_spin_unlock(s); \
> + local_irq_restore(f); \
> +} while(0)

Is there a possibility that these routines affect archs
with no HAVE_ARCH_ATOMIC_BITOPS on SMP?
I think __raw_spin_lock() is sufficient and local_irq_save() is
not necessary in general atomic routines.

If parisc's LDCW instruction required disabling interrupts,
that would be parisc-specific and not the generic case, I think,
although I'm not familiar with the parisc architecture...

-- Takata

2006-01-29 07:11:48

by Stuart Brady

[permalink] [raw]
Subject: Re: [parisc-linux] Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Thu, Jan 26, 2006 at 11:03:54PM +0000, Russell King wrote:
> Me too - already solved this problem once. However, I'd rather not
> needlessly take a step backwards in the name of generic bitops.

Indeed. However, I think we can actually improve bitops for some
architectures. Here's what I've found so far:

Versions of Alpha, ARM, MIPS, PowerPC and SPARC have bit counting
instructions which we're using in most cases. I may have missed some:

Alpha may have:
ctlz, CounT Leading Zeros
cttz, CounT Trailing Zeros

ARM (since v5) has:
clz, Count Leading Zeros

MIPS may have:
clz, Count Leading Zeros
clo, Count Leading Ones

PowerPC has:
cntlz[wd], CouNT Leading Zeros (for Word/Double-word)

SPARC v9 has:
popc, POPulation Count

PA-RISC has none. I've not checked any others.

The Alpha, ARM and PowerPC functions look fine to me.

On MIPS, fls() and flz() should probably use CLO. Curiously, MIPS is
the only arch with a flz() function.

On SPARC, the implementation of ffz() appears to be "cheese", and the
proposed generic versions would be better. ffs() looks quite generic,
and fls() uses the linux/bitops.h implementation.

There are versions of hweight*() for sparc64 which use POPC when
ULTRA_HAS_POPULATION_COUNT is defined, but AFAICS, it's never defined.

The SPARC v9 arch manual recommends using popc(x ^ ~-x) for functions
like ffs(). ffz() would return ffs(~x).
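
Spelled out (my sketch; __builtin_popcountl stands in for the native
popc instruction):

#define popc(x) __builtin_popcountl(x)  /* stand-in for the SPARC insn */

static inline int ffs_popc(unsigned long x)
{
        /* ~-x == x - 1, so x ^ ~-x sets exactly the bits at and below
         * the lowest set bit; its population count is ffs(x). */
        return x ? popc(x ^ ~-x) : 0;
}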

I've had an idea for fls():

static inline int fls(unsigned long x)
{
x |= x >> 1;
x |= x >> 2;
x |= x >> 4;
x |= x >> 8;
x |= x >> 16;
return popc(x);
}

I'm not sure how that compares to the generic fls(), but I suspect it's
quite a bit faster. Unfortunately, I don't have any MIPS or SPARC v9
hardware to test this on.

I'm not sure if this is of any use:

static inline int __ffs(unsigned long x)
{
return (int)hweight_long(x ^ ~-x) - 1;
}

The idea being that the generic hweight_long has no branches.
--
Stuart Brady

2006-01-30 03:29:39

by Akinobu Mita

[permalink] [raw]
Subject: Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Fri, Jan 27, 2006 at 09:51:47PM +0900, Hirokazu Takata wrote:

> Could you tell me more about the new generic {set,clear,test}_bit()
> routines?
>
> Why did you copy these routines from parisc and employ them
> as generic ones?
> I'm not sure whether these generic {set,clear,test}_bit() routines
> are really generic or not.

I think it is the most portable implementation.
And I'm trying not to write my own code in this patch set.

>
> > +/* Can't use raw_spin_lock_irq because of #include problems, so
> > + * this is the substitute */
> > +#define _atomic_spin_lock_irqsave(l,f) do { \
> > + raw_spinlock_t *s = ATOMIC_HASH(l); \
> > + local_irq_save(f); \
> > + __raw_spin_lock(s); \
> > +} while(0)
> > +
> > +#define _atomic_spin_unlock_irqrestore(l,f) do { \
> > + raw_spinlock_t *s = ATOMIC_HASH(l); \
> > + __raw_spin_unlock(s); \
> > + local_irq_restore(f); \
> > +} while(0)
>
> Is there a possibility that these routines affect archs
> with no HAVE_ARCH_ATOMIC_BITOPS on SMP?

Currently there is no architecture using these atomic *_bit() routines
on SMP. But they may benefit those who are trying to port Linux.
(See the comment by Theodore Ts'o in include/asm-generic/bitops.h.)

> I think __raw_spin_lock() is sufficient and local_irq_save() is
> not necessary in general atomic routines.

If the interrupt handler also wants to do bit manipulation, then
you can get a deadlock between the original caller of *_bit() and the
interrupt handler.
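
To spell out the scenario (an illustration, not kernel code), with the
spinlock-based generic bitops both callers can hash to the same
ATOMIC_HASH bucket:

/*
 * set_bit(0, &word);       takes ATOMIC_HASH(&word)'s lock
 *     <interrupt arrives on the same CPU>
 *     set_bit(1, &word);   spins on the same lock forever, since
 *                          the lock holder cannot run again
 *
 * Disabling interrupts while the lock is held, as
 * _atomic_spin_lock_irqsave() does, prevents this.
 */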

2006-01-30 04:03:35

by David Miller

[permalink] [raw]
Subject: Re: [parisc-linux] Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

From: Stuart Brady <[email protected]>
Date: Sun, 29 Jan 2006 07:12:42 +0000

> There are versions of hweight*() for sparc64 which use POPC when
> ULTRA_HAS_POPULATION_COUNT is defined, but AFAICS, it's never defined.

That's right; the problem here is that no chips actually implement
the instruction in hardware. It is software emulated, i.e. useless :-)

2006-01-30 17:19:15

by Ralf Baechle

[permalink] [raw]
Subject: Re: [parisc-linux] Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Sun, Jan 29, 2006 at 07:12:42AM +0000, Stuart Brady wrote:

> On MIPS, fls() and flz() should probably use CLO.

It actually uses clz.

> Curiously, MIPS is the only arch with a flz() function.

No longer. The fls implementation was based on flz and fls was the only
user of flz. So I cleaned that up; once I commit, flz will be gone. Not
only a cleanup but also a minor optimization.
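
For reference, a minimal sketch of fls() on top of a count-leading-zeros
primitive (gcc's __builtin_clz stands in for the native instruction;
this is an illustration, not the actual patch):

static inline int fls_clz(unsigned int x)
{
        /* __builtin_clz(0) is undefined, hence the guard */
        return x ? 32 - __builtin_clz(x) : 0;
}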

Ralf

2006-01-30 19:49:07

by Stuart Brady

[permalink] [raw]
Subject: Re: [parisc-linux] Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

On Mon, Jan 30, 2006 at 05:06:47PM +0000, Ralf Baechle wrote:
> On Sun, Jan 29, 2006 at 07:12:42AM +0000, Stuart Brady wrote:
>
> > On MIPS, fls() and flz() should probably use CLO.
>
> It actually uses clz.

I know. flz(x) is basically __ilog2(~x), and I still say clo would be
better. Removing flz() sounds reasonable, though.

> > Curiously, MIPS is the only arch with a flz() function.
>
> No longer. The fls implementation was based on flz and fls was the only
> user of flz. So I cleaned that up; once I commit, flz will be gone. Not
> only a cleanup but also a minor optimization.

I'd got that slightly wrong. Yeah, fls(x) returned flz(~x) + 1, which
is __ilog2(~~x) + 1. So obviously clz was fine for that, but it needed
cleaning up.

Shame about popc on SPARC. However, ffz is cheese, regardless of popc.
(On sparc64, ffs is too.) I'll wait for the generic bitops patches to
be dealt with (or not) and then submit a patch fixing this if needed.

Thanks,
--
Stuart Brady

By the way, I really hope nobody gets ten copies of this, as happened
with my last post. It does not seem to be my fault, AFAICS.

2006-01-30 23:03:57

by David Miller

[permalink] [raw]
Subject: Re: [parisc-linux] Re: [PATCH 3/6] C-language equivalents of include/asm-*/bitops.h

From: Stuart Brady <[email protected]>
Date: Mon, 30 Jan 2006 19:50:04 +0000

> Shame about popc on SPARC. However, ffz is cheese, regardless of popc.
> (On sparc64, ffs is too.) I'll wait for the generic bitops patches to
> be dealt with (or not) and then submit a patch fixing this if needed.

I'm happy with any improvement you might make here, for sure.

The sparc64 ffz() implementation was done dog-stupid like that
so that the code would be small, since it gets inlined all over
the place.

So if you can keep it small and improve it, or make it a bit larger
and uninline it, that's great.

2006-01-31 11:14:29

by Balbir Singh

[permalink] [raw]
Subject: Re: [PATCH 8/12] generic hweight{32,16,8}()

> > Well, a proof is not difficult. This is a well-tested, proven piece of
> > code published by Don Knuth. If you need a proof, I can provide one.
>
> Thanks, I'd like one.
>

Please bear with me, I am caught up with other things. I will try and
provide one soon.

Balbir

2006-01-31 16:49:57

by George Spelvin

[permalink] [raw]
Subject: Re: [PATCH 8/12] generic hweight{32,16,8}()

This is an extremely well-known technique. You can see a similar version
that uses a multiply for the last few steps at
http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel
which refers to
"Software Optimization Guide for AMD Athlon 64 and Opteron Processors"
http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/25112.PDF

It's section 8.6, "Efficient Implementation of Population-Count Function
in 32-bit Mode", pages 179-180.

It uses the name that I am more familiar with, "popcount" (population count),
although "Hamming weight" also makes sense.

Anyway, the proof of correctness proceeds as follows:

b = a - ((a >> 1) & 0x55555555);
c = (b & 0x33333333) + ((b >> 2) & 0x33333333);
d = (c + (c >> 4)) & 0x0f0f0f0f;
#if SLOW_MULTIPLY
e = d + (d >> 8)
f = e + (e >> 16);
return f & 63;
#else
/* Useful if multiply takes at most 4 cycles */
return (d * 0x01010101) >> 24;
#endif

The input value a can be thought of as 32 1-bit fields, each holding
its own Hamming weight. Now look at it as 16 2-bit fields.
Each 2-bit field a1..a0 has the value 2*a1 + a0. This can be converted
into the Hamming weight of the 2-bit field, a1+a0, by subtracting a1.

That's what the (a >> 1) & mask subtraction does. Since there can be no
borrows, you can just do it all at once.

Enumerating the 4 possible cases:

0b00 = 0 -> 0 - 0 = 0
0b01 = 1 -> 1 - 0 = 1
0b10 = 2 -> 2 - 1 = 1
0b11 = 3 -> 3 - 1 = 2


The next step consists of breaking up b (made of 16 2-bit fields) into
even and odd halves and adding them into 4-bit fields. Since the largest
possible sum is 2+2 = 4, which will not fit into a 4-bit field, the 2-bit
fields have to be masked before they are added.


After this point, the masking can be delayed. Each 4-bit field holds
a population count from 0..4, taking at most 3 bits. These numbers can
be added without overflowing a 4-bit field, so we can compute
c + (c >> 4), and only then mask off the unwanted bits.


This produces d, a number made up of four 8-bit fields, each in the range
0..8. From this point, we can shift and add d multiple times without
overflowing an 8-bit field, and only do a final mask at the end.

The number to mask with has to be at least 63 (so that 32 won't be truncated),
but can also be 127 or 255. The x86 has a special encoding for signed
immediate byte values -128..127, so the value of 255 is slower. On
other processors, a special "sign extend byte" instruction might be faster.


On a processor with fast integer multiplies (Athlon but not P4), you can
reduce the final few serially dependent instructions to a single integer
multiply. Consider d to be four 8-bit values d3, d2, d1 and d0, each in the
range 0..8. The multiply forms the partial products:

d3 d2 d1 d0
d3 d2 d1 d0
d3 d2 d1 d0
+ d3 d2 d1 d0
----------------------
e3 e2 e1 e0

Where e3 = d3 + d2 + d1 + d0. e2, e1 and e0 obviously cannot generate
any carries. As a quick check: with d = 0x01020304 the byte fields hold
1, 2, 3 and 4, and (d * 0x01010101) >> 24 = 0x0a = 10, as expected.

2006-01-31 18:13:49

by Grant Grundler

[permalink] [raw]
Subject: Re: [PATCH 8/12] generic hweight{32,16,8}()

On Tue, Jan 31, 2006 at 11:49:49AM -0500, [email protected] wrote:
> This is an extremely well-known technique. You can see a similar version
> that uses a multiply for the last few steps at
> http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel
> which refers to
> "Software Optimization Guide for AMD Athlon 64 and Opteron Processors"
> http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/25112.PDF
...

> The next step consists of breaking up b (made of 16 2-bit fields) into
> even and odd halves and adding them into 4-bit fields. Since the largest
> possible sum is 2+2 = 4, which will not fit into a 4-bit field, the 2-bit
> fields have to be masked before they are added.

Up to here, things were clear.
My guess is you meant "which will not fit into a 2-bit field".

thanks,
grant

2006-02-01 15:12:23

by Chen, Kenneth W

[permalink] [raw]
Subject: RE: [PATCH 1/12] generic *_bit()

Akinobu Mita wrote on Wednesday, January 25, 2006 7:29 PM
> This patch introduces the C-language equivalents of the functions below:
>
> - atomic operation:
> void set_bit(int nr, volatile unsigned long *addr);
> void clear_bit(int nr, volatile unsigned long *addr);
> void change_bit(int nr, volatile unsigned long *addr);
> int test_and_set_bit(int nr, volatile unsigned long *addr);
> int test_and_clear_bit(int nr, volatile unsigned long *addr);
> int test_and_change_bit(int nr, volatile unsigned long *addr);

I wonder why you did not make these functions take a volatile
unsigned int * address argument?

- Ken

2006-02-01 18:02:43

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 1/12] generic *_bit()

On Wed, Feb 01, 2006 at 07:11:34AM -0800, Chen, Kenneth W wrote:
> Akinobu Mita wrote on Wednesday, January 25, 2006 7:29 PM
> > This patch introduces the C-language equivalents of the functions below:
> >
> > - atomic operation:
> > void set_bit(int nr, volatile unsigned long *addr);
> > void clear_bit(int nr, volatile unsigned long *addr);
> > void change_bit(int nr, volatile unsigned long *addr);
> > int test_and_set_bit(int nr, volatile unsigned long *addr);
> > int test_and_clear_bit(int nr, volatile unsigned long *addr);
> > int test_and_change_bit(int nr, volatile unsigned long *addr);
>
> I wonder why you did not make these functions take a volatile
> unsigned int * address argument?

Because they are defined to operate on arrays of unsigned long

2006-02-01 18:08:40

by Chen, Kenneth W

[permalink] [raw]
Subject: RE: [PATCH 1/12] generic *_bit()

Christoph Hellwig wrote on Wednesday, February 01, 2006 10:03 AM
> > Akinobu Mita wrote on Wednesday, January 25, 2006 7:29 PM
> > > This patch introduces the C-language equivalents of the functions below:
> > >
> > > - atomic operation:
> > > void set_bit(int nr, volatile unsigned long *addr);
> > > void clear_bit(int nr, volatile unsigned long *addr);
> > > void change_bit(int nr, volatile unsigned long *addr);
> > > int test_and_set_bit(int nr, volatile unsigned long *addr);
> > > int test_and_clear_bit(int nr, volatile unsigned long *addr);
> > > int test_and_change_bit(int nr, volatile unsigned long *addr);
> >
> > I wonder why you did not make these functions take a volatile
> > unsigned int * address argument?
>
> Because they are defined to operate on arrays of unsigned long

I think these should be defined to operate on arrays of unsigned int.
Bit is a bit, no matter how many bytes you load (8/16/32/64), you can
only operate on just one bit.

- Ken

2006-02-01 19:20:09

by Russell King

[permalink] [raw]
Subject: Re: [PATCH 1/12] generic *_bit()

On Wed, Feb 01, 2006 at 10:07:28AM -0800, Chen, Kenneth W wrote:
> Christoph Hellwig wrote on Wednesday, February 01, 2006 10:03 AM
> > > Akinobu Mita wrote on Wednesday, January 25, 2006 7:29 PM
> > > > This patch introduces the C-language equivalents of the functions below:
> > > >
> > > > - atomic operation:
> > > > void set_bit(int nr, volatile unsigned long *addr);
> > > > void clear_bit(int nr, volatile unsigned long *addr);
> > > > void change_bit(int nr, volatile unsigned long *addr);
> > > > int test_and_set_bit(int nr, volatile unsigned long *addr);
> > > > int test_and_clear_bit(int nr, volatile unsigned long *addr);
> > > > int test_and_change_bit(int nr, volatile unsigned long *addr);
> > >
> > > I wonder why you did not make these functions take a volatile
> > > unsigned int * address argument?
> >
> > Because they are defined to operate on arrays of unsigned long
>
> I think these should be defined to operate on arrays of unsigned int.
> Bit is a bit, no matter how many bytes you load (8/16/32/64), you can
> only operate on just one bit.

Invalid assumption, from the point of view of endianness across different
architectures. Consider where bit 0 is for a LE and BE unsigned long *
vs a LE and BE unsigned char *.

--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 Serial core

2006-02-01 19:27:15

by Chen, Kenneth W

[permalink] [raw]
Subject: RE: [PATCH 1/12] generic *_bit()

Russell King wrote on Wednesday, February 01, 2006 11:20 AM
> > I think these should be defined to operate on arrays of unsigned int.
> > Bit is a bit, no matter how many bytes you load (8/16/32/64), you can
> > only operate on just one bit.
>
> Invalid assumption, from the point of view of endianness across different
> architectures. Consider where bit 0 is for a LE and BE unsigned long *
> vs a LE and BE unsigned char *.

Where the bit ends up in LE or BE is irrelevant. As long as one always
uses the same bit numbering and the same address pointer type, you always
get the same bit. Or am I missing something?

- Ken

2006-02-01 19:35:21

by Russell King

[permalink] [raw]
Subject: Re: [PATCH 1/12] generic *_bit()

On Wed, Feb 01, 2006 at 11:25:25AM -0800, Chen, Kenneth W wrote:
> Russell King wrote on Wednesday, February 01, 2006 11:20 AM
> > > I think these should be defined to operate on arrays of unsigned int.
> > > Bit is a bit, no matter how many bytes you load (8/16/32/64), you can
> > > only operate on just one bit.
> >
> > Invalid assumption, from the point of view of endianness across different
> > architectures. Consider where bit 0 is for a LE and BE unsigned long *
> > vs a LE and BE unsigned char *.
>
> Where the bit ends up in LE or BE is irrelevant. As long as one always
> uses the same bit numbering and the same address pointer type, you always
> get the same bit. Or am I missing something?

From a 32-bit long perspective, bit 0 of a long is always the bit which
represents odd numbers. Where this falls depends on the endianness:

MSB LSB
big-endian long0: byte0 byte1 byte2 byte3
little-endian long0: byte3 byte2 byte1 byte0

Bit 0 of a BE long ends up at byte 3 bit 0.
Bit 0 of a LE long ends up at byte 0 bit 0.

However, bit 0 of a byte stream is always byte 0 bit 0.

Hence, converting the bitops to take a different-sized pointer from
the one we presently pass changes the semantics of the function for
big-endian machines, because you change the order of bits
in memory.

Whether this matters or not is up to how the bitops are used. If
it's something which only bitops operate on, it probably doesn't
make that much difference. If it's some external data or some
data which is accessed in other ways, it most certainly does matter.
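
A userspace illustration of the point (mine, not from the thread): set
"bit 0" through a long and look at which byte it lands in:

#include <stdio.h>
#include <string.h>

int main(void)
{
        unsigned long l = 1UL;          /* bit 0 set, long view */
        unsigned char b[sizeof(l)];

        memcpy(b, &l, sizeof(l));
        /* Little-endian: b[0] == 1.  Big-endian: the bit lands in
         * b[sizeof(l) - 1], the *last* byte of the array. */
        printf("first byte %u, last byte %u\n",
               (unsigned)b[0], (unsigned)b[sizeof(l) - 1]);
        return 0;
}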

--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 Serial core

2006-02-01 19:39:17

by Grant Grundler

[permalink] [raw]
Subject: Re: [PATCH 1/12] generic *_bit()

On Wed, Feb 01, 2006 at 10:07:28AM -0800, Chen, Kenneth W wrote:
> I think these should be defined to operate on arrays of unsigned int.
> Bit is a bit, no matter how many bytes you load (8/16/32/64), you can
> only operate on just one bit.

Well, if it doesn't matter, why is unsigned int better?

unsigned long is typically the native register size, right?
I'd expect that to be more efficient on most arches.

grant

2006-02-01 21:41:59

by Chen, Kenneth W

[permalink] [raw]
Subject: RE: [PATCH 1/12] generic *_bit()

Grant Grundler wrote on Wednesday, February 01, 2006 11:40 AM
> On Wed, Feb 01, 2006 at 10:07:28AM -0800, Chen, Kenneth W wrote:
> > I think these should be defined to operate on arrays of unsigned int.
> > Bit is a bit, no matter how many bytes you load (8/16/32/64), you can
> > only operate on just one bit.
>
> Well, if it doesn't matter, why is unsigned int better?


I was coming from the angle of having bitops operate on unsigned
int *, so people don't have to typecast or change bit-flag variables
to unsigned long in various structures. With an unsigned int type for
bit flags, some of them are not even close to fully utilized. For example:

thread_info->flags uses 18 bits
thread_struct->flags uses 7 bits

It's a waste of memory to define a variable whose 4 most significant
bytes the kernel will *never* touch.


> unsigned long is typically the native register size, right?
> I'd expect that to be more efficient on most arches.


The only difference that I can think of on the Itanium processor is the
memory operation: you either load/store 4 or 8 bytes. Once the data
is in a CPU register, it doesn't make any difference whether it is
operating on 32 bits or the entire 64 bits. I don't know about other RISC
arches, though, whether they are more efficient with the native register size.

- Ken

2006-02-01 22:08:46

by Grant Grundler

[permalink] [raw]
Subject: Re: [PATCH 1/12] generic *_bit()

On Wed, Feb 01, 2006 at 01:41:03PM -0800, Chen, Kenneth W wrote:
> > Well, if it doesn't matter, why is unsigned int better?
>
> I was coming from the angle of having bitops operate on unsigned
> int *, so people don't have to typecast or change bit-flag variables
> to unsigned long in various structures. With an unsigned int type for
> bit flags, some of them are not even close to fully utilized. For example:
>
> thread_info->flags uses 18 bits
> thread_struct->flags uses 7 bits
>
> It's a waste of memory to define a variable whose 4 most significant
> bytes the kernel will *never* touch.

Agreed. Good point. But this can be mitigated if the code using "unsigned int"
(or unsigned byte) first loads the value into a local unsigned long variable.
That typically translates into a tmp register anyway. The compiler will help
you find the places where that needs to happen.

The counterpoint is that bit arrays (e.g. bitmaps) like cpumask_t are
typically much larger than 32 bits (distros typically ship with
NR_CPUS set to 256 or so). File system code also likes bit arrays
for block allocation tables. Searching a bit array using unsigned
long is 2x faster on 64-bit architectures. I don't want to give
that up, and I'm pretty sure Tony Luck, Paul Mackerras and a few
others would object unless you can give a better reason.

Obviously neither memory footprint nor speed of walking memory is an
issue for 32-bit arches (where unsigned long == unsigned int).


> > unsigned long is typically the native register size, right?
> > I'd expect that to be more efficient on most arches.
>
> The only difference that I can think of on the Itanium processor is the
> memory operation: you either load/store 4 or 8 bytes. Once the data
> is in a CPU register, it doesn't make any difference whether it is
> operating on 32 bits or the entire 64 bits. I don't know about other RISC
> arches, though, whether they are more efficient with the native register size.

Agreed. I was thinking mostly of the bitmap search - not searching
within a single unsigned int.
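
A minimal sketch of that word-at-a-time search (the function name and
the __builtin_ctzl call are mine, not the kernel's find_first_bit;
nbits is assumed to be a multiple of the word size):

static unsigned long first_set_bit(const unsigned long *map,
                                   unsigned long nbits)
{
        const unsigned long bpl = 8 * sizeof(unsigned long);
        unsigned long i;

        for (i = 0; i < nbits / bpl; i++)
                if (map[i])     /* one load tests a whole word of bits */
                        return i * bpl + __builtin_ctzl(map[i]);
        return nbits;           /* no bit set */
}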

grant

2006-02-01 22:49:35

by Anton Altaparmakov

[permalink] [raw]
Subject: Re: [PATCH 1/12] generic *_bit()

On Wed, 1 Feb 2006, Grant Grundler wrote:
> On Wed, Feb 01, 2006 at 01:41:03PM -0800, Chen, Kenneth W wrote:
> > > Well, if it doesn't matter, why is unsigned int better?
> >
> > I was coming from the angle of having bitops operate on unsigned
> > int *, so people don't have to typecast or change bit-flag variables
> > to unsigned long in various structures. With an unsigned int type for
> > bit flags, some of them are not even close to fully utilized. For example:
> >
> > thread_info->flags uses 18 bits
> > thread_struct->flags uses 7 bits
> >
> > It's a waste of memory to define a variable whose 4 most significant
> > bytes the kernel will *never* touch.
>
> Agreed. Good point. But this can be mitigated if the code using "unsigned int"
> (or unsigned byte) first loads the value into a local unsigned long variable.
> That typically translates into a tmp register anyway. The compiler will help
> you find the places where that needs to happen.
>
> The counterpoint is that bit arrays (e.g. bitmaps) like cpumask_t are
> typically much larger than 32 bits (distros typically ship with
> NR_CPUS set to 256 or so). File system code also likes bit arrays
> for block allocation tables. Searching a bit array using unsigned
> long is 2x faster on 64-bit architectures. I don't want to give
> that up, and I'm pretty sure Tony Luck, Paul Mackerras and a few
> others would object unless you can give a better reason.

Err, searching by anything other than bytes is useless for a file system
driver. Otherwise you get all sorts of disgustingly horrible allocation
patterns depending on the endianness of the machine...

Best regards,

Anton
--
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer / IRC: #ntfs on irc.freenode.net
WWW: http://linux-ntfs.sf.net/ & http://www-stu.christs.cam.ac.uk/~aia21/

2006-02-02 00:08:08

by Grant Grundler

[permalink] [raw]
Subject: Re: [PATCH 1/12] generic *_bit()

On Wed, Feb 01, 2006 at 10:49:08PM +0000, Anton Altaparmakov wrote:
> Err, searching by anything other than bytes is useless for a file system
> driver. Otherwise you get all sorts of disgustingly horrible allocation
> patterns depending on the endianness of the machine...

Well, tell that to the ext2/3 maintainers, since they introduced
ext2_test_bit() and friends.
of the bit array since that's an on-disk format. See how big endian
machines (parisc/ppc/sparc/etc) deal with it in asm/bitops.h.

grant

2006-02-02 08:52:26

by Anton Altaparmakov

[permalink] [raw]
Subject: Re: [PATCH 1/12] generic *_bit()

On Wed, 1 Feb 2006, Grant Grundler wrote:
> On Wed, Feb 01, 2006 at 10:49:08PM +0000, Anton Altaparmakov wrote:
> > Err, searching by anything other than bytes is useless for a file system
> > driver. Otherwise you get all sorts of disgustingly horrible allocation
> > patterns depending on the endianness of the machine...
>
> Well, tell that to the ext2/3 maintainers, since they introduced
> ext2_test_bit() and friends.
> of the bit array since that's an on-disk format. See how big endian
> machines (parisc/ppc/sparc/etc) deal with it in asm/bitops.h.

Oh, I hadn't noticed those before. Thanks.

The name seems a bit silly as I imagine most fs drivers would be able to
use them and there already are ext2 and minix versions. Probably ought
to be renamed to a more generic name like le_test_bit() or something...

Best regards,

Anton
--
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer / IRC: #ntfs on irc.freenode.net
WWW: http://linux-ntfs.sf.net/ & http://www-stu.christs.cam.ac.uk/~aia21/

2006-02-02 09:34:22

by Balbir Singh

[permalink] [raw]
Subject: Re: [PATCH 8/12] generic hweight{32,16,8}()

> This is an extremely well-known technique. You can see a similar version
> that uses a multiply for the last few steps at
> http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel
> whch refers to
> "Software Optimization Guide for AMD Athlon 64 and Opteron Processors"
> http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/25112.PDF
>
> It's section 8.6, "Efficient Implementation of Population-Count Function
> in 32-bit Mode", pages 179-180.

Thanks for doing this. The proof looks good except for what has been
already pointed out by Grant Grundler.

2006-02-02 10:13:53

by Andreas Schwab

[permalink] [raw]
Subject: Re: [PATCH 1/12] generic *_bit()

Anton Altaparmakov <[email protected]> writes:

> The name seems a bit silly as I imagine most fs drivers would be able to
> use them and there already are ext2 and minix versions. Probably ought
> to be renamed to a more generic name like le_test_bit() or something...

Minix is even more complicated, since the on-disk format is different
between architectures (the m68k port of Minix did not handle that
correctly).

Andreas.

--
Andreas Schwab, SuSE Labs, [email protected]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."

2006-02-03 04:55:41

by Paul Mackerras

[permalink] [raw]
Subject: RE: [PATCH 1/12] generic *_bit()

Chen, Kenneth W writes:

> Christoph Hellwig wrote on Wednesday, February 01, 2006 10:03 AM
> > Because they are defined to operate on arrays of unsigned long
>
> I think these should be defined to operate on arrays of unsigned int.
> Bit is a bit, no matter how many bytes you load (8/16/32/64), you can
> only operate on just one bit.

Christoph is right. Changing to unsigned int would change the layout
on big-endian 64-bit platforms.

Paul.

2006-02-03 10:25:32

by Geert Uytterhoeven

[permalink] [raw]
Subject: Re: [PATCH 1/12] generic *_bit()

On Wed, 1 Feb 2006, Russell King wrote:
> On Wed, Feb 01, 2006 at 10:07:28AM -0800, Chen, Kenneth W wrote:
> > Christoph Hellwig wrote on Wednesday, February 01, 2006 10:03 AM
> > > > Akinobu Mita wrote on Wednesday, January 25, 2006 7:29 PM
> > > > > This patch introduces the C-language equivalents of the functions below:
> > > > >
> > > > > - atomic operation:
> > > > > void set_bit(int nr, volatile unsigned long *addr);
> > > > > void clear_bit(int nr, volatile unsigned long *addr);
> > > > > void change_bit(int nr, volatile unsigned long *addr);
> > > > > int test_and_set_bit(int nr, volatile unsigned long *addr);
> > > > > int test_and_clear_bit(int nr, volatile unsigned long *addr);
> > > > > int test_and_change_bit(int nr, volatile unsigned long *addr);
> > > >
> > > > I wonder why you did not make these functions take a volatile
> > > > unsigned int * address argument?
> > >
> > > Because they are defined to operate on arrays of unsigned long
> >
> > I think these should be defined to operate on arrays of unsigned int.
> > Bit is a bit, no matter how many bytes you load (8/16/32/64), you can
> > only operate on just one bit.
>
> Invalid assumption, from the point of view of endianness across different
> architectures. Consider where bit 0 is for a LE and BE unsigned long *
> vs a LE and BE unsigned char *.

Intel doesn't care about big endian (cfr. your lkml back issues of January
2006).

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [email protected]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds

2006-02-03 10:27:32

by Russell King

[permalink] [raw]
Subject: Re: [PATCH 1/12] generic *_bit()

On Fri, Feb 03, 2006 at 11:24:30AM +0100, Geert Uytterhoeven wrote:
> On Wed, 1 Feb 2006, Russell King wrote:
> > Invalid assumption, from the point of view of endianness across different
> > architectures. Consider where bit 0 is for a LE and BE unsigned long *
> > vs a LE and BE unsigned char *.
>
> Intel doesn't care about big endian (cfr. your lkml back issues of January
> 2006).

Incorrect. Intel does actually produce big endian CPUs - most of the
Intel IXP (ARM based) stuff is big endian. It just depends which part
of Intel you're referring to.

--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 Serial core

2006-02-03 17:09:25

by Luck, Tony

[permalink] [raw]
Subject: RE: [PATCH 1/12] generic *_bit()

> > Intel doesn't care about big endian (cfr. your lkml back issues of January
> > 2006).
>
> Incorrect. Intel does actually produce big endian CPUs - most of the
> Intel IXP (ARM based) stuff is big endian. It just depends which part
> of Intel you're referring to.

Set PSR.be (and DCR.be) to 1 and ia64 becomes a big-endian CPU (which,
IIRC, is how HP-UX uses it).

-Tony