alpha: simplify and optimize sched_find_first_bit
Search only the first 100 bits instead of 140, saving a couple
instructions. The resulting code is about 1/3 faster (40K ticks/1000
iterations down to 30K ticks/1000 iterations).

Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: [email protected]
Acked-by: Richard Henderson <[email protected]>
Signed-off-by: Matt Turner <[email protected]>
mattst88 authored and Matt Turner committed May 25, 2010
1 parent 1cb3d8e commit a75f5f0
Showing 1 changed file with 9 additions and 11 deletions.
arch/alpha/include/asm/bitops.h: 20 changes (9 additions & 11 deletions)
@@ -438,22 +438,20 @@ static inline unsigned int __arch_hweight8(unsigned int w)
 
 /*
  * Every architecture must define this function. It's the fastest
- * way of searching a 140-bit bitmap where the first 100 bits are
- * unlikely to be set. It's guaranteed that at least one of the 140
- * bits is set.
+ * way of searching a 100-bit bitmap. It's guaranteed that at least
+ * one of the 100 bits is cleared.
  */
 static inline unsigned long
-sched_find_first_bit(unsigned long b[3])
+sched_find_first_bit(const unsigned long b[2])
 {
-	unsigned long b0 = b[0], b1 = b[1], b2 = b[2];
-	unsigned long ofs;
+	unsigned long b0, b1, ofs, tmp;
 
-	ofs = (b1 ? 64 : 128);
-	b1 = (b1 ? b1 : b2);
-	ofs = (b0 ? 0 : ofs);
-	b0 = (b0 ? b0 : b1);
+	b0 = b[0];
+	b1 = b[1];
+	ofs = (b0 ? 0 : 64);
+	tmp = (b0 ? b0 : b1);
 
-	return __ffs(b0) + ofs;
+	return __ffs(tmp) + ofs;
 }
 
 #include <asm-generic/bitops/ext2-non-atomic.h>
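The two ternaries in the new body can compile to Alpha's conditional moves (cmoveq/cmovne), so the saving over the 140-bit version comes from dropping the b[2] load and the second pair of selects. As a rough illustration only, here is a standalone sketch of the same two-word search; it is not the kernel code: the function and test names are invented, GCC's __builtin_ctzl stands in for the kernel's __ffs, and, like the original, it assumes at least one bit in the map is set.

/* Standalone sketch of the two-word first-set-bit search.
 * Assumes a 64-bit unsigned long, as on Alpha.  __builtin_ctzl
 * (a GCC/Clang builtin) replaces the kernel's __ffs; behaviour is
 * undefined if both words are zero, just as with __ffs(0). */
#include <stdio.h>

static unsigned long
first_bit_two_words(const unsigned long b[2])
{
	unsigned long b0 = b[0], b1 = b[1];
	unsigned long ofs = (b0 ? 0 : 64);	/* which 64-bit word holds the bit */
	unsigned long tmp = (b0 ? b0 : b1);	/* the word to scan */

	return (unsigned long)__builtin_ctzl(tmp) + ofs;
}

int main(void)
{
	unsigned long map[2] = { 0, 1UL << 35 };

	printf("%lu\n", first_bit_two_words(map));	/* prints 99 (64 + 35) */
	return 0;
}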
