Merge commit '3cf2f34' into sched/core, to fix build error
Fix this dependency on the locking tree's smp_mb*() API changes:

  kernel/sched/idle.c:247:3: error: implicit declaration of function ‘smp_mb__after_atomic’ [-Werror=implicit-function-declaration]

Signed-off-by: Ingo Molnar <[email protected]>
Ingo Molnar committed Jun 12, 2014
2 parents f602d06 + 3cf2f34 commit 535560d
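For context, the failing call follows the pattern sketched below. This is an editorial simplification, not the actual kernel/sched/idle.c code: an idle loop clears its polling flag with an atomic bitop that returns no value, then needs smp_mb__after_atomic() to order the clear before the rescheduling check; 'ti' and the flag handling here are assumptions.

	/* Editorial sketch, not the real idle-loop code. */
	static void idle_exit_polling(struct thread_info *ti)
	{
		clear_bit(TIF_POLLING_NRFLAG, &ti->flags);
		smp_mb__after_atomic();	/* clear visible before the test */
		if (need_resched())
			schedule_preempt_disabled();
	}

Before the locking tree's rename was merged, smp_mb__after_atomic() did not exist in this tree, hence the implicit-declaration error.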
Showing 186 changed files with 551 additions and 652 deletions.
31 changes: 12 additions & 19 deletions Documentation/atomic_ops.txt
@@ -285,15 +285,13 @@ If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this:

-	void smp_mb__before_atomic_dec(void);
-	void smp_mb__after_atomic_dec(void);
-	void smp_mb__before_atomic_inc(void);
-	void smp_mb__after_atomic_inc(void);
+	void smp_mb__before_atomic(void);
+	void smp_mb__after_atomic(void);

-For example, smp_mb__before_atomic_dec() can be used like so:
+For example, smp_mb__before_atomic() can be used like so:

obj->dead = 1;
-	smp_mb__before_atomic_dec();
+	smp_mb__before_atomic();
atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
@@ -302,15 +300,10 @@ operation. In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

-Without the explicit smp_mb__before_atomic_dec() call, the
+Without the explicit smp_mb__before_atomic() call, the
implementation could legally allow the atomic counter update visible
to other cpus before the "obj->dead = 1;" assignment.

-The other three interfaces listed are used to provide explicit
-ordering with respect to memory operations after an atomic_dec() call
-(smp_mb__after_atomic_dec()) and around atomic_inc() calls
-(smp_mb__{before,after}_atomic_inc()).

A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results. Here is
an example, which follows a pattern occurring frequently in the Linux
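To make the write-side ordering above concrete, here is a hypothetical reader-side counterpart; an editorial sketch, not text from the patch, with an invented struct layout and function name:

	struct obj {
		int dead;
		atomic_t ref_count;
	};

	static int obj_seen_dead(struct obj *obj)
	{
		if (atomic_read(&obj->ref_count) == 0) {
			smp_rmb();	/* pairs with smp_mb__before_atomic() */
			return obj->dead;	/* sees the store made before the dec */
		}
		return 0;
	}

If the reader observes the decremented counter, the smp_rmb() guarantees it also observes the earlier obj->dead store, which is exactly the property the paragraph above describes.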
@@ -487,26 +480,26 @@ Finally there is the basic operation:
Which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".

-If explicit memory barriers are required around clear_bit() (which
-does not return a value, and thus does not need to provide memory
-barrier semantics), two interfaces are provided:
+If explicit memory barriers are required around {set,clear}_bit() (which do
+not return a value, and thus do not need to provide memory barrier
+semantics), two interfaces are provided:

-	void smp_mb__before_clear_bit(void);
-	void smp_mb__after_clear_bit(void);
+	void smp_mb__before_atomic(void);
+	void smp_mb__after_atomic(void);

They are used as follows, and are akin to their atomic_t operation
brothers:

/* All memory operations before this call will
* be globally visible before the clear_bit().
*/
-	smp_mb__before_clear_bit();
+	smp_mb__before_atomic();
clear_bit( ... );

/* The clear_bit() will be visible before all
* subsequent memory operations.
*/
-	smp_mb__after_clear_bit();
+	smp_mb__after_atomic();

There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks). These operate in the same way as their non-_lock/unlock
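The _lock/_unlock bitops referred to at the end of the hunk carry their own acquire/release ordering, so they need no smp_mb__{before,after}_atomic() calls around them. A minimal sketch, with hypothetical bit and word names:

	#define MY_LOCK_BIT	0	/* hypothetical lock bit */

	static void my_bit_lock(unsigned long *word)
	{
		/* test_and_set_bit_lock() acts as an acquire on success */
		while (test_and_set_bit_lock(MY_LOCK_BIT, word))
			cpu_relax();
	}

	static void my_bit_unlock(unsigned long *word)
	{
		/* clear_bit_unlock() acts as a release */
		clear_bit_unlock(MY_LOCK_BIT, word);
	}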
42 changes: 11 additions & 31 deletions Documentation/memory-barriers.txt
@@ -1583,20 +1583,21 @@ There are some more advanced barrier functions:
insert anything more than a compiler barrier in a UP compilation.


- (*) smp_mb__before_atomic_dec();
- (*) smp_mb__after_atomic_dec();
- (*) smp_mb__before_atomic_inc();
- (*) smp_mb__after_atomic_inc();
+ (*) smp_mb__before_atomic();
+ (*) smp_mb__after_atomic();

-     These are for use with atomic add, subtract, increment and decrement
-     functions that don't return a value, especially when used for reference
-     counting. These functions do not imply memory barriers.
+     These are for use with atomic (such as add, subtract, increment and
+     decrement) functions that don't return a value, especially when used for
+     reference counting. These functions do not imply memory barriers.
+
+     These are also used for atomic bitop functions that do not return a
+     value (such as set_bit and clear_bit).

As an example, consider a piece of code that marks an object as being dead
and then decrements the object's reference count:

obj->dead = 1;
-	smp_mb__before_atomic_dec();
+	smp_mb__before_atomic();
atomic_dec(&obj->ref_count);

This makes sure that the death mark on the object is perceived to be set
@@ -1606,27 +1607,6 @@ There are some more advanced barrier functions:
operations" subsection for information on where to use these.


- (*) smp_mb__before_clear_bit(void);
- (*) smp_mb__after_clear_bit(void);
-
-     These are for use similar to the atomic inc/dec barriers. These are
-     typically used for bitwise unlocking operations, so care must be taken as
-     there are no implicit memory barriers here either.
-
-     Consider implementing an unlock operation of some nature by clearing a
-     locking bit. The clear_bit() would then need to be barriered like this:
-
-	smp_mb__before_clear_bit();
-	clear_bit( ... );
-
-     This prevents memory operations before the clear leaking to after it. See
-     the subsection on "Locking Functions" with reference to RELEASE operation
-     implications.
-
-     See Documentation/atomic_ops.txt for more information. See the "Atomic
-     operations" subsection for information on where to use these.
-

MMIO WRITE BARRIER
------------------

@@ -2283,11 +2263,11 @@ operations:
change_bit();

With these the appropriate explicit memory barrier should be used if necessary
-(smp_mb__before_clear_bit() for instance).
+(smp_mb__before_atomic() for instance).


The following also do _not_ imply memory barriers, and so may require explicit
-memory barriers under some circumstances (smp_mb__before_atomic_dec() for
+memory barriers under some circumstances (smp_mb__before_atomic() for
instance):

atomic_add();
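None of the operations in the list above implies a barrier on its own, so an explicit call is needed whenever their effect must be ordered against surrounding accesses. A hedged sketch with assumed names:

	/*
	 * Hypothetical waiter-count pattern: the increment must be
	 * visible before the flag is read; atomic_inc() alone gives
	 * no such ordering.
	 */
	static int announce_waiter(atomic_t *nr_waiters, int *wake_flag)
	{
		atomic_inc(nr_waiters);
		smp_mb__after_atomic();	/* inc visible before the load */
		return *wake_flag;
	}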
5 changes: 0 additions & 5 deletions arch/alpha/include/asm/atomic.h
@@ -292,9 +292,4 @@ static inline long atomic64_dec_if_positive(atomic64_t *v)
#define atomic_dec(v) atomic_sub(1,(v))
#define atomic64_dec(v) atomic64_sub(1,(v))

-#define smp_mb__before_atomic_dec()	smp_mb()
-#define smp_mb__after_atomic_dec()	smp_mb()
-#define smp_mb__before_atomic_inc()	smp_mb()
-#define smp_mb__after_atomic_inc()	smp_mb()

#endif /* _ALPHA_ATOMIC_H */
3 changes: 0 additions & 3 deletions arch/alpha/include/asm/bitops.h
@@ -53,9 +53,6 @@ __set_bit(unsigned long nr, volatile void * addr)
*m |= 1 << (nr & 31);
}

-#define smp_mb__before_clear_bit()	smp_mb()
-#define smp_mb__after_clear_bit()	smp_mb()

static inline void
clear_bit(unsigned long nr, volatile void * addr)
{
5 changes: 0 additions & 5 deletions arch/arc/include/asm/atomic.h
@@ -190,11 +190,6 @@ static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)

#endif /* !CONFIG_ARC_HAS_LLSC */

-#define smp_mb__before_atomic_dec()	barrier()
-#define smp_mb__after_atomic_dec()	barrier()
-#define smp_mb__before_atomic_inc()	barrier()
-#define smp_mb__after_atomic_inc()	barrier()

/**
* __atomic_add_unless - add unless the number is a given value
* @v: pointer of type atomic_t
5 changes: 1 addition & 4 deletions arch/arc/include/asm/bitops.h
@@ -19,6 +19,7 @@

#include <linux/types.h>
#include <linux/compiler.h>
+#include <asm/barrier.h>

/*
* Hardware assisted read-modify-write using ARC700 LLOCK/SCOND insns.
@@ -496,10 +497,6 @@ static inline __attribute__ ((const)) int __ffs(unsigned long word)
*/
#define ffz(x) __ffs(~(x))

-/* TODO does this affect uni-processor code */
-#define smp_mb__before_clear_bit()	barrier()
-#define smp_mb__after_clear_bit()	barrier()

#include <asm-generic/bitops/hweight.h>
#include <asm-generic/bitops/fls64.h>
#include <asm-generic/bitops/sched.h>
5 changes: 0 additions & 5 deletions arch/arm/include/asm/atomic.h
@@ -241,11 +241,6 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)

#define atomic_add_negative(i,v) (atomic_add_return(i, v) < 0)

-#define smp_mb__before_atomic_dec()	smp_mb()
-#define smp_mb__after_atomic_dec()	smp_mb()
-#define smp_mb__before_atomic_inc()	smp_mb()
-#define smp_mb__after_atomic_inc()	smp_mb()

#ifndef CONFIG_GENERIC_ATOMIC64
typedef struct {
long long counter;
3 changes: 3 additions & 0 deletions arch/arm/include/asm/barrier.h
@@ -79,5 +79,8 @@ do { \

#define set_mb(var, value) do { var = value; smp_mb(); } while (0)

+#define smp_mb__before_atomic()	smp_mb()
+#define smp_mb__after_atomic()	smp_mb()

#endif /* !__ASSEMBLY__ */
#endif /* __ASM_BARRIER_H */
4 changes: 1 addition & 3 deletions arch/arm/include/asm/bitops.h
@@ -25,9 +25,7 @@

#include <linux/compiler.h>
#include <linux/irqflags.h>

-#define smp_mb__before_clear_bit()	smp_mb()
-#define smp_mb__after_clear_bit()	smp_mb()
+#include <asm/barrier.h>

/*
* These functions are the basis of our bit ops.
5 changes: 0 additions & 5 deletions arch/arm64/include/asm/atomic.h
@@ -152,11 +152,6 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)

#define atomic_add_negative(i,v) (atomic_add_return(i, v) < 0)

-#define smp_mb__before_atomic_dec()	smp_mb()
-#define smp_mb__after_atomic_dec()	smp_mb()
-#define smp_mb__before_atomic_inc()	smp_mb()
-#define smp_mb__after_atomic_inc()	smp_mb()

/*
* 64-bit atomic operations.
*/
3 changes: 3 additions & 0 deletions arch/arm64/include/asm/barrier.h
Original file line number Diff line number Diff line change
@@ -98,6 +98,9 @@ do { \
#define set_mb(var, value) do { var = value; smp_mb(); } while (0)
#define nop() asm volatile("nop");

+#define smp_mb__before_atomic()	smp_mb()
+#define smp_mb__after_atomic()	smp_mb()

#endif /* __ASSEMBLY__ */

#endif /* __ASM_BARRIER_H */
9 changes: 0 additions & 9 deletions arch/arm64/include/asm/bitops.h
@@ -17,17 +17,8 @@
#define __ASM_BITOPS_H

#include <linux/compiler.h>

#include <asm/barrier.h>

-/*
- * clear_bit may not imply a memory barrier
- */
-#ifndef smp_mb__before_clear_bit
-#define smp_mb__before_clear_bit()	smp_mb()
-#define smp_mb__after_clear_bit()	smp_mb()
-#endif

#ifndef _LINUX_BITOPS_H
#error only <linux/bitops.h> can be included directly
#endif
5 changes: 0 additions & 5 deletions arch/avr32/include/asm/atomic.h
@@ -183,9 +183,4 @@ static inline int atomic_sub_if_positive(int i, atomic_t *v)

#define atomic_dec_if_positive(v) atomic_sub_if_positive(1, v)

-#define smp_mb__before_atomic_dec()	barrier()
-#define smp_mb__after_atomic_dec()	barrier()
-#define smp_mb__before_atomic_inc()	barrier()
-#define smp_mb__after_atomic_inc()	barrier()

#endif /* __ASM_AVR32_ATOMIC_H */
9 changes: 2 additions & 7 deletions arch/avr32/include/asm/bitops.h
@@ -13,12 +13,7 @@
#endif

#include <asm/byteorder.h>

-/*
- * clear_bit() doesn't provide any barrier for the compiler
- */
-#define smp_mb__before_clear_bit()	barrier()
-#define smp_mb__after_clear_bit()	barrier()
+#include <asm/barrier.h>

/*
* set_bit - Atomically set a bit in memory
@@ -67,7 +62,7 @@ static inline void set_bit(int nr, volatile void * addr)
*
* clear_bit() is atomic and may not be reordered. However, it does
* not contain a memory barrier, so if it is used for locking purposes,
- * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
+ * you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
* in order to ensure changes are visible on other processors.
*/
static inline void clear_bit(int nr, volatile void * addr)
3 changes: 3 additions & 0 deletions arch/blackfin/include/asm/barrier.h
@@ -27,6 +27,9 @@

#endif /* !CONFIG_SMP */

+#define smp_mb__before_atomic()	barrier()
+#define smp_mb__after_atomic()	barrier()

#include <asm-generic/barrier.h>

#endif /* _BLACKFIN_BARRIER_H */
14 changes: 2 additions & 12 deletions arch/blackfin/include/asm/bitops.h
@@ -27,21 +27,17 @@

#include <asm-generic/bitops/ext2-atomic.h>

+#include <asm/barrier.h>

#ifndef CONFIG_SMP
#include <linux/irqflags.h>

-/*
- * clear_bit may not imply a memory barrier
- */
-#ifndef smp_mb__before_clear_bit
-#define smp_mb__before_clear_bit()	smp_mb()
-#define smp_mb__after_clear_bit()	smp_mb()
-#endif
#include <asm-generic/bitops/atomic.h>
#include <asm-generic/bitops/non-atomic.h>
#else

-#include <asm/barrier.h>
#include <asm/byteorder.h> /* swab32 */
#include <linux/linkage.h>

@@ -101,12 +97,6 @@ static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
return __raw_bit_test_toggle_asm(a, nr & 0x1f);
}

-/*
- * clear_bit() doesn't provide any barrier for the compiler.
- */
-#define smp_mb__before_clear_bit()	barrier()
-#define smp_mb__after_clear_bit()	barrier()

#define test_bit __skip_test_bit
#include <asm-generic/bitops/non-atomic.h>
#undef test_bit
8 changes: 1 addition & 7 deletions arch/c6x/include/asm/bitops.h
@@ -14,14 +14,8 @@
#ifdef __KERNEL__

#include <linux/bitops.h>

#include <asm/byteorder.h>

-/*
- * clear_bit() doesn't provide any barrier for the compiler.
- */
-#define smp_mb__before_clear_bit()	barrier()
-#define smp_mb__after_clear_bit()	barrier()
+#include <asm/barrier.h>

/*
* We are lucky, DSP is perfect for bitops: do it in 3 cycles
8 changes: 2 additions & 6 deletions arch/cris/include/asm/atomic.h
@@ -7,6 +7,8 @@
#include <linux/types.h>
#include <asm/cmpxchg.h>
#include <arch/atomic.h>
+#include <arch/system.h>
+#include <asm/barrier.h>

/*
* Atomic operations that C can't guarantee us. Useful for
@@ -151,10 +153,4 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
return ret;
}

-/* Atomic operations are already serializing */
-#define smp_mb__before_atomic_dec()	barrier()
-#define smp_mb__after_atomic_dec()	barrier()
-#define smp_mb__before_atomic_inc()	barrier()
-#define smp_mb__after_atomic_inc()	barrier()

#endif