x86 rwsem: avoid taking slow path when stealing write lock
modify __down_write[_nested] and __down_write_trylock to grab the write
lock whenever the active count is 0, even if there are queued waiters
(they must be writers pending wakeup, since the active count is 0).

Note that this is an optimization only; architectures without this
optimization will still work fine:

- __down_write() would take the slow path which would take the wait_lock
  and then try stealing the lock (as in the spinlocked rwsem
  implementation)

- __down_write_trylock() would fail, but callers must be ready to deal
  with that - since there are some writers pending wakeup, they could
  have raced with us and obtained the lock before we steal it.

Signed-off-by: Michel Lespinasse <[email protected]>
Reviewed-by: Peter Hurley <[email protected]>
Acked-by: Davidlohr Bueso <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
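For illustration, here is a simplified C sketch of the fast-path idea described above. It is not the kernel's x86 assembly; the struct rwsem_sketch type, the constants, and the function name are invented for this example, and it assumes a count layout where the low 16 bits hold the active count while the upper bits carry the waiter bias.

#include <stdatomic.h>
#include <stdbool.h>

/*
 * Illustrative constants, not arch/x86's RWSEM_* values:
 * low 16 bits of ->count = active count, upper bits = waiter bias.
 */
#define ACTIVE_MASK	0xffffL
#define WRITER_BIAS	0x0001L		/* one active (writer) holder */

struct rwsem_sketch {
	atomic_long count;
};

/*
 * Try to take the write lock whenever the active count is 0, even if
 * waiter bits are set (those waiters must be writers pending wakeup).
 */
static bool down_write_trylock_sketch(struct rwsem_sketch *sem)
{
	long old = atomic_load(&sem->count);

	while ((old & ACTIVE_MASK) == 0) {
		/* active count is 0: attempt to steal the lock */
		if (atomic_compare_exchange_weak(&sem->count, &old,
						 old + WRITER_BIAS))
			return true;
		/* CAS failed: 'old' was refreshed, re-check the active count */
	}
	return false;	/* someone actively holds the lock */
}

The contrast with the pre-patch fast path is that the old trylock only succeeded when the count was exactly the unlocked value, so the mere presence of queued waiters forced a trylock failure or a trip through the slow path; checking only the active bits is what lets a writer steal the lock past sleeping waiters.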