
Commit

Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6

Conflicts:
	arch/arm/mach-pxa/corgi.c
	arch/arm/mach-pxa/poodle.c
	arch/arm/mach-pxa/spitz.c
David Woodhouse authored and committed on Jan 5, 2009
2 parents 160bbab + fe0bdec commit 353816f
Showing 5,564 changed files with 314,022 additions and 148,659 deletions.
(The diff view is truncated: only the first 3,000 changed files are loaded.)
8 changes: 4 additions & 4 deletions CREDITS
@@ -369,10 +369,10 @@ P: 1024/8462A731 4C 55 86 34 44 59 A7 99 2B 97 88 4A 88 9A 0D 97
D: sun4 port, Sparc hacker

N: Hugh Blemings
-E: hugh@misc.nu
-W: http://misc.nu/hugh/
-D: Author and maintainer of the Keyspan USB to Serial drivers
-S: Po Box 234
+E: hugh@blemings.org
+W: http://blemings.org/hugh
+D: Original author of the Keyspan USB to serial drivers, random PowerPC hacker
+S: PO Box 234
S: Belconnen ACT 2616
S: Australia

14 changes: 8 additions & 6 deletions Documentation/ABI/testing/sysfs-class-uwb_rc
@@ -32,14 +32,16 @@ Contact: [email protected]
Description:
Write:

-	<channel> [<bpst offset>]
+	<channel>

-	to start beaconing on a specific channel, or stop
-	beaconing if <channel> is -1. Valid channels depends
-	on the radio controller's supported band groups.
+	to force a specific channel to be used when beaconing,
+	or, if <channel> is -1, to prohibit beaconing. If
+	<channel> is 0, then the default channel selection
+	algorithm will be used. Valid channels depend on the
+	radio controller's supported band groups.

-	<bpst offset> may be used to try and join a specific
-	beacon group if more than one was found during a scan.
+	Reading returns the currently active channel, or -1 if
+	the radio controller is not beaconing.

What: /sys/class/uwb_rc/uwbN/scan
Date: July 2008
2 changes: 1 addition & 1 deletion Documentation/DocBook/Makefile
@@ -6,7 +6,7 @@
# To add a new book the only step required is to add the book to the
# list of DOCBOOKS.

-DOCBOOKS := wanbook.xml z8530book.xml mcabook.xml \
+DOCBOOKS := z8530book.xml mcabook.xml \
kernel-hacking.xml kernel-locking.xml deviceiobook.xml \
procfs-guide.xml writing_usb_driver.xml networking.xml \
kernel-api.xml filesystems.xml lsm.xml usb.xml kgdb.xml \
3 changes: 0 additions & 3 deletions Documentation/DocBook/networking.tmpl
@@ -98,9 +98,6 @@
X!Enet/core/wireless.c
</sect1>
-->
-	<sect1><title>Synchronous PPP</title>
-!Edrivers/net/wan/syncppp.c
-	</sect1>
</chapter>

</book>
99 changes: 0 additions & 99 deletions Documentation/DocBook/wanbook.tmpl

This file was deleted.

2 changes: 2 additions & 0 deletions Documentation/RCU/00-INDEX
@@ -16,6 +16,8 @@ RTFP.txt
- List of RCU papers (bibliography) going back to 1980.
torture.txt
- RCU Torture Test Operation (CONFIG_RCU_TORTURE_TEST)
+trace.txt
+	- CONFIG_RCU_TRACE debugfs files and formats
UP.txt
- RCU on Uniprocessor Systems
whatisRCU.txt
167 changes: 167 additions & 0 deletions Documentation/RCU/rculist_nulls.txt
@@ -0,0 +1,167 @@
Using hlist_nulls to protect read-mostly linked lists and
objects using SLAB_DESTROY_BY_RCU allocations.

Please read the basics in Documentation/RCU/listRCU.txt

Using special markers (called 'nulls') is a convenient way
to solve the following problem:

A typical RCU linked list managing objects which are
allocated with a SLAB_DESTROY_BY_RCU kmem_cache can
use the following algorithms:
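
For reference, a minimal sketch of the kind of object and cache these
snippets assume ('struct obj', its fields and the cache name are this
document's placeholders, not kernel API):

struct obj {
	struct hlist_node	obj_node;
	atomic_t		refcnt;
	unsigned int		key;
};

static struct kmem_cache *cachep;

cachep = kmem_cache_create("obj_cache", sizeof(struct obj), 0,
			   SLAB_DESTROY_BY_RCU, NULL);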

1) Lookup algorithm
-------------------
rcu_read_lock()
begin:
obj = lockless_lookup(key);
if (obj) {
	if (!try_get_ref(obj)) // might fail for free objects
		goto begin;
	/*
	 * Because a writer could delete the object, and another writer
	 * could reuse it before the RCU grace period ends, we must
	 * check the key after getting the reference on the object.
	 */
	if (obj->key != key) { // not the object we expected
		put_ref(obj);
		goto begin;
	}
}
rcu_read_unlock();
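
try_get_ref() and put_ref() are placeholders of this document, not
kernel API. A plausible sketch of the acquire side, built on
atomic_inc_not_zero(), which is what lets it fail for objects whose
refcount has already dropped to zero:

static inline int try_get_ref(struct obj *obj)
{
	/* fails (returns 0) once obj->refcnt has reached zero */
	return atomic_inc_not_zero(&obj->refcnt);
}

put_ref() simply drops the reference; dropping the last one leads to
the removal path shown in 3) below.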

Beware that lockless_lookup(key) cannot use the traditional
hlist_for_each_entry_rcu(), but needs a version with an additional
memory barrier (smp_rmb()):

lockless_lookup(key)
{
	struct hlist_node *node, *next;
	for (pos = rcu_dereference((head)->first);
	     pos && ({ next = pos->next; smp_rmb(); prefetch(next); 1; }) &&
	     ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });
	     pos = rcu_dereference(next))
		if (obj->key == key)
			return obj;
	return NULL;
}

And note that the traditional hlist_for_each_entry_rcu() misses this
smp_rmb():

struct hlist_node *node;
for (pos = rcu_dereference((head)->first);
     pos && ({ prefetch(pos->next); 1; }) &&
     ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });
     pos = rcu_dereference(pos->next))
	if (obj->key == key)
		return obj;
return NULL;

Quoting Corey Minyard:

"If the object is moved from one list to another list in-between the
time the hash is calculated and the next field is accessed, and the
object has moved to the end of a new list, the traversal will not
complete properly on the list it should have, since the object will
be on the end of the new list and there's not a way to tell it's on a
new list and restart the list traversal. I think that this can be
solved by pre-fetching the "next" field (with proper barriers) before
checking the key."

2) Insert algorithm
-------------------

We need to make sure a reader cannot read the new 'obj->obj_next' value
together with the previous value of 'obj->key'. Otherwise, an item could
be deleted from a chain and inserted into another chain. If the new
chain was empty before the move, its 'next' pointer is NULL, and the
lockless reader cannot detect that it missed the following items of the
original chain.

/*
 * Please note that new inserts are done at the head of the list,
 * not in the middle or at the end.
 */
obj = kmem_cache_alloc(...);
lock_chain(); // typically a spin_lock()
obj->key = key;
atomic_inc(&obj->refcnt);
/*
 * we need to make sure obj->key is updated before obj->next
 */
smp_wmb();
hlist_add_head_rcu(&obj->obj_node, list);
unlock_chain(); // typically a spin_unlock()


3) Remove algorithm
-------------------
Nothing special here; we can use a standard RCU hlist deletion.
But because of SLAB_DESTROY_BY_RCU, beware that a deleted object can be
reused very quickly (even before the end of the RCU grace period).

if (put_last_reference_on(obj)) {
	lock_chain(); // typically a spin_lock()
	hlist_del_init_rcu(&obj->obj_node);
	unlock_chain(); // typically a spin_unlock()
	kmem_cache_free(cachep, obj);
}
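
put_last_reference_on() is another placeholder of this document, not a
kernel API; a plausible sketch:

static inline int put_last_reference_on(struct obj *obj)
{
	/* returns non-zero when we just dropped the last reference */
	return atomic_dec_and_test(&obj->refcnt);
}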



--------------------------------------------------------------------------
With hlist_nulls we can avoid the extra smp_rmb() in lockless_lookup()
and the extra smp_wmb() in the insert function.

For example, if we choose to store the slot number as the 'nulls'
end-of-list marker for each slot of the hash table, we can detect
a race (some writer performed a delete and/or a move of an object
to another chain) by checking the final 'nulls' value when
the lookup met the end of a chain. If the final 'nulls' value
is not the slot number, then we must restart the lookup at
the beginning. If the object was moved to the same chain,
then the reader doesn't care: it might eventually
scan the list again without harm.
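
A sketch of the corresponding table setup ('table', TABLE_SIZE and the
loop are illustrative; struct hlist_nulls_head and
INIT_HLIST_NULLS_HEAD() come from linux/list_nulls.h):

struct hlist_nulls_head table[TABLE_SIZE];

for (slot = 0; slot < TABLE_SIZE; slot++)
	INIT_HLIST_NULLS_HEAD(&table[slot], slot);

Objects are then chained through a 'struct hlist_nulls_node obj_node;'
member instead of a plain 'struct hlist_node'.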


1) Lookup algorithm
-------------------

head = &table[slot];
rcu_read_lock();
begin:
hlist_nulls_for_each_entry_rcu(obj, node, head, member) {
	if (obj->key == key) {
		if (!try_get_ref(obj)) // might fail for free objects
			goto begin;
		if (obj->key != key) { // not the object we expected
			put_ref(obj);
			goto begin;
		}
		goto out;
	}
}
/*
 * if the nulls value we got at the end of this lookup is
 * not the expected one, we must restart the lookup.
 * We probably met an item that was moved to another chain.
 */
if (get_nulls_value(node) != slot)
	goto begin;
obj = NULL;

out:
rcu_read_unlock();

2) Insert function
------------------

/*
 * Please note that new inserts are done at the head of the list,
 * not in the middle or at the end.
 */
obj = kmem_cache_alloc(cachep);
lock_chain(); // typically a spin_lock()
obj->key = key;
atomic_set(&obj->refcnt, 1);
/*
 * insert obj in RCU way (readers might be traversing chain);
 * no additional smp_wmb() is needed here
 */
hlist_nulls_add_head_rcu(&obj->obj_node, list);
unlock_chain(); // typically a spin_unlock()
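
For completeness, a removal sketch for the nulls variant, mirroring 3)
above (hlist_nulls_del_init_rcu() comes from linux/rculist_nulls.h):

if (put_last_reference_on(obj)) {
	lock_chain(); // typically a spin_lock()
	hlist_nulls_del_init_rcu(&obj->obj_node);
	unlock_chain(); // typically a spin_unlock()
	kmem_cache_free(cachep, obj);
}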