Merge branch 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull RCU updates from Ingo Molnar:
 "The RCU changes in this cycle were:
   - Expedited grace-period updates
   - kfree_rcu() updates
   - RCU list updates
   - Preemptible RCU updates
   - Torture-test updates
   - Miscellaneous fixes
   - Documentation updates"

* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (69 commits)
  rcu: Remove unused stop-machine #include
  powerpc: Remove comment about read_barrier_depends()
  .mailmap: Add entries for old [email protected] addresses
  srcu: Apply *_ONCE() to ->srcu_last_gp_end
  rcu: Switch force_qs_rnp() to for_each_leaf_node_cpu_mask()
  rcu: Move rcu_{expedited,normal} definitions into rcupdate.h
  rcu: Move gp_state_names[] and gp_state_getname() to tree_stall.h
  rcu: Remove the declaration of call_rcu() in tree.h
  rcu: Fix tracepoint tracking RCU CPU kthread utilization
  rcu: Fix harmless omission of "CONFIG_" from #if condition
  rcu: Avoid tick_dep_set_cpu() misordering
  rcu: Provide wrappers for uses of ->rcu_read_lock_nesting
  rcu: Use READ_ONCE() for ->expmask in rcu_read_unlock_special()
  rcu: Clear ->rcu_read_unlock_special only once
  rcu: Clear .exp_hint only when deferred quiescent state has been reported
  rcu: Rename some instance of CONFIG_PREEMPTION to CONFIG_PREEMPT_RCU
  rcu: Remove kfree_call_rcu_nobatch()
  rcu: Remove kfree_rcu() special casing and lazy-callback handling
  rcu: Add support for debug_objects debugging for kfree_rcu()
  rcu: Add multiple in-flight batches of kfree_rcu() work
  ...
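Several of the kfree_rcu() commits above rework its batching and debug-objects support. For orientation, a minimal sketch of the call-side pattern that machinery serves (the struct and function names here are hypothetical, not code from this merge):

    /* Hypothetical caller of kfree_rcu(); not code from this merge. */
    #include <linux/slab.h>
    #include <linux/rcupdate.h>

    struct foo {
            int data;
            struct rcu_head rcu;    /* lets kfree_rcu() queue the deferred free */
    };

    static void release_foo(struct foo *p)
    {
            /* Unpublish p from its RCU-protected structure first, then
             * defer the kfree() until pre-existing readers are done. */
            kfree_rcu(p, rcu);
    }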
torvalds committed Jan 28, 2020
2 parents 8b56177 + f8a4bb6 commit d99391e
Showing 46 changed files with 1,476 additions and 851 deletions.
4 changes: 4 additions & 0 deletions .mailmap
@@ -210,6 +210,10 @@ Paolo 'Blaisorblade' Giarrusso <[email protected]>
Patrick Mochel <[email protected]>
Paul Burton <[email protected]> <[email protected]>
Paul Burton <[email protected]> <[email protected]>
Paul E. McKenney <[email protected]> <[email protected]>
Paul E. McKenney <[email protected]> <[email protected]>
Paul E. McKenney <[email protected]> <[email protected]>
Paul E. McKenney <[email protected]> <[email protected]>
Peter A Jonsson <[email protected]>
Peter Oruba <[email protected]>
Peter Oruba <[email protected]>
53 changes: 28 additions & 25 deletions Documentation/RCU/NMI-RCU.txt → Documentation/RCU/NMI-RCU.rst
@@ -1,4 +1,7 @@
.. _NMI_rcu_doc:

Using RCU to Protect Dynamic NMI Handlers
=========================================


Although RCU is usually used to protect read-mostly data structures,
@@ -9,7 +12,7 @@ work in "arch/x86/oprofile/nmi_timer_int.c" and in
"arch/x86/kernel/traps.c".

The relevant pieces of code are listed below, each followed by a
brief explanation.
brief explanation::

static int dummy_nmi_callback(struct pt_regs *regs, int cpu)
{
@@ -18,12 +21,12 @@ brief explanation.

The dummy_nmi_callback() function is a "dummy" NMI handler that does
nothing, but returns zero, thus saying that it did nothing, allowing
the NMI handler to take the default machine-specific action.
the NMI handler to take the default machine-specific action::

static nmi_callback_t nmi_callback = dummy_nmi_callback;

This nmi_callback variable is a global function pointer to the current
NMI handler.
NMI handler::

void do_nmi(struct pt_regs * regs, long error_code)
{
@@ -53,11 +56,12 @@ anyway. However, in practice it is a good documentation aid, particularly
for anyone attempting to do something similar on Alpha or on systems
with aggressive optimizing compilers.
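The full do_nmi() listing is collapsed in this diff view; roughly, the protected dispatch it describes looks like the following sketch (details such as the per-CPU NMI counter are omitted):

    void do_nmi(struct pt_regs *regs, long error_code)
    {
            nmi_enter();

            /* rcu_dereference_sched() orders the pointer load before any
             * later loads of the handler's data, which matters on Alpha
             * and with aggressively optimizing compilers. */
            if (!rcu_dereference_sched(nmi_callback)(regs, smp_processor_id()))
                    default_do_nmi(regs);

            nmi_exit();
    }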

Quick Quiz: Why might the rcu_dereference_sched() be necessary on Alpha,
given that the code referenced by the pointer is read-only?
Quick Quiz:
Why might the rcu_dereference_sched() be necessary on Alpha, given that the code referenced by the pointer is read-only?

:ref:`Answer to Quick Quiz <answer_quick_quiz_NMI>`

Back to the discussion of NMI and RCU...
Back to the discussion of NMI and RCU::

void set_nmi_callback(nmi_callback_t callback)
{
@@ -68,7 +72,7 @@ The set_nmi_callback() function registers an NMI handler. Note that any
data that is to be used by the callback must be initialized up -before-
the call to set_nmi_callback(). On architectures that do not order
writes, the rcu_assign_pointer() ensures that the NMI handler sees the
initialized values.
initialized values::

void unset_nmi_callback(void)
{
@@ -82,7 +86,7 @@ up any data structures used by the old NMI handler until execution
of it completes on all other CPUs.

One way to accomplish this is via synchronize_rcu(), perhaps as
follows:
follows::

unset_nmi_callback();
synchronize_rcu();
@@ -98,24 +102,23 @@ to free up the handler's data as soon as synchronize_rcu() returns.
Important note: for this to work, the architecture in question must
invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively.
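The teardown listing above is truncated by the diff view; the complete sequence it illustrates is roughly the following, where my_nmi_data is a hypothetical name for the old handler's private data:

    unset_nmi_callback();
    synchronize_rcu();      /* wait for every in-flight NMI handler to finish */
    kfree(my_nmi_data);     /* no CPU can still be referencing the old data */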

.. _answer_quick_quiz_NMI:

Answer to Quick Quiz

Why might the rcu_dereference_sched() be necessary on Alpha, given
that the code referenced by the pointer is read-only?
Answer to Quick Quiz:
Why might the rcu_dereference_sched() be necessary on Alpha, given that the code referenced by the pointer is read-only?

Answer: The caller to set_nmi_callback() might well have
initialized some data that is to be used by the new NMI
handler. In this case, the rcu_dereference_sched() would
be needed, because otherwise a CPU that received an NMI
just after the new handler was set might see the pointer
to the new NMI handler, but the old pre-initialized
version of the handler's data.
The caller to set_nmi_callback() might well have
initialized some data that is to be used by the new NMI
handler. In this case, the rcu_dereference_sched() would
be needed, because otherwise a CPU that received an NMI
just after the new handler was set might see the pointer
to the new NMI handler, but the old pre-initialized
version of the handler's data.

This same sad story can happen on other CPUs when using
a compiler with aggressive pointer-value speculation
optimizations.
This same sad story can happen on other CPUs when using
a compiler with aggressive pointer-value speculation
optimizations.

More important, the rcu_dereference_sched() makes it
clear to someone reading the code that the pointer is
being protected by RCU-sched.
More important, the rcu_dereference_sched() makes it
clear to someone reading the code that the pointer is
being protected by RCU-sched.
34 changes: 23 additions & 11 deletions Documentation/RCU/arrayRCU.txt → Documentation/RCU/arrayRCU.rst
@@ -1,19 +1,21 @@
Using RCU to Protect Read-Mostly Arrays
.. _array_rcu_doc:

Using RCU to Protect Read-Mostly Arrays
=======================================

Although RCU is more commonly used to protect linked lists, it can
also be used to protect arrays. Three situations are as follows:

1. Hash Tables
1. :ref:`Hash Tables <hash_tables>`

2. Static Arrays
2. :ref:`Static Arrays <static_arrays>`

3. Resizeable Arrays
3. :ref:`Resizable Arrays <resizable_arrays>`

Each of these three situations involves an RCU-protected pointer to an
array that is separately indexed. It might be tempting to consider use
of RCU to instead protect the index into an array, however, this use
case is -not- supported. The problem with RCU-protected indexes into
case is **not** supported. The problem with RCU-protected indexes into
arrays is that compilers can play way too many optimization games with
integers, which means that the rules governing handling of these indexes
are far more trouble than they are worth. If RCU-protected indexes into
@@ -24,30 +26,38 @@ to be safely used.
That aside, each of the three RCU-protected pointer situations are
described in the following sections.

.. _hash_tables:

Situation 1: Hash Tables
------------------------

Hash tables are often implemented as an array, where each array entry
has a linked-list hash chain. Each hash chain can be protected by RCU
as described in the listRCU.txt document. This approach also applies
to other array-of-list situations, such as radix trees.
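As a minimal sketch of the per-bucket lookup this describes, using the kernel's RCU hlist helpers (the structure, field, and bucket-count names are hypothetical):

    #include <linux/rculist.h>

    #define NR_BUCKETS 16

    struct entry {
            int key;
            int value;
            struct hlist_node node;
    };

    /* One bucket per array slot; each hash chain is protected by RCU. */
    static struct hlist_head table[NR_BUCKETS];

    static int lookup_value(int key, int *value)
    {
            struct entry *e;
            int found = 0;

            rcu_read_lock();
            hlist_for_each_entry_rcu(e, &table[key % NR_BUCKETS], node) {
                    if (e->key == key) {
                            /* Use e only inside the read-side critical section. */
                            *value = e->value;
                            found = 1;
                            break;
                    }
            }
            rcu_read_unlock();
            return found;
    }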

.. _static_arrays:

Situation 2: Static Arrays
--------------------------

Static arrays, where the data (rather than a pointer to the data) is
located in each array element, and where the array is never resized,
have not been used with RCU. Rik van Riel recommends using seqlock in
this situation, which would also have minimal read-side overhead as long
as updates are rare.
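A minimal sketch of the seqlock read side suggested here (array size and names are hypothetical); the retry loop is also why frequent updates are a problem, as the Quick Quiz below discusses:

    #include <linux/seqlock.h>

    static DEFINE_SEQLOCK(array_lock);
    static int static_array[16];

    static int read_element(int idx)
    {
            unsigned int seq;
            int val;

            /* Retry if a writer changed the array during the read; cheap
             * when updates are rare, but frequent writers can starve readers. */
            do {
                    seq = read_seqbegin(&array_lock);
                    val = static_array[idx];
            } while (read_seqretry(&array_lock, seq));

            return val;
    }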

Quick Quiz: Why is it so important that updates be rare when
using seqlock?
Quick Quiz:
Why is it so important that updates be rare when using seqlock?

:ref:`Answer to Quick Quiz <answer_quick_quiz_seqlock>`

.. _resizable_arrays:

Situation 3: Resizeable Arrays
Situation 3: Resizable Arrays
------------------------------

Use of RCU for resizeable arrays is demonstrated by the grow_ary()
Use of RCU for resizable arrays is demonstrated by the grow_ary()
function formerly used by the System V IPC code. The array is used
to map from semaphore, message-queue, and shared-memory IDs to the data
structure that represents the corresponding IPC construct. The grow_ary()
@@ -60,7 +70,7 @@ the remainder of the new, updates the ids->entries pointer to point to
the new array, and invokes ipc_rcu_putref() to free up the old array.
Note that rcu_assign_pointer() is used to update the ids->entries pointer,
which includes any memory barriers required on whatever architecture
you are running on.
you are running on::

static int grow_ary(struct ipc_ids* ids, int newsize)
{
@@ -112,7 +122,7 @@ a simple check suffices. The pointer to the structure corresponding
to the desired IPC object is placed in "out", with NULL indicating
a non-existent entry. After acquiring "out->lock", the "out->deleted"
flag indicates whether the IPC object is in the process of being
deleted, and, if not, the pointer is returned.
deleted, and, if not, the pointer is returned::

struct kern_ipc_perm* ipc_lock(struct ipc_ids* ids, int id)
{
@@ -144,8 +154,10 @@ deleted, and, if not, the pointer is returned.
return out;
}

.. _answer_quick_quiz_seqlock:

Answer to Quick Quiz:
Why is it so important that updates be rare when using seqlock?

The reason that it is important that updates be rare when
using seqlock is that frequent updates can livelock readers.
5 changes: 5 additions & 0 deletions Documentation/RCU/index.rst
@@ -7,8 +7,13 @@ RCU concepts
.. toctree::
:maxdepth: 3

arrayRCU
rcubarrier
rcu_dereference
whatisRCU
rcu
listRCU
NMI-RCU
UP

Design/Memory-Ordering/Tree-RCU-Memory-Ordering
2 changes: 1 addition & 1 deletion Documentation/RCU/lockdep-splat.txt
@@ -99,7 +99,7 @@ With this change, the rcu_dereference() is always within an RCU
read-side critical section, which again would have suppressed the
above lockdep-RCU splat.

But in this particular case, we don't actually deference the pointer
But in this particular case, we don't actually dereference the pointer
returned from rcu_dereference(). Instead, that pointer is just compared
to the cic pointer, which means that the rcu_dereference() can be replaced
by rcu_access_pointer() as follows:
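The snippet the file shows here is collapsed in this view; a generic sketch of the substitution being described, with purely illustrative names:

    /* The pointer value is only compared, never dereferenced, so
     * rcu_access_pointer() suffices and needs no RCU read-side
     * critical section.  "ioc->ioc_data" and "cic" are illustrative. */
    if (rcu_access_pointer(ioc->ioc_data) == cic)
            handle_match();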