Merge branch 'perf'
Signed-off-by: Avi Kivity <[email protected]>
avikivity committed May 17, 2010
2 parents 3246af0 + a1645ce commit 9beeaa2
Showing 157 changed files with 11,060 additions and 6,494 deletions.
10 changes: 2 additions & 8 deletions Documentation/kprobes.txt
@@ -165,8 +165,8 @@ the user entry_handler invocation is also skipped.

1.4 How Does Jump Optimization Work?

If you configured your kernel with CONFIG_OPTPROBES=y (currently
this option is supported on x86/x86-64, non-preemptive kernel) and
If your kernel is built with CONFIG_OPTPROBES=y (currently this flag
is automatically set 'y' on x86/x86-64, non-preemptive kernel) and
the "debug.kprobes_optimization" kernel parameter is set to 1 (see
sysctl(8)), Kprobes tries to reduce probe-hit overhead by using a jump
instruction instead of a breakpoint instruction at each probepoint.
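
As an illustration that the optimization is transparent to probe users, here is a minimal kprobe module sketch (hypothetical probe target "do_fork"; assumes the kprobes API of this kernel generation). With CONFIG_OPTPROBES=y and debug.kprobes_optimization=1, Kprobes may quietly convert this probe's breakpoint into a jump after registration; nothing in the module needs to change.

#include <linux/module.h>
#include <linux/kprobes.h>

/* Pre-handler: runs just before the probed instruction executes. */
static int pre_handler(struct kprobe *p, struct pt_regs *regs)
{
        pr_info("kprobe hit at %p\n", p->addr);
        return 0;
}

static struct kprobe kp = {
        .symbol_name = "do_fork",   /* hypothetical target symbol */
        .pre_handler = pre_handler,
};

static int __init probe_init(void)
{
        return register_kprobe(&kp);
}

static void __exit probe_exit(void)
{
        unregister_kprobe(&kp);
}

module_init(probe_init);
module_exit(probe_exit);
MODULE_LICENSE("GPL");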
@@ -271,8 +271,6 @@ tweak the kernel's execution path, you need to suppress optimization,
using one of the following techniques:
- Specify an empty function for the kprobe's post_handler or break_handler.
or
- Config CONFIG_OPTPROBES=n.
or
- Execute 'sysctl -w debug.kprobes_optimization=n'
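
A hedged sketch of the first technique listed above, reusing the hypothetical probe from the earlier example: registering a probe with an empty post_handler keeps that particular probe unoptimized.

/* Empty post-handler: its only purpose is to keep this probe from being
 * optimized, per the technique listed above (hypothetical example). */
static void empty_post_handler(struct kprobe *p, struct pt_regs *regs,
                               unsigned long flags)
{
}

static struct kprobe kp_unoptimized = {
        .symbol_name  = "do_fork",          /* hypothetical target symbol */
        .pre_handler  = pre_handler,
        .post_handler = empty_post_handler,
};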

2. Architectures Supported
@@ -307,10 +305,6 @@ it useful to "Compile the kernel with debug info" (CONFIG_DEBUG_INFO),
so you can use "objdump -d -l vmlinux" to see the source-to-object
code mapping.

If you want to reduce probing overhead, set "Kprobes jump optimization
support" (CONFIG_OPTPROBES) to "y". You can find this option under the
"Kprobes" line.

4. API Reference

The Kprobes API includes a "register" function and an "unregister"
4 changes: 3 additions & 1 deletion Documentation/trace/kprobetrace.txt
@@ -40,7 +40,9 @@ Synopsis of kprobe_events
$stack : Fetch stack address.
$retval : Fetch return value.(*)
+|-offs(FETCHARG) : Fetch memory at FETCHARG +|- offs address.(**)
NAME=FETCHARG: Set NAME as the argument name of FETCHARG.
NAME=FETCHARG : Set NAME as the argument name of FETCHARG.
FETCHARG:TYPE : Set TYPE as the type of FETCHARG. Currently, basic types
(u8/u16/u32/u64/s8/s16/s32/s64) are supported.

(*) only for return probe.
(**) this is useful for fetching a field of data structures.
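
As a hedged illustration of the NAME=FETCHARG and FETCHARG:TYPE forms, a small user-space C sketch that writes a probe definition to the kprobe_events file (assumes debugfs is mounted at /sys/kernel/debug; the probed symbol and register choices are illustrative only):

#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/sys/kernel/debug/tracing/kprobe_events", "w");

        if (!f)
                return 1;
        /* Probe do_sys_open: name each fetched register, and fetch the
         * flags argument with an explicit type (NAME=FETCHARG:TYPE). */
        fprintf(f, "p:myprobe do_sys_open dfd=%%ax filename=%%dx flags=%%cx:u32\n");
        fclose(f);
        return 0;
}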
10 changes: 5 additions & 5 deletions MAINTAINERS
@@ -4353,13 +4353,13 @@ M: Paul Mackerras <[email protected]>
M: Ingo Molnar <[email protected]>
M: Arnaldo Carvalho de Melo <[email protected]>
S: Supported
F: kernel/perf_event.c
F: kernel/perf_event*.c
F: include/linux/perf_event.h
F: arch/*/kernel/perf_event.c
F: arch/*/kernel/*/perf_event.c
F: arch/*/kernel/*/*/perf_event.c
F: arch/*/kernel/perf_event*.c
F: arch/*/kernel/*/perf_event*.c
F: arch/*/kernel/*/*/perf_event*.c
F: arch/*/include/asm/perf_event.h
F: arch/*/lib/perf_event.c
F: arch/*/lib/perf_event*.c
F: arch/*/kernel/perf_callchain.c
F: tools/perf/

9 changes: 2 additions & 7 deletions arch/Kconfig
@@ -42,15 +42,10 @@ config KPROBES
If in doubt, say "N".

config OPTPROBES
bool "Kprobes jump optimization support (EXPERIMENTAL)"
default y
depends on KPROBES
def_bool y
depends on KPROBES && HAVE_OPTPROBES
depends on !PREEMPT
depends on HAVE_OPTPROBES
select KALLSYMS_ALL
help
This option will allow kprobes to optimize breakpoint to
a jump for reducing its overhead.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
bool
3 changes: 3 additions & 0 deletions arch/x86/Kconfig
@@ -58,6 +58,9 @@ config X86
select HAVE_ARCH_KMEMCHECK
select HAVE_USER_RETURN_NOTIFIER

config INSTRUCTION_DECODER
def_bool (KPROBES || PERF_EVENTS)

config OUTPUT_FORMAT
string
default "elf32-i386" if X86_32
20 changes: 0 additions & 20 deletions arch/x86/Kconfig.cpu
@@ -502,23 +502,3 @@ config CPU_SUP_UMC_32
CPU might render the kernel unbootable.

If unsure, say N.

config X86_DS
def_bool X86_PTRACE_BTS
depends on X86_DEBUGCTLMSR
select HAVE_HW_BRANCH_TRACER

config X86_PTRACE_BTS
bool "Branch Trace Store"
default y
depends on X86_DEBUGCTLMSR
depends on BROKEN
---help---
This adds a ptrace interface to the hardware's branch trace store.

Debuggers may use it to collect an execution trace of the debugged
application in order to answer the question 'how did I get here?'.
Debuggers may trace user mode as well as kernel mode.

Say Y unless there is no application development on this machine
and you want to save a small amount of code size.
9 changes: 0 additions & 9 deletions arch/x86/Kconfig.debug
@@ -174,15 +174,6 @@ config IOMMU_LEAK
Add a simple leak tracer to the IOMMU code. This is useful when you
are debugging a buggy device driver that leaks IOMMU mappings.

config X86_DS_SELFTEST
bool "DS selftest"
default y
depends on DEBUG_KERNEL
depends on X86_DS
---help---
Perform Debug Store selftests at boot time.
If in doubt, say "N".

config HAVE_MMIOTRACE_SUPPORT
def_bool y

13 changes: 11 additions & 2 deletions arch/x86/include/asm/apic.h
@@ -373,6 +373,7 @@ extern atomic_t init_deasserted;
extern int wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip);
#endif

#ifdef CONFIG_X86_LOCAL_APIC
static inline u32 apic_read(u32 reg)
{
return apic->read(reg);
@@ -403,18 +404,26 @@ static inline u32 safe_apic_wait_icr_idle(void)
return apic->safe_wait_icr_idle();
}

#else /* CONFIG_X86_LOCAL_APIC */

static inline u32 apic_read(u32 reg) { return 0; }
static inline void apic_write(u32 reg, u32 val) { }
static inline u64 apic_icr_read(void) { return 0; }
static inline void apic_icr_write(u32 low, u32 high) { }
static inline void apic_wait_icr_idle(void) { }
static inline u32 safe_apic_wait_icr_idle(void) { return 0; }

#endif /* CONFIG_X86_LOCAL_APIC */

static inline void ack_APIC_irq(void)
{
#ifdef CONFIG_X86_LOCAL_APIC
/*
* ack_APIC_irq() actually gets compiled as a single instruction
* ... yummie.
*/

/* Docs say use 0 for future compatibility */
apic_write(APIC_EOI, 0);
#endif
}

static inline unsigned default_get_apic_id(unsigned long x)
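
For context, a hedged illustration of why the !CONFIG_X86_LOCAL_APIC stubs above are useful (hypothetical caller): shared code can use these accessors without its own #ifdefs, since apic_read() returns 0 and apic_write() is a no-op on non-APIC builds.

/* Hypothetical caller that builds with or without CONFIG_X86_LOCAL_APIC. */
static inline unsigned int example_apic_version(void)
{
        return GET_APIC_VERSION(apic_read(APIC_LVR));
}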
