samples/bpf: fix bio latency check with tracepoint
Recently, new tracepoints for the block layer, block_io_start/block_io_done,
were introduced in commit 5a80bd0 ("block: introduce
block_io_start/block_io_done tracepoints").

Previously, the kprobe entry used for this purpose was unstable, and
kernel changes repeatedly broke the relevant probes [1]. Now that a
stable tracepoint is available, this commit replaces the bio latency
check with it.

One of the changes made during this replacement is the key used for the
hash table: since 'struct request' cannot be used as a hash key, this
commit follows the approach of bcc/biolatency [2] and uses dev:sector
as the key.

[1]: iovisor/bcc#4261
[2]: iovisor/bcc#4691

Fixes: 450b787 ("block: move blk_account_io_{start,done} to blk-mq.c")
Signed-off-by: Daniel T. Lee <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Alexei Starovoitov <[email protected]>
DanielTimLee authored and Alexei Starovoitov committed Aug 21, 2023
1 parent 1143042 commit 9263211
36 changes: 24 additions & 12 deletions samples/bpf/tracex3.bpf.c
@@ -9,23 +9,30 @@
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_tracing.h>
 
+struct start_key {
+	dev_t dev;
+	u32 _pad;
+	sector_t sector;
+};
+
 struct {
 	__uint(type, BPF_MAP_TYPE_HASH);
 	__type(key, long);
 	__type(value, u64);
 	__uint(max_entries, 4096);
 } my_map SEC(".maps");
 
-/* kprobe is NOT a stable ABI. If kernel internals change this bpf+kprobe
- * example will no longer be meaningful
- */
-SEC("kprobe/blk_mq_start_request")
-int bpf_prog1(struct pt_regs *ctx)
+/* from /sys/kernel/tracing/events/block/block_io_start/format */
+SEC("tracepoint/block/block_io_start")
+int bpf_prog1(struct trace_event_raw_block_rq *ctx)
 {
-	long rq = PT_REGS_PARM1(ctx);
 	u64 val = bpf_ktime_get_ns();
+	struct start_key key = {
+		.dev = ctx->dev,
+		.sector = ctx->sector
+	};
 
-	bpf_map_update_elem(&my_map, &rq, &val, BPF_ANY);
+	bpf_map_update_elem(&my_map, &key, &val, BPF_ANY);
 	return 0;
 }

@@ -47,21 +54,26 @@ struct {
 	__uint(max_entries, SLOTS);
 } lat_map SEC(".maps");
 
-SEC("kprobe/__blk_account_io_done")
-int bpf_prog2(struct pt_regs *ctx)
+/* from /sys/kernel/tracing/events/block/block_io_done/format */
+SEC("tracepoint/block/block_io_done")
+int bpf_prog2(struct trace_event_raw_block_rq *ctx)
 {
-	long rq = PT_REGS_PARM1(ctx);
+	struct start_key key = {
+		.dev = ctx->dev,
+		.sector = ctx->sector
+	};
+
 	u64 *value, l, base;
 	u32 index;
 
-	value = bpf_map_lookup_elem(&my_map, &rq);
+	value = bpf_map_lookup_elem(&my_map, &key);
 	if (!value)
 		return 0;
 
 	u64 cur_time = bpf_ktime_get_ns();
 	u64 delta = cur_time - *value;
 
-	bpf_map_delete_elem(&my_map, &rq);
+	bpf_map_delete_elem(&my_map, &key);
 
 	/* the lines below are computing index = log10(delta)*10
 	 * using integer arithmetic
