refactor: next-generation metrics (metrics-rs#80)
tobz authored Sep 27, 2020
1 parent a796126 commit 36834dd
Showing 126 changed files with 4,829 additions and 5,794 deletions.
13 changes: 13 additions & 0 deletions .editorconfig
@@ -0,0 +1,13 @@
# http://editorconfig.org
root = true

[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

[*.rs]
indent_size = 4
1 change: 1 addition & 0 deletions .gitignore
@@ -1,3 +1,4 @@
/target
**/*.rs.bk
Cargo.lock
/.vscode
12 changes: 5 additions & 7 deletions Cargo.toml
@@ -1,12 +1,10 @@
[workspace]
members = [
"metrics-core",
"metrics",
"metrics-runtime",
"metrics-macros",
"metrics-util",
"metrics-exporter-log",
"metrics-exporter-http",
"metrics-observer-yaml",
"metrics-observer-prometheus",
"metrics-observer-json",
"metrics-benchmark",
"metrics-exporter-tcp",
"metrics-exporter-prometheus",
"metrics-tracing-context",
]
49 changes: 17 additions & 32 deletions README.md
@@ -21,7 +21,7 @@ The Metrics project: a metrics ecosystem for Rust.

Running applications in production can be hard when you don't have insight into what the application is doing. We're lucky to have so many good system monitoring programs and services to show us how our servers are performing, but we still have to do the work of instrumenting our applications to gain deep insight into their behavior and performance.

_Metrics_ makes it easy to instrument your application to provide real-time insight into what's happening. It provides a number of practical features that make it easy for library and application authors to start collecting and exporting metrics from their codebase.
`metrics` makes it easy to instrument your application to provide real-time insight into what's happening. It provides a number of practical features that make it easy for library and application authors to start collecting and exporting metrics from their codebase.

# why would I collect metrics?

@@ -32,50 +32,35 @@ Some of the most common scenarios for collecting metrics from an application:

Importantly, this works for both library authors and application authors. If the libraries you use are instrumented, you unlock the power of being able to collect those metrics in your application for free, without any extra configuration. Everyone wins, and learns more about their application performance at the end of the day.

# project goals

Firstly, we want to establish standardized interfaces by which everyone can interoperate: this is the goal of the `metrics` and `metrics-core` crates.

`metrics` provides macros similar to `log`, which are essentially zero cost and invisible when not in use, but automatically funnel their data when a user opts in and installs a metrics recorder. This allows library authors to instrument their libraries without needing to care which metrics system end users will be utilizing.

`metrics-core` provides foundational traits for core components of the metrics ecosystem, primarily the output side. There are a large number of output formats and transports that application authors may consider or want to use. By focusing on the API boundary between the systems that collect metrics and the systems they're exported to, these pieces can be easily swapped around depending on the needs of the end user.

Secondly, we want to provide a best-in-class reference runtime: this is the goal of the `metrics-runtime` crate.

Unfortunately, a great interface is no good without a suitable implementation, and we want to make sure that users looking to instrument their applications for the first time have a batteries-included option that gets them off to the races quickly. The `metrics-runtime` crate provides a best-in-class implementation of a metrics collection system, including support for the core metric types -- counters, gauges, and histograms -- as well as support for important features such as scoping, labels, flexible approaches to recording, and more.

On top of that, collecting metrics isn't terribly useful unless you can export those values, and so `metrics-runtime` pulls in a small set of default observers and exporters to allow users to quickly set up their application to be observable by their existing downstream metrics aggregation/storage.

# project layout

The Metrics project provides a number of crates for both library and application authors.

If you're a library author, you'll only care about using [`metrics`] to instrument your library. If you're an application author, you'll primarily care about [`metrics-runtime`], but you may also want to use [`metrics`] to make instrumenting your own code even easier.
If you're a library author, you'll only care about using [`metrics`] to instrument your library. If
you're an application author, you'll likely also want to instrument your application, but you'll
care about "exporters" as a means to take those metrics and ship them somewhere for analysis.

Overall, this repository is home to the following crates:

* [`metrics`][metrics]: A lightweight metrics facade, similar to [`log`](https://docs.rs/log).
* [`metrics-core`][metrics-core]: Foundational traits for interoperable metrics libraries.
* [`metrics-runtime`][metrics-runtime]: A batteries-included metrics library.
* [`metrics-exporter-http`][metrics-exporter-http]: A metrics-core compatible exporter for serving metrics over HTTP.
* [`metrics-exporter-log`][metrics-exporter-log]: A metrics-core compatible exporter for forwarding metrics to logs.
* [`metrics-observer-json`][metrics-observer-json]: A metrics-core compatible observer that outputs JSON.
* [`metrics-observer-yaml`][metrics-observer-yaml]: A metrics-core compatible observer that outputs YAML.
* [`metrics-observer-prometheus`][metrics-observer-prometheus]: A metrics-core compatible observer that outputs the Prometheus exposition format.
* [`metrics-util`][metrics-util]: Helper types/functions used by the metrics ecosystem.
* [`metrics-macros`][metrics-macros]: Procedural macros that power `metrics`.
* [`metrics-tracing-context`][metrics-tracing-context]: Allow capturing [`tracing`][tracing] span
fields as metric labels.
* [`metrics-exporter-tcp`][metrics-exporter-tcp]: A `metrics`-compatible exporter for serving metrics over TCP.
* [`metrics-exporter-prometheus`][metrics-exporter-prometheus]: A `metrics`-compatible exporter for
serving a Prometheus scrape endpoint.
* [`metrics-util`][metrics-util]: Helper types/functions used by the `metrics` ecosystem.

# contributing

We're always looking for users who have thoughts on how to make metrics better, or users with interesting use cases. Of course, we're also happy to accept code contributions for outstanding feature requests! 😀
We're always looking for users who have thoughts on how to make `metrics` better, or users with interesting use cases. Of course, we're also happy to accept code contributions for outstanding feature requests! 😀

We'd love to chat about any of the above, or anything else, really! You can find us over on [Discord](https://discord.gg/eTwKyY9).

[metrics]: https://github.com/metrics-rs/metrics/tree/master/metrics
[metrics-core]: https://github.com/metrics-rs/metrics/tree/master/metrics-core
[metrics-runtime]: https://github.com/metrics-rs/metrics/tree/master/metrics-runtime
[metrics-exporter-http]: https://github.com/metrics-rs/metrics/tree/master/metrics-exporter-http
[metrics-exporter-log]: https://github.com/metrics-rs/metrics/tree/master/metrics-exporter-log
[metrics-observer-json]: https://github.com/metrics-rs/metrics/tree/master/metrics-observer-json
[metrics-observer-yaml]: https://github.com/metrics-rs/metrics/tree/master/metrics-observer-yaml
[metrics-observer-prometheus]: https://github.com/metrics-rs/metrics/tree/master/metrics-observer-prometheus
[metrics-macros]: https://github.com/metrics-rs/metrics/tree/master/metrics-macros
[metrics-tracing-context]: https://github.com/metrics-rs/metrics/tree/master/metrics-tracing-context
[metrics-exporter-tcp]: https://github.com/metrics-rs/metrics/tree/master/metrics-exporter-tcp
[metrics-exporter-prometheus]: https://github.com/metrics-rs/metrics/tree/master/metrics-exporter-prometheus
[metrics-util]: https://github.com/metrics-rs/metrics/tree/master/metrics-util
[tracing]: https://tracing.rs
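
For readers skimming this diff, a minimal sketch of what the macro-based facade described above looks like from the library author's side may help. This is illustrative only: the metric names and the `process_batch` function are hypothetical, and it assumes the `increment!`, `gauge!`, and `histogram!` macros that the updated benchmark below uses; the calls are effectively no-ops until an application installs a recorder/exporter.

```rust
use std::time::Instant;

// Hypothetical library code instrumented with the `metrics` facade macros.
use metrics::{gauge, histogram, increment};

fn process_batch(items: &[u32]) {
    let start = Instant::now();

    for _item in items {
        // ... real work would happen here ...
        increment!("batch.items_processed");
    }

    // Record the batch size and how long the batch took; with no recorder
    // installed, these macro calls compile down to nearly nothing.
    gauge!("batch.size", items.len() as f64);
    histogram!("batch.duration", start.elapsed());
}
```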
2 changes: 1 addition & 1 deletion ci/azure-test-minimum.yaml
@@ -15,6 +15,6 @@ jobs:
  steps:
    - template: azure-install-rust.yml
      parameters:
        rust_version: 1.39.0
        rust_version: 1.40.0
    - script: cargo test
      displayName: cargo test
15 changes: 15 additions & 0 deletions metrics-benchmark/Cargo.toml
@@ -0,0 +1,15 @@
[package]
name = "metrics-benchmark"
version = "0.1.1-alpha.1"
authors = ["Toby Lawrence <[email protected]>"]
edition = "2018"

[dependencies]
log = "0.4"
env_logger = "0.7"
getopts = "0.2"
hdrhistogram = "7.0"
quanta = "0.6"
atomic-shim = "0.1"
metrics = { version = "0.13.0-alpha.0", path = "../metrics" }
metrics-util = { version = "0.4.0-alpha.0", path = "../metrics-util" }
File renamed without changes.
@@ -1,22 +1,12 @@
#[macro_use]
extern crate log;
extern crate env_logger;
extern crate getopts;
extern crate hdrhistogram;
extern crate metrics_core;
extern crate metrics_runtime;
extern crate tokio;

#[macro_use]
extern crate metrics;

use atomic_shim::AtomicU64;
use getopts::Options;
use hdrhistogram::Histogram;
use metrics_runtime::{exporters::HttpExporter, observers::JsonBuilder, Receiver};
use quanta::Clock;
use log::{error, info};
use metrics::{gauge, histogram, increment};
use metrics_util::DebuggingRecorder;
use std::{
    env,
    ops::Sub,
    sync::{
        atomic::{AtomicBool, Ordering},
        Arc,
@@ -28,23 +18,21 @@ use std::{
const LOOP_SAMPLE: u64 = 1000;

struct Generator {
    t0: Option<u64>,
    t0: Option<Instant>,
    gauge: i64,
    hist: Histogram<u64>,
    done: Arc<AtomicBool>,
    rate_counter: Arc<AtomicU64>,
    clock: Clock,
}

impl Generator {
    fn new(done: Arc<AtomicBool>, rate_counter: Arc<AtomicU64>, clock: Clock) -> Generator {
    fn new(done: Arc<AtomicBool>, rate_counter: Arc<AtomicU64>) -> Generator {
        Generator {
            t0: None,
            gauge: 0,
            hist: Histogram::<u64>::new_with_bounds(1, u64::max_value(), 3).unwrap(),
            done,
            rate_counter,
            clock,
        }
    }

@@ -59,22 +47,22 @@ impl Generator {

            self.gauge += 1;

            let t1 = self.clock.now();
            let t1 = Instant::now();

            if let Some(t0) = self.t0 {
                let start = if counter % LOOP_SAMPLE == 0 {
                    self.clock.now()
                let start = if counter % 1000 == 0 {
                    Some(Instant::now())
                } else {
                    0
                    None
                };

                counter!("ok.gotem", 1);
                timing!("ok.gotem", t0, t1);
                gauge!("total", self.gauge);
                increment!("ok");
                gauge!("total", self.gauge as f64);
                histogram!("ok", t1.sub(t0));

                if start != 0 {
                    let delta = self.clock.now() - start;
                    self.hist.saturating_record(delta);
                if let Some(val) = start {
                    let delta = Instant::now() - val;
                    self.hist.saturating_record(delta.as_nanos() as u64);

                    // We also increment our global counter for the sample rate here.
                    self.rate_counter
@@ -121,8 +109,7 @@ pub fn opts() -> Options {
    opts
}

#[tokio::main]
async fn main() {
fn main() {
    env_logger::init();

    let args: Vec<String> = env::args().collect();
@@ -159,35 +146,23 @@ async fn main() {
info!("duration: {}s", seconds);
info!("producers: {}", producers);

let receiver = Receiver::builder()
.histogram(Duration::from_secs(5), Duration::from_millis(100))
.build()
.expect("failed to build receiver");

let controller = receiver.controller();

let addr = "0.0.0.0:23432"
.parse()
.expect("failed to parse http listen address");
let builder = JsonBuilder::new().set_pretty_json(true);
let exporter = HttpExporter::new(controller.clone(), builder, addr);
tokio::spawn(exporter.async_run());
let recorder = DebuggingRecorder::new();
let snapshotter = recorder.snapshotter();
recorder.install().expect("failed to install recorder");

receiver.install();
info!("receiver configured");
info!("sink configured");

// Spin up our sample producers.
let done = Arc::new(AtomicBool::new(false));
let rate_counter = Arc::new(AtomicU64::new(0));
let mut handles = Vec::new();
let clock = Clock::new();

for _ in 0..producers {
let d = done.clone();
let r = rate_counter.clone();
let c = clock.clone();
let handle = thread::spawn(move || {
Generator::new(d, r, c).run();
let mut gen = Generator::new(d, r);
gen.run();
});

handles.push(handle);
@@ -202,7 +177,7 @@ async fn main() {
        let t1 = Instant::now();

        let start = Instant::now();
        let _snapshot = controller.snapshot();
        let _snapshot = snapshotter.snapshot();
        let end = Instant::now();
        snapshot_hist.saturating_record(duration_as_nanos(end - start) as u64);

@@ -219,7 +194,7 @@ async fn main() {
info!("--------------------------------------------------------------------------------");
info!(" ingested samples total: {}", total);
info!(
"snapshot end-to-end: min: {:9} p50: {:9} p95: {:9} p99: {:9} p999: {:9} max: {:9}",
"snapshot retrieval: min: {:9} p50: {:9} p95: {:9} p99: {:9} p999: {:9} max: {:9}",
nanos_to_readable(snapshot_hist.min()),
nanos_to_readable(snapshot_hist.value_at_percentile(50.0)),
nanos_to_readable(snapshot_hist.value_at_percentile(95.0)),
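
To make the new benchmark's setup easier to follow, here is a condensed sketch of the recorder wiring it switches to, using only the `metrics-util` calls visible in the hunks above (`DebuggingRecorder::new`, `snapshotter`, `install`, `snapshot`); the producer threads, argument parsing, and reporting loop are omitted, and the `increment!("ok")` call simply stands in for the instrumented workload.

```rust
use metrics::increment;
use metrics_util::DebuggingRecorder;

fn main() {
    // Build a debugging recorder and grab a snapshotter handle before
    // installing it as the global recorder, mirroring the benchmark's main().
    let recorder = DebuggingRecorder::new();
    let snapshotter = recorder.snapshotter();
    recorder.install().expect("failed to install recorder");

    // Any instrumented code now reports through the installed recorder.
    increment!("ok");

    // Snapshots are pulled on demand, as the benchmark's sampling loop does.
    let _snapshot = snapshotter.snapshot();
}
```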
47 changes: 0 additions & 47 deletions metrics-core/CHANGELOG.md

This file was deleted.

30 changes: 0 additions & 30 deletions metrics-core/CODE_OF_CONDUCT.md

This file was deleted.

16 changes: 0 additions & 16 deletions metrics-core/Cargo.toml

This file was deleted.

