[benchmarking] Refactor Benchmark (MystenLabs#1533)
* refactor

* clean up

* refactor
rebase

* rebase

* rebase

* add how-to

* format

* lint

* don't pop key, copy it

* Add a safe client function that augments the follower API by also downloading the transaction information (MystenLabs#1446)

Co-authored-by: George Danezis <[email protected]>

* Removing dev-addresses from Move.toml (MystenLabs#1536)

There is no undefined address in the [addresses] section (e.g. AddressToBeFilledIn = "_"), so a [dev-addresses] section is not necessary.

* [move][adapter] Static init rules (MystenLabs#1532)

- Made init rules static
- The existence of the function is optional
- The function must be named 'init'
- The function must be private
- The function can have a single parameter, &mut TxContext
- Alternatively, the function can have zero parameters

* adds a counter example (MystenLabs#1539)

* sui: reorganize binaries

* wallet: log git revision on start
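
One common way to achieve this kind of startup logging (a hedged sketch of the general pattern, not necessarily what this commit does) is to capture the revision in a build script and read it back with `env!` when the binary starts:

```rust
// build.rs -- hypothetical sketch: record the current git revision at compile
// time so the wallet binary can log it on start.
use std::process::Command;

fn main() {
    let rev = Command::new("git")
        .args(["rev-parse", "--short", "HEAD"])
        .output()
        .ok()
        .and_then(|out| String::from_utf8(out.stdout).ok())
        .unwrap_or_default();
    // Expose the revision to the crate; the binary could then log it at startup,
    // e.g. tracing::info!("git revision: {}", env!("GIT_REV"));
    println!("cargo:rustc-env=GIT_REV={}", rev.trim());
}
```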

* Refactor Build index by common user workflow (MystenLabs#1530)

* Refactor Build index by common user workflow

* Update index.md

Fix paths missing Markdown file extension

* Update index.md (MystenLabs#1545)

Removing tutorial series until ready

* Update index.md

Remove version from API link
Capitalize REST

* add comments and unwind logic for the process
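
The unwind logic takes the shape visible in the `sui/src/benchmark.rs` diff further down: the measurement runs inside `panic::catch_unwind`, validator cleanup always runs afterwards, and any panic is re-raised with `panic::resume_unwind` so failures are not silently swallowed. A condensed, self-contained sketch of that pattern (names simplified; not the literal code from the diff):

```rust
use std::panic;

fn run_with_cleanup<T>(
    work: impl FnOnce() -> T + panic::UnwindSafe,
    cleanup: impl FnOnce(),
) -> T {
    // Run the benchmark body; a panic here must not skip validator cleanup.
    let result = panic::catch_unwind(work);
    // Tear down the spawned validator (thread or process) unconditionally.
    cleanup();
    match result {
        Ok(value) => value,
        // Re-raise the original panic only after cleanup has run.
        Err(err) => panic::resume_unwind(err),
    }
}

fn main() {
    // Usage sketch: the closure stands in for the benchmark body.
    let elapsed = run_with_cleanup(|| 42u64, || println!("cleaned up"));
    println!("elapsed: {elapsed}");
}
```

In the refactor itself, this is what lets `validator_preparer.clean_up()` run even when sending transaction chunks or checking responses panics.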

* fix: remove one unneeded instance of key_pair.copy()

* fix: remove three unneeded instances of key_pair.copy()

* fix: remove one unneeded instance of key_pair.copy()

* refactor
rebase

* rebase

* refactor

* rebase

* rebase

* add how-to

* lint

* don't pop key, copy it

* rebase

* Update index.md

Remove version from API link
Capitalize REST

* rebase

Co-authored-by: Lu Zhang <[email protected]>
Co-authored-by: George Danezis <[email protected]>
Co-authored-by: George Danezis <[email protected]>
Co-authored-by: jaredcosulich <[email protected]>
Co-authored-by: Todd Nowacki <[email protected]>
Co-authored-by: Damir S <[email protected]>
Co-authored-by: Brandon Williams <[email protected]>
Co-authored-by: Clay-Mysten <[email protected]>
Co-authored-by: François Garillot <[email protected]>
10 people authored Apr 22, 2022
1 parent 825ad99 commit 64357d7
Showing 12 changed files with 633 additions and 186 deletions.
2 changes: 1 addition & 1 deletion doc/src/learn/index.md
@@ -50,4 +50,4 @@ Take note of these related repositories of information to make best use of the k
* [Book](https://github.com/diem/move/blob/main/language/documentation/book/src/introduction.md) - A summary with pages on [various topics](https://github.com/diem/move/tree/main/language/documentation/book/src).
* [Examples](https://github.com/diem/move/tree/main/language/documentation/examples/experimental) - A set of samples, such as for [defining a coin](https://github.com/diem/move/tree/main/language/documentation/examples/experimental/basic-coin) and [swapping it](https://github.com/diem/move/tree/main/language/documentation/examples/experimental/coin-swap).
* [Awesome Move](https://github.com/MystenLabs/awesome-move/blob/main/README.md) - A summary of resources related to Move, from blockchains through code samples.
* [Sui API Reference](https://app.swaggerhub.com/apis/MystenLabs/sui-api/0.1 ) - The reference files for the Sui Rest API.
* [Sui API Reference](https://app.swaggerhub.com/apis/MystenLabs/sui-api/) - The reference files for the Sui REST API.
8 changes: 8 additions & 0 deletions network_utils/src/network.rs
@@ -85,6 +85,14 @@ impl NetworkClient {
}
}

pub fn base_address(&self) -> &str {
&self.base_address
}

pub fn base_port(&self) -> u16 {
self.base_port
}

async fn batch_send_one_chunk(
&self,
requests: Vec<Bytes>,
3 changes: 3 additions & 0 deletions sui/Cargo.toml
@@ -73,3 +73,6 @@ jsonrpsee-proc-macros = "0.10.1"

[dev-dependencies]
tracing-test = "0.2.1"

[features]
benchmark = ["narwhal-node/benchmark"]
169 changes: 91 additions & 78 deletions sui/src/benchmark.rs
@@ -5,6 +5,7 @@

use futures::{join, StreamExt};
use rayon::iter::{IntoParallelRefIterator, ParallelIterator};
use std::panic;
use std::thread;
use std::{thread::sleep, time::Duration};
use sui_core::authority_client::{AuthorityAPI, NetworkAuthorityClient};
@@ -14,23 +15,24 @@ use sui_types::messages::{BatchInfoRequest, BatchInfoResponseItem};
use sui_types::serialize::*;
use tokio::runtime::{Builder, Runtime};
use tracing::{error, info};

pub mod bench_types;
pub mod load_generator;
pub mod transaction_creator;
pub mod validator_preparer;
use crate::benchmark::bench_types::{Benchmark, BenchmarkType};
use crate::benchmark::load_generator::{
calculate_throughput, check_transaction_response, send_tx_chunks, spawn_authority_server,
FixedRateLoadGenerator,
calculate_throughput, check_transaction_response, send_tx_chunks, FixedRateLoadGenerator,
};
use crate::benchmark::transaction_creator::TransactionCreator;
use crate::benchmark::validator_preparer::ValidatorPreparer;

use self::bench_types::{BenchmarkResult, MicroBenchmarkResult, MicroBenchmarkType};

const FOLLOWER_BATCH_SIZE: u64 = 10_000;

pub fn run_benchmark(benchmark: Benchmark) -> BenchmarkResult {
// Only microbenchmark support is supported
// Only microbenchmark is supported
info!("benchmark : {:?}", benchmark);
BenchmarkResult::MicroBenchmark(run_microbenchmark(benchmark))
}

@@ -52,7 +54,14 @@ fn run_microbenchmark(benchmark: Benchmark) -> MicroBenchmarkResult {
} else {
num_cpus::get()
};

let validator_preparer = ValidatorPreparer::new(
benchmark.running_mode,
benchmark.working_dir,
benchmark.committee_size,
network_client.base_address(),
network_client.base_port(),
benchmark.db_cpus,
);
match type_ {
MicroBenchmarkType::Throughput { num_transactions } => run_throughout_microbench(
network_client,
@@ -61,8 +70,7 @@ fn run_microbenchmark(benchmark: Benchmark) -> MicroBenchmarkResult {
benchmark.batch_size,
benchmark.use_move,
num_transactions,
benchmark.committee_size,
benchmark.db_cpus,
validator_preparer,
),
MicroBenchmarkType::Latency {
num_chunks,
@@ -73,11 +81,10 @@ fn run_microbenchmark(benchmark: Benchmark) -> MicroBenchmarkResult {
network_server,
connections,
benchmark.use_move,
benchmark.committee_size,
benchmark.db_cpus,
num_chunks,
chunk_size,
period_us,
validator_preparer,
),
}
}
@@ -89,8 +96,7 @@ fn run_throughout_microbench(
batch_size: usize,
use_move: bool,
num_transactions: usize,
committee_size: usize,
db_cpus: usize,
mut validator_preparer: ValidatorPreparer,
) -> MicroBenchmarkResult {
assert_eq!(
num_transactions % batch_size,
@@ -100,66 +106,65 @@ fn run_throughout_microbench(
// In order to simplify things, we send chunks on each connection and try to ensure all connections have equal load
assert!(
(num_transactions % connections) == 0,
"num_transactions must {} be multiple of number of TCP connections {}",
"num_transactions must be a multiple of number of TCP connections {}, got {}",
connections,
num_transactions,
connections
);
let mut tx_cr = TransactionCreator::new(committee_size, db_cpus);
let mut tx_cr = TransactionCreator::new();

let chunk_size = batch_size * connections;
let txes = tx_cr.generate_transactions(
connections,
use_move,
batch_size * connections,
num_transactions / chunk_size,
&mut validator_preparer,
);

// Make multi-threaded runtime for the authority
thread::spawn(move || {
get_multithread_runtime().block_on(async move {
let server = spawn_authority_server(network_server, tx_cr.authority_state).await;
if let Err(e) = server.join().await {
error!("Server ended with an error: {e}");
}
validator_preparer.deploy_validator(network_server);

let result = panic::catch_unwind(|| {
// Follower to observe batches
let follower_network_client = network_client.clone();
thread::spawn(move || {
get_multithread_runtime()
.block_on(async move { run_follower(follower_network_client).await });
});
});

// Wait for server start
sleep(Duration::from_secs(3));
sleep(Duration::from_secs(3));

// Follower to observe batches
let follower_network_client = network_client.clone();
thread::spawn(move || {
get_multithread_runtime()
.block_on(async move { run_follower(follower_network_client).await });
});
// Run load
let (elapsed, resp) = get_multithread_runtime()
.block_on(async move { send_tx_chunks(txes, network_client, connections).await });

sleep(Duration::from_secs(3));
let _: Vec<_> = resp
.par_iter()
.map(|q| check_transaction_response(deserialize_message(&(q.as_ref().unwrap())[..])))
.collect();

// Run load
let (elapsed, resp) = get_multithread_runtime()
.block_on(async move { send_tx_chunks(txes, network_client, connections).await });
elapsed
});
validator_preparer.clean_up();

let _: Vec<_> = resp
.par_iter()
.map(|q| check_transaction_response(deserialize_message(&(q.as_ref().unwrap())[..])))
.collect();
MicroBenchmarkResult::Throughput {
chunk_throughput: calculate_throughput(num_transactions, elapsed),
match result {
Ok(elapsed) => MicroBenchmarkResult::Throughput {
chunk_throughput: calculate_throughput(num_transactions, elapsed),
},
Err(err) => {
panic::resume_unwind(err);
}
}
}

fn run_latency_microbench(
network_client: NetworkClient,
network_server: NetworkServer,
_network_server: NetworkServer,
connections: usize,
use_move: bool,
committee_size: usize,
db_cpus: usize,

num_chunks: usize,
chunk_size: usize,
period_us: u64,
mut validator_preparer: ValidatorPreparer,
) -> MicroBenchmarkResult {
// In order to simplify things, we send chunks on each connection and try to ensure all connections have equal load
assert!(
@@ -168,49 +173,57 @@ fn run_latency_microbench(
num_chunks * chunk_size,
connections
);
let mut tx_cr = TransactionCreator::new(committee_size, db_cpus);

let mut tx_cr = TransactionCreator::new();

// These TXes are to load the network
let load_gen_txes = tx_cr.generate_transactions(connections, use_move, chunk_size, num_chunks);
let load_gen_txes = tx_cr.generate_transactions(
connections,
use_move,
chunk_size,
num_chunks,
&mut validator_preparer,
);

// These are tracer TXes used for measuring latency
let tracer_txes = tx_cr.generate_transactions(1, use_move, 1, num_chunks);
let tracer_txes =
tx_cr.generate_transactions(1, use_move, 1, num_chunks, &mut validator_preparer);

// Make multi-threaded runtime for the authority
thread::spawn(move || {
get_multithread_runtime().block_on(async move {
let server = spawn_authority_server(network_server, tx_cr.authority_state).await;
if let Err(e) = server.join().await {
error!("Server ended with an error: {e}");
}
validator_preparer.deploy_validator(_network_server);

let result = panic::catch_unwind(|| {
let runtime = get_multithread_runtime();
// Prep the generators
let (mut load_gen, mut tracer_gen) = runtime.block_on(async move {
join!(
FixedRateLoadGenerator::new(
load_gen_txes,
period_us,
network_client.clone(),
connections,
),
FixedRateLoadGenerator::new(tracer_txes, period_us, network_client, 1),
)
});
});

// Wait for server start
sleep(Duration::from_secs(3));
let runtime = get_multithread_runtime();
// Prep the generators
let (mut load_gen, mut tracer_gen) = runtime.block_on(async move {
join!(
FixedRateLoadGenerator::new(
load_gen_txes,
period_us,
network_client.clone(),
connections,
),
FixedRateLoadGenerator::new(tracer_txes, period_us, network_client, 1),
)
});
// Run the load gen and tracers
let (load_latencies, tracer_latencies) =
runtime.block_on(async move { join!(load_gen.start(), tracer_gen.start()) });

// Run the load gen and tracers
let (load_latencies, tracer_latencies) =
runtime.block_on(async move { join!(load_gen.start(), tracer_gen.start()) });
(load_latencies, tracer_latencies)
});
validator_preparer.clean_up();

MicroBenchmarkResult::Latency {
load_chunk_size: chunk_size,
load_latencies,
tick_period_us: period_us as usize,
chunk_latencies: tracer_latencies,
match result {
Ok((load_latencies, tracer_latencies)) => MicroBenchmarkResult::Latency {
load_chunk_size: chunk_size,
load_latencies,
tick_period_us: period_us as usize,
chunk_latencies: tracer_latencies,
},
Err(err) => {
panic::resume_unwind(err);
}
}
}

26 changes: 22 additions & 4 deletions sui/src/benchmark/bench_types.rs
@@ -1,21 +1,21 @@
// Copyright (c) 2022, Mysten Labs, Inc.
// SPDX-License-Identifier: Apache-2.0

use super::load_generator::calculate_throughput;
use clap::*;
use std::default::Default;
use std::path::PathBuf;
use strum_macros::EnumString;
use sui_network::transport;

use super::load_generator::calculate_throughput;

#[derive(Debug, Clone, Parser)]
#[clap(
name = "Sui Benchmark",
about = "Local test and benchmark of the Sui authorities"
)]
pub struct Benchmark {
/// Size of the Sui committee. Minimum size is 4 to tolerate one fault
#[clap(long, default_value = "10", global = true)]
/// Size of the Sui committee.
#[clap(long, default_value = "1", global = true)]
pub committee_size: usize,
/// Timeout for sending queries (us)
#[clap(long, default_value = "40000000", global = true)]
@@ -38,6 +38,17 @@ pub struct Benchmark {
#[clap(long, default_value = "2000", global = true)]
pub batch_size: usize,

#[clap(
arg_enum,
default_value = "local-single-validator-thread",
global = true,
ignore_case = true
)]
pub running_mode: RunningMode,

#[clap(long, global = true)]
pub working_dir: Option<PathBuf>,

/// Type of benchmark to run
#[clap(subcommand)]
pub bench_type: BenchmarkType,
@@ -60,6 +71,13 @@ pub enum BenchmarkType {
// ... more benchmark types here
}

#[derive(Debug, Parser, Clone, Copy, ArgEnum, EnumString)]
#[clap(rename_all = "kebab-case")]
pub enum RunningMode {
LocalSingleValidatorThread,
LocalSingleValidatorProcess,
}

#[derive(Debug, Clone, Parser, Eq, PartialEq, EnumString)]
#[clap(rename_all = "kebab-case")]
pub enum MicroBenchmarkType {
