diff --git a/doc/src/learn/architecture/consensus.md b/doc/src/learn/architecture/consensus.md index cd9ab27a3e4d4..6d3738329925e 100644 --- a/doc/src/learn/architecture/consensus.md +++ b/doc/src/learn/architecture/consensus.md @@ -10,8 +10,8 @@ The names highlight that the components split the responsibilities of: In August 2022, Bullshark replaced the Tusk component of the consensus protocol as the default for reduced latency and support for fairness (where even slow validators can contribute). See [DAG Meets BFT - The Next Generation of BFT Consensus](https://decentralizedthoughts.github.io/2022-06-28-DAG-meets-BFT/) for a comparison of the protocols. -Still, you may easily use Tusk instead of Bullshark by reverting the change shown in: -https://github.com/MystenLabs/narwhal/blob/85c226f2824010ff695d0bc5789a24cad2bce289/node/src/lib.rs#L266 +Still, you may easily use Tusk instead of Bullshark by updating the ordering engine at: +https://github.com/MystenLabs/sui/blame/0440605cbb45e6cdd790ab678f1f6409f1e938a3/narwhal/node/src/lib.rs#L191 Consensus is accomplished in two layered modules, so Narwhal can also be used coupled with an external consensus algorithm, such as HotStuff, Istanbul BFT, or Tendermint. Narwhal is undergoing integration in the [Celo](https://www.youtube.com/watch?v=Lwheo3jhAZM) and [Sommelier](https://www.prnewswire.com/news-releases/sommelier-partners-with-mysten-labs-to-make-the-cosmos-blockchain-the-fastest-on-the-planet-301381122.html) blockchain. 
@@ -24,11 +24,11 @@ The Sui Consensus Engine approach can offer dramatic scalability benefits in the ## Features The Narwhal mempool offers: -* a high-throughput data availability engine, with cryptographic proofs of data availability at [a primary node](https://github.com/MystenLabs/narwhal/tree/main/primary) +* a high-throughput data availability engine, with cryptographic proofs of data availability at [a primary node](https://github.com/MystenLabs/sui/tree/main/narwhal/primary) * a structured graph data structure for traversing this information -* a scaled architecture, splitting the disk I/O and networking requirements across several [workers](https://github.com/MystenLabs/narwhal/tree/main/worker) +* a scaled architecture, splitting the disk I/O and networking requirements across several [workers](https://github.com/MystenLabs/sui/tree/main/narwhal/worker) -The [consensus](https://github.com/MystenLabs/narwhal/tree/main/consensus) component offers a zero-message overhead consensus algorithm, leveraging graph traversals. +The [consensus](https://github.com/MystenLabs/sui/tree/main/narwhal/consensus) component offers a zero-message overhead consensus algorithm, leveraging graph traversals. ## Architecture @@ -86,7 +86,7 @@ Narwhal is implemented using [Tokio](https://github.com/tokio-rs/tokio), [RocksD ## Configuration -To conduct a fresh deployment of Sui Consensus Engine, follow the instructions at [Running Benchmarks](https://github.com/mystenlabs/narwhal/tree/main/benchmark). +To conduct a fresh deployment of Sui Consensus Engine, follow the instructions at [Running Benchmarks](https://github.com/MystenLabs/sui/tree/main/narwhal/benchmark). 
## Further reading diff --git a/narwhal/README.md b/narwhal/README.md index d873cefd8d35d..c645bb2880dcf 100644 --- a/narwhal/README.md +++ b/narwhal/README.md @@ -16,7 +16,7 @@ You also need to install [Clang](https://clang.llvm.org/) (required by RocksDB) ``` $ fab local ``` -This command may take a long time the first time you run it (compiling rust code in `release` mode may be slow), and you can customize a number of benchmark parameters in [fabfile.py](https://github.com/mystenlabs/narwhal/blob/main/benchmark/fabfile.py). When the benchmark terminates, it displays a summary of the execution similarly to the one below. +This command may take a long time the first time you run it (compiling Rust code in `release` mode may be slow), and you can customize a number of benchmark parameters in [fabfile.py](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark/fabfile.py). When the benchmark terminates, it displays a summary of the execution similar to the one below. ``` ----------------------------------------- SUMMARY: diff --git a/narwhal/benchmark/README.md b/narwhal/benchmark/README.md index 6c7307d253cec..782e18cb69138 100644 --- a/narwhal/benchmark/README.md +++ b/narwhal/benchmark/README.md @@ -8,7 +8,7 @@ When running benchmarks, the codebase is automatically compiled with the feature ### Parametrize the benchmark -After [cloning the repo and installing all dependencies](https://github.com/mystenlabs/narwhal#quick-start), you can use [Fabric](http://www.fabfile.org/) to run benchmarks on your local machine. Locate the task called `local` in the file [fabfile.py](https://github.com/mystenlabs/narwhal/blob/main/benchmark/fabfile.py): +After [cloning the repo and installing all dependencies](https://github.com/MystenLabs/sui/tree/main/narwhal#quick-start), you can use [Fabric](http://www.fabfile.org/) to run benchmarks on your local machine. 
Locate the task called `local` in the file [fabfile.py](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark/fabfile.py): ```python @task @@ -177,7 +177,7 @@ This operation is manual (AWS exposes APIs to manipulate keys) and needs to be r ### Step 3. Configure the testbed -The file [settings.json](https://github.com/mystenlabs/narwhal/blob/main/benchmark/settings.json) (located in [narwhal/benchmarks](https://github.com/mystenlabs/narwhal/blob/main/benchmark)) contains all the configuration parameters of the testbed to deploy. Its content looks as follows: +The file [settings.json](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark/settings.json) (located in [narwhal/benchmark](https://github.com/MystenLabs/sui/tree/main/narwhal/benchmark)) contains all the configuration parameters of the testbed to deploy. Its content looks as follows: ```json { @@ -250,14 +250,14 @@ If you require more nodes than data centers, the Python scripts will distribute ### Step 4. Create a testbed -The AWS instances are orchestrated with [Fabric](http://www.fabfile.org) from the file [fabfile.py](https://github.com/mystenlabs/narwhal/blob/main/benchmark/fabfile.py) (located in [narwhal/benchmarks](https://github.com/mystenlabs/narwhal/blob/main/benchmark)); you can list all possible commands as follows: +The AWS instances are orchestrated with [Fabric](http://www.fabfile.org) from the file [fabfile.py](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark/fabfile.py) (located in [narwhal/benchmark](https://github.com/MystenLabs/sui/tree/main/narwhal/benchmark)); you can list all possible commands as follows: ``` $ cd narwhal/benchmark $ fab --list ``` -The command `fab create` creates new AWS instances; open 
[fabfile.py](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark/fabfile.py) and locate the `create` task: ```python @task @@ -289,7 +289,7 @@ The commands `fab stop` and `fab start` respectively stop and start the testbed ### Step 5. Run a benchmark -After setting up the testbed, running a benchmark on AWS is similar to running it locally (see [Run Local Benchmarks](https://github.com/mystenlabs/narwhal/tree/main/benchmark#local-benchmarks)). Locate the task `remote` in [fabfile.py](https://github.com/mystenlabs/narwhal/blob/main/benchmark/fabfile.py): +After setting up the testbed, running a benchmark on AWS is similar to running it locally (see [Run Local Benchmarks](https://github.com/MystenLabs/sui/tree/main/narwhal/benchmark#local-benchmarks)). Locate the task `remote` in [fabfile.py](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark/fabfile.py): ```python @task @@ -320,7 +320,7 @@ Once you specified both `bench_params` and `node_params` as desired, run: ``` $ fab remote ``` -This command first updates all machines with the latest commit of the GitHub repo and branch specified in your file [settings.json](https://github.com/mystenlabs/narwhal/blob/main/benchmark/settings.json) (step 3); this ensures that benchmarks are always run with the latest version of the code. It then generates and uploads the configuration files to each machine, runs the benchmarks with the specified parameters, and downloads the logs. It finally parses the logs and prints the results into a folder called `results` (which is automatically created if it doesn't already exist). You can run `fab remote` multiple times without fear of overriding previous results; the command either appends new results to a file containing existing results or prints them in separate files. If anything goes wrong during a benchmark, you can always stop it by running `fab kill`. 
+This command first updates all machines with the latest commit of the GitHub repo and branch specified in your file [settings.json](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark/settings.json) (step 3); this ensures that benchmarks are always run with the latest version of the code. It then generates and uploads the configuration files to each machine, runs the benchmarks with the specified parameters, and downloads the logs. It finally parses the logs and prints the results into a folder called `results` (which is automatically created if it doesn't already exist). You can run `fab remote` multiple times without fear of overriding previous results; the command either appends new results to a file containing existing results or prints them in separate files. If anything goes wrong during a benchmark, you can always stop it by running `fab kill`. ### Step 6. Plot the results diff --git a/narwhal/benchmark/data/latest/README.md b/narwhal/benchmark/data/latest/README.md index 56cc937896146..05e6c1ce89fe9 100644 --- a/narwhal/benchmark/data/latest/README.md +++ b/narwhal/benchmark/data/latest/README.md @@ -1,6 +1,6 @@ # Experimental Data -This folder contains some raw data and plots obtained running a geo-replicated benchmark on AWS as explained in the [benchmark's readme file](https://github.com/mystenlabs/narwhal/blob/main/benchmark/README.md). The results are taken running the code tagged as [v0.2.0](https://github.com/asonnino/narwhal/tree/v0.2.0). +This folder contains some raw data and plots obtained by running a geo-replicated benchmark on AWS as explained in the [benchmark's readme file](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark/README.md). The results were obtained by running the code tagged as [v0.2.0](https://github.com/asonnino/narwhal/tree/v0.2.0). 
### Filename format The filename format of raw data is the following: @@ -18,7 +18,7 @@ where: For instance, a file called `bench-0-50-1-True-100000-512.txt` indicates it contains results of a benchmark run with 50 nodes, 1 worker per node collocated on the same machine as the primary, 100K input rate, a transaction size of 512B, and 0 faulty nodes. ### Experimental step -The content of our [settings.json](https://github.com/mystenlabs/narwhal/blob/main/benchmark/settings.json) file looks as follows: +The content of our [settings.json](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark/settings.json) file looks as follows: ```json { "key": { @@ -37,7 +37,7 @@ The content of our [settings.json](https://github.com/mystenlabs/narwhal/blob/ma } } ``` -We set the following `node_params` in our [fabfile](https://github.com/mystenlabs/narwhal/blob/main/benchmark/fabfile.py): +We set the following `node_params` in our [fabfile](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark/fabfile.py): ```python node_params = { 'header_num_of_batches_threshold': 32, # number of batches diff --git a/narwhal/benchmark/data/paper-data/README.md b/narwhal/benchmark/data/paper-data/README.md index ee2575b42dd9a..99ffc555aebf2 100644 --- a/narwhal/benchmark/data/paper-data/README.md +++ b/narwhal/benchmark/data/paper-data/README.md @@ -1,6 +1,6 @@ # Experimental Data -This folder contains the raw data and plots used in the evaluation section of the paper [Narwhal and Tusk: A DAG-based Mempool and Efficient BFT Consensus](https://arxiv.org/pdf/2105.11827.pdf). The data are obtained running a geo-replicated benchmark on AWS as explained in the [benchmark's readme file](https://github.com/mystenlabs/narwhal/blob/main/benchmark#readme). The results are taken running the code tagged as [v0.2.0](https://github.com/asonnino/narwhal/tree/v0.2.0). 
+This folder contains the raw data and plots used in the evaluation section of the paper [Narwhal and Tusk: A DAG-based Mempool and Efficient BFT Consensus](https://arxiv.org/pdf/2105.11827.pdf). The data were obtained by running a geo-replicated benchmark on AWS as explained in the [benchmark's readme file](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark#readme). The results were obtained by running the code tagged as [v0.2.0](https://github.com/asonnino/narwhal/tree/v0.2.0). ### Filename format The filename format of raw data is the following: @@ -18,7 +18,7 @@ where: For instance, a file called `bench-0-50-1-True-100000-512.txt` indicates it contains results of a benchmark run with 50 nodes, 1 worker per node collocated on the same machine as the primary, 100K input rate, a transaction size of 512B, and 0 faulty nodes. ### Experimental step -The content of our [settings.json](https://github.com/mystenlabs/narwhal/blob/main/benchmark/settings.json) file looks as follows: +The content of our [settings.json](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark/settings.json) file looks as follows: ```json { "key": { @@ -37,7 +37,7 @@ The content of our [settings.json](https://github.com/mystenlabs/narwhal/blob/ma } } ``` -We set the following `node_params` in our [fabfile](https://github.com/mystenlabs/narwhal/blob/main/benchmark/fabfile.py): +We set the following `node_params` in our [fabfile](https://github.com/MystenLabs/sui/blob/main/narwhal/benchmark/fabfile.py): ```python node_params = { 'header_num_of_batches_threshold': 1000, # number of batches
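As an aside for anyone scripting over the raw-data files described in the readmes above: the `bench-<faults>-<nodes>-<workers>-<collocate>-<rate>-<tx_size>.txt` naming scheme can be parsed with a short sketch like the following. The `parse_bench_filename` helper and its field names are illustrative only and are not part of the repo.

```python
import re

def parse_bench_filename(name):
    # Matches e.g. bench-0-50-1-True-100000-512.txt, whose fields (per the
    # readme) are: faults, nodes, workers per node, whether workers are
    # collocated with the primary, input rate (tx/s), and transaction size (B).
    m = re.fullmatch(
        r"bench-(\d+)-(\d+)-(\d+)-(True|False)-(\d+)-(\d+)\.txt", name
    )
    if m is None:
        raise ValueError(f"unrecognized benchmark filename: {name}")
    faults, nodes, workers, collocate, rate, tx_size = m.groups()
    return {
        "faults": int(faults),
        "nodes": int(nodes),
        "workers": int(workers),
        "collocate": collocate == "True",
        "rate": int(rate),
        "tx_size": int(tx_size),
    }
```

For the example file from the readme, `parse_bench_filename("bench-0-50-1-True-100000-512.txt")` yields 0 faults, 50 nodes, 1 collocated worker per node, a 100K tx/s input rate, and 512B transactions.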