Yet another implementation of The Computer Language Benchmarks Game.
The idea is to build an automated process for benchmark generation and publishing.
It currently uses CI to generate the benchmark results, which guarantees that all numbers are produced in the same environment at nearly the same time. All benchmark tests run sequentially within a single CI job.
Once a change is merged into the main branch, the CI job regenerates and publishes the static website.
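As a rough sketch, the whole pipeline could live in a single-job GitHub Actions workflow like the hypothetical one below. The workflow name, trigger, and runner are assumptions rather than the repo's actual configuration; the steps simply chain the commands documented later in this README.

```yaml
# Hypothetical CI sketch -- not the repo's actual workflow file.
name: benchmark-and-publish
on:
  push:
    branches: [main]
jobs:
  bench:
    # One job on one machine, so all numbers share the same environment.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build, verify, and benchmark every implementation in sequence.
      - run: dotnet run -p tool --task build
        working-directory: bench
      - run: dotnet run -p tool --task test
        working-directory: bench
      - run: dotnet run -p tool --task bench
        working-directory: bench
      # Regenerate the static site with the fresh numbers.
      - run: yarn && yarn content && yarn generate
        working-directory: website
```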
To achieve better SEO, the published site is static and prerendered, powered by Nuxt.js. The website is hosted on Vercel.
To run the website locally:

```sh
git clone https://github.com/hanabi1224/Another-Benchmarks-Game.git
cd Another-Benchmarks-Game/website
yarn            # install dependencies
yarn generate   # prerender the static site
yarn dev        # start the local dev server
```
All benchmarks are defined in `bench.yaml`.
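As a purely illustrative sketch of what such a definition could look like (the field names below are assumptions, not the file's actual schema; see `bench.yaml` itself for the real format), one might describe problems and per-language implementations along these lines:

```yaml
# Hypothetical bench.yaml entry -- field names are illustrative, not the real schema.
problems:
  - name: nbody
    inputs: [500000, 5000000]    # workload sizes passed to each implementation
langs:
  - lang: rust
    problems:
      - name: nbody
        sources: [1.rs, 2.rs]    # competing implementations of the same problem
```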
The current benchmark problems and their implementations come from The Computer Language Benchmarks Game repo.
The first step is to build the source code for the various languages:
```sh
cd bench
dotnet run -p tool --task build
```
The second step is to test the built binaries to verify the correctness of each implementation:
```sh
cd bench
dotnet run -p tool --task test
```
The third step is to run the benchmarks and generate the results:
```sh
cd bench
dotnet run -p tool --task bench
```
For full usage information:
```sh
cd bench
dotnet run -p tool --help
```
Lastly, you can regenerate the website with the latest benchmark numbers:
```sh
cd website
yarn            # install dependencies
yarn content    # refresh site content with the latest benchmark results
yarn generate   # prerender the static site into dist
serve dist      # preview the generated site locally
```

Here `serve` can be any static file server, such as the `serve` package from npm.
TODO

- Integrate test environment info into the website
- Integrate build / test / benchmark information into the website
- ...
Code for the problem implementations from The Computer Language Benchmarks Game is under their Revised BSD license.
Other code in this repo is under the MIT license.