ElekCorp/Programming-Language-Benchmarks
bench

Why Build This

The idea is to build an automated process for benchmark generation and publishing.

Comparable numbers

CI is currently used to generate the benchmark results, guaranteeing that all numbers come from the same environment at nearly the same time. All benchmark tests are executed in a single CI job.

Automatic publish

Once a change is merged into the main branch, the CI job regenerates and publishes the static website.

Main Goals

  • Compare performance differences between languages. Note that implementations may use different optimizations (e.g. with or without multithreading), so please read the source code to check whether a given comparison is fair.
  • Compare performance differences between different compilers or runtimes of the same language, using the same source code.
  • Facilitate benchmarking in realistic server environments, as more and more applications are deployed with docker/k8s nowadays. Results there are likely to differ significantly from what you get on your dev machine.
  • Serve as a reference for CI setup, dev environment setup, and package management setup for different languages. Refer to the GitHub Actions workflows.

Build

To achieve better SEO, the published site is static and prerendered, powered by Nuxt.js.

Host

The website is hosted on Vercel.

Development

git clone https://github.com/hanabi1224/Another-Benchmarks-Game.git

cd Another-Benchmarks-Game/website
yarn
yarn generate
yarn dev

Benchmarks

All benchmarks are defined in bench.yaml.

The current benchmark problems and their implementations come from The Computer Language Benchmarks Game (repo).
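As an illustration, a problem entry in bench.yaml could be structured roughly like this. The field names below are hypothetical and only convey the idea; consult the actual bench.yaml for the real schema:

```yaml
# Hypothetical sketch of a bench.yaml problem entry (not the real schema)
problems:
  - name: nbody
    # Input sizes passed on the command line to every implementation
    inputs: [500000, 5000000]
    implementations:
      - lang: go
        source: nbody.go
      - lang: csharp
        source: nbody.cs
```

Keeping all problems, inputs, and implementations in one declarative file lets the CI job drive the build, test, and bench steps uniformly.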

Local development

Prerequisites

  • Docker
  • .NET 5 SDK (net5)
  • Node.js LTS
  • Yarn

Build

The first step is to build the source code of the various languages:

cd bench
# To build a subset
dotnet run -p tool -- --task build --langs lisp,go --problems nbody,helloworld --force-rebuild
# To build all
dotnet run -p tool -- --task build

Test

The second step is to test the built binaries to ensure the correctness of their implementations:

cd bench
# To test a subset
dotnet run -p tool -- --task test --langs lisp,go --problems nbody,helloworld
# To test all
dotnet run -p tool -- --task test

Bench

The third step is to run the benchmarks:

cd bench
# To bench a subset
dotnet run -p tool -- --task bench --langs lisp,go --problems nbody,helloworld
# To bench all
dotnet run -p tool -- --task bench

For full usage information:

cd bench
dotnet run -p tool -- -h

BenchTool
  Main function

Usage:
  BenchTool [options]

Options:
  --config <config>              Path to benchmark config file [default: bench.yaml]
  --algorithm <algorithm>        Root path that contains all algorithm code [default: algorithm]
  --include <include>            Root path that contains all include project templates [default: include]
  --build-output <build-output>  Output folder of build step [default: build]
  --task <task>                  Benchmark task to run, valid values: build, test, bench [default: build]
  --force-pull-docker            A flag that indicates whether to force pull docker image even when it exists [default: False]
  --force-rebuild                A flag that indicates whether to force rebuild [default: False]
  --fail-fast                    A flag that indicates whether to fail fast when an error occurs [default: False]
  --build-pool                   A flag that indicates whether to run builds in parallel where possible [default: False]
  --verbose                      A flag that indicates whether to print verbose information [default: False]
  --no-docker                    A flag that forces disabling docker [default: False]
  --langs <langs>                Languages to include, e.g. --langs go csharp [default: ]
  --problems <problems>          Problems to include, e.g. --problems binarytrees nbody [default: ]
  --environments <environments>  OS environments to include, e.g. --environments linux windows [default: ]
  --version                      Show version information
  -?, -h, --help                 Show help and usage information

Refresh website

Lastly, you can regenerate the website with the latest benchmark numbers:

cd website
yarn
yarn content
yarn generate
serve dist

TODOs

Integrate test environment info into the website

Integrate build / test / benchmark information into the website

...

How to contribute

TODO

Thanks

This is inspired by The Computer Language Benchmarks Game; thanks to its creator.

LICENSES

Code of problem implementations from The Computer Language Benchmarks Game is under their Revised BSD license.

Other code in this repo is under MIT.

About

Yet another implementation of computer language benchmarks game
