These are some files to determine the performance of RustPython.

Running `cargo bench` from the root of the repository will start the benchmarks. Once done, a graphical report is generated under `target/criterion/report/index.html` that you can use to view the results.

To view Python tracebacks during benchmarks, run `RUST_BACKTRACE=1 cargo bench`. You can also bench against a specific installed Python version by running:

```shell
$ PYTHON_SYS_EXECUTABLE=python3.7 cargo bench
```
Simply adding a file to the `benchmarks/` directory will add it to the set of files benchmarked. Each file is tested in two ways:

- The time to parse the file to AST
- The time it takes to execute the file
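As a sketch, a benchmark file is just ordinary Python; the file name and contents below are hypothetical, not part of the existing suite. The harness would time both parsing a file like this to AST and executing it:

```python
# hypothetical benchmarks/fib.py -- illustrative only, not part of the suite.
# Both the parse-to-AST time and the execution time of this file would be measured.

def fib(n):
    """Iteratively compute the n-th Fibonacci number."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Do some actual work at module level so execution time is non-trivial.
results = [fib(i) for i in range(20)]
```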
Micro benchmarks are small snippets of code added under the `microbenchmarks/` directory. A microbenchmark file has two sections:

- Optional setup code
- The code to be benchmarked

These two sections are delimited by `# ---`. For example:

```python
a_list = [1,2,3]
# ---
len(a_list)
```
Only `len(a_list)` will be timed. Setup or benchmarked code can optionally reference a variable called `ITERATIONS`. If present, the benchmark code will be invoked 5 times with `ITERATIONS` set to a value between 100 and 1,000. For example:

```python
obj = [i for i in range(ITERATIONS)]
```

`ITERATIONS` can appear in both the setup code and the benchmark code.
On MacOS you will need to add the following to a `.cargo/config` file:

```toml
[target.x86_64-apple-darwin]
rustflags = [
    "-C", "link-arg=-undefined",
    "-C", "link-arg=dynamic_lookup",
]
```