question for taxonomy of bench suites #2

Closed · jdomke opened this issue Jul 4, 2024 · 3 comments

jdomke (Collaborator) commented Jul 4, 2024

I don't think it is feasible to list all sub-benchmarks; for example, https://github.com/zjin-lcf/HeCBench has 100+ in the repo. So how do we deal with such complex benchmark suites?

pearce8 (Collaborator) commented Jul 9, 2024

Maybe we should have a "concise" table where we try to indicate what is in a suite, but don't give the details.

For some or all of the suites, we could then add a second table listing (a subset of?) what is in them?

AndiH (Member) commented Jul 9, 2024

I thought about the HeCBench repo as well, but the same already applies to the RAJAPerf suite, which I included. I don't think we should resolve it to this depth; there should be only one entry for HeCBench in the table, and the notes could then give a little more meat, if needed. Or we could expose the categories of benchmarks in HeCBench; see below.

In general, I think we have the following categories:

  1. Benchmarks: single programs executing HPC workloads (either synthetic or application(-inspired))
  2. Benchmark Suites: collections of Benchmarks
  3. And, in a sense, also Benchmark (Meta-)Suites: collections that (in part) contain Benchmark Suites

RAJAPerf, for which I have now gone through the details, is mostly 3. It's a suite which collects benchmarks, which are themselves aligned into packages, like LCALS or PolyBench.

My preferred strategy is to expose RAJAPerf as a suite and then PolyBench/LCALS/… as benchmarks of that suite, with optional comments giving a little more detail (not really used right now); see the sketch below. The packaged collections of benchmarks (like PolyBench) are all customized and less suitable as standalone suites, so I think it makes sense to view it this way. In general, though, benchmark suites will include benchmarks that we'll need to list twice: think of STREAM; it will have an entry where it is included in a suite, but it should also get its own entry, because it can be run standalone. I'd say this is fine; there is repetition, but a needed one. And if PolyBench or LCALS have standalone versions as distinct benchmark suites, they can be included again, potentially then resolved in more detail.
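
A rough sketch of what such rows could look like; the column names, the booktabs layout, and the exact wording are only assumptions for illustration, not the actual Overleaf table:

```latex
% Illustrative sketch only -- column names, layout, and the booktabs
% style are assumptions, not the actual Overleaf table.
\documentclass{article}
\usepackage{booktabs}
\begin{document}
\begin{tabular}{lll}
  \toprule
  Suite    & Benchmark & Notes                                 \\
  \midrule
  RAJAPerf & PolyBench & packaged collection within RAJAPerf   \\
  RAJAPerf & LCALS     & packaged collection within RAJAPerf   \\
  RAJAPerf & STREAM    & as included in the suite              \\
  --       & STREAM    & standalone entry, runnable on its own \\
  \bottomrule
\end{tabular}
\end{document}
```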

Back to HeCBench: taking these thoughts as a guideline, I think we can consider two options (sketched below):

  1. HeCBench is a suite, with the categories exposed as "benchmarks", each one with notes listing its components; this would do justice to the variety of the included benchmarks
  2. HeCBench is a suite, as a single line in the table, with the categories only mentioned in the notes; this would showcase that it is an integrated suite with many options
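
Roughly, and again only as an illustration (placeholder category names, same assumed columns as the sketch above), the two options would amount to rows like these:

```latex
% Illustrative rows to slot into the tabular sketched above; the
% category names are placeholders, not HeCBench's actual categories.
% Option 1: categories exposed as "benchmarks" of the suite
HeCBench & category A & components: benchmark 1, benchmark 2, \dots \\
HeCBench & category B & components: benchmark 3, benchmark 4, \dots \\
% Option 2: a single line, with the categories only in the notes
HeCBench & --         & categories: A, B, \dots; 100+ benchmarks    \\
```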

What do you think?

AndiH (Member) commented Jul 9, 2024

> For some or all of the suites, we could then add a second table listing (a subset of?) what is in them?

I'd be happy if we could fit everything into the same table, to keep the same structure; I think the sample table on Overleaf would already be capable of handling this in one.

AndiH closed this as completed Aug 19, 2024