Question about the taxonomy of benchmark suites #2
Maybe we should have a "concise" table that only indicates what is in a suite without giving the details. For some or all of the suites, we could then add a second table listing (a subset of?) their contents.
I thought about the HeCBench repo as well, but the same already applies to the RAJAPerf suite, which I included. I don't think we should resolve it to this depth; there should be only one entry for HeCBench in the table, and the notes could then give a little more meat, if needed. Or we could expose the categories of benchmarks in HeCBench; see below. In general, I think we have the following categories:
RAJAPerf, which I have now gone through in detail, is mostly category 3: a suite that collects benchmarks, which themselves are grouped into packages, like LCALS or PolyBench. My preferred strategy is to expose RAJAPerf as a suite and then PolyBench/LCALS/… as benchmarks of that suite, with optional comments giving a little more detail (not really used right now); see the sketch at the end of this comment. The packaged collections of benchmarks (like PolyBench) are all custom variants and less suitable as standalone suites, so I think it makes sense to view it this way.

In general, though, benchmark suites will include benchmarks that we then need to list twice. Think of STREAM: it gets an entry when it is included in a suite, but it should also get its own entry, because it can be run standalone as well. I'd say this is fine; there is repetition, but it's a needed one. And if PolyBench or LCALS have standalone versions as distinct benchmark suites, they can be included again, potentially resolving more detail there.

Back to HeCBench: taking these thoughts as a guideline, I think we can consider two options:
What do you think?
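A minimal sketch of how this nesting could look in benchmarks.yaml, assuming hypothetical field names (suite, benchmarks, comment) rather than the file's actual schema:

```yaml
# Hypothetical field names for illustration; not the actual
# benchmarks.yaml schema.
- suite: RAJAPerf
  benchmarks:
    - name: PolyBench
      comment: custom packaged variant, not the standalone release
    - name: LCALS
    - name: STREAM
# STREAM listed a second time as a standalone entry, since it can
# also be run on its own; the repetition is intentional.
- benchmark: STREAM
  comment: standalone version, independent of any suite
```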
I'd be happy if we could fit everything into the same table, to keep the structure consistent; I think the sample table on Overleaf could already handle this in one.
benchmark-survey/benchmarks.yaml, line 7 at commit e4fcb39
I don't think it is feasible to list all sub-benchmarks; for example, https://github.com/zjin-lcf/HeCBench has more than 100 in the repo. So how do we deal with such complex benchmark suites?
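One way to keep such a suite down to a single entry, in line with the discussion above, would be to record only category-level detail instead of all 100+ sub-benchmarks. A hypothetical sketch, with placeholder field and category names:

```yaml
# Hypothetical sketch: one entry for the whole suite, resolved only
# to category granularity; field and category names are placeholders.
- suite: HeCBench
  url: https://github.com/zjin-lcf/HeCBench
  comment: collection of 100+ heterogeneous-computing kernels
  categories:
    - category-a
    - category-b
    - category-c
```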