
[Question] is there a benchmark/comparison test to the standard libmysqlclient available? #458

Open
LowLevelMahn opened this issue Feb 27, 2025 · 2 comments

Comments

@LowLevelMahn

Not the speed of the database itself, but the overhead of protocol handling etc., in terms of speed and memory usage?

@LowLevelMahn LowLevelMahn changed the title [Question] is there a benchmark/comparison test to the standard mysql client available? [Question] is there a benchmark/comparison test to the standard libmysqlclient available? Feb 27, 2025
@anarthal
Collaborator

Hi!

Not right now, although I'd like to add one. Are you aware of any standard tests I could perform? Or otherwise any specific comparison you'd like to see?

The points I'm not clear on (and why I haven't done it yet):

  • Should we use text queries or prepared statements? Prepared statements incur an extra round-trip for statement preparation, which you can easily avoid by using with_params.
  • Should we include session establishment overhead somehow? Part of this overhead can be avoided with connection_pool.
  • Finally, should we use sync functions, async functions using callbacks, or async functions using coroutines? I'd incline towards the last one.

I'm happy to hear your suggestions.

Thanks,
Ruben.

@LowLevelMahn
Author

LowLevelMahn commented Feb 27, 2025

Are you aware of any standard tests I could perform? Or otherwise any specific comparison you'd like to see?

Any standard tests I found are DB-performance related, not about the performance of the protocol encoding/decoding.
In principle it is encoding speed: large queries (with many parameters with large values, strings in SELECTs, etc.) for prepared statements,
and queries with large result sets (many rows/columns, large data sizes) for decoding speed.

(Maybe using only temporary tables, or whatever is best to keep the DB performance out of the test.)
Test 1: with libmysqlclient -> this is the Oracle baseline speed
Test 2: the same queries using Boost.MySQL

Additionally, on the protocol level (no connection to the DB; just the buffers of binary data grabbed from Test 2 as input, like a regression test), just to check how fast the encoding/decoding behaves (if that is relevant at all).

Should we use text queries or prepared statements?

Both, but I think prepared statements are more relevant for encoding speed.

The additional protocol-level tests could use the Test 1/Test 2 queries.

The problem is keeping the DB mostly out of the test, so maybe a temporary/virtual table or something where the speed is more constant.
