adapted rust service definition to changed proto format
Trisfald authored and aspurio committed Jan 10, 2022
1 parent fd1476d commit cfbd33b
Showing 8 changed files with 125 additions and 27 deletions.
3 changes: 2 additions & 1 deletion README.md
@@ -52,7 +52,7 @@ The benchmark can be configured through the following environment variables:
|--------|---------------|:---------------:|
|GRPC_BENCHMARK_DURATION|Duration of the benchmark.|20s|
|GRPC_BENCHMARK_WARMUP|Duration of the warmup. Stats won't be collected.|5s|
-|GRPC_REQUEST_SCENARIO|Scenario (from [scenarios/](scenarios/)) containing the protobuf and the data to be sent in the client request. It is advised to pass this argument to `build.sh` and run the former each time `helloworld.proto` is different from the previously ran scenario.|string_100B|
+|GRPC_REQUEST_SCENARIO|Scenario (from [scenarios/](scenarios/)) containing the protobuf and the data to be sent in the client request.|string_100B|
|GRPC_SERVER_CPUS|Maximum number of cpus used by the server.|1|
|GRPC_SERVER_RAM|Maximum memory used by the server.|512m|
|GRPC_CLIENT_CONNECTIONS|Number of connections to use.|50|
@@ -63,6 +63,7 @@ The benchmark can be configured through the following environment variables:
### Parameter recommendations
* `GRPC_BENCHMARK_DURATION` should not be too small. Some implementations need a *warm-up* before achieving their optimal performance and most real-life gRPC services are expected to be long running processes. From what we measured, **300s** should be enough.
* `GRPC_SERVER_CPUS` + `GRPC_CLIENT_CPUS` should not exceed the total number of cores on the machine. The reason is that you don't want the `ghz` client to steal precious CPU cycles from the service under test. Keep in mind that setting `GRPC_CLIENT_CPUS` too low may not saturate the service in some of the more performant implementations. Also note that limiting `GRPC_SERVER_CPUS` to 1 will severely hamper performance for some technologies - is running a service on 1 CPU your use case? It may be, but keep in mind that an eventual load balancer also incurs some costs.
+* `GRPC_REQUEST_SCENARIO` is a parameter to both `build.sh` and `bench.sh`. The images must be rebuilt each time you intend to use a scenario whose `helloworld.proto` differs from the one run previously.

Other parameters will depend on your use-case. Choose wisely.
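Since the scenario is passed as an environment variable to both scripts, a typical invocation might look like the following sketch (the `rust_tonic_mt_bench` argument is an assumption for illustration; check the scripts for the exact benchmark names they accept):

```shell
# Rebuild the images whenever the scenario's helloworld.proto differs
# from the previously run one, then benchmark with the same scenario.
GRPC_REQUEST_SCENARIO=string_100B ./build.sh rust_tonic_mt_bench
GRPC_REQUEST_SCENARIO=string_100B ./bench.sh rust_tonic_mt_bench
```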

2 changes: 1 addition & 1 deletion rust_grpcio_bench/src/main.rs
@@ -21,7 +21,7 @@ struct GreeterService;
impl Greeter for GreeterService {
fn say_hello(&mut self, ctx: RpcContext<'_>, req: HelloRequest, sink: UnarySink<HelloReply>) {
let mut resp = HelloReply::default();
-resp.set_message(req.get_name().to_string());
+resp.set_response(req.get_request().clone());
let f = sink
.success(resp)
.map_err(move |e| println!("failed to reply {:?}: {:?}", req, e))
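The renamed accessors (`get_request`, `set_response`) imply that the scenario's `helloworld.proto` changed its field names from `name`/`message` to `request`/`response`. A hedged reconstruction of what the updated message definitions might look like (package name and field numbers are assumptions, not taken from the commit):

```proto
syntax = "proto3";
package helloworld;

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
  string request = 1;  // previously: string name = 1;
}

message HelloReply {
  string response = 1;  // previously: string message = 1;
}
```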
36 changes: 36 additions & 0 deletions rust_thruster_mt_bench/Cargo.lock


2 changes: 1 addition & 1 deletion rust_thruster_mt_bench/src/main.rs
@@ -25,7 +25,7 @@ pub async fn say_hello(mut context: Ctx, _next: MiddlewareNext<Ctx>) -> Middlewa
Ok(message_to_context(
context,
hello_world::HelloReply {
-message: hello_world_request.name,
+response: hello_world_request.request,
},
)
.await)
36 changes: 36 additions & 0 deletions rust_thruster_st_bench/Cargo.lock


2 changes: 1 addition & 1 deletion rust_thruster_st_bench/src/main.rs
@@ -25,7 +25,7 @@ pub async fn say_hello(mut context: Ctx, _next: MiddlewareNext<Ctx>) -> Middlewa
Ok(message_to_context(
context,
hello_world::HelloReply {
-message: hello_world_request.name,
+response: hello_world_request.request,
},
)
.await)
65 changes: 45 additions & 20 deletions rust_tonic_mt_bench/Cargo.lock


6 changes: 3 additions & 3 deletions rust_tonic_mt_bench/src/main.rs
@@ -21,7 +21,7 @@ impl Greeter for MyGreeter {
request: Request<HelloRequest>,
) -> Result<Response<HelloReply>, Status> {
let reply = hello_world::HelloReply {
-message: request.into_inner().name,
+response: request.into_inner().request,
};
Ok(Response::new(reply))
}
@@ -31,10 +31,10 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
let cpus = std::env::var("GRPC_SERVER_CPUS")
.map(|v| v.parse().unwrap())
.unwrap_or(1);

println!("Running with {} threads", cpus);

// Esentially the same as tokio::main, but with number of threads set to
// avoid thrashing when cggroup limits are applied by Docker.
tokio::runtime::Builder::new_multi_thread()
.worker_threads(cpus)
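Across all four services the change is the same mechanical rename of the proto fields. A minimal, dependency-free sketch of the new request/response mapping (plain structs standing in for the types that tonic/prost would generate from `helloworld.proto`; names are illustrative):

```rust
// Hand-written stand-ins for the generated proto message types.
#[derive(Debug, Clone, PartialEq)]
struct HelloRequest {
    request: String, // previously `name`
}

#[derive(Debug, Clone, PartialEq)]
struct HelloReply {
    response: String, // previously `message`
}

// Mirrors the updated handler body: echo the request field back unchanged.
fn say_hello(req: HelloRequest) -> HelloReply {
    HelloReply { response: req.request }
}

fn main() {
    let reply = say_hello(HelloRequest { request: "world".to_string() });
    assert_eq!(reply.response, "world");
    println!("{:?}", reply);
}
```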
