This is a simple asynchronous demo that demonstrates the use of contexts and asynchronous message handling and operations to obtain highly concurrent processing with minimal fuss.
This is set up for configuration with CMake for ease of use.
You can override the level of concurrency with the PARALLEL
option.
This determines how many requests the server will accept at a time and keep outstanding. Note that for our toy implementation, we create this many "logical" flows of execution (contexts, which are NOT threads), where each request is followed by a reply.
The value of PARALLEL
must be at least one, and may be as large
as your memory will permit. (The default value is 128.) Probably
you want the value to be small enough to ensure that you have enough
file descriptors. (You can create more contexts than this, but generally
you can’t have more than one client per descriptor. Contexts can be used
on the client side to support many thousands of concurrent requests over
even just a single TCP connection, however.)
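With CMake, overriding PARALLEL might look like the following out-of-source build (a sketch only; it assumes the demo's CMakeLists exposes PARALLEL as a cache variable, per the option described above):

```
% mkdir build
% cd build
% cmake -D PARALLEL=32 ..
% make
```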
You can also build this all by hand with Make or whatever.
On UNIX-style systems:
% export CPPFLAGS="-D PARALLEL=32 -I /usr/local/include"
% export LDFLAGS="-L /usr/local/lib -lnng"
% export CC="cc"
% ${CC} ${CPPFLAGS} server.c -o server ${LDFLAGS}
% ${CC} ${CPPFLAGS} client.c -o client ${LDFLAGS}
The easiest way to run the demo is with the run.sh
script, which sends COUNT (10) random jobs to the
server in parallel.
You can of course run the client and server manually instead.
The server takes the address (URL) as its only argument.
The client takes the address (URL), followed by the number of milliseconds the server should "wait" before responding (to simulate an expensive operation).
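For example, on a UNIX-style system (the tcp:// address and the 1000 ms delay are arbitrary choices here):

```
% ./server tcp://127.0.0.1:5555 &
% ./client tcp://127.0.0.1:5555 1000
```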