On 23rd June 2011 we launched the first full-stack 100% real time web framework. Now we're pleased to present Version 0.2.0 - the first real time web platform designed to distribute requests over multiple servers.
0.2.0 takes us much closer to our goal of providing a horizontally scalable platform for the next generation of high-traffic real time web applications. It contains many of the features you've been asking for, plus a few surprises:
0.2 has been re-architected so the front end (which serves the client assets, HTTP API, and handles the websockets) is completely independent of the back end (which connects to databases/Redis and processes incoming requests). Requests are transmitted asynchronously from the front end to the back end as internal RPC calls.
Splitting up the architecture makes the code cleaner and easier to test and benchmark. More importantly, you're now able to run front end and back end servers independently over high speed, low latency ZeroMQ sockets. To find out how to install ZeroMQ, type `socketstream help`.
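To make the front end/back end split concrete, here is a minimal sketch of what an internal RPC round trip could look like. The field names (`id`, `method`, `params`, `session`) and the dispatcher are illustrative assumptions, not SocketStream's actual wire format:

```javascript
// Hypothetical sketch of a front-end -> back-end RPC envelope.
// Field names are assumptions, not SocketStream's real wire format.
function makeRpcRequest(id, method, params, session) {
  return JSON.stringify({ id: id, method: method, params: params, session: session });
}

function handleRpcRequest(raw, actions) {
  // The back end parses the envelope, dispatches to the named action,
  // and returns a serialized response keyed by the same request id.
  const req = JSON.parse(raw);
  const result = actions[req.method](req.params);
  return JSON.stringify({ id: req.id, result: result });
}

// Example: a back end exposing one action
const actions = { 'app.square': (p) => p.n * p.n };
const reply = handleRpcRequest(makeRpcRequest(1, 'app.square', { n: 4 }, 'sess-1'), actions);
console.log(reply); // {"id":1,"result":16}
```

Because requests and replies are matched by id, the transport underneath (ZeroMQ here, potentially something else later) is free to deliver them asynchronously.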
There may also be interesting opportunities in the future around using Mongrel2 (which is also built on ZeroMQ) once it fully supports websockets - or even writing back end request handlers in different languages/platforms (e.g. Twitter's Finagle). If anyone wants to explore these areas, please do.
But back to scaling...
Whereas version 0.1 was limited to doing everything in a single thread, 0.2 supports multiple back end server processes running across multiple CPU cores and machines - opening up new opportunities to horizontally scale up according to your traffic at any given moment.
Initial benchmarks show near-linear horizontal scalability when multiple back end servers are added to complete a CPU-heavy task (such as computing 1000 random strings per request), fully exploiting the resources of modern multi-core CPUs. The best thing is these back end servers can come and go as they please with zero configuration - so new EC2 instances could be automatically launched and terminated using AWS Auto Scaling.
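For reference, the CPU-heavy benchmark task described above (1000 random strings per request) might look roughly like this sketch; the function names and string length are our own illustrative choices:

```javascript
// Sketch of a CPU-bound task similar to the benchmark described above:
// generate 1000 random strings per request. Names are illustrative.
function randomString(length) {
  const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
  let out = '';
  for (let i = 0; i < length; i++) {
    out += chars[Math.floor(Math.random() * chars.length)];
  }
  return out;
}

function heavyTask() {
  const strings = [];
  for (let i = 0; i < 1000; i++) strings.push(randomString(16));
  return strings;
}

console.log(heavyTask().length); // 1000
```

A task like this is pure CPU work with no shared state, which is exactly the shape of workload that load-balances near-linearly across extra back end processes.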
None of this changes the way you normally start the server: `socketstream server` continues to work just as before. But when you're ready to scale up, we now have a number of options for you to explore.
For full details on how to use this, check out the Scaling section in the User Guide: /doc/guide/en/deploying/scaling.md
Think of 0.2 as our first step towards producing a world-class horizontally scalable platform designed with cloud computing in mind. It's worth noting that while we feel ZeroMQ is the best choice for now, our internal RPC layer supports multiple transports, allowing us to experiment with other technologies in the future as they mature.
Version 0.2 uses Connect for HTTP middleware, allowing you to hook into the wealth of 3rd party middleware or easily write your own.
Though we have yet to test and document it, this means we're now able to support Connect Auth out of the box as well as expose the entire request object to /config/http.coffee - should you wish to access incoming POST headers.
The /config/http.coffee file is now optional. If you've changed this in the past, you'll need to copy and merge this file from a new 0.2 project. Until this is done, any existing custom middleware you have will be ignored.
The latest version of Socket.IO 0.8 has been integrated for greater compatibility with older browsers and the latest websocket standards in Firefox 6 and Chrome 14. It has also allowed us to clean up a whole bunch of client/server code.
It has resulted in slightly different behavior to the version of Socket.IO in SocketStream 0.1. You'll notice disconnects/reconnects are no longer acted upon instantly. Instead the browser waits a while hoping the server will come back. You may configure Socket.IO directly from /app/config.coffee. See /doc/guide/en/developing/environments_and_configuration.md for details.
Our thanks go to Guillermo Rauch once again for improving on this already excellent library.
In 0.1, if you wanted to perform certain actions server-side when a client disconnects (e.g. run clean-up code in the database, remove the user's avatar from the screen), you had to put this code into /app/server/app.coffee and be sure to call that method at least once per session. This was ugly.
In 0.2 we now have something better: server-side events. We recommend responding to these in a new file which lives in /config/events.coffee (or of course, events.js). It's created for you by default when you generate a new project with all possible events commented out until you wish to use them.
Now you're able to run your own custom server-side code whenever a client initializes, sends a regular heartbeat, or disconnects (i.e. the user shuts their browser down or the connection times out). What's more, the work done by these custom event hooks is automatically load-balanced across all available back end servers in the same way as any other request.
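As an illustration only (the hook names and the dispatcher below are our assumptions, not SocketStream's actual events API), a server-side events file in the spirit of /config/events.js might track which sessions are online like this:

```javascript
// Hypothetical sketch of server-side event hooks in the spirit of
// /config/events.coffee (or events.js). The hook names and the tiny
// dispatcher are illustrative assumptions, not SocketStream's real API.
const onlineUsers = new Set();

const events = {
  init:       (sessionId) => { onlineUsers.add(sessionId); },
  heartbeat:  (sessionId) => { /* e.g. touch a last-seen timestamp */ },
  disconnect: (sessionId) => { onlineUsers.delete(sessionId); }
};

// A tiny local dispatcher standing in for the framework
function fire(name, sessionId) {
  if (events[name]) events[name](sessionId);
}

fire('init', 'sess-1');
fire('heartbeat', 'sess-1');
fire('disconnect', 'sess-1');
console.log(onlineUsers.size); // 0
```

The point is simply that each hook is an ordinary function, so the framework can ship them off to any available back end server like any other request.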
We've added a few benchmarks to stress-test the ZeroMQ-powered router and back end servers. These have been invaluable to us as we've experimented with different scaling technologies and configurations.
Many more benchmarks will be added in the future, including those targeting the front end and your own custom application code.
### Plug Sockets (Highly Experimental)
ZeroMQ also brings us another brand new feature in 0.2 which we're very excited about.
Plug Sockets allow SocketStream to talk to legacy apps or external servers at high speed. This is ideal if you want to send incoming data to a game server in C, a message server in Erlang, or a financial system in Java. In fact, with ZeroMQ bindings in over 20 languages, you can use Plug Sockets to talk to just about anything that beeps.
In the near future Plug Sockets will also make it easy to create a spiffy new real time web interface for an existing Rails project, whilst keeping all your existing models, business logic and specs in Ruby.
You may either choose to take advantage of the JSON RPC asynchronous request handler we use internally, or write your own binary-level protocol for maximum speed. The choice is yours.
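To show why the JSON option is so approachable for external processes, here is a sketch of a request/response pair. The exact field names SocketStream uses are an assumption here, loosely following the common JSON-RPC convention of echoing the request id back in the reply:

```javascript
// Illustrative JSON message shapes for a Plug Socket speaking JSON.
// Field names are assumptions loosely based on the JSON-RPC convention.
const request  = { method: 'game.move', params: { x: 3, y: 7 }, id: 99 };

// An external server (C, Erlang, Java, ...) only needs to parse JSON,
// do its work, and echo the id back alongside a result:
function replyTo(req, result) {
  return { result: result, error: null, id: req.id };
}

console.log(replyTo(request, 'ok').id); // 99
```

Anything more exotic (a binary protocol for maximum speed) would replace the JSON layer entirely while keeping the same request/reply pattern over the socket.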
See /doc/guide/en/optional_modules/plug_sockets.md for full details.
In 0.1 any changes to client-side code in development mode would be reflected in the browser without having to restart the server.
In 0.2 we go one step further. If you have ZeroMQ installed (type `socketstream help` to see how), the back end worker processes are now spawned by a process manager. If your server-side code changes (in development mode), we automatically kill the child process and start a new one.
We're also experimenting with automatically reloading the browser window/tab when files change. So far this is working for changes to /lib/client, /lib/css and /app/server files. This new feature is currently turned off by default as it needs some more work, but can easily be enabled with SS.config.client.auto_reload = true. Note: Changes are detected instantly on Unix/Linux but are slightly delayed on OS X due to the way Node.js polls for file changes. Hopefully all this can be improved in the future.
- SS.server methods can now take multiple arguments (though stick to passing an object to the first arg for 100% HTTP API compatibility)
- New @request object accessible within /app/server methods. Contains HTTP POST data if sent to the HTTP API
- Now supports the Node.js debugger (just type `socketstream debug server`). Can be used with any command
- Removed the /static dir concept. Browser Check now looks for /app/views/incompatible.jade (or .html) by default (can be changed in config)
- exports.authenticate = true now properly enforced by back end request handler, regardless of request origin
- SS.config.redis.password can now be optionally supplied
- Improved HTTP API when specifying requests with .json. Now more closely aligned with http://en.wikipedia.org/wiki/JSON-RPC
- Removed SS.libs as NPM 1.x now loads correct version automatically
- Enabled the 'session', 'request', and 'user' reserved variables to be renamed if required using SS.config.reserved_vars
- Improved logic when processing calls to /app/server
- Removed /vendor idea. Use /node_modules instead
- Sharing code between /app/server files can now be done using SS.require() which looks for files in /lib/server
- Publishing events to users now works in the same way as channels - more efficient when messaging multiple users at once
- Params no longer need to be provided when publishing events
- Experimenting with a small number of specs in Jasmine. These are implemented in an external project for now
- Moved many sections out of the README into /doc/guide. Thanks to CC Maco Young for help in this area
There have been many more changes since the original 0.2 preview was released on 7th August. Check out HISTORY.md.
All of this has been achieved with minimal changes to the SocketStream developer API. Almost all features present in 0.1 are fully supported in 0.2. The main exception is HTTP Basic Auth which will be back in the near future, once we figure out the best way to integrate authentication with server-side models coming in 0.3.
As a result of experimenting with various ways to do sessions, either the @session variable or the @getSession(cb) callback will retrieve the current session, but we recommend using @session. In 0.2, session details are stored in Redis and cached on the front end server (then passed to the back end via the internal RPC transport) to avoid hammering Redis unnecessarily.
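The caching idea is simple enough to sketch. The class and field names below are our own illustration (not SocketStream's internals): the front end consults a local cache first and only falls back to the session store (Redis) on a miss:

```javascript
// Sketch of the front-end session cache described above. This is an
// illustration, not SocketStream's actual code: session details live
// in a backing store (Redis), and the front end caches them locally
// so repeated requests don't hammer the store.
class SessionCache {
  constructor(loadFromStore) {
    this.cache = new Map();
    this.loadFromStore = loadFromStore; // e.g. a Redis lookup
    this.misses = 0;
  }
  get(sessionId) {
    if (!this.cache.has(sessionId)) {
      this.misses++;
      this.cache.set(sessionId, this.loadFromStore(sessionId));
    }
    return this.cache.get(sessionId);
  }
}

// Fake backing store standing in for Redis
const store = { 'sess-1': { userId: 42 } };
const sessions = new SessionCache((id) => store[id]);

sessions.get('sess-1');
sessions.get('sess-1'); // served from the cache; no second store lookup
console.log(sessions.misses); // 1
```

Once the front end holds the session, it can pass the details along to the back end inside the internal RPC call rather than having every back end worker query Redis itself.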
On top of this you may need to delete your /config/http.coffee file if you don't have any custom HTTP Middleware, or upgrade it to the new format if you do (create a new project to see an example).
### How you can help
Many thanks to those who've already submitted patches and pull requests to 0.1.
It was necessary to reorganize the code to support the new distributed architecture and provide a stable foundation for future features. Now that's over with, feel free to take a section of code and work on it to make it more efficient - or implement a new feature (with the obvious caveat that not all features should find their way into the core).
As always, don't be afraid to make huge changes if you can demonstrate the net result will be simpler and more efficient.
You may wonder why all the emphasis on scalability? Well, we needed to make SocketStream scale before we tackle our next major project:
A real time website for SocketStream at www.socketstream.org. We have some big ideas for this site. It will also be the first real test of the new ZeroMQ scaling technology on a public website - useful once the link hits Hacker News ;)
After that, we'll start working on Version 0.3 which among other things will look at implementing models (client and server-side), Backbone.js integration (which we'll make easy but optional), and a re-write of the client-side asset manager to deliver a bunch of new features. If you'd like to help out with any of this please get in touch.
You can see a discussion about our future plans for 0.3 and beyond here: http://groups.google.com/group/socketstream/browse_thread/thread/b6f580691151b038
It is becoming increasingly apparent that SocketStream is gradually morphing into two things: a real time full-stack developer framework, and (increasingly so in the future) a high-performance distributed real time hosting platform that will simply and effortlessly serve any SocketStream project directory across multiple servers with minimal configuration.
Both on their own are exciting projects to work on - but together they have the potential to provide a very powerful and compelling platform. One we fully intend to leverage in a number of our own projects planned for later this year and 2012.
There may be scope in the future to extract the distributed hosting components into a separate `socketstream-server` project/NPM module which will only be required by production servers - potentially paving the way for virtual hosting (many websites on one server), or simply making it easy to bring up more servers to handle a high-traffic website. As always, your comments and thoughts are welcome. Let's see how this evolves...
Thanks to the many people who have suggested features and contributed ideas on our Google Group and IRC channel. Please keep them coming :)