Document all the new HTTP stuff
bdarnell committed Apr 27, 2014
1 parent 500a605 commit 425a31d
Showing 15 changed files with 328 additions and 44 deletions.
3 changes: 3 additions & 0 deletions docs/conf.py
@@ -37,6 +37,9 @@
]
# I wish this could go in a per-module file...
coverage_ignore_classes = [
# tornado.concurrent
"TracebackFuture",

# tornado.gen
"Multi",
"Runner",
5 changes: 5 additions & 0 deletions docs/gen.rst
@@ -32,6 +32,11 @@
.. autoclass:: YieldPoint
:members:

.. autofunction:: with_timeout
.. autoexception:: TimeoutError

.. autofunction:: maybe_future

Other classes
-------------

5 changes: 5 additions & 0 deletions docs/http1connection.rst
@@ -0,0 +1,5 @@
``tornado.http1connection`` -- HTTP/1.x client/server implementation
====================================================================

.. automodule:: tornado.http1connection
:members:
1 change: 1 addition & 0 deletions docs/iostream.rst
@@ -48,3 +48,4 @@
----------

.. autoexception:: StreamClosedError
.. autoexception:: UnsatisfiableReadError
1 change: 1 addition & 0 deletions docs/networking.rst
@@ -6,6 +6,7 @@ Asynchronous networking
gen
ioloop
iostream
http1connection
httpclient
netutil
tcpserver
93 changes: 86 additions & 7 deletions docs/releases/next.rst
@@ -24,12 +24,6 @@ Backwards-compatibility notes
of the old ``TracebackFuture`` class. ``TracebackFuture`` is now
simply an alias for ``Future``.

`tornado.httpclient`
~~~~~~~~~~~~~~~~~~~~

* The command-line HTTP client (``python -m tornado.httpclient $URL``)
now works on Python 3.

`tornado.gen`
~~~~~~~~~~~~~

@@ -39,6 +33,57 @@
* Performance of coroutines has been improved.
* Coroutines no longer generate ``StackContexts`` by default, but they
will be created on demand when needed.
* New function `.with_timeout` wraps a `.Future` and raises an exception
if it doesn't complete in a given amount of time.
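
A rough usage sketch (the helper name and five-second deadline are
illustrative; a ``datetime.timedelta`` is assumed to be accepted as the
timeout, as with `.IOLoop.add_timeout`)::

    from datetime import timedelta

    from tornado import gen
    from tornado.httpclient import AsyncHTTPClient

    @gen.coroutine
    def fetch_with_deadline(url):
        client = AsyncHTTPClient()
        try:
            # Wrap the fetch Future; gen.TimeoutError is raised if it
            # has not resolved within five seconds.
            response = yield gen.with_timeout(timedelta(seconds=5),
                                              client.fetch(url))
        except gen.TimeoutError:
            response = None
        raise gen.Return(response)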

`tornado.http1connection`
~~~~~~~~~~~~~~~~~~~~~~~~~

* New module contains the HTTP implementation shared by `tornado.httpserver`
and ``tornado.simple_httpclient``.

`tornado.httpclient`
~~~~~~~~~~~~~~~~~~~~

* The command-line HTTP client (``python -m tornado.httpclient $URL``)
now works on Python 3.

`tornado.httpserver`
~~~~~~~~~~~~~~~~~~~~

* ``tornado.httpserver.HTTPRequest`` has moved to
`tornado.httputil.HTTPServerRequest`.
* HTTP implementation has been unified with ``tornado.simple_httpclient``
in `tornado.http1connection`.
* Now supports ``Transfer-Encoding: chunked`` for request bodies.
* Now supports ``Content-Encoding: gzip`` for request bodies if ``gzip=True``
is passed to the `.HTTPServer` constructor.
* The ``connection`` attribute of `.HTTPServerRequest` is now documented
for public use; applications are expected to write their responses
via the `.HTTPConnection` interface.
* The `.HTTPServerRequest.write` and `.HTTPServerRequest.finish` methods
are now deprecated.
* `.HTTPServer` now supports `.HTTPServerConnectionDelegate` in addition to
the old ``request_callback`` interface. The delegate interface supports
streaming of request bodies.
* `.HTTPServer` now detects the error of an application sending a
``Content-Length`` header that is inconsistent with the actual content.
* New constructor arguments ``max_header_size`` and ``max_body_size``
allow separate limits to be set for different parts of the request.
``max_body_size`` is applied even in streaming mode.
* New constructor argument ``chunk_size`` can be used to limit the amount
of data read into memory at one time per request.
* New constructor arguments ``idle_connection_timeout`` and ``body_timeout``
allow time limits to be placed on the reading of requests.
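
The following sketch combines the new server options; the handler, port,
and particular limits are illustrative, not recommended values::

    import tornado.ioloop
    import tornado.web
    from tornado.httpserver import HTTPServer

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("Hello, world")

    app = tornado.web.Application([(r"/", MainHandler)])
    server = HTTPServer(
        app,
        gzip=True,                       # decode gzipped request bodies
        max_header_size=16 * 1024,       # limit on the header section
        max_body_size=64 * 1024 * 1024,  # applied even in streaming mode
        chunk_size=64 * 1024,            # data read into memory at one time
        idle_connection_timeout=3600,    # seconds before closing an idle connection
        body_timeout=600,                # seconds allowed for reading a body
    )
    server.listen(8888)
    tornado.ioloop.IOLoop.instance().start()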

`tornado.httputil`
~~~~~~~~~~~~~~~~~~

* `.HTTPServerRequest` was moved to this module from `tornado.httpserver`.
* New base classes `.HTTPConnection`, `.HTTPServerConnectionDelegate`,
and `.HTTPMessageDelegate` define the interaction between applications
and the HTTP implementation.
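
A rough sketch of how these interfaces fit together; the echoing
behavior, class names, and port are illustrative, and `.HTTPServer` is
assumed to accept the delegate directly as described under
`tornado.httpserver` above::

    from tornado import httputil, ioloop
    from tornado.httpserver import HTTPServer

    class EchoDelegate(httputil.HTTPServerConnectionDelegate):
        def start_request(self, server_conn, request_conn):
            # Called for each request; returns the per-message delegate.
            return EchoMessageDelegate(request_conn)

    class EchoMessageDelegate(httputil.HTTPMessageDelegate):
        def __init__(self, connection):
            self.connection = connection
            self.chunks = []

        def data_received(self, chunk):
            self.chunks.append(chunk)

        def finish(self):
            # Echo the request body back over the HTTPConnection interface.
            body = b"".join(self.chunks)
            headers = httputil.HTTPHeaders()
            headers["Content-Length"] = str(len(body))
            self.connection.write_headers(
                httputil.ResponseStartLine("HTTP/1.1", 200, "OK"), headers)
            self.connection.write(body)
            self.connection.finish()

    server = HTTPServer(EchoDelegate())
    server.listen(8888)
    ioloop.IOLoop.instance().start()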


`tornado.ioloop`
~~~~~~~~~~~~~~~~
@@ -48,6 +93,7 @@ Backwards-compatibility notes
(when possible) to avoid a garbage-collection-related problem in unit tests.
* New method `.IOLoop.clear_instance` makes it possible to uninstall the
singleton instance.
* `.IOLoop.add_timeout` is now a bit more efficient.

`tornado.iostream`
~~~~~~~~~~~~~~~~~~
@@ -57,6 +103,17 @@ Backwards-compatibility notes
for use with coroutines.
* No longer gets confused when an ``IOError`` or ``OSError`` without
an ``errno`` attribute is raised.
* `.BaseIOStream.read_bytes` now accepts a ``partial`` keyword argument,
which can be used to return before the full amount has been read.
This is a more coroutine-friendly alternative to ``streaming_callback``.
* `.BaseIOStream.read_until` and ``read_until_regex`` now accept a
``max_bytes`` keyword argument which will cause the request to fail if
it cannot be satisfied within the given number of bytes (see the sketch
after this list).
* `.IOStream` no longer reads from the socket into memory if it does not
need data to satisfy a pending read. As a side effect, the close callback
will not be run immediately if the other side closes the connection
while there is unconsumed data in the buffer.
* The default ``chunk_size`` has been increased to 64KB (from 4KB).
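
A sketch of the coroutine-style reads described above; the helper names,
delimiter, and byte limits are illustrative::

    from tornado import gen
    from tornado.iostream import StreamClosedError, UnsatisfiableReadError

    @gen.coroutine
    def read_some(stream):
        # With partial=True the read returns as soon as any data (up to
        # 4096 bytes) is available, instead of waiting for the full amount.
        chunk = yield stream.read_bytes(4096, partial=True)
        raise gen.Return(chunk)

    @gen.coroutine
    def read_line(stream):
        try:
            # The read fails if no CRLF appears within the first 1024
            # bytes; depending on timing this surfaces as an
            # UnsatisfiableReadError or as the stream being closed.
            line = yield stream.read_until(b"\r\n", max_bytes=1024)
        except (UnsatisfiableReadError, StreamClosedError):
            line = None
        raise gen.Return(line)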

`tornado.netutil`
~~~~~~~~~~~~~~~~~
@@ -81,7 +138,13 @@ Backwards-compatibility notes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* Improved default cipher suite selection (Python 2.7+).

* HTTP implementation has been unified with ``tornado.httpserver``
in `tornado.http1connection`.
* Streaming request bodies are now supported via the ``body_producer``
keyword argument to `tornado.httpclient.HTTPRequest`.
* The ``expect_100_continue`` keyword argument to
`tornado.httpclient.HTTPRequest` allows the use of the HTTP ``Expect:
100-continue`` feature.
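
A sketch of a streamed request body; the chunks and helper names are
illustrative, and the producer is assumed to be called with a ``write``
function whose result can be yielded::

    from tornado import gen
    from tornado.httpclient import AsyncHTTPClient, HTTPRequest

    @gen.coroutine
    def producer(write):
        # Chunks are written as they are produced instead of buffering
        # the whole body up front.
        for chunk in [b"part one,", b"part two"]:
            yield write(chunk)

    @gen.coroutine
    def post_streaming(url):
        request = HTTPRequest(url, method="POST",
                              body_producer=producer,
                              expect_100_continue=True)
        response = yield AsyncHTTPClient().fetch(request)
        raise gen.Return(response)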

`tornado.stack_context`
~~~~~~~~~~~~~~~~~~~~~~~
@@ -103,6 +166,11 @@ Backwards-compatibility notes

* When gzip support is enabled, all ``text/*`` mime types will be compressed,
not just those on a whitelist.
* `.Application` now implements the `.HTTPMessageDelegate` interface.
* It is now possible to support streaming request bodies with the
`.stream_request_body` decorator and the new `.RequestHandler.data_received`
method; a sketch follows this list.
* `.RequestHandler.flush` now returns a `.Future` if no callback is given.
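
A sketch of a streaming upload handler (the byte counting is purely
illustrative)::

    from tornado import web

    @web.stream_request_body
    class UploadHandler(web.RequestHandler):
        def prepare(self):
            self.bytes_read = 0

        def data_received(self, chunk):
            # Called repeatedly as body data arrives, instead of the
            # whole body being buffered before post() runs.
            self.bytes_read += len(chunk)

        def post(self):
            self.write("received %d bytes" % self.bytes_read)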

`tornado.websocket`
~~~~~~~~~~~~~~~~~~~
@@ -116,3 +184,14 @@ Backwards-compatibility notes
messages larger than 2GB on 64-bit systems.
* The fallback mechanism for detecting a missing C compiler now
works correctly on Mac OS X.
* Arguments to `.WebSocketHandler.open` are now decoded in the same way
as arguments to `.RequestHandler.get` and similar methods.
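
For example, a capture group in the URL pattern now arrives in ``open``
as a decoded string, as it would for a regular handler (the handler and
URL pattern are illustrative)::

    from tornado import web, websocket

    class ItemSocket(websocket.WebSocketHandler):
        def open(self, item_id):
            # item_id is decoded the same way an argument to
            # RequestHandler.get would be.
            self.write_message(u"watching item %s" % item_id)

    application = web.Application([
        (r"/items/([^/]+)", ItemSocket),
    ])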

`tornado.wsgi`
~~~~~~~~~~~~~~

* New class `.WSGIAdapter` supports running a Tornado `.Application` on
a WSGI server in a way that is more compatible with Tornado's non-WSGI
`.HTTPServer`. `.WSGIApplication` is deprecated in favor of using
`.WSGIAdapter` with a regular `.Application`.
* `.WSGIAdapter` now supports gzipped output.
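
A minimal sketch of running a regular `.Application` under a WSGI
container; the handler and the use of ``wsgiref`` are illustrative::

    import wsgiref.simple_server

    import tornado.web
    import tornado.wsgi

    class HelloHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("Hello from WSGI")

    application = tornado.web.Application([(r"/", HelloHandler)])
    # Wrap a regular Application; WSGIApplication is deprecated in
    # favor of this adapter.
    wsgi_app = tornado.wsgi.WSGIAdapter(application)
    server = wsgiref.simple_server.make_server("", 8888, wsgi_app)
    server.serve_forever()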
2 changes: 2 additions & 0 deletions docs/web.rst
@@ -70,6 +70,7 @@
.. automethod:: RequestHandler.send_error
.. automethod:: RequestHandler.write_error
.. automethod:: RequestHandler.clear
.. automethod:: RequestHandler.data_received


Cookies
@@ -219,6 +220,7 @@
.. autofunction:: authenticated
.. autofunction:: addslash
.. autofunction:: removeslash
.. autofunction:: stream_request_body

Everything else
---------------
2 changes: 2 additions & 0 deletions tornado/gen.py
@@ -467,6 +467,8 @@ def with_timeout(timeout, future, io_loop=None):
relative to `.IOLoop.time`)
Currently only supports Futures, not other `YieldPoint` classes.
.. versionadded:: 3.3
"""
# TODO: allow yield points in addition to futures?
# Tricky to do with stack_context semantics.
82 changes: 73 additions & 9 deletions tornado/http1connection.py
@@ -14,6 +14,11 @@
# License for the specific language governing permissions and limitations
# under the License.

"""Client and server implementations of HTTP/1.x.
.. versionadded:: 3.3
"""

from __future__ import absolute_import, division, print_function, with_statement

from tornado.concurrent import Future
@@ -26,9 +31,22 @@
from tornado.util import GzipDecompressor

class HTTP1ConnectionParameters(object):
"""Parameters for `.HTTP1Connection` and `.HTTP1ServerConnection`.
"""
def __init__(self, no_keep_alive=False, protocol=None, chunk_size=None,
max_header_size=None, header_timeout=None, max_body_size=None,
body_timeout=None, use_gzip=False):
"""
:arg bool no_keep_alive: If true, always close the connection after
one request.
:arg str protocol: "http" or "https"
:arg int chunk_size: how much data to read into memory at once
:arg int max_header_size: maximum amount of data for HTTP headers
:arg float header_timeout: how long to wait for all headers (seconds)
:arg int max_body_size: maximum amount of data for body
:arg float body_timeout: how long to wait while reading body (seconds)
:arg bool use_gzip: if true, decode incoming ``Content-Encoding: gzip``
"""
self.no_keep_alive = no_keep_alive
self.protocol = protocol
self.chunk_size = chunk_size or 65536
@@ -39,12 +57,19 @@ def __init__(self, no_keep_alive=False, protocol=None, chunk_size=None,
self.use_gzip = use_gzip

class HTTP1Connection(object):
"""Handles a connection to an HTTP client, executing HTTP requests.
"""Implements the HTTP/1.x protocol.
We parse HTTP headers and bodies, and execute the request callback
until the HTTP connection is closed.
This class can be used on its own for clients, or via `HTTP1ServerConnection`
for servers.
"""
def __init__(self, stream, is_client, params=None, context=None):
"""
:arg stream: an `.IOStream`
:arg bool is_client: client or server
:arg params: a `.HTTP1ConnectionParameters` instance or ``None``
:arg context: an opaque application-defined object that can be accessed
as ``connection.context``.
"""
self.is_client = is_client
self.stream = stream
if params is None:
@@ -85,6 +110,16 @@ def __init__(self, stream, is_client, params=None, context=None):
self._expected_content_remaining = None

def read_response(self, delegate):
"""Read a single HTTP response.
Typical client-mode usage is to write a request using `write_headers`,
`write`, and `finish`, and then call ``read_response``.
:arg delegate: a `.HTTPMessageDelegate`
Returns a `.Future` that resolves to None after the full response has
been read.
"""
if self.params.use_gzip:
delegate = _GzipMessageDelegate(delegate, self.params.chunk_size)
return self._read_message(delegate)
@@ -190,10 +225,8 @@ def _clear_callbacks(self):
def set_close_callback(self, callback):
"""Sets a callback that will be run when the connection is closed.
Use this instead of accessing
`HTTPConnection.stream.set_close_callback
<.BaseIOStream.set_close_callback>` directly (which was the
recommended approach prior to Tornado 3.0).
.. deprecated:: 3.3
Use `.HTTPMessageDelegate.on_connection_close` instead.
"""
self._close_callback = stack_context.wrap(callback)

@@ -211,18 +244,34 @@ def close(self):
self._clear_callbacks()

def detach(self):
"""Take control of the underlying stream.
Returns the underlying `.IOStream` object and stops all further
HTTP processing. May only be called during
`.HTTPMessageDelegate.headers_received`. Intended for implementing
protocols like websockets that tunnel over an HTTP handshake.
"""
stream = self.stream
self.stream = None
return stream

def set_body_timeout(self, timeout):
"""Sets the body timeout for a single request.
Overrides the value from `.HTTP1ConnectionParameters`.
"""
self._body_timeout = timeout

def set_max_body_size(self, max_body_size):
"""Sets the body size limit for a single request.
Overrides the value from `.HTTP1ConnectionParameters`.
"""
self._max_body_size = max_body_size

def write_headers(self, start_line, headers, chunk=None, callback=None,
has_body=True):
"""Implements `.HTTPConnection.write_headers`."""
if self.is_client:
self._request_start_line = start_line
# Client requests with a non-empty body must have either a
@@ -298,7 +347,7 @@ def _format_chunk(self, chunk):
return chunk

def write(self, chunk, callback=None):
"""Writes a chunk of output to the stream."""
"""Implements `.HTTPConnection.write`."""
if self.stream.closed():
self._write_future = Future()
self._write_future.set_exception(iostream.StreamClosedError())
@@ -312,7 +361,7 @@ def write(self, chunk, callback=None):
return self._write_future

def finish(self):
"""Finishes the request."""
"""Implements `.HTTPConnection.finish`."""
if (self._expected_content_remaining is not None and
self._expected_content_remaining != 0 and
not self.stream.closed()):
@@ -492,7 +541,14 @@ def finish(self):


class HTTP1ServerConnection(object):
"""An HTTP/1.x server."""
def __init__(self, stream, params=None, context=None):
"""
:arg stream: an `.IOStream`
:arg params: a `.HTTP1ConnectionParameters` or None
:arg context: an opaque application-defined object that is accessible
as ``connection.context``
"""
self.stream = stream
if params is None:
params = HTTP1ConnectionParameters()
@@ -502,6 +558,10 @@ def __init__(self, stream, params=None, context=None):

@gen.coroutine
def close(self):
"""Closes the connection.
Returns a `.Future` that resolves after the serving loop has exited.
"""
self.stream.close()
# Block until the serving loop is done, but ignore any exceptions
# (start_serving is already responsible for logging them).
@@ -511,6 +571,10 @@
pass

def start_serving(self, delegate):
"""Starts serving requests on this connection.
:arg delegate: a `.HTTPServerConnectionDelegate`
"""
assert isinstance(delegate, httputil.HTTPServerConnectionDelegate)
self._serving_future = self._server_request_loop(delegate)
# Register the future on the IOLoop so its errors get logged.