\section{\module{asyncore} ---
         Asynchronous socket handler}

\declaremodule{builtin}{asyncore}
\modulesynopsis{A base class for developing asynchronous socket
                handling services.}
\moduleauthor{Sam Rushing}{rushing@nightmare.com}
\sectionauthor{Christopher Petrilli}{petrilli@amber.org}

% Heavily adapted from original documentation by Sam Rushing.

This module provides the basic infrastructure for writing asynchronous
socket service clients and servers.

There are only two ways to have a program on a single processor do
``more than one thing at a time.''  Multi-threaded programming is the
simplest and most popular way to do it, but there is another very
different technique that lets you have nearly all the advantages of
multi-threading without actually using multiple threads.  It's really
only practical if your program is largely I/O bound.  If your program
is CPU bound, then pre-emptive scheduled threads are probably what you
really need.  Network servers are rarely CPU-bound, however.

If your operating system supports the \cfunction{select()} system call
in its I/O library (and nearly all do), then you can use it to juggle
multiple communication channels at once, doing other work while your
I/O is taking place in the ``background.''  Although this strategy can
seem strange and complex, especially at first, it is in many ways
easier to understand and control than multi-threaded programming.  The
module documented here solves many of the difficult problems for you,
making the task of building sophisticated high-performance network
servers and clients a snap.
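To make the idea concrete, the kind of loop that \cfunction{select()}
makes possible looks roughly like this (a bare-bones sketch that does
not use this module; the \code{channels} list and the two handler
functions are hypothetical placeholders):

\begin{verbatim}
import select

# 'channels' would be a list of open, non-blocking sockets.
while channels:
    r, w, e = select.select(channels, channels, channels, 30.0)
    for sock in r:
        handle_read(sock)       # hypothetical per-channel read handler
    for sock in w:
        handle_write(sock)      # hypothetical per-channel write handler
\end{verbatim}

The module takes care of running a loop of this kind for you; the
\class{dispatcher} class described below wraps each socket and
receives the resulting events through its handler methods.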
\begin{classdesc}{dispatcher}{}
The first class we will introduce is the \class{dispatcher} class.
This is a thin wrapper around a low-level socket object.  To make it
more useful, it has a few methods for event-handling on it.
Otherwise, it can be treated as a normal non-blocking socket object.

The direct interface between the select loop and the socket object is
provided by the \method{handle_read_event()} and
\method{handle_write_event()} methods.  These are called whenever an
object `fires' that event.

The firing of these low-level events can tell us whether certain
higher-level events have taken place, depending on the timing and the
state of the connection.  For example, if we have asked for a socket
to connect to another host, we know that the connection has been made
when the socket fires a write event (at this point you know that you
may write to it with the expectation of success).  The implied
higher-level events are:

\begin{tableii}{l|l}{code}{Event}{Description}
\lineii{handle_connect()}{Implied by a write event}
\lineii{handle_close()}{Implied by a read event with no data available}
\lineii{handle_accept()}{Implied by a read event on a listening socket}
\end{tableii}
\end{classdesc}
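In code terms, the mapping in the table corresponds roughly to logic
of the following shape inside the dispatcher's low-level event
handlers (a simplified sketch for illustration only, not the module's
actual implementation):

\begin{verbatim}
def handle_write_event(self):
    if not self.connected:
        # First write event on a connecting socket: the
        # connection has been established.
        self.handle_connect()
        self.connected = 1
    self.handle_write()

def handle_read_event(self):
    if self.accepting:
        # Read event on a listening socket: a peer is waiting.
        self.handle_accept()
    else:
        self.handle_read()
\end{verbatim}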
This set of user-level events is larger than the basics.  The full
set of methods that can be overridden in your subclass is:

\begin{methoddesc}{handle_read}{}
Called when there is new data to be read from a socket.
\end{methoddesc}

\begin{methoddesc}{handle_write}{}
Called when there is an attempt to write data to the object.
Often this method will implement the necessary buffering for
performance.  For example:

\begin{verbatim}
def handle_write(self):
    sent = self.send(self.buffer)
    # Keep only the data that has not yet been sent.
    self.buffer = self.buffer[sent:]
\end{verbatim}
\end{methoddesc}
\begin{methoddesc}{handle_expt}{}
Called when there is out-of-band (OOB) data for a socket connection.
This will almost never happen, as OOB is tenuously supported and
rarely used.
\end{methoddesc}

\begin{methoddesc}{handle_connect}{}
Called when the socket actually makes a connection.  This might be
used to send a ``welcome'' banner, or something similar.
\end{methoddesc}

\begin{methoddesc}{handle_close}{}
Called when the socket is closed.
\end{methoddesc}

\begin{methoddesc}{handle_accept}{}
Called on listening sockets when they actually accept a new
connection.
\end{methoddesc}

\begin{methoddesc}{readable}{}
Each time through the \method{select()} loop, the set of sockets is
scanned, and this method is called to see if there is any interest in
reading.  The default method simply returns \code{1}, indicating that
by default, all channels will be interested.
\end{methoddesc}

\begin{methoddesc}{writable}{}
Each time through the \method{select()} loop, the set of sockets is
scanned, and this method is called to see if there is any interest in
writing.  The default method simply returns \code{1}, indicating that
by default, all channels will be interested.
\end{methoddesc}
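For example, a channel that only ever sends data might indicate
interest in write events only while it still has something buffered
to send (a sketch; the \code{buffer} attribute is an assumption of
this example, not something the class provides):

\begin{verbatim}
import asyncore

class send_only_channel(asyncore.dispatcher):
    # Never interested in read events.
    def readable(self):
        return 0

    # Only interested in write events while data remains unsent.
    def writable(self):
        return len(self.buffer) > 0
\end{verbatim}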
In addition, there are the basic methods needed to construct and
manipulate ``channels,'' which are what we will call the socket
connections in this context.  Note that most of these are nearly
identical to their socket partners.

\begin{methoddesc}{create_socket}{family, type}
This is identical to the creation of a normal socket, and will use
the same options for creation.  Refer to the \refmodule{socket}
documentation for information on creating sockets.
\end{methoddesc}

\begin{methoddesc}{connect}{address}
As with the normal socket object, \var{address} is a tuple with the
first element the host to connect to, and the second the port number.
\end{methoddesc}

\begin{methoddesc}{send}{data}
Send \var{data} out the socket.
\end{methoddesc}

\begin{methoddesc}{recv}{buffer_size}
Read at most \var{buffer_size} bytes from the socket.
\end{methoddesc}

\begin{methoddesc}{listen}{\optional{backlog}}
Listen for connections made to the socket.  The \var{backlog}
argument specifies the maximum number of queued connections and
should be at least 1; the maximum value is system-dependent (usually
5).
\end{methoddesc}

\begin{methoddesc}{bind}{address}
Bind the socket to \var{address}.  The socket must not already be
bound.  (The format of \var{address} depends on the address family
--- refer to the \refmodule{socket} documentation.)
\end{methoddesc}

\begin{methoddesc}{accept}{}
Accept a connection.  The socket must be bound to an address and
listening for connections.  The return value is a pair
\code{(\var{conn}, \var{address})} where \var{conn} is a \emph{new}
socket object usable to send and receive data on the connection, and
\var{address} is the address bound to the socket on the other end of
the connection.
\end{methoddesc}
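Combining \method{bind()}, \method{listen()} and
\method{handle_accept()}, a minimal server channel might look like
this (a sketch; the port number is arbitrary, and
\class{connection_handler} stands in for whatever \class{dispatcher}
subclass actually services each accepted connection):

\begin{verbatim}
import asyncore, socket

class simple_server(asyncore.dispatcher):
    def __init__(self, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.bind(('', port))
        self.listen(5)

    def handle_accept(self):
        # accept() returns a new socket and the peer's address;
        # wrap the new socket in its own channel object.
        conn, address = self.accept()
        connection_handler(conn)
\end{verbatim}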
\begin{methoddesc}{close}{}
Close the socket.  All future operations on the socket object will
fail.  The remote end will receive no more data (after queued data is
flushed).  Sockets are automatically closed when they are
garbage-collected.
\end{methoddesc}

\subsection{Example basic HTTP client \label{asyncore-example}}

As a basic example, below is a very simple HTTP client that uses the
\class{dispatcher} class to implement its socket handling:

\begin{verbatim}
import asyncore, socket

class http_client(asyncore.dispatcher):
    def __init__(self, host, path):
        asyncore.dispatcher.__init__(self)
        self.path = path
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect( (host, 80) )
        self.buffer = 'GET %s HTTP/1.0\r\n\r\n' % self.path

    def handle_connect(self):
        pass

    def handle_read(self):
        data = self.recv(8192)
        print data

    def writable(self):
        return (len(self.buffer) > 0)

    def handle_write(self):
        sent = self.send(self.buffer)
        self.buffer = self.buffer[sent:]
\end{verbatim}
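A client defined this way might be driven as follows (a minimal
sketch; the host and path are arbitrary, and
\function{asyncore.loop()} is the module's polling loop, which
repeatedly calls \cfunction{select()} on all open channels until they
are closed):

\begin{verbatim}
c = http_client('www.python.org', '/')
asyncore.loop()
\end{verbatim}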