Sockets and the socket API facilitate inter-process communication in networks which may be physical (connected to other networks using wires or wirelessly) or logical (a computer's local network).
In simple words, sockets enable sending messages across a network. The internet (via your ISP) relies on sockets to communicate with your computer.
On the whole, the subject of sockets is expansive; entire volumes have been written about it. For this reason, many find learning about sockets overwhelming, since their workings involve several mechanisms, each with its own subtleties.
But don't worry: we'll walk you through socket programming in Python with easy examples in this guide. You can follow along with the tutorial and execute the examples yourself with Python 3.6 and above.
Overview of Socket API
The origins of sockets date back to 1971 – their use began with ARPANET, and they were later adapted as an API in the BSD operating system in 1983. These sockets were termed "Berkeley sockets."
The socket module in Python supplies an interface to the Berkeley sockets API. This is the Python module we will walk you through using.
The methods and functions in the socket module include:
- .accept()
- .bind()
- .close()
- .connect()
- .connect_ex()
- .listen()
- .recv()
- .send()
- socket()
Python equips you with an easy-to-use, consistent API that maps directly to the underlying system calls, their C counterparts. Python also equips you with classes that simplify using these low-level socket functions.
Another relevant module in Python's standard library is the socketserver module, which is a framework for network servers. However, the functioning of this module is beyond the scope of this guide.
TCP: The Default Protocol
To use Python's socket module, you must create a socket object using socket.socket(). Further, you must specify the socket type as socket.SOCK_STREAM.
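Here's a minimal sketch of that call in isolation; the examples in the rest of this guide build on the same pattern:

import socket

# Create a TCP socket: IPv4 address family, stream socket type
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)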
When you do this, you will be using the Transmission Control Protocol by default. TCP is used as the default for two reasons:
- TCP detects dropped packets in the network and ensures the sender retransmits them.
- Applications read data sent by the sender in the precise order it was sent.
Other protocols approach data transmission in different fashions. You can create sockets that use these other protocols in Python, but we'll stick with TCP in this guide, because the other protocols don't offer the same guarantees: data may be dropped, or the receiver may read it in a different order than the sender wrote it.
For instance, in the User Datagram Protocol, the data received from sockets created using socket.SOCK_DGRAM may not be read in the order the sender writes it.
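For comparison, here's a minimal sketch of creating a UDP socket (not used elsewhere in this guide):

import socket

# Datagram (UDP) socket: delivery and ordering are not guaranteed
u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)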
Using the default TCP protocol is your best bet because networks are essentially best-effort delivery systems. So, there are no guarantees that the sent data will reach its destination. If the data does reach its destination, there are no guarantees that the receiver will receive exactly what the sender sent.
Additionally, network devices like switches and routers are inherently limited in bandwidth. These devices have finite CPU power, memory, interface packet buffers, and buses – just like any client or server.
Relying on TCP means you don't have to worry about packet loss or out-of-order data: dropped packets are retransmitted, and the data is delivered to your application in the order it was sent. TCP also shields you from many of the other pitfalls of communicating over a network.
Creating a Simple Echo Client and Server
With the basics out of the way, you're ready to implement a simple client and server. The server will echo (send back) the received data to the client.
Let's begin with the server implementation:
import socket

HOST = "127.0.0.1"  # This is the standard loopback interface address (localhost)
PORT = 65432        # Defines the port to listen on

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen()
    conn, addr = s.accept()
    with conn:
        print(f"Connected by {addr}")
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)
In the API call above, socket.socket() creates a socket object that supports the context manager protocol. For this reason, we use it in a with statement, and there's no need to call s.close() ourselves. In its most stripped-down form, using the object looks like this:
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    pass  # Use the socket object without calling s.close()
Now, let's discuss the arguments in the program.
Both arguments passed to socket() are constants from the socket module: AF_INET specifies the IPv4 address family, and SOCK_STREAM specifies the TCP socket type.
Next, the program passes the values to .bind(). Since we have specified the IPv4 address family, .bind() accepts two arguments: host and port.
The host value can be empty, an IP address, or a hostname. Bear in mind that the value must be an IPv4-formatted address string if an IP address value is passed.
127.0.0.1 is the standard IPv4 address for the loopback interface. When you use it, the server will only accept connections from processes running on the same host.
On the other hand, if you pass an empty string as the host value, the server will accept connections on all available IPv4 interfaces.
If you pass a hostname as the host value, the result depends on the name resolution process. So, you might receive a different IP address every time you run the code.
As you'd expect, the port argument in .bind() represents the TCP port number that accepts client connections. It should be an integer from 1 to 65535 (port 0 is reserved). Note that some systems require superuser privileges if the port number is below 1024.
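To illustrate the different host values, here's a small sketch (the port number 65432 is simply the one used throughout this guide):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Accept connections only from processes on this host:
# s.bind(("127.0.0.1", 65432))

# Accept connections on all available IPv4 interfaces:
s.bind(("", 65432))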
The program's next statement involves the use of .listen(). The statement enables the server to accept connections, turning the server into a "listening" socket.
Interestingly, the .listen() method includes a backlog parameter, which specifies the number of unaccepted connections the system allows before refusing new connections.
This parameter has been optional since Python 3.5, so there is no harm in leaving it out: Python assigns a default backlog value if the method is called without one.
It's worth noting that supplying a higher backlog value can help with performance if you expect the server to receive several connection requests simultaneously.
Supplying a large backlog value increases the number of acceptable pending connections. Note that the greatest possible value of backlog is system-dependent.
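For example, here's a sketch of passing an explicit backlog value (128 is an arbitrary choice):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 65432))

# Allow up to 128 unaccepted connections to queue before new ones are refused;
# calling s.listen() with no argument lets Python choose a default instead.
s.listen(128)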
Coming to the details of the .accept() method: its function is to block execution and wait for a client to connect. On connection, the method returns a new socket object that represents the connection. It also returns a tuple which holds the client's address.
As you can guess, in the program above, the tuple will comprise (host, port). If you were to use the IPv6 address family, the tuple would comprise (host, port, flowinfo, scope_id).
More importantly, the socket object returned by .accept() is the one used to interact with the client. Be sure to distinguish it from the server's listening socket, which is only used to accept new connections.
When the method supplies the client socket object conn, an infinite while loop repeatedly makes blocking calls to conn.recv(). Blocking calls are calls that suspend the process until the event, which in this case is data transfer, is complete.
This way, the loop reads the client's data before echoing it back with the conn.sendall() statement.
If conn.recv() returns an empty bytes object, it indicates that the client has closed the connection, and the loop ends. In our program, the with conn: statement ensures the connection socket is closed at the end of the block.
Creating the Echo Client
The client's functioning is much more straightforward than that of the server.
import socket

HOST = "127.0.0.1"  # The server's address
PORT = 65432        # The port used by the server

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((HOST, PORT))
    s.sendall(b"This is example data.")
    data = s.recv(1024)

print(f"Received {data!r}")
This code creates a socket object and connects to the server using .connect(). Next, it calls s.sendall() to send its message. Finally, the call to s.recv() reads the server's reply before printing it.
Running the Client and Server
Launch a terminal or command prompt on your machine and navigate to the directory containing your scripts. Then, run the server code with the python command like so:
$ python echo-server.py
It might seem that your terminal has frozen when it waits for a client connection. Under the hood, the server is suspended or blocked on .accept(). Launch another terminal or command prompt window and run:
$ python echo-client.py
You will see:
Received b'This is example data.'
Switch back to the window running your server code, and you will see:
$ python echo-server.py
Connected by ('127.0.0.1', 64623)
Note how the server output is the addr tuple returned from s.accept(), which is the client's IP address and TCP port number. The port number in your output will likely be different from the one in the output above.
Viewing Socket State
You can use the netstat command to check the current state of the sockets on your host. The command works on Linux, Windows, and macOS.
On macOS, the output of the command looks like this:
$ netstat -an
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp4       0      0  127.0.0.1.65432        *.*                    LISTEN
The local address in the output above is 127.0.0.1.65432. If the server code had passed an empty string instead of an IP address, the output would look different:
$ netstat -an
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp4       0      0  *.65432                *.*                    LISTEN
Since the local address is *.65432, the server will use all available host interfaces in the IPv4 address family to accept incoming connections.
Note how the Proto column shows "tcp4", reflecting that the call to socket() used socket.AF_INET for IPv4.
Also, note that the outputs above have been trimmed down to show the vital echo server data. You might see a larger output depending on the machine you run the command on.
Besides netstat, you can use the lsof (list open files) command to view data about active connections.
The command will also output other vital data. It is available by default on macOS, and you can use your package manager to install it on a Linux machine.
Here's what the command's output looks like:
$ lsof -i -n
COMMAND   PID   USER   FD   TYPE   DEVICE   SIZE/OFF  NODE  NAME
Python  67982 nathan    3u  IPv4 0xecf272       0t0   TCP   *:65432 (LISTEN)
The -i flag limits the output to open internet sockets, and the listing includes the command name, process ID, and user for each one.
Keep in mind that you'll encounter the following error if you attempt to connect to a port with no listening socket:
$ python echo-client.py
Traceback (most recent call last):
  File "./echo-client.py", line 9, in <module>
    s.connect((HOST, PORT))
ConnectionRefusedError: [Errno 61] Connection refused
If you see this error, either the server isn't running or the port number is incorrect. A firewall may also be blocking the connection, in which case you might see a connection timeout error instead of ConnectionRefusedError.
To avoid this issue, set a firewall rule allowing the client access to the TCP port.
Handling Multiple Connections
The echo server program has several limitations, the biggest being that it only serves one client before exiting. The client program has the same limitation and an additional problem:
When the client uses s.recv(), it may only return one byte – b'T' from b'This is example data.'
In the code, the data = s.recv(1024) statement has a bufsize argument of 1024, which is the maximum amount of data to be received at once. Bear in mind that this argument doesn't mean .recv() will return 1024 bytes.
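To handle this, you call .recv() in a loop until you have everything you expect. Here's a minimal sketch; the helper name recv_all() and the fixed expected length are just for illustration:

def recv_all(sock, expected_len):
    """Keep calling .recv() until expected_len bytes have arrived."""
    chunks = []
    received = 0
    while received < expected_len:
        chunk = sock.recv(min(4096, expected_len - received))
        if not chunk:  # The peer closed the connection before sending everything
            raise ConnectionError("Connection closed before all data arrived")
        chunks.append(chunk)
        received += len(chunk)
    return b"".join(chunks)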
The .send() method behaves in a similar way: it returns the number of bytes sent, which may be less than the size of the data passed in. Consequently, you must check for this and call .send() as often as necessary to send all the data.
In the echo server program, we avoided calling .send() by using .sendall().
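For reference, here's roughly the loop that .sendall() saves you from writing, sketched as a helper that assumes a connected socket and a bytes payload:

def send_all(sock, payload):
    """Keep calling .send() until every byte of payload has been sent."""
    total_sent = 0
    while total_sent < len(payload):
        sent = sock.send(payload[total_sent:])
        if sent == 0:  # The connection was broken mid-send
            raise ConnectionError("Socket connection broken")
        total_sent += sent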
So, we are left with two problems: calling .recv() and .send() until all the data has been received or sent, and handling several connections simultaneously.
Fortunately, there are several approaches to concurrency. Using asynchronous I/O is perhaps the most popular approach. The asyncio module was introduced to Python's standard library in Python 3.4.
But you can also take the traditional approach of using threads for concurrency. However, achieving concurrency is rarely this straightforward. There are several subtleties to be considered and protected against. A single one of these subtleties can break an application in serious ways.
Learning and using concurrent programming can seem frightening now that you know this. But if your application needs to scale, you will need to use concurrent programming since you will need to rely on more than one core or processor.
The good news is that in this guide we will be using something more traditional than threads: the select() system call, which is much simpler to learn than the other approaches.
With select(), you can check for I/O completion on more than one socket. In other words, you can see which sockets have I/O ready for reading or writing. To use the most efficient implementation available, we will use the selectors module in Python's standard library.
The selectors module allows efficient, high-level I/O multiplexing, built upon the select module primitives. Although this approach isn't truly concurrent, depending on your workload it can still offer the performance your application needs.
Bear in mind that how well select() works for you depends on what the application does when it services a request and on the number of clients it serves.
On the one hand, the asyncio module utilizes single-threaded cooperative multitasking. This is coupled with an event loop to manage tasks. On the other hand, with .select(), you will write a simple and synchronous version of an event loop.
Even if you achieve concurrency by using multiple threads, CPython and PyPy both have a Global Interpreter Lock (GIL), which limits the amount of work that can be done in parallel.
These facts make .select() a viable option: you don't have to use threads, asyncio, or another asynchronous library. Most network applications are I/O bound anyway, spending most of their time waiting on the network or on local tasks like disk writes.
You can learn about the concurrent.futures module if you are dealing with client requests that trigger CPU-bound work. Its ProcessPoolExecutor class allows you to use a pool of processes to execute calls asynchronously.
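Here's a minimal sketch of offloading CPU-bound work to a process pool; the function cpu_bound_work() is just a placeholder:

from concurrent.futures import ProcessPoolExecutor


def cpu_bound_work(n):
    # Stand-in for CPU-heavy work triggered by a client request
    return sum(i * i for i in range(n))


if __name__ == "__main__":
    with ProcessPoolExecutor() as executor:
        future = executor.submit(cpu_bound_work, 10_000_000)
        print(future.result())  # Blocks until the worker process finishes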
Multi-Connection Client and Server
In this section, we create a server and client that can handle multiple connections.
Multi-Connection Server
Let's begin with setting up the listening socket:
import sys
import socket
import selectors
import types

sel = selectors.DefaultSelector()

host, port = sys.argv[1], int(sys.argv[2])
lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lsock.bind((host, port))
lsock.listen()
print(f"Listening on {(host, port)}")
lsock.setblocking(False)
sel.register(lsock, selectors.EVENT_READ, data=None)
The primary difference between this server and the previously discussed echo server is the call to lsock.setblocking(False). This call puts the socket in non-blocking mode, so calls made on it no longer block.
We do this so the socket can be used with sel.select(), which lets us wait for events on one or more sockets and then read and write data once a socket is ready.
The final statement in the code above registers the socket to be monitored by sel.select(). Since we want to read events using the listening socket, we pass selectors.EVENT_READ as an argument.
The third parameter in the sel.register() statement is "data," which is returned when .select() returns. This parameter helps track what is sent and received on the socket.
Now, let's look at the event loop:
try:
    while True:
        events = sel.select(timeout=None)
        for key, mask in events:
            if key.data is None:
                accept_wrapper(key.fileobj)
            else:
                service_connection(key, mask)
except KeyboardInterrupt:
    print("Caught keyboard interrupt, exiting")
finally:
    sel.close()
The first statement in the while loop involves using sel.select() with the timeout parameter set to None. It blocks execution until one or more sockets are ready for I/O and then returns a list of tuples, one for each ready socket.
Every returned tuple will have a key and a mask value. The key value is a SelectorKey namedtuple containing a fileobj attribute. key.fileobj is the socket object, and the mask value is an event mask of the ready operations.
When key.data is None, the event comes from the listening socket, and you must accept the connection. To do this, you call accept_wrapper() to get the new socket object and register it with the selector.
If key.data is not None, it's a client socket that has already been accepted, so you call service_connection(), passing key and mask as arguments, to service the socket.
Let's see how the accept_wrapper() function works:
def accept_wrapper(sock):
    conn, addr = sock.accept()
    print(f"Accepted connection from {addr}")
    conn.setblocking(False)
    data = types.SimpleNamespace(addr=addr, inb=b"", outb=b"")
    events = selectors.EVENT_READ | selectors.EVENT_WRITE
    sel.register(conn, events, data=data)
Because the listening socket was registered for the selectors.EVENT_READ event, it is ready to read. We call sock.accept() and then conn.setblocking(False), which puts the new socket in non-blocking mode.
At this stage, we need an object to hold the data we want associated with the socket, which we can accomplish with types.SimpleNamespace. Since we want to know when the client connection is ready for both reading and writing, the two events are combined with the bitwise OR operator, like so:

data = types.SimpleNamespace(addr=addr, inb=b"", outb=b"")
events = selectors.EVENT_READ | selectors.EVENT_WRITE
sel.register(conn, events, data=data)
Now is the time to define the service_connection() function to see what is done when the connection is ready:
def service_connection(key, mask):
    sock = key.fileobj
    data = key.data
    if mask & selectors.EVENT_READ:
        recv_data = sock.recv(1024)
        if recv_data:
            data.outb += recv_data
        else:
            print(f"Closing connection to {data.addr}")
            sel.unregister(sock)
            sock.close()
    if mask & selectors.EVENT_WRITE:
        if data.outb:
            print(f"Echoing {data.outb!r} to {data.addr}")
            sent = sock.send(data.outb)
            data.outb = data.outb[sent:]
The code above is the heart of the server. key is the SelectorKey namedtuple returned from .select(); it contains the socket object (key.fileobj) and the data object (key.data). mask holds the events that are ready.
When the socket is ready to be read, mask & selectors.EVENT_READ evaluates to true, which results in a call to sock.recv(). Any data that's read is appended to data.outb so it can be sent later.
If data isn't received, it indicates that the client has closed their socket, and the server should too. We call sel.unregister() before closing the socket, so .select() doesn't have to monitor it.
In contrast, when the socket is ready for writing, any received data stored in data.outb is echoed back to the client with sock.send(), and the bytes that were sent are then removed from the send buffer.
Multi-Connection Client
As you'd suspect, the client code is similar to the server code except that it begins by initiating connections using start_connections().
import sys
import socket
import selectors
import types

sel = selectors.DefaultSelector()
messages = [b"Message 1 from client.", b"Message 2 from client."]


def start_connections(host, port, num_conns):
    server_addr = (host, port)
    for i in range(0, num_conns):
        connid = i + 1
        print(f"Starting connection {connid} to {server_addr}")
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setblocking(False)
        sock.connect_ex(server_addr)
        events = selectors.EVENT_READ | selectors.EVENT_WRITE
        data = types.SimpleNamespace(
            connid=connid,
            msg_total=sum(len(m) for m in messages),
            recv_total=0,
            messages=messages.copy(),
            outb=b"",
        )
        sel.register(sock, events, data=data)
The num_conns value is taken from the CLI and indicates the number of connections the client wants to create with the server. Every socket is set to non-blocking mode.
We use .connect_ex() rather than .connect() because .connect() would immediately raise a BlockingIOError exception. Instead, .connect_ex() initially returns the error indicator errno.EINPROGRESS rather than raising an exception while the connection is in progress. Once the client is connected to the server, the socket becomes ready for reading and writing and is returned by .select().
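If you want to be explicit about it, you can check the return value of .connect_ex() yourself. Here's a minimal sketch, using the address from earlier in this guide; note that the exact error codes that mean "in progress" can vary by platform:

import errno
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setblocking(False)
err = sock.connect_ex(("127.0.0.1", 65432))
if err not in (0, errno.EINPROGRESS, errno.EWOULDBLOCK):
    # Anything else means the connection attempt failed immediately
    print(f"Connection failed: {errno.errorcode.get(err, err)}")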
The data to be stored is created using SimpleNamespace. The messages that the client wants to send to the server are copied with messages.copy() since each connection calls socket.send() and modifies the list.
The "data" object stores everything the client wants to send and has sent and received.
The service_connection() function remains mostly the same for the client as it was for the server, with some changes:
def service_connection(key, mask):
    sock = key.fileobj
    data = key.data
    if mask & selectors.EVENT_READ:
        recv_data = sock.recv(1024)  # Should be ready to read
        if recv_data:
            print(f"Received {recv_data!r} from connection {data.connid}")
            data.recv_total += len(recv_data)
        if not recv_data or data.recv_total == data.msg_total:
            print(f"Closing connection {data.connid}")
            sel.unregister(sock)
            sock.close()
    if mask & selectors.EVENT_WRITE:
        if not data.outb and data.messages:
            data.outb = data.messages.pop(0)
        if data.outb:
            print(f"Sending {data.outb!r} to connection {data.connid}")
            sent = sock.send(data.outb)  # Should be ready to write
            data.outb = data.outb[sent:]
The main difference is that the client keeps track of the number of bytes it has received from the server so that it can close its side of the connection. When the server detects this, it closes its side of the connection as well.
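The client's event loop isn't shown above. Here's a minimal sketch of what it could look like, assuming it lives in the same file as the code above and reuses the same sel, start_connections(), and service_connection(); it exits once every socket has been closed and unregistered:

if len(sys.argv) != 4:
    print(f"Usage: {sys.argv[0]} <host> <port> <num_connections>")
    sys.exit(1)

host, port, num_conns = sys.argv[1], int(sys.argv[2]), int(sys.argv[3])
start_connections(host, port, num_conns)

try:
    while True:
        events = sel.select(timeout=1)
        for key, mask in events:
            service_connection(key, mask)
        if not sel.get_map():  # No sockets left to monitor
            break
except KeyboardInterrupt:
    print("Caught keyboard interrupt, exiting")
finally:
    sel.close()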
To run the server, launch a terminal and run the program, remembering to pass the host and port number as arguments. Do the same for the client, which also expects the number of connections to create.
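Assuming you've saved the programs as multiconn-server.py and multiconn-client.py (the file names here are just examples), the invocations could look like this:

$ python multiconn-server.py 127.0.0.1 65432

$ python multiconn-client.py 127.0.0.1 65432 2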