Polling, server-sent events, and WebSockets – System Design
Large amounts of data can be streamed to or from a server using polling, WebSockets, or server-sent events. Understanding the distinctions between them is critical when designing a system, and system design interviews often raise these topics.
After a quick definition of each term, we'll cover the key things you need to know about polling, server-sent events, and WebSockets. We will also compare long polling, WebSockets, and server-sent events, along with their benefits and drawbacks.
The functions of polling, server-sent events, and WebSockets
Web browsers and servers communicate over HTTP: the browser sends a request, and the server answers it. This round trip happens when you type “http://www.sample.com” into your browser and receive a web page back.
This widely used request-response model works well for most of the web. It breaks down, however, for something like a collaborative document editor that updates in real time, where issuing many separate HTTP requests becomes a bottleneck. This is where polling, WebSockets, and server-sent events (SSE) come in.
These three techniques trade off streaming speed against resource usage in different ways, and which one a system should use depends on the use case. Let's discuss how each works and where it fits.
The simplest form of polling is short polling, in which the client repeatedly asks the server for updates. The short polling steps are:
- The client requests fresh data by sending an HTTP request to the server.
- The server returns new information or none at all.
- The client repeats the request at a regular interval (e.g., every 2 seconds).
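The loop above can be sketched in a few lines of client-side JavaScript. This is a minimal illustration, not a production implementation: `fetchUpdate` is a hypothetical stand-in for an HTTP GET to the server, and the parameter names are ours.

```javascript
// Minimal short-polling sketch. fetchUpdate is a hypothetical stand-in for
// one HTTP round trip to the server; it resolves to new data or to null.
async function shortPoll(fetchUpdate, { intervalMs = 2000, maxPolls = 5 } = {}) {
  const received = [];
  for (let i = 0; i < maxPolls; i++) {
    const update = await fetchUpdate();       // one request per poll, always
    if (update !== null) received.push(update); // the server may have nothing new
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return received;
}
```

Note that every iteration costs a full request/response cycle even when `fetchUpdate` returns nothing, which is exactly the overhead problem described below.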
Because it uses plain HTTP, short polling is relatively straightforward and widely supported. Its drawback is high request overhead: both the client and the server must handle every request, whether or not there is new information. If you want a polling-based connection, long polling is generally preferred over short polling.
Long polling is a more efficient variation of short polling. Its steps are:
- The client requests fresh information over HTTP from the server.
- The server delays responding until there is new information.
- As soon as the client receives a response, it sends a new request.
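The steps above can be sketched as a simple loop. As with the short-polling example, this is an illustrative sketch: `waitForUpdate` is a hypothetical stand-in for an HTTP request that the server holds open until it has new information.

```javascript
// Minimal long-polling loop sketch. waitForUpdate stands in for an HTTP
// request the server deliberately leaves unanswered until data is ready.
async function longPoll(waitForUpdate, onMessage, maxRounds) {
  for (let round = 0; round < maxRounds; round++) {
    // The server responds only once new data exists (step 2 above).
    const message = await waitForUpdate();
    onMessage(message);
    // The loop immediately issues the next request (step 3 above).
  }
}
```

Unlike short polling, no request returns empty-handed; the cost has moved to the server, which must keep these requests open.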
Long polling needs fewer HTTP requests to deliver the same data to the client. The trade-off is that the server must hold unfulfilled client requests open and decide what to do with them when new information arrives, while the client must still issue a fresh request after each response.
Long polling consumes less bandwidth than short polling and is widely supported, since it is still plain HTTP. The cost is that its server-side implementation is considerably more complex than short polling's.
Long polling has other drawbacks. Holding unfulfilled requests open can consume more server resources than short polling and reduce the number of connections the server can handle. And if a client has several requests open at once, message ordering is not guaranteed.
With server-sent events, a server can push fresh data to a client without the connection being re-established for each message. For instance, a social networking platform might use SSE to deliver new items to user feeds as they appear. SSE connections are created through the EventSource interface, which manages the underlying HTTP interaction.
The SSE process involves these significant steps:
- The client creates a new EventSource object with the server as its destination.
- The server establishes an SSE connection.
- The client receives messages through EventSource event handlers.
- Either party closes the connection.
SSE provides a stable one-way data stream, eliminating the need to repeatedly re-establish the connection between client and server. Every major browser except Internet Explorer supports EventSource, which makes SSE more straightforward to implement than WebSockets; IE support is a common enough requirement that polyfill libraries exist to cover it.
SSE has drawbacks. If your service outgrows the one-way connection design, you will have to switch to WebSockets. And over HTTP/1.1 (as opposed to HTTP/2), browsers limit a site to six concurrent connections, so if a user opens several tabs of your website, SSE will only work in the first six.
WebSockets is a two-way messaging protocol built on TCP, which sits at Layer 4 of the OSI networking model. Because it runs lower in the network stack than HTTP and carries less protocol overhead, WebSockets can transmit data faster. The general process of a WebSocket connection is as follows:
- The client and server open an HTTP connection, which is then upgraded to a WebSocket connection.
- WebSocket messages travel over TCP on port 443 (or port 80 if TLS encryption is not used).
- Both parties cut the connection.
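The upgrade in the first step above begins as a plain HTTP request. As a sketch, here is roughly what that handshake request looks like when built by hand (the header names come from the WebSocket specification, RFC 6455; the function name is ours and the key value shown in the test is illustrative):

```javascript
// Sketch: build the HTTP Upgrade request that starts a WebSocket handshake.
// In practice the Sec-WebSocket-Key is a fresh random base64 nonce.
function buildUpgradeRequest(host, path, key) {
  return [
    `GET ${path} HTTP/1.1`,
    `Host: ${host}`,
    "Upgrade: websocket",
    "Connection: Upgrade",
    `Sec-WebSocket-Key: ${key}`,
    "Sec-WebSocket-Version: 13",
    "", // blank line terminates the header block
    "",
  ].join("\r\n");
}

// In the browser, the WebSocket API performs this handshake for you:
// const ws = new WebSocket("wss://example.com/chat");
// ws.onmessage = (event) => console.log(event.data);
// ws.send("hello");
```

Once the server answers with `101 Switching Protocols`, the same TCP connection carries WebSocket frames in both directions.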
WebSockets speeds up message transport because the client and server never have to reconnect between messages, and data can be sent promptly and securely. Since it runs over TCP, messages are guaranteed to arrive in order.
WebSockets' most significant drawback is the up-front developer effort: features such as automatic reconnection require custom code. Some large organizations' firewalls may also block WebSocket traffic because of the ports and protocol upgrade it uses.
Polling, server-sent events, and WebSockets all stream data in web applications: polling issues repeated HTTP requests for updates, SSE offers one-way server-to-client data streaming, and WebSockets permits two-way communication. Understanding these trade-offs is essential when designing systems that need real-time updates and efficient data transfer, such as collaborative editors or social media feeds.