HTTP/2 — A Revision of the Major Protocol of the World Wide Web

Randolph Perkins
4 min read · Dec 20, 2020

HTTP/2 is a revised, faster, more option-laden version of the HyperText Transfer Protocol that you have been using, knowingly or not, for the last five years. Standardized in 2015 and derived from Google's experimental SPDY (pronounced "speedy") protocol, HTTP/2 is now supported by about 98% of web browsers and used on around 50% of the top ten million websites. In this post, I will touch briefly on its predecessor, HTTP/1.1, then delve into the improvements that /2 brings to the web, highlighting its key terms and functions, before concluding with the protocol's influence on its proposed successor, HTTP/3.

HTTP Origins + HTTP/1.1

HTTP is known as a request-response protocol in the client/server computing model. In this relationship the web browser may be thought of as the client, whereas the application hosting the website would be the server. The client submits an HTTP request message to the server; the server performs functions for the client and returns a response message containing status information and, typically, the requested resources (HTML, JavaScript, et al.).
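The round trip described above can be sketched with Python's standard library. This is an illustrative toy, not production code: a throwaway server on localhost plays the "application hosting the website," and `http.client` plays the browser.

```python
# A minimal HTTP request/response round trip: the client sends a GET
# request message; the server replies with a status line, headers,
# and the requested resource (here, a tiny HTML body).
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello</body></html>"
        self.send_response(200)                       # status information
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                        # the requested resource
    def log_message(self, *args):                     # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)        # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")                              # the HTTP request message
resp = conn.getresponse()                             # the HTTP response message
body_text = resp.read().decode()
print(resp.status, resp.reason)                       # 200 OK
print(body_text)
server.shutdown()
```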

Computer scientist Tim Berners-Lee and his team at CERN (the European Organization for Nuclear Research) are credited with creating the original protocol and related technologies such as HTML, having first proposed a model for the World Wide Web in 1989. In its earliest incarnation, HTTP supported only one method, "GET", which made a request to a server; the server's response was an HTML page.


HTTP is known as a stateless protocol, in that it does not need to retain information about individual clients across multiple requests. Some web apps nevertheless maintain state on their own using hidden variables or HTTP cookies. HTTP/1.1 is a revision of the original protocol (HTTP/1.0). A major advantage of /1.1 is its reuse of connections: after the initial page has been delivered, the same connection can fetch follow-up resources such as images, scripts, and CSS or related stylesheets. This creates significantly less latency, an area that /2 would improve on even further.
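Connection reuse is easy to observe with the standard library. In this sketch (again a local toy server, not real-world code), the handler speaks HTTP/1.1 so the connection stays open, and we check that a second request travels over the same TCP socket rather than a fresh one.

```python
# HTTP/1.1 persistent connections: two requests, one TCP connection.
# With HTTP/1.0, each request would pay for a new TCP handshake.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"     # enables keep-alive by default
    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # required for reuse
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/page.html")     # first request opens the connection
first_socket = conn.sock
conn.getresponse().read()

conn.request("GET", "/style.css")     # follow-up resource: same connection
second = conn.getresponse()
second.read()
same_conn = conn.sock is first_socket
print(same_conn)                      # True: the socket was reused
server.shutdown()
```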

HTTP/2 — A Proposal

HTTP/2 would become the second major overhaul of the protocol, following /1.1, the latter of which was standardized in the late 1990s. There were a few areas of concern that the standard looked to overcome. First, the new implementation sought to create a negotiation mechanism allowing clients and servers to choose between HTTP/1.1, HTTP/2, and other protocols. Additionally, the new protocol was intended to improve page load speed (decreased latency) via the multiplexing of requests over a single connection, compression of HTTP headers, and the allowing of servers to push content, a means of supplying the client with data needed to render a page without waiting for the browser to examine responses. Another proposed feature of the new protocol was the ability to retain compatibility with older versions of HTTP and their methods. Much of the functionality of HTTP/1.1 was thus retained, making the transition easier.
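Header compression is worth a closer look. HTTP/2 uses HPACK, which replaces headers that have been sent before with short table indices instead of repeating them in full. The sketch below is a deliberately simplified illustration of that indexing idea, not the real HPACK wire format (the table contents and index numbers here are invented for the demo, though HPACK's real dynamic table does start at index 62).

```python
# Toy sketch of HPACK-style header compression: headers already seen
# are replaced by a small integer index from a static or dynamic table,
# so repeated requests send far fewer bytes.

# A couple of entries in the spirit of HPACK's static table (abridged).
STATIC_TABLE = {(":method", "GET"): 2, (":scheme", "https"): 7}

def encode(headers, dynamic_table):
    out = []
    for pair in headers:
        if pair in STATIC_TABLE:
            out.append(("index", STATIC_TABLE[pair]))       # well-known header
        elif pair in dynamic_table:
            out.append(("index", dynamic_table[pair]))      # seen earlier
        else:
            dynamic_table[pair] = 62 + len(dynamic_table)   # remember it
            out.append(("literal", pair))                   # send in full once
    return out

table = {}
first = encode([(":method", "GET"), ("user-agent", "demo/1.0")], table)
second = encode([(":method", "GET"), ("user-agent", "demo/1.0")], table)
print(first)    # user-agent goes over the wire as a literal the first time
print(second)   # on the repeat request, both headers are just indices
```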

Implementation

As stated above, the resulting implementation of HTTP/2 has been highly successful, with the vast majority of web browsers supporting the protocol. However, as with any widespread update to such an important aspect of web data transfer, the protocol is not without its criticisms. Some developers have claimed that the resulting standard was too dependent on the SPDY model developed by Google, and missed other opportunities for improvement over version 1.1. Others have argued that HTTP itself is needlessly complex, and that data transfer could be better achieved by other protocol designs. Yet possibly the most vocal concerns have revolved around encryption. While the designers of HTTP/2 ultimately did not make encryption mandatory, that proposal was floated during the early design phase, and most client implementations of HTTP/2 do in fact require encryption in some form, making it a de facto necessity. Many developers consider this an additional resource cost, and argue that a wide variety of HTTP applications have little or no need for encryption. Despite these criticisms, however, the new protocol has been largely successful in improving the speed of data transfer, and has overall been well received.

The Future of HTTP

HTTP/3 is the next major version of the protocol, and carries the same semantics as its precursors. As of late 2020, the protocol is already in use by a few major browsers such as Chrome and Firefox, albeit in experimental form. It utilizes QUIC (pronounced "quick"), a transport-layer protocol that improves the performance of web applications by establishing multiplexed connections between endpoints. Multiplexing is a technique, long used in telecommunications, in which multiple analog or digital signals are combined into a shared signal over one medium. We should see much wider implementation of HTTP/3 as the web's dominant data transfer protocol in the coming years.
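To make the multiplexing idea concrete, here is a small conceptual sketch, purely illustrative and unrelated to QUIC's actual framing: responses for several streams are cut into frames and interleaved round-robin on a single connection, so one large or slow response does not hold up the others.

```python
# Conceptual sketch of stream multiplexing: frames from several
# streams share one connection, interleaved rather than serialized.

def frames(stream_id, payload, size=4):
    # Split a response body into fixed-size frames tagged with its stream id.
    return [(stream_id, payload[i:i + size]) for i in range(0, len(payload), size)]

def multiplex(streams):
    # Round-robin: take one frame from each stream in turn until all are sent.
    queues = [frames(sid, data) for sid, data in streams]
    wire = []
    while any(queues):
        for q in queues:
            if q:
                wire.append(q.pop(0))
    return wire

wire = multiplex([(1, "HTMLHTML"), (3, "CSSS")])
print(wire)   # frames for streams 1 and 3 alternate on the one connection
```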
