HTTP/2 is the most recent major revision of the HTTP protocol that powers the web.
One significant advantage HTTP/2 brings to the table is efficient parallelism when downloading remote resources: it can multiplex many requests concurrently on a single connection and receive the responses out of order, so a slow response does not delay the ones behind it – no head-of-line blocking at the application layer. HTTP/1.1, by contrast, allows a client to submit requests before earlier ones complete (via pipelining), but the responses can only be delivered in order, so one slow response stalls everything queued behind it.

In addition, HTTP/2 uses a binary framing layer with header compression (HPACK), which reduces the number of bytes sent over the wire. Finally, HTTP/2 allows the server to push resources without being asked by the client – for example, the server could proactively push the CSS, JavaScript, and images referenced by a page immediately after delivering the page itself – which can speed up page load time.
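To make the binary framing and multiplexing concrete, here is a minimal sketch (plain Python, not tied to any HTTP library) of the 9-byte frame header defined in RFC 7540: a 24-bit payload length, an 8-bit frame type, an 8-bit flags field, and a reserved bit plus a 31-bit stream identifier. Because every frame carries its stream identifier, frames belonging to different requests can be interleaved on one connection – that is what multiplexing is.

```python
import struct

# A couple of HTTP/2 frame types (RFC 7540, section 6)
DATA, HEADERS = 0x0, 0x1
# A couple of frame flags
END_STREAM, END_HEADERS = 0x1, 0x4

def encode_frame_header(length, frame_type, flags, stream_id):
    """Pack the 9-byte HTTP/2 frame header: 24-bit length, 8-bit type,
    8-bit flags, reserved bit + 31-bit stream identifier."""
    # struct has no 3-byte integer format, so pack 4 bytes and drop the first.
    return struct.pack(">I", length)[1:] + struct.pack(
        ">BBI", frame_type, flags, stream_id & 0x7FFFFFFF
    )

def decode_frame_header(header):
    """Inverse of encode_frame_header: unpack a 9-byte frame header."""
    length = int.from_bytes(header[:3], "big")
    frame_type, flags = header[3], header[4]
    stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# Frames for two different streams (client-initiated streams use odd IDs)
# interleaved on the same connection -- stream 3's response can complete
# before stream 1's without blocking it:
frames = [
    encode_frame_header(64, HEADERS, END_HEADERS, 1),    # request on stream 1
    encode_frame_header(64, HEADERS, END_HEADERS, 3),    # request on stream 3
    encode_frame_header(1024, DATA, END_STREAM, 3),      # stream 3 finishes first
    encode_frame_header(1024, DATA, END_STREAM, 1),      # stream 1 finishes later
]
assert all(len(f) == 9 for f in frames)
```

The real protocol of course layers HPACK-compressed header blocks and flow control on top of this, but the fixed 9-byte header is the unit that makes out-of-order delivery possible.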
Most browsers and some web servers already support HTTP/2 today. Although the specification does not mandate encryption, all major browsers implement HTTP/2 only over TLS, so in practice HTTPS is a prerequisite for HTTP/2.
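One way to check whether a given server speaks HTTP/2 is to see which protocol it selects during the TLS handshake via ALPN, since that is how browsers negotiate HTTP/2. A small sketch using only the Python standard library (the hostname in the usage comment is just a placeholder):

```python
import socket
import ssl

def negotiated_protocol(host, port=443):
    """Connect over TLS, offering h2 and http/1.1 via ALPN, and return
    the protocol the server selects ("h2" means HTTP/2 is supported)."""
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()

# Usage (any HTTPS host can be substituted):
# negotiated_protocol("www.example.com")
```

Command-line tools offer the same check; for instance `curl -I --http2 https://www.example.com` will report the HTTP version actually used.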
Some practices used today to work around limitations of earlier HTTP versions become obsolete with HTTP/2. Domain sharding, for instance, which exists to get around the one-outstanding-request-per-connection limit, is now an anti-pattern: it splits traffic across multiple connections and pays the connection-establishment overhead for each one.
Curious to see how fast the web adopts HTTP/2.