HTTP/2 risks creating dumb pipes with SPDY

03 Jul 2014

For many years, web content has been transported over the Internet using the Hypertext Transfer Protocol version 1.1 (HTTP/1.1), running on the Transmission Control Protocol (TCP). HTTP/1.1 was published by the Internet Engineering Task Force (IETF) in 1997 (RFC 2068), revised in 1999 (RFC 2616) and again in 2014 (RFCs 7230-7235), and has been the workhorse of the web ever since. It is also a widely used application-layer protocol for fixed and mobile internet services.

The size and complexity of web sites and internet applications have increased tremendously since HTTP/1.1 was first introduced in 1997. As a consequence, HTTP/1.1 has become a bottleneck for web performance, as illustrated in Figure 1. In response, web developers have adopted a variety of workarounds, such as HTTP pipelining, opening multiple TCP connections to the same host, or in some cases using "domain sharding" techniques to distribute web content across multiple hostnames. While these techniques improve web performance, they create suboptimal queuing conditions for HTTP/1.1, which is essentially a session-based protocol. As a consequence, industry players have debated alternative solutions for upgrading HTTP/1.1 to HTTP/2. Late in 2013, the IETF adopted the SPDY protocol as the baseline for the standardization of HTTP/2. While SPDY offers a variety of enhancements to HTTP, it also has the potential to compromise security, traffic management and policy enforcement regimes in fixed, mobile and enterprise networks.

Figure 1: HTTP/1.1 throttles web performance as bandwidth demands increase

Source: Google 2012
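The benefit of the multiple-connection workaround can be sketched with a toy queuing model. This is purely illustrative: the request timings, resource count and six-connection figure are invented assumptions, not measurements, and no real HTTP traffic is involved.

```python
import heapq

def fetch_time(request_times_ms, connections):
    """Toy model: assign requests greedily to connections; each
    connection serves its queue serially (no pipelining).
    Returns the time until the last request completes."""
    finish = [0] * connections  # completion time of each connection
    heapq.heapify(finish)
    for t in sorted(request_times_ms, reverse=True):
        earliest = heapq.heappop(finish)
        heapq.heappush(finish, earliest + t)
    return max(finish)

# Hypothetical page: 30 resources, 100 ms each.
times = [100] * 30
print(fetch_time(times, 1))  # one connection: 3000 ms
print(fetch_time(times, 6))  # six parallel connections: 500 ms
```

The model shows why browsers open several connections per host, but it also hides the cost the article alludes to: each extra connection pays its own TCP handshake and slow-start, and the connections compete with one another for bandwidth, which is the "suboptimal queuing" behaviour described above.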

SPDY (pronounced 'speedy') was originally developed by Mike Belshe and Roberto Peon at Google, with the stated goals of reducing web page load times by 50% using techniques that minimize deployment complexity, avoid changes to web content and leverage open-source developments. To meet these goals, SPDY introduced four key capabilities that HTTP/1.1 lacks:

  • The ability to asynchronously multiplex multiple web resource requests over a single TCP connection;
  • Prioritization of those multiplexed resources;
  • Header compression; and
  • Server push and server hint capabilities to improve the web content retrieval process.
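The multiplexing and prioritization capabilities above can be sketched as follows. This is a toy model, not the SPDY wire format: the stream IDs, priorities, payloads and frame size are invented for illustration. The idea it shows is real, though: several response streams are chopped into frames and interleaved on one TCP connection, with higher-priority streams drained first.

```python
from dataclasses import dataclass

@dataclass
class Stream:
    stream_id: int
    priority: int    # lower value = higher priority, as in SPDY
    payload: bytes

def interleave(streams, frame_size=4):
    """Return a list of (stream_id, chunk) frames: round-robin among
    streams at the same priority, higher-priority levels sent first."""
    frames = []
    for prio in sorted({s.priority for s in streams}):
        level = [s for s in streams if s.priority == prio]
        offsets = {s.stream_id: 0 for s in level}
        while any(offsets[s.stream_id] < len(s.payload) for s in level):
            for s in level:
                off = offsets[s.stream_id]
                if off < len(s.payload):
                    frames.append((s.stream_id, s.payload[off:off + frame_size]))
                    offsets[s.stream_id] = off + frame_size
    return frames

# Hypothetical page load: HTML at top priority, CSS and JS below it.
streams = [
    Stream(1, priority=0, payload=b"<html>..entire page..</html>"),
    Stream(3, priority=1, payload=b"body{color:#333}"),
    Stream(5, priority=1, payload=b"console.log('hi')"),
]
for sid, chunk in interleave(streams):
    print(sid, chunk)
```

Because all frames share one connection, the server can keep the pipe full without the per-connection handshake overhead of the HTTP/1.1 workarounds; the receiver reassembles each stream from its frames by stream ID.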
