Ramblings of an old Doc

 

Mark Nottingham (chair of the IETF’s – Internet Engineering Task Force’s – HTTP Working Group) has announced that HTTP/2 has been completed…and Google Chrome is embracing it, dumping SPDY (in 2016). The whole web won’t convert to HTTP/2 all at once, though. Also, Nottingham said in 2014: “HTTP/2 isn’t magic Web performance pixie dust; you can’t drop it in and expect your page load times to decrease by 50%.” Once server admins get the hang of HTTP/2, however, it should boost web performance.

Who started all this? Need you ask? Google. To decrease load times by 55-60%, Google invented SPDY. Now, with HTTP/2 (drafted by the IETF’s “httpbis” working group – the “bis” just means a second take on HTTP, not that the connection is made twice), the nonstandard, Google-invented SPDY can be replaced by a “non-proprietary,” industry-standardized protocol.

There have been accusations that Google is forcing this change on the web, but Nottingham rejects that, maintaining that HTTP/2 will result in a faster and safer web. So the HTTP/2 specification has been approved, as has HPACK (its header-compression spec); all that remains is some editing and the assignment of RFC numbers, and then they will be published.

So, “How is HTTP/2 different from HTTP/1.1?” Well…according to Wikipedia:

“The element that is modified is how the data is framed and transported between the client and the server”….“HTTP/2 allows the server to "push" content, that is, to respond with data for more queries than the client requested. This allows the server to supply data it knows a web browser will need to render a web page, without waiting for the browser to examine the first response, and without the overhead of an additional request cycle.” – Wikipedia
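For the more code-minded, here’s a rough sketch of what that “push” looks like from the server side, using Go’s net/http (its http.Pusher interface exposes HTTP/2 push). The paths, page content and certificate file names are just placeholders for illustration:

```go
package main

import (
	"log"
	"net/http"
)

func home(w http.ResponseWriter, r *http.Request) {
	// If the connection is HTTP/2, offer to push the stylesheet the page
	// will need, instead of waiting for the browser to parse the HTML,
	// notice the <link>, and come back with a second request.
	if pusher, ok := w.(http.Pusher); ok {
		if err := pusher.Push("/static/site.css", nil); err != nil {
			log.Printf("push declined or unsupported: %v", err)
		}
	}
	w.Header().Set("Content-Type", "text/html")
	w.Write([]byte(`<html><head><link rel="stylesheet" href="/static/site.css"></head><body>Hello, HTTP/2</body></html>`))
}

func main() {
	http.HandleFunc("/", home)
	http.HandleFunc("/static/site.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		w.Write([]byte("body { font-family: sans-serif; }"))
	})
	// HTTP/2 is negotiated over TLS; cert.pem and key.pem are placeholders.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```

If the client doesn’t speak HTTP/2 (or has push turned off), Push simply returns an error and the page is served the old-fashioned way – the browser asks for the stylesheet itself.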

There’s more, but I admit to nodding off while reading…

So, basically…

“While you can happily go about your online lives in relative ignorance of HTTP, the technology underpinning the web is still fundamentally important. HTTP/2 offers some nice upgrades over its predecessor such as requiring SSL/TLS encryption by default and improved data transfer speeds between servers and clients. HTTP/2 is also better designed to quickly handle modern, complex websites that contain a lot of data.” – PCWorld
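For what it’s worth, you can check what a given site actually negotiates yourself. Here’s a quick Go sketch (the URL is just an example – swap in whatever you like) that makes one HTTPS request and prints the protocol the client and server agreed on during the TLS handshake; “h2” is the ALPN identifier for HTTP/2, which is why, in browsers, you only get HTTP/2 over an encrypted connection:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Placeholder URL; substitute any site you want to test.
	resp, err := http.Get("https://www.google.com/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// resp.Proto reports the protocol version actually used for the
	// response, e.g. "HTTP/2.0" or "HTTP/1.1".
	fmt.Println("negotiated protocol:", resp.Proto)
	if resp.TLS != nil {
		fmt.Println("ALPN protocol:", resp.TLS.NegotiatedProtocol) // "h2" for HTTP/2
	}
}
```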

And…info-rich websites won’t have to rely as heavily on workarounds like minifying and bundling their files to stay fast, so pages can be richer…

So, you’re up to date. Anyone who can explain this better in a non-soporific way is more than welcome to do so.

Sources:

http://www.pcworld.com/article/2885657/prepare-for-faster-safer-web-browsing-the-next-gen-http2-protocol-is-done.html

http://www.pcworld.com/article/2882018/google-chrome-embraces-the-faster-more-secure-next-gen-http-20-standard.html

https://www.mnot.net/blog/2015/02/18/http2


Comments
on Feb 18, 2015


Anyone who can explain this better in a non-soporific way is more than welcome to do so.

It's newer...and has a higher number [2 vs 1.1] ergo it is better.

How's that? ....

on Feb 18, 2015


How's that?

Sufficient

on Feb 18, 2015

Daiwa


Quoting Jafo,

How's that?



Sufficient

Barely. 

ZZZzzzzzzzzz.

 


on Feb 19, 2015


“The element that is modified is how the data is framed and transported between the client and the server”….“HTTP/2 allows the server to "push" content, that is, to respond with data for more queries than the client requested. This allows the server to supply data it knows a web browser will need to render a web page, without waiting for the browser to examine the first response, and without the overhead of an additional request cycle.”

 

Perhaps someone can quell my worries about the above.

Right now, when you visit an otherwise clean web page that has some infected links, nothing happens until you actually click on a link. What prevents someone from writing malicious code that, while pretending to be needed for rendering, actually acts as a click on a link, or just loads something without your input?

To me it looks like a step back in security.

on Feb 20, 2015

I would much prefer a better version of the SMTP protocol, as the current standard is a disaster. 

on Feb 20, 2015

Pupetier

Right now, when you visit an otherwise clean web page that has some infected links, nothing happens until you actually click on a link. What prevents someone from writing malicious code that, while pretending to be needed for rendering, actually acts as a click on a link, or just loads something without your input?

There is nothing that prevents servers from serving malicious scripts or embedded objects today.

HTTP/2 servers cannot arbitrarily push data at their whim. When you request a page, they can optionally tell the client they will be sending additional cacheable files (scripts, stylesheets, images, etc.); the client can reject them, or it can do nothing and let the server send them along on that same connection. It is purely an optimization so that the client doesn't need to make additional requests for such related items.

The connection is kept open long enough for the client to request/receive everything it needs (subject to timeouts), then the connection is closed and that's that. This is similar to the way that HTTP 1.1 already behaves.
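If you want to see that reuse for yourself, here's a rough Go sketch (example.com is a placeholder) that makes a few requests in a row and uses the standard library's httptrace hooks to report whether each one rode on an already-open connection. Against an HTTP/2 server they all share one multiplexed connection, which the transport closes once it sits idle:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/http/httptrace"
)

func main() {
	// Placeholder URL; any HTTP/2-capable HTTPS site works.
	const url = "https://example.com/"

	trace := &httptrace.ClientTrace{
		GotConn: func(info httptrace.GotConnInfo) {
			fmt.Printf("got connection, reused=%v\n", info.Reused)
		},
	}

	for i := 0; i < 3; i++ {
		req, err := http.NewRequest("GET", url, nil)
		if err != nil {
			log.Fatal(err)
		}
		req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		// Drain and close the body so the connection can be reused.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()

		fmt.Printf("request %d served over %s\n", i+1, resp.Proto)
	}
}
```

After the first request, the trace should report reused=true: no new sockets, no new handshakes, just additional streams on the connection that's already there.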

Keeping long-term persistent connections (like websockets, for example) would completely destroy the scalability of the protocol, and would be a big step back, rather than forward. You only keep a long-term persistent connection when you know you need constant, real-time communication (such as with games). Otherwise, you keep it open long enough to limit the overhead of reopening connections with the same peers, and close it to make room for others when you're done.

Conversely, having the server initiate connections to the client wouldn't work due to firewalls, etc.

See here for more details.