by lazyloop on 1/1/15, 11:36 PM with 146 comments
by jacquesm on 1/2/15, 12:49 AM
Edit: downvoters: please explain what's to like about HTTP2. I have a very hard time finding anything to like.
For example: no more easy debugging on the wire, another TCP-like implementation inside the HTTP protocol, tons of binary data rather than text, and a whole slew of features that we don't really need but that please some corporate sponsor because their feature made it in. Counterexamples appreciated.
Compare: http://tools.ietf.org/html/rfc1945
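To make the binary-on-the-wire point concrete, here is a minimal sketch (TypeScript; the function name is illustrative) of decoding the fixed 9-byte HTTP/2 frame header: a 24-bit payload length, an 8-bit type, 8-bit flags, and a 31-bit stream identifier. Unlike an HTTP/1.x request line, none of it is readable as text:

    // Decode the 9-byte HTTP/2 frame header.
    // Layout: 24-bit length | 8-bit type | 8-bit flags | 1 reserved bit + 31-bit stream id.
    function parseFrameHeader(buf: Uint8Array) {
      const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
      const length = (view.getUint8(0) << 16) | (view.getUint8(1) << 8) | view.getUint8(2);
      const type = view.getUint8(3);   // e.g. 0x0 DATA, 0x1 HEADERS, 0x4 SETTINGS
      const flags = view.getUint8(4);
      const streamId = view.getUint32(5) & 0x7fffffff; // top bit is reserved
      return { length, type, flags, streamId };
    }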
by magila on 1/2/15, 2:07 AM
by drawkbox on 1/2/15, 5:29 AM
Binary in Hyper Text Transfer will never seem right. I understand it is more performant, but it always creates more bugs; ask any game developer. Binary is needed, but it means living on the edge of indexes, ordering, and headers, and it is harder to debug. Indexing errors, overflows, and incorrect implementations will follow.
Many of the advancements in HTTP2 are good, but there are some steps backwards we'll have to re-learn from. It isn't all about performance when it comes to correct interoperability: standards lead to many interpretations, which is why XML and then JSON won data transfer. They are easy to interoperate with. Yes, binary is more efficient over the wire, but it is not easier to interoperate with. Should we go back to binary formats for data exchange on the network? The protocol level is lower level, but the current text-based standards have still been beneficial in spreading innovation by lowering the barriers to understanding.
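The failure mode being described is easy to sketch with a hypothetical length-prefixed record format ([2-byte big-endian length][payload], purely illustrative): forget one bounds check and a corrupted length silently mis-frames the stream, whereas a flipped byte in a text format like JSON just fails to parse.

    // Hypothetical binary record: [2-byte big-endian length][payload].
    function readRecord(buf: Uint8Array, offset: number) {
      if (offset + 2 > buf.length) throw new RangeError("truncated header");
      const len = (buf[offset] << 8) | buf[offset + 1];
      const start = offset + 2;
      // The check real implementations forget -- omit it and you index past the frame.
      if (start + len > buf.length) throw new RangeError("truncated or corrupt record");
      return { payload: buf.subarray(start, start + len), next: start + len };
    }

    // Text fails loudly instead: malformed JSON throws at parse time.
    try { JSON.parse('{"id": 1'); } catch (e) { /* SyntaxError, not silent corruption */ }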
HTTP2 is one of those 'version 2' redesigns where some of the legacy genius of the original, like simplicity, was lost and overlooked. An engineer's job is to make something complex into something simple, and black-boxing data isn't simplifying it.
by hjfgdx on 1/2/15, 2:06 AM
by gmzll on 1/2/15, 4:29 AM
by fubarred on 1/2/15, 3:04 AM
Perhaps folks like 'cperciva would be kind enough to propose a single, simple TOML-based cert system that is extremely lightweight with the fewest of features. (Not that TLS/SSL would change without focused, sustained herculean effort immediately after yet another Heartbleed.)
by nly on 1/2/15, 3:34 AM
Oh, wait... maybe that was a dream.
by sanxiyn on 1/2/15, 2:29 AM
by Pxtl on 1/2/15, 1:54 AM
by dreszg on 1/1/15, 11:55 PM
by cdent on 1/2/15, 1:24 PM
Is this the inevitable path of any technology that initially shows promise for enabling individual public expression?
by TwoBit on 1/2/15, 3:44 AM
by lkrubner on 1/2/15, 5:03 AM
What I would like to see is the industry ask itself: can HTTP be retrofitted to work for software over TCP or UDP? It is clear that HTTP is a fantastic protocol for sharing documents. But is it what we want when our goal is to offer software as a service?
I'll briefly focus on one particular issue. WebSockets undercuts a lot of the original ideas that Sir Tim Berners-Lee put into the design of the Web. In particular, the idea of the URL is undercut when WebSockets are introduced. The old idea was:
1 URL = 1 document = 1 page = 1 DOM
Right now, in every web browser that exists, there is still a so-called "address bar" into which you can type exactly 1 address. And yet, for a system that uses WebSockets, what would make more sense is a field into which you can type or paste multiple URLs (a vector of URLs), since the page will end up binding to potentially many URLs. This is a fundamental change, one that takes us to a new system which has not been thought through with nearly the soundness of the original HTTP.
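As a sketch of that one-page-to-many-URLs binding (the endpoint names here are hypothetical): a single document can hold several live socket connections at once, none of which appear in the address bar.

    // One address in the address bar; three more live URL bindings underneath.
    const endpoints = [
      "wss://example.com/quotes",
      "wss://example.com/chat",
      "wss://example.com/notifications",
    ];
    const sockets = endpoints.map((url) => {
      const ws = new WebSocket(url);                    // standard browser WebSocket API
      ws.onmessage = (ev) => console.log(url, ev.data); // every feed mutates the same single DOM
      return ws;
    });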
Slightly off-topic, but even worse is the extent to which the whole online industry is still relying on HTML/XML, which are fundamentally about documents. Just to give one example of how awful this is, as soon as you use HTML or XML, you end up with a hierarchical DOM. This makes sense for documents, but not for software. With software you often want either no DOM at all, or you want multiple DOMs. Again, the old model was:
1 URL = 1 document = 1 page = 1 DOM
We have been pushing technologies, such as Javascript and HTML and HTTP, to their limits, trying to get the system that we really want. The unspecified, informal system that many of us now work towards is an ugly hybrid:
1 URL = multiple URLs via Ajax, Websockets, etc = 1 document (containing what we treat as multiple documents) = 1 DOM (which we struggle against as it often doesn't match the structure, or lack of structure, that we actually want).
Much of the current madness that we see with the multiplicity of Javascript frameworks arises from the fact that developers want to get away from HTTP and HTML and XML and DOMs and the url=page binding, but the stack fights against them every step of the way.
Perhaps the most extreme example of the brokenness is all the many JSON APIs that now exist. If you do an API call against many of these APIs, you get back multiple JSON documents, and yet, if you look at the HTTP headers, the HTTP protocol is under the misguided impression that it just sent you 1 document. At a minimum, it would be useful to have a protocol that was at least aware of how many documents it was sending to you, and had first-class support for counting and sorting and sending and re-sending each of the documents that you are supposed to receive. A protocol designed for software would at least offer as much first-class support for multiple documents/objects/entities as TCP offers for multiple packets. And even that would only be a small step down the road that we need to go.
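For instance, sketching with newline-delimited JSON, one common convention for this: the body below carries three logical documents, but HTTP reports a single entity with one Content-Length, so any counting or re-framing has to happen in application code.

    // Three logical documents inside one HTTP response body.
    const body = '{"id":1,"kind":"user"}\n{"id":2,"kind":"user"}\n{"id":3,"kind":"user"}\n';
    const documents = body
      .trimEnd()
      .split("\n")
      .map((line) => JSON.parse(line));
    console.log(documents.length); // 3 -- a count the protocol itself never exposed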
A new stack, designed for software instead of documents, is needed.
I would have been happy if they simply let HTTP remain at 1.1 forever -- it is a fantastic protocol for exchanging documents. And then the industry could have focused its energy on a different protocol, designed from the ground up for offering software over TCP.
by thomasfoster96 on 1/2/15, 10:50 AM
Waiting months/years for HTTP/2 support to appear in all the tools I use - :( ...
by alexwilliamsca on 1/3/15, 2:26 PM