
A lot of this sounds like transport layer instead of application layer. I'm curious why it ended up in HTTP instead of TCP.

For example, you could specify that ports 49152 to 50179 formed a block on a given host, and so on in groups of 1028. Ports in a block would have to connect to the same port on the other end, and would share a packet buffer. There could be some rules to let the network stack pass on data to a process in anticipation of a three way handshake completing, TLS could assume that new connections shared the same secrets as existing ones, and so on. That seems simpler, and it prevents the wheel from having to be reinvented for every new application protocol.
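The port-block scheme above amounts to simple arithmetic on port numbers. A minimal sketch (the base port and block size are taken from the comment; the function names and everything else are invented purely for illustration):

```python
BLOCK_BASE = 49152   # start of the hypothetical blocked range (from the comment)
BLOCK_SIZE = 1028    # ports 49152..50179 inclusive form block 0, and so on

def block_of(port: int) -> int:
    """Return the index of the hypothetical block containing `port`."""
    if port < BLOCK_BASE:
        raise ValueError("port below the blocked range")
    return (port - BLOCK_BASE) // BLOCK_SIZE

def same_block(a: int, b: int) -> bool:
    """Connections could share buffers and TLS state iff their ports share a block."""
    return block_of(a) == block_of(b)
```

Under this scheme, a network stack could key its shared packet buffer and TLS session state on `block_of(port)` instead of on the individual connection.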

An obvious reason not to do that is that TCP is too widely used for the effects of changes to be predicted. Is that the only reason?



It ended up in the application layer because of HTTP gateways and NATs. The reason SPDY was HTTPS-only at Google is that port 443 (HTTPS) is pretty much the only port that nearly all corporate gateways let through as a "black box".

Google even ran tests with HTTP over SCTP[1] and found that it solved many of the same problems SPDY did. It's well accepted that the transport layer is the "right" layer to fix this in, but it would not allow as wide a deployment.

1: http://en.wikipedia.org/wiki/Stream_Control_Transmission_Pro...


This is the major knock on HTTP2. It's entirely focused on hacking around transport layer issues while keeping TCP intact and has shied away from fixing real and serious problems with the HTTP protocol itself. Here's a quote from the HTTP2 FAQ: ( https://http2.github.io/faq/#can-http2-make-cookies-or-other... )

"In particular, we want to be able to translate from HTTP/1 to HTTP/2 and back with no loss of information. If we started “cleaning up” the headers (and most will agree that HTTP headers are pretty messy), we’d have interoperability problems with much of the existing Web."

Pretty disappointing really. It took 15 years of haggling and we ended up with precisely the old protocol but now in a new, improved binary format with multiplexing.
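To be fair, the lossless-translation requirement is visible in how HTTP/2 carries requests: the HTTP/1 request line becomes pseudo-headers and field names are lowercased, but nothing is semantically changed. A rough sketch of the mapping (assuming an HTTPS request; `h1_to_h2` is an illustrative name, not any real library's API):

```python
def h1_to_h2(method, path, host, headers):
    """Sketch: map an HTTP/1.1 request line plus Host header onto HTTP/2
    pseudo-headers (:method, :scheme, :authority, :path). Header names are
    lowercased, as HTTP/2 requires; everything else passes through unchanged,
    which is exactly why the two protocols stay round-trippable."""
    out = [(":method", method), (":scheme", "https"),
           (":authority", host), (":path", path)]
    out += [(k.lower(), v) for k, v in headers if k.lower() != "host"]
    return out
```

Because the mapping is purely mechanical, a gateway can translate in either direction with no loss of information, which is the interoperability property the FAQ is defending.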


> Pretty disappointing really. It took 15 years of haggling and we ended up with precisely the old protocol but now in a new, improved binary format with multiplexing.

Yeah, we can have a perfect reimplementation of the widely used protocol to fix all of the problems at once. Just look at how successful IPv6 has been.


IPv6 is doing fine. I have IPv6 at home, and natively on all of my servers. According to my OpenWrt AP stats, about 47% of all my household inbound traffic is via IPv6 (over 8 days of basically just web browsing).


IPv6 doesn't fix most of the problems with IP, in fact it makes some worse (routing tables are now bigger because of bigger addresses and there can be more of them). If IPv6 actually did attempt to fix issues in IP it might have seen faster adoption.


Routing tables will be smaller under IPv6 thanks to route aggregation - lack of aggregation is the major factor driving up IPv4 route table size.


I think you're confusing the transport layer with the session and presentation layers. That said, HTTP/2 is more than just a different way of opening connections, and changing TCP in such fundamental ways would be impossible given its widespread adoption; it would also burden a layer that is not expected to do all of this, when it should stay flexible and enable more complex operations from the upper layers.


> I think you're confusing the transport layer with the session and presentation layers.

There is no confusion. Nobody uses the OSI model, so there are no session and presentation layers, at least in the mind of the OP.


Ok, so in this view what is TLS? It's not the "transport layer" but it also is application agnostic.


Application layer. Everything above the transport layer is in the application layer.


It's funny to think that a mere 20 or so years ago, Ethernet wasn't even a foregone conclusion. The first time I connected a computer to a network, I had to make sure I was plugging into the Ethernet and not the Token Ring. Now here we are pushing transport-layer fixes into the application layer just because we don't want to have to explain to sysadmins how to reconfigure their firewalls?


I think it is aimed more at home users that are stuck behind a NAT the ISP controls than sysadmins.


So we're not fixing the transport layer due to limitations in the network layer that have already been fixed - we're just not deploying those fixes.


You mean IPv6? That suffers from exactly the same problem: ISPs just don't give a damn and don't roll it out.

You cannot expect your average internet user to know about these things, so they get away with it.


Major residential ISPs aren't rolling out IPv6 largely because they already have enough IPv4 space for their customers.

Take a major UK ISP like Virgin Media. According to Hurricane Electric's BGP looking glass project, they originate 9.4M IPv4 addresses[0]; however, at the end of 2012, according to Wikipedia[1], they had only 4.8M customers.

[0] http://bgp.he.net/AS5089#_asinfo [1] https://en.wikipedia.org/wiki/Virgin_Media


Virgin Media can't even expand that easily, because they must lay cable in areas that weren't covered by NTL/Telewest infrastructure, and I believe that is extremely expensive and time-consuming, with planning permission being what it is.


Pretty much. I find myself suspecting that we will see more and more software "fixes" piled on top to avoid doing mass hardware replacements...


Yet it can be done. Digital over the air TV for example.


It's one thing when it is done at the consumer's expense, another when the company has to cover it.


Yes, tcpcrypt already exists: http://tcpcrypt.org/


> I'm curious why it ended up in HTTP instead of TCP.

...or just run HTTP over SCTP.

http://en.wikipedia.org/wiki/Stream_Control_Transmission_Pro...


Check out http://en.wikipedia.org/wiki/Stream_Control_Transmission_Pro...:

It's not supported at all on Windows or OS X and the implementations everywhere else are far less tested than TCP. That's a large immediate problem, particularly since you have to update the kernel to fix it, and it also means that many, many intermediaries (home routers, proxies, corporate firewalls, etc.) have never been pushed to support it at all, let alone as well as commonly-used protocols.
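If you want to check whether your own stack is one of the supported ones, a quick probe is possible from Python. (This is a best-effort sketch: the mere presence of the `IPPROTO_SCTP` constant only proves the headers knew about SCTP, so it actually tries to create a socket to verify kernel support.)

```python
import socket

def sctp_available() -> bool:
    """Best-effort check for native SCTP support in this Python/OS combo.
    Returns False if either the constant is missing or the kernel refuses
    to create an SCTP socket."""
    if not hasattr(socket, "IPPROTO_SCTP"):
        return False
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
        s.close()
        return True
    except OSError:
        return False
```

On stock Windows or OS X this returns False, which is precisely the deployment problem described above.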

The good news is that the semantics are close enough that, if the situation improves for both client support and, critically, intermediaries, it would be relatively straightforward to migrate to an HTTP/2-over-SCTP hybrid if it proved better in some way.


I'm completely aware of the fact that SCTP isn't in the native Windows networking stack, and I'll take your word that it's not in OS X. But as long as we keep putting up these Frankenstein solutions that handle things higher up the stack because firewall admins can't be bothered to upgrade or configure their stuff correctly, we're just not applying enough pressure for this to change!

And even if we now introduce this sad compromise that is HTTP/2, there will also be a lot of proxy/firewall appliances that block HTTP/2, and there will be the equivalent of the government entity or large corporation that was stuck on IE4/WinXP until 2014, using an outdated web browser or intranet server.

So maybe we should push incompatible protocols out much earlier, and if they turn out to have merit, enable them in released products and have Chrome/Firefox put up a nagging reminder: "Your web experience would be much improved (or: this premium content could be watched at a higher resolution, or your banking connection would be more secure, or...) if your network infrastructure supported SCTP/IPv6/DNSSEC; please ask your ISP or administrator."


First, “sad compromise” is a pejorative value judgement, and that line of reasoning has mostly been marketed by people appealing to the authority of the legacy OSI model to make “this is new and different and I don't like it” sound more compelling. To make the argument stick, someone has to actually do the hard work of analyzing the protocol and pointing out actual, specific engineering problems it causes which would be fixed by using something like SCTP, or explain why, for example, the predicted sky-falling hasn't occurred in 15 years of TLS not being implemented at the kernel level.

Thus far, the only serious work I've seen shows that something like SCTP or QUIC could possibly be a fair percentage faster on lossy networks. That's something which merits future work, particularly since either would be relatively easy to swap into place for the lower levels of HTTP2 now that the protocol has first-class support for the concepts, but it doesn't seem like a good reason to roll back deployment of a production-ready protocol to wait for everyone to upgrade their kernels first.

> there also will be a lot of proxy/firewall appliances that block HTTP/2

The beauty of reusing HTTPS is that this is not the case for most firewalls, and since HTTP/2 did not change the semantics, the default behaviour for anyone running an old tampering proxy is simply not to enjoy the performance benefits while otherwise experiencing no problems. That seems like a good compromise to me: full backwards compatibility, with the cost of non-support borne by the slackers, and reusing existing practice means that a much smaller percentage of users is affected.
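Concretely, the "reuse HTTPS" trick works via TLS ALPN (RFC 7301): the client offers h2 alongside http/1.1 inside the TLS handshake, and any server or TLS-terminating intermediary that doesn't understand h2 simply never selects it, so the connection degrades to HTTP/1.1 instead of breaking. A minimal sketch with Python's ssl module:

```python
import ssl

# Build a client-side TLS context that advertises HTTP/2 via ALPN but
# falls back to HTTP/1.1. Protocol IDs "h2" and "http/1.1" are the
# IANA-registered ALPN tokens.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])
# After the handshake, conn.selected_alpn_protocol() would report which
# protocol the peer chose ("h2", "http/1.1", or None for no ALPN at all).
```

The key design property: the negotiation is invisible to anything that only passes the encrypted bytes through, which is exactly why it traverses corporate gateways that would mangle a new cleartext protocol.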

> nagging reminder: "Your web-experience would be much improved (or: this premium content could be watched at higher resolution, or security to your banking website, or...) if your network infrastructure would support SCTP/IPv6/DNSsec/, please ask your ISP or Administrator".

The problem with this is that most users will just ignore the message and the few who try to escalate it are probably going to be told no because if their ISP/corporate IT was good they'd never have seen the message in the first place.


> It's not supported at all on Windows or OS X

It's supported everywhere I care about…

More seriously, though, there was a time when I had to install MacTCP and others had to install WinSock or Trumpet or whatever it was called. The upgrade was worth it then, and I bet it could be worth it again.


Out of all the options, putting transport-layer concerns in the HTTP protocol is my least favorite.

Google built a multiplexed stream transport over UDP called QUIC. HTTP over QUIC gives you most of what HTTP2 does.

Also there is a transport layer protocol called SCTP. Again, HTTP over SCTP gives you most of what HTTP2 does.
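The common feature all three (HTTP/2, QUIC, SCTP) provide is stream multiplexing: frames tagged with a stream id are interleaved on one connection and reassembled independently on the other side, so a stall on one stream doesn't block the rest. A toy sketch of the receiving side (illustration only, not any real wire format):

```python
def demux(frames):
    """Toy demultiplexer: `frames` is a sequence of (stream_id, chunk)
    pairs as they arrived, interleaved, on a single connection. Each
    stream is reassembled independently, which is the essence of the
    head-of-line-blocking fix that multiplexed transports provide."""
    streams = {}
    for stream_id, chunk in frames:
        streams.setdefault(stream_id, []).append(chunk)
    return {sid: b"".join(chunks) for sid, chunks in streams.items()}
```

For example, `demux([(1, b"he"), (3, b"wo"), (1, b"llo"), (3, b"rld")])` reassembles two interleaved streams into `{1: b"hello", 3: b"world"}`.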


HTTP over SCTP gives you more than what HTTP2 does. I just wish Google had made SPDY work only over SCTP as a measure to push for widespread adoption. Imagine finally having a seamless connection for every application on mobile devices, switching from wifi to 4G without any serious latency or dropped connections.


SCTP is what you want.

Sadly, Microsoft has completely failed to support it, so it's not going to happen.


To answer your specific question about why not in TCP, it's not deployable. Here's why:

* In order to get multiplexing and other features introduced with HTTP/2, you need to change protocol framing. However, this means that the protocol is no longer backwards compatible. There are many ways to roll out non-backwards-compatible changes, but for TCP, they were deemed unacceptable. For example, you could negotiate the protocol change via a TCP extension. However, TCP extensions are known not to be widely deployable (https://www.imperialviolet.org/binary/ecntest.pdf) over the public internet. You could use a specific port, but that doesn't traverse enough middleboxes on the public internet (http://www.ietf.org/mail-archive/web/tls/current/msg05593.ht...). Yada yada.

* More importantly, TCP is generally implemented in the OS. That means updating the protocol requires OS updates, so we need both server and client OS updates in order to speak the new protocol. If you look at the very sad numbers on Windows version uptake, Android version uptake, etc., you'll understand why many people don't want to wait for all OSes to update in order to take advantage of new protocol features.
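To illustrate the framing change from the first point: HTTP/2 replaces HTTP/1's text request line with a fixed 9-byte binary frame header (RFC 7540, section 4.1: 24-bit length, 8-bit type, 8-bit flags, 1 reserved bit, 31-bit stream identifier). A minimal parser sketch:

```python
import struct

def parse_frame_header(data: bytes):
    """Parse the 9-byte HTTP/2 frame header (RFC 7540, section 4.1).
    Returns (length, type, flags, stream_id). The 24-bit length is split
    across a 1-byte and a 2-byte field to fit struct's fixed widths."""
    if len(data) < 9:
        raise ValueError("need at least 9 bytes")
    length_hi, length_lo, ftype, flags, stream_id = struct.unpack(">BHBBI", data[:9])
    length = (length_hi << 16) | length_lo
    stream_id &= 0x7FFFFFFF  # clear the reserved high bit
    return length, ftype, flags, stream_id
```

For instance, the bytes `00 00 00 04 00 00 00 00 00` decode as an empty SETTINGS frame (length 0, type 0x4, no flags, stream 0). It is precisely this incompatible framing that rules out sneaking the change into TCP itself.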

To quote Roberto (http://www.ietf.org/mail-archive/web/tsvwg/current/msg12184....) in his response to IETF transport protocol folks asking why application protocol folks are hacking around the transport:

""" I understand the points you make, and am sympathetic. I feel the same way when I see people abusing HTTP to provide async notifications, etc. The fact is, however, if it isn't deployed, no matter how nice it would theoretically be when it is, it isn't useful and people WILL work around the problem. That is true regardless of whether the problem is at the application layer (e.g. HTTP), or at the transport layer (e.g. TCP), or elsewhere. The primary motivation is to get things working, and making things that lack redundancy, or are elegant comes in at a distant second.

Deployed is the most important feature, thus things which quickly move protocols and protocol changes from theoretical to deployed are by far the most important things. The longer this takes, the more likely that the work-around becomes standard practice, at which point we've all "lost" the game. -=R """

Basically, this is another instance of Linus's quote (https://lkml.org/lkml/2009/3/25/632): "Theory and practice sometimes clash. And when that happens, theory loses. Every single time." Theoretically, it'd be far more elegant to fix this in the transport layer. But in practice, it doesn't work. Except, maybe if you implement on top of UDP (e.g. QUIC), since that would be in user-space and firewalls don't filter out UDP as much.

For more extensive analysis of HTTP/2's deployability options, amongst other considerations, check out my post on HTTP/2 considerations: https://insouciant.org/tech/http-slash-2-considerations-and-....


That Torvalds email is just lovely. Makes one ponder certain other Linux-related projects that should perhaps take it to heed.



