The three drawbacks of the original TCP design were the window size (the maximum value is just too small for today's speeds), poor handling of missing packets (since addressed by extensions such as selective ACK), and the fact that it manages only one stream at a time, while some applications want multiple streams that don't block each other. You could open multiple TCP connections, but that adds its own overhead, which is why SCTP and QUIC were designed to address these issues.
The congestion control algorithm is not part of the on-the-wire protocol; it's just code on each side of the connection that decides when to (re)send packets to make the best use of the available bandwidth. Anything that implements a reliable stream on top of datagrams needs such an algorithm. The original ones (Reno, Vegas, etc.) were very simple but already did a good job, although back then network equipment didn't have large buffers. A lot of research goes into better algorithms that handle large buffers, large round-trip times, and varying bandwidth needs, while staying fair when multiple connections share the same link.
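None of the following is from the comment above; it's just a toy Python sketch of the additive-increase/multiplicative-decrease loop at the heart of Reno-style congestion control (the function names and constants are mine, and real stacks like CUBIC or BBR are far more involved):

    def on_ack(cwnd, ssthresh, mss=1500):
        """Grow the window: exponentially in slow start, linearly after."""
        if cwnd < ssthresh:
            return cwnd + mss                # slow start: +1 MSS per ACK
        return cwnd + mss * mss // cwnd      # congestion avoidance: ~+1 MSS per RTT

    def on_loss(cwnd, mss=1500):
        """Back off: halve the window when loss signals congestion."""
        ssthresh = max(cwnd // 2, 2 * mss)
        return ssthresh, ssthresh            # new (cwnd, ssthresh)

The sender keeps sending as long as the amount of unacknowledged data stays below cwnd, so these two functions alone determine how aggressively the link gets used.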
> The Stream Control Transmission Protocol (SCTP) is a computer networking communications protocol in the transport layer of the Internet protocol suite. Originally intended for Signaling System 7 (SS7) message transport in telecommunication, the protocol provides the message-oriented feature of the User Datagram Protocol (UDP) while ensuring reliable, in-sequence transport of messages with congestion control like the Transmission Control Protocol (TCP). Unlike UDP and TCP, the protocol supports multihoming and redundant paths to increase resilience and reliability.
[…]
> SCTP may be characterized as message-oriented, meaning it transports a sequence of messages (each being a group of bytes), rather than transporting an unbroken stream of bytes as in TCP. As in UDP, in SCTP a sender sends a message in one operation, and that exact message is passed to the receiving application process in one operation. In contrast, TCP is a stream-oriented protocol, transporting streams of bytes reliably and in order. However TCP does not allow the receiver to know how many times the sender application called on the TCP transport passing it groups of bytes to be sent out. At the sender, TCP simply appends more bytes to a queue of bytes waiting to go out over the network, rather than having to keep a queue of individual separate outbound messages which must be preserved as such.
> The term multi-streaming refers to the capability of SCTP to transmit several independent streams of chunks in parallel, for example transmitting web page images simultaneously with the web page text. In essence, it involves bundling several connections into a single SCTP association, operating on messages (or chunks) rather than bytes.
* https://en.wikipedia.org/wiki/Stream_Control_Transmission_Pr...
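To make the message-oriented point concrete, here is a minimal sketch assuming a Linux kernel with SCTP support (socket.IPPROTO_SCTP is only defined there); the address and port are made up:

    import socket

    # With SOCK_SEQPACKET over SCTP, each send() is delivered to the peer as
    # one discrete message, unlike TCP where message boundaries are lost.
    sock = socket.socket(socket.AF_INET, socket.SOCK_SEQPACKET, socket.IPPROTO_SCTP)
    sock.connect(("203.0.113.10", 5000))   # hypothetical server address
    sock.send(b"first message")            # arrives as exactly one message
    sock.send(b"second message")           # arrives as a separate message
    sock.close()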
For example, sending data on a fresh TCP connection is slow, and the “ramp-up time” to the full bandwidth of the network is almost entirely determined by the latency.
Amazing speed-ups can be achieved in a data centre network by shaving microseconds off the round-trip time!
Similarly, many (all?) TCP stacks count segments, not bytes, when determining this ramp-up rate. This means that jumbo frames (9000 bytes versus the standard 1500) can provide 6x the bandwidth during this period!
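As a rough back-of-envelope sketch (my own numbers, not taken from the comments above), the following estimates how many round trips classic slow start needs to fill a 10 Gbps pipe, assuming the common initial window of 10 segments and a doubling of the window each RTT:

    import math

    def slow_start_rtts(target_bps, rtt_s, mss_bytes, initial_segments=10):
        """Round trips until the window covers the bandwidth-delay product."""
        bdp_segments = target_bps * rtt_s / (8 * mss_bytes)
        if bdp_segments <= initial_segments:
            return 0
        return math.ceil(math.log2(bdp_segments / initial_segments))

    rtt = 0.0005                                # 0.5 ms round trip, data-centre-ish
    for mss in (1500, 9000):                    # standard vs jumbo frames
        n = slow_start_rtts(10e9, rtt, mss)     # 10 Gbps link
        print(f"MSS {mss}: ~{n} RTTs, ~{n * rtt * 1000:.1f} ms to fill the pipe")

Both lowering the RTT and raising the segment size cut the ramp-up time, which is exactly why the tuning below pays off.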
If you read about the network design of AWS, they put a lot of effort into low switching latency and enabling jumbo frames.
The real pros do this kind of network tuning; everyone else wonders why they don't get anywhere near 10 Gbps through a 10 Gbps link.
If the net were designed today it would be some complicated monstrosity where every packet was reminiscent of X.509 in terms of arcane complexity. It might even have JSON in it. It would be incredibly high overhead and we’d see tons of articles about how someone made it fast by leveraging CPU vector instructions or a GPU to parse it.
This is called Eroom's law (Moore's law spelled backwards), and it is very real. Bigger machines let programmers and designers loose to indulge their desire to make things complicated.
UDP has its place as well, and if we had more simple, effective solutions like WireGuard's handshake and encryption built on top of it, we'd be better off as an industry.
Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.
https://news.ycombinator.com/newsguidelines.html

Well ... he seems very motivated. I am more skeptical.
For instance, Google controls a lot of the internet via Chrome, and even more so via its search engine, AI, YouTube and so forth.
Even aside from this, people's habits have changed. In the 1990s everyone and their grandma had a website. Nowadays ... it is a bit different. We now have horrible blogging sites such as medium.com pestering people with popups. Of course we also had popups in the 1990s, but the diversity was simply higher. Everything today seems much more streamlined, and top-down controlled. Look at Twitter, owned by a greedy and selfish billionaire. And the US president? Super-selfish too. We lost something here over the last 25 years or so.