It’s hard to imagine now, but in the mid-1980s, the Internet came close to collapsing under the number of users congesting its networks. Computers would send packets as fast as they could, and when a router failed to pass a packet along in time, the transmitting computer would immediately send it again. The result was an unintentional denial-of-service that degraded performance significantly. [Navek]’s recent video goes over TCP congestion control, the solution to this problem which allows our much larger modern Internet to work.
In a 1988 paper, Van Jacobson described a method to restrain congestion: in a TCP connection, the sender keeps track of how much data it can safely have in transit (sent, but not yet acknowledged) at any given time. The receiver advertises how much it is prepared to accept, the sender maintains its own congestion window, and the smaller of the two limits what actually goes onto the wire. During the initial “slow start” phase, the window grows by one segment for every acknowledgment received, which doubles it every round trip.
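As a rough illustration (our own toy model in C, not code from the video; the names cwnd and rwnd follow RFC 5681 convention, and the numbers are made up), the sender-side bookkeeping works out like this:

#include <stdio.h>

/* Toy model of slow start: the congestion window (cwnd) grows by one
 * segment per acknowledgment, doubling every round trip, while the
 * receiver's advertised window (rwnd) caps what can be in flight. */
int main(void)
{
    unsigned cwnd = 1;           /* congestion window, in segments */
    const unsigned rwnd = 64;    /* receiver's advertised window   */

    for (int rtt = 1; rtt <= 8; rtt++) {
        unsigned window = cwnd < rwnd ? cwnd : rwnd;  /* effective window */
        printf("round trip %d: %u segment(s) in flight\n", rtt, window);
        cwnd += window;          /* one extra segment per ACK received */
    }
    return 0;
}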
Once packets start dropping, the sender halves the window, then slowly and linearly ramps it back up until packets start dropping again. This is called additive increase/multiplicative decrease (AIMD), and the overall result is that the size of the window hovers somewhere around the network’s limit. Any time congestion starts to occur, the computers back off. One way to visualize this is to look at a graph of download speed: the process of periodically hitting the congestion limit and cutting back tends to create a sawtooth wave.
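That sawtooth takes only a few lines to reproduce (again a toy sketch: real TCP counts bytes rather than segments and also tracks a slow-start threshold, per RFC 5681, and the capacity figure here is invented):

#include <stdio.h>

/* Toy AIMD loop: add one segment per round trip, halve the window
 * whenever the hypothetical link capacity is exceeded. Plotting cwnd
 * against time produces the sawtooth described above. */
int main(void)
{
    const unsigned capacity = 50;  /* invented link limit, in segments */
    unsigned cwnd = capacity / 2;

    for (int rtt = 0; rtt < 60; rtt++) {
        if (cwnd > capacity)
            cwnd /= 2;             /* "loss": multiplicative decrease */
        else
            cwnd += 1;             /* no loss: additive increase      */
        printf("%d %u\n", rtt, cwnd);
    }
    return 0;
}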
[Navek] notes that this algorithm has rather harsh behavior, and that there are new algorithms that both recover faster from hitting the congestion limit and take longer to reach it. The overall concept, though, remains in widespread use.
If you’re interested in reading more, we’ve previously covered network congestion control in more detail. We’ve also covered [Navek]’s previous video on IPv5.
Thanks to [Mahdi Naghavi] for the tip!
Mid-1980s internet ran on computers with the processing power of a current PIC16 running at 16 MHz. You have to keep in mind that back then people only recently invented how to make CPUs and use them to build computers so it was all still fresh. Back then something like an ESP32 would probably be classified as a top-secret supercomputer by the NSA, CIA and FBI.
????
By the mid-’80s we had DOS, multi-megabyte hard drives, 32-bit CPUs, and even multiple megabytes of RAM.
I think you’re thinking of the ’70s
Mid-1980s internet ran on computers much more powerful than a PIC16 series at 16MHz. PICs are notoriously inefficient when it comes to general-purpose computation, whereas by the mid-1980s we had 16- and 32-bit computers running at 8MHz and up (68000, 80286, VAX 8600).
Consider a 16-bit addition on a PIC16 (dst1 = src1 + src2, with the carry propagated by hand):
MOVF  src1+1,W   ; W = high byte of src1
ADDWF src2+1,W   ; W += high byte of src2
MOVWF dst1+1     ; store high byte of result
MOVF  src1,W     ; W = low byte of src1
ADDWF src2,W     ; W += low byte of src2, sets carry
BTFSC STATUS,0   ; skip next if carry clear (STATUS<0> is C)
INCF  dst1+1,F   ; propagate carry into high byte
MOVWF dst1       ; store low byte ;2µs.
vs 68000:
ADD.W D0,D1 ;0.5µs
Or consider Dhrystone scores:
PIC18F26K20: 380
Motorola 68000 at 8MHz: 2100
VAX 8600: 7203
IBM PC/AT: 1247
Sun 3/75: 3514
IBM PC/XT: 386
So, a PIC18F26K20, which is more powerful than a PIC16, has roughly the performance of a 1981 IBM PC.
People do have a tendency to underestimate the capabilities of older technology.
There are dedicated libraries to do large-number arithmetic, idk why you’d use assembly for that XD
The fact that there are libraries for multiplication vs a single instruction is pretty much his point, and it speaks volumes.
Ok wise guy, then how would you multiply 70368744177664 by 1126999418470405 on an x86 CPU without using external software? On 32-bit Windows XP, Matlab can do this calculation easily, but it’s impossible to write a number this big in a regular .c file and compile it with GCC or Visual Studio 2005. You need software or a custom library for that.
@Albert It’s just addition, of course you can do it; figuring out how to do things like that is a basic skill on constrained embedded systems, and it used to be on desktop hardware back then too.
And often enough you don’t have headroom for an entire arbitrary precision library anyway.
Matlab is written in C/C++ so it’s possible. Personally I would represent those numbers in arrays of integers and then multiply one by the other like I would on paper, but I am not a programmer – there must be a better way. Search for large-number arithmetic for the 8051, 6502, or even ATmega.
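Something like this sketch of the pencil-and-paper approach (it assumes a compiler with a 64-bit long long just to load the operands; without one you’d parse the digits from a string, and it prints the 128-bit product in hex to keep things short):

#include <stdio.h>

/* Schoolbook multiplication with 16-bit "digits" (limbs), as suggested
 * above. The 64-bit operands are split into four limbs each; the
 * 128-bit product accumulates into eight. Illustrative sketch only. */
int main(void)
{
    unsigned long long a = 70368744177664ULL;   /* assumes long long support */
    unsigned long long b = 1126999418470405ULL;
    unsigned short x[4], y[4];                  /* little-endian limbs */
    unsigned long r[8] = {0};                   /* product limbs       */
    int i, j;

    for (i = 0; i < 4; i++) {
        x[i] = (unsigned short)(a >> (16 * i));
        y[i] = (unsigned short)(b >> (16 * i));
    }
    for (i = 0; i < 4; i++) {                   /* multiply column by    */
        unsigned long carry = 0;                /* column, like on paper */
        for (j = 0; j < 4; j++) {
            unsigned long t = (unsigned long)x[i] * y[j] + r[i + j] + carry;
            r[i + j] = t & 0xFFFF;
            carry = t >> 16;
        }
        r[i + 4] += carry;
    }
    printf("0x");                               /* high limb first */
    for (i = 7; i >= 0; i--)
        printf("%04lX", r[i]);
    printf("\n");
    return 0;
}

For the two numbers above this prints 0x00000001004000000001400000000000 (the first operand is exactly 2^46, so the product is just the second operand shifted left by 46 bits).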
Because the earlier comment claimed a 16MHz PIC16-series chip was more powerful than a mid-1980s CPU. In a high-level language:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int16_t a, b;
    scanf("%hd %hd", &a, &b);  /* %hd reads a short, which is 16 bits on these targets */
    return a + b;
}
How can you tell whether a 16MHz PIC16 would be faster than an 8MHz 68000? Or a 1MHz 6502? Or a PDP-11/40? You can’t, unless you know how efficiently it translates into machine code.
In any case, 8-bit PICs are very slow, and I can’t find any references showing how a PIC16 could even run Dhrystone.
Also, the PDP-7 was introduced in 1964, the PDP-10 in 1966, and the PDP-11 in 1970. Those weren’t even the first, by any stretch. What’s this noise about “people only recently invented how to make CPUs and use them to build computers”? You’re whole decades off.
Yeah. The Intel 8080 was already a decade old by the mid-1980s, with the Intel 8086 introduced in 1978 and the Motorola 68000 in 1979, so it wasn’t even true for microprocessors, never mind “CPUs”.