by KonradKlause on 11/21/11, 12:52 PM with 58 comments
by adestefan on 11/21/11, 2:28 PM
by kenny_r on 11/21/11, 1:08 PM
by alexchamberlain on 11/21/11, 5:36 PM
by mindslight on 11/21/11, 8:33 PM
But honestly, burning 64 bits of address space for a redundant global identifier just so "nat+dhcp" are only half as complicated? And then needing privacy extensions to keep the hardware-derived identifier from leaking out? All while doing nothing to solve the problem that caused NAT to spring up in the first place.
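(The leak the commenter is alluding to: with modified EUI-64 addressing, the low 64 bits of an IPv6 address embed the interface's MAC address, so every site you talk to can see it. A minimal sketch of that derivation, with an example MAC chosen purely for illustration:)

```python
# Sketch: how a modified EUI-64 interface identifier embeds the
# hardware MAC address in an IPv6 address (per RFC 4291) -- the
# stable identifier that privacy extensions (RFC 4941) exist to hide.

def eui64_interface_id(mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                              # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # splice ff:fe into the middle
    return ":".join("%02x%02x" % (eui[i], eui[i + 1]) for i in range(0, 8, 2))

# The original MAC is trivially recoverable from the result:
print(eui64_interface_id("00:11:22:33:44:55"))  # -> 0211:22ff:fe33:4455
```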
On the surface, "no NAT" sounds like a reasonable goal, but it ignores the realities of what NAT is actually used for - keeping your network your business. How long until consumer providers offer different tiers of plans based on the number of devices that can be connected, and smart users are back to NAT anyway? The proper solution to NAT problems is at layer 4 - a standard way of making connections from the outside to a device inside based on some kind of onion address, where the upstream can only see the outer part.
by WalterGR on 11/21/11, 3:57 PM
by 1010010111 on 11/21/11, 7:28 PM
When everything has to pass through centralised "core" routers for enormous segments of the network, it limits how we can work with addresses.
It forces us to allocate addresses in blocks (and in the larger ones, most of the individual addresses go unused). And it creates problematically large routing tables for those "core" routers.
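(The flip side of block allocation is that contiguous blocks aggregate: a core router can carry one summary route instead of many specific ones. A small sketch using Python's stdlib `ipaddress` module, with documentation-range example prefixes rather than real allocations:)

```python
# Sketch: route aggregation, the reason addresses are handed out in
# large contiguous blocks. Four adjacent /26 prefixes collapse into a
# single /24 entry in a core router's table.
import ipaddress

# Example customer blocks from the 192.0.2.0/24 documentation range
blocks = [ipaddress.ip_network(p) for p in
          ["192.0.2.0/26", "192.0.2.64/26",
           "192.0.2.128/26", "192.0.2.192/26"]]

aggregated = list(ipaddress.collapse_addresses(blocks))
print(aggregated)  # -> [IPv4Network('192.0.2.0/24')]: one route, not four
```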
As with everything, there are both costs and benefits to doing things this way. There are always tradeoffs with any approach, whether it is centralised or decentralised. So arguments can go on to infinity about the "best" way. There is no such thing. There are just different alternatives, each with their own costs and benefits. And there's human consensus.
And there are the inevitable workarounds, some of them pure "hacks".
NAT* and IPv6 are a natural result of having a "backbone", "core routers", gigantic ever-growing routing tables and large address allocation minimums.
by seanlinmt on 11/21/11, 4:09 PM