Firewall Wizards mailing list archives

Counterrant (Was Rant, Was Our friend FTP)


From: Robert Graham <robert_david_graham () yahoo com>
Date: Fri, 16 Apr 1999 18:19:09 -0700 (PDT)

--- "Marcus J. Ranum" <mjr () nfr net> wrote:
<rant>
</rant>
PS - I am going to exercise moderator's privilege and not
forward responses to this rant unless they are truly illuminating,
thought-provoking, and (at least) more interesting than the
rant itself. ;)

I'll bite.

<counterrant>

We are stuck with incremental progress rather than revolutionary jumps every few years. But I
don't think that is so bad:

Consider the problems coax Ethernet had. The cable was very sensitive to faults, leading to
frequent LAN-wide outages. Furthermore, it didn't scale well beyond 30% utilization, due to all
the collisions. These looked like problems that could not be fixed. IBM came out with a new technology
called Token Ring. It wired the network in a star, so that a cable problem affected only one user
rather than all of them, and it used a much saner token-passing mechanism instead of collision
detection and backoff.

Which technology won the race? Ethernet, of course. The collision-based architecture only had
scalability problems in widely separated, repeater-interconnected, terminal-based (small-frame)
networks, which would melt at 30% load. In bridged/switched networks dominated by file
downloads (large frames), this problem disappears. Furthermore, you can wire Ethernet in a star
just as easily as any other technology. Technologies like Token Ring, FDDI, and ATM that attempted to
throw out Ethernet have not been able to succeed against its relentless, incremental improvements.
Sure, we can criticize each of those new technologies and say "if it had only done XXXX, it would
have beaten Ethernet", but the reality is that no technology is perfect the first time out; every
one needs at least some debugging.
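
(For anyone who has never looked at how Ethernet actually handles a collision, here is a rough
Python sketch of the truncated binary exponential backoff it uses -- a sketch, not a full MAC
implementation. The constants are the classic 10 Mbps values; the function name is mine.)

    import random

    SLOT_BIT_TIMES = 512   # one contention slot = 512 bit times on classic Ethernet
    ATTEMPT_LIMIT  = 16    # a frame is dropped after 16 failed transmission attempts

    def backoff_slots(collisions):
        """Truncated binary exponential backoff: after the Nth collision, a
        station waits a random number of slot times in [0, 2**min(N, 10) - 1]
        before retrying."""
        if collisions >= ATTEMPT_LIMIT:
            raise RuntimeError("excessive collisions, frame dropped")
        return random.randint(0, 2 ** min(collisions, 10) - 1)

    # A station that has collided 3 times waits 0..7 slots, so a quiet segment
    # recovers almost immediately while a busy one spreads its retries out.
    print(backoff_slots(3) * SLOT_BIT_TIMES, "bit times")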

Back in the early 1990s, the prediction was that the Internet was about to melt down. Routing
tables were exploding, IP wasn't secure, mobile computers didn't work, yadda yadda. All sorts of
people were proposing solutions to these problems that would completely replace IPv4. What
happened? The size of the Internet grew beyond even the most aggressive predictions, thanks to
this new application called "WWW". At the same time, the size of the core routing tables stopped
growing. As in the Ethernet-vs.-Token-Ring battle, we see that IP has a lot more flexibility
built in than people gave it credit for. CIDR solved the immediate problem of routing-table size,
proxies/NAT solved the problem of running out of addresses, URLs solved a big part of the directory
problem, DHCP solved the problem of mobile computers, and SSL made a vast leap forward in security.
For every one of these problems, huge, heavyweight, complex solutions were proposed and never came
to pass, because these simple, elegant fixes got there first.
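
(To make the CIDR point concrete, here is a small sketch using Python's standard ipaddress
module; the prefixes are made up for illustration. Four contiguous /24 routes collapse into a
single /22 advertisement, which is exactly the kind of aggregation that kept the core tables
from exploding.)

    import ipaddress

    # Four contiguous /24 routes that would once have been four separate table entries...
    prefixes = [ipaddress.ip_network(p) for p in
                ("10.1.0.0/24", "10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24")]

    # ...CIDR lets a provider advertise them as one aggregate.
    for aggregate in ipaddress.collapse_addresses(prefixes):
        print(aggregate)   # -> 10.1.0.0/22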

Marcus described his fantasy as:
start over with the Internet Codebase. Deprecate _all_ the apps
that we are currently running, replacing them with similar apps
built atop decent high level APIs that incorporate necessary
and desirable features sockets lack (session redirection,
connection to service negotiation, session resumption/reconnection,
encryption, authentication, authorization, integrity checksums,
buffering/record formats)  
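
(As an illustrative aside: a few items on that list -- encryption, authentication, session
resumption -- did eventually arrive as a thin layer wrapped around an ordinary socket rather than
as a replacement for sockets. A minimal Python sketch, with example.com standing in for a real
host:)

    import socket
    import ssl

    # Encryption, server authentication, and (via TLS session reuse) resumption,
    # all layered on top of a plain TCP socket.
    ctx = ssl.create_default_context()   # verifies the peer's certificate
    with socket.create_connection(("example.com", 443)) as raw_sock:
        with ctx.wrap_socket(raw_sock, server_hostname="example.com") as tls:
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
            print(tls.recv(200))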

A comprehensive, complete solution to this problem has already been proposed. It is called the
OSI/ISO suite of protocols. It didn't succeed. Anyone who has been involved with the IETF knows the
difficulty of trying to replace any existing protocol. It takes years to reach
consensus: everyone needs to have their opinions heard, and neophytes will simultaneously hold very
strong opinions about things they don't understand and need severe amounts of education. The
fantasy is that competent people can identify the problems and engineer solutions. The failure
here is the word "people", meaning human, emotional beings, whose intelligence is inversely
proportional to the size of the group. The OSI suite failed because its design methodology was to
scrap what had gone before and start from a clean slate, which threw out a huge amount of
implementation experience with the bath water. More importantly, it threw out all the
agreement/consensus that had taken years to build.

On the other hand, we humans are good at patching things. We are masters of the kludge. Open
source demonstrates this wonderfully; every time someone comes up with a good patch, it becomes
widely adopted. I am constantly amazed at the truly effective solutions that people come up with,
and because each change is small, groups don't feel the need to overwhelm it with their
own opinions.

Let's take HTTP for another example. It's _proof_ that you
can massively deploy a whole new protocol in almost no time
at all. 

Several people have won the lottery, but that says little about YOUR chances of doing the same. HTTP
is a fluke; it created a massive paradigm shift (ook, ack, brain hurts, marketing speak). It
became popular BEFORE there were RFCs about it. The next such protocol will be implemented by a
vendor like Microsoft, Intel, Cisco, etc., or by the open source community.

My 2 cents is that the IETF (or any other "body") will only be able to solve problems with
incremental improvements. Any revolutionary change will come about through somebody implementing a
solution. In particular, you've got a great opportunity to drop any change into the open Mozilla and
Apache codebases. You could also choose the stick instead of the carrot: create hacker programs that
exploit the vulnerabilities and spread them wide across the Internet. Either way, don't hope that
"they" will solve the problem; work on it yourself. "They" will never be competent.

</counterrant>



