
Historical perspective

The Internet is a very large and very important communications network, synonymous with modernity and technology. However, its design is starting to show its age in several respects.

The Internet was built as a cooperative, trusting network: every node does its best to facilitate communication between any two other nodes that might request it, and trusts their requests to be legitimate. This design dates from the first days of the network, then the Arpanet, when its military origins led to research on how to build a network that would keep working even if part of it were damaged.

Originating in the research and military arenas, the Internet we know today inherited a very positive and open structure: the vast majority of the protocols in use are based on open standards and are thoroughly documented in the IETF RFCs (Internet Engineering Task Force Requests For Comments). Since the IETF is the highest source of authority on Internet standards, the author decided to base most of his research on its texts.

The Internet relies mainly on Unix servers for its operation. Unix is an operating system with a long history, over 30 years in use, and it has proven ideal for the job. The protocols that make the Internet possible (TCP/IP, UDP/IP, ICMP/IP) were designed on Unix systems, which still excel at implementing them efficiently and compatibly.

A phenomenon that is not new to us is that of external attackers (crackers): people who exploit vulnerabilities or errors in a given system, either to obtain information restricted to authorized users, to modify information, or to stop the computer from serving requests. Answers to these attacks have been broad, even overwhelming:

The first and most obvious path followed was bug fixing: as soon as a bug is detected, people with enough understanding of the attacked program search for the error in the code and fix it. This technique, although the most efficient, has major shortcomings: How long will it take the programmers to identify the error? How long will it take for all the users of the program to get the updated software and install it? Meanwhile, how many other systems will be attacked? If the program is commercial, as most software is nowadays, will the vendor acknowledge that it had a problem, or will it just pretend nothing happened until the next version ships with the bug patched? Some of those shortcomings are almost nonexistent with free software, such as the FreeBSD, NetBSD, OpenBSD and Linux operating systems, because of the large number of technical users who know the deepest intricacies of their programs, but even there they cannot be overlooked. System administrators are still required to patch and update the programs manually, and this is the most overlooked aspect of security: administrators remain confident in their system's configuration even though they still run outdated and unmaintained versions of vital components.

A second, proactive path is the one followed by the OpenBSD team: secure code auditing [1]. The OpenBSD team took the whole NetBSD free operating system and began auditing it, line by line, program by program, searching for wrong or potentially dangerous programming or operating techniques. After a tremendous amount of work, they produced the most secure operating system to date. Of course, this highly effective technique also has its shortcomings: new services and new, updated versions of programs take longer to show up on their system because they must first be audited, and priority is given to keep auditing the existing code. Thus, OpenBSD is an ideal system for a firewall or for a server exposed to frequent attacks, but it may not be adequate for a normal server, and it is usually discarded outright for a personal workstation.
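
To give a concrete flavour of what such an audit looks for, the following minimal C sketch (an invented example, not taken from any actual OpenBSD change) contrasts an unbounded string copy with a bounded replacement; the auditors systematically hunted down calls such as strcpy(), sprintf() and gets(), and OpenBSD eventually introduced strlcpy() and strlcat() to make the safe idiom the easy one.

    #include <stdio.h>
    #include <string.h>

    /* Invented example: the kind of construct a source audit flags.
     * strcpy() copies with no bounds check at all, so a name longer
     * than the buffer silently overwrites adjacent memory.            */
    void greet_unsafe(const char *name)
    {
        char buf[64];
        strcpy(buf, name);               /* dangerous: unbounded copy  */
        printf("Hello, %s\n", buf);
    }

    /* The audited replacement bounds the copy and guarantees that the
     * result is NUL-terminated.                                        */
    void greet_safe(const char *name)
    {
        char buf[64];
        strncpy(buf, name, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
        printf("Hello, %s\n", buf);
    }

    int main(void)
    {
        greet_safe("world");
        greet_unsafe("world");           /* harmless only while the input is short */
        return 0;
    }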

A third approach is the use of firewalls: intermediate systems dedicated to filtering out specific ports or suspicious activity coming from the external network, in order to keep dangers away from the organization's servers and data. Although firewalls are no longer hot news and most medium and large installations have at least a basic one, they often fall short of our current needs, doing all their work by checking source and destination addresses and ports. Only a handful of them are able to do stateful inspection[*] of the packets, and they still fail to stop, or even detect, the most basic attacks.
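
To illustrate the difference, the following hedged C sketch (structures and rules invented for this example, not drawn from any real firewall) contrasts a purely stateless check, namely the old practice of admitting any inbound TCP packet with the ACK bit set on the assumption that it belongs to an established connection, with a stateful check that only admits packets matching a connection it actually saw being opened.

    #include <stdio.h>

    /* Invented data structures, for illustration only. */
    struct packet {
        unsigned long  src_ip, dst_ip;
        unsigned short src_port, dst_port;
        int            syn, ack;          /* simplified TCP flags */
    };

    /* Stateless rule: admit any inbound TCP packet carrying the ACK bit,
     * assuming it belongs to an established connection.  An attacker
     * only has to set ACK on a forged packet to get through.            */
    int stateless_allow(const struct packet *p)
    {
        return p->ack;
    }

    /* Stateful inspection: remember the connections opened from inside
     * and admit only packets that are replies to one of them.           */
    #define MAX_CONNS 1024
    static struct packet conns[MAX_CONNS];
    static int nconns;

    void note_outgoing(const struct packet *p)
    {
        if (p->syn && nconns < MAX_CONNS)
            conns[nconns++] = *p;
    }

    int stateful_allow(const struct packet *p)
    {
        int i;
        for (i = 0; i < nconns; i++)
            if (conns[i].src_ip   == p->dst_ip   &&
                conns[i].src_port == p->dst_port &&
                conns[i].dst_ip   == p->src_ip   &&
                conns[i].dst_port == p->src_port)
                return 1;                 /* reply to a known connection */
        return 0;                         /* unsolicited packet: drop it */
    }

    int main(void)
    {
        /* an inside host opens a web connection...                      */
        struct packet out    = { 0x0a000002UL, 0xc0a80101UL, 40000, 80, 1, 0 };
        /* ...and an attacker forges an "established" packet from elsewhere */
        struct packet forged = { 0xdeadbeefUL, 0x0a000002UL, 80, 40000, 0, 1 };

        note_outgoing(&out);
        printf("stateless filter admits forged packet: %d\n",
               stateless_allow(&forged)); /* 1: fooled by the ACK bit    */
        printf("stateful  filter admits forged packet: %d\n",
               stateful_allow(&forged));  /* 0: no matching connection   */
        return 0;
    }

A real stateful filter also tracks sequence numbers and times idle connections out, but the basic principle is the same.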

Yet another approach is to harden not only the programs but the compiler itself. Stackguard [2], the key part of the Immunix project, is a compiler that warns the programmer about possibly insecure programming techniques and produces binaries that are much more resistant to stack overflows. Of course, this approach has the disadvantage that the whole system has to be recompiled: that makes it impossible to implement on proprietary systems, such as Windows or Solaris, and viable only for free systems, such as Linux and the *BSDs, and even there a trustworthy installation means recompiling everything, a monstrous task. Some programs will not compile correctly with a compiler that is much stricter than standard ANSI C compilers, and the routines that guard against stack overflows may slow down the whole system. Immunix addresses the recompilation problem by supplying a ready-to-use distribution for Intel-based computers based on RedHat Linux 6.2. However, installing any package distributed in binary form, and very likely not compiled with Stackguard, will open a possible security breach.
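
The class of bug Stackguard defends against can be illustrated with a minimal C sketch (an invented example, not taken from Immunix): a fixed-size buffer on the stack is filled from attacker-controlled input with no bounds check, so a long enough input overwrites the saved return address. A Stackguard-compiled binary places a random canary word next to that return address and verifies it before the function returns, so the overflow is detected and the program aborts instead of jumping to attacker-supplied code.

    #include <stdio.h>
    #include <string.h>

    /* Invented example of the bug class Stackguard targets.  The 16-byte
     * buffer lives on the stack near the saved return address, and
     * strcpy() writes past its end when given a longer input, letting
     * whoever controls the input redirect execution.  Under a
     * Stackguard-style compiler, a random canary word is placed before
     * the return address and checked on return; the overflow clobbers
     * the canary and the program aborts.                                */
    void vulnerable(const char *input)
    {
        char buf[16];
        strcpy(buf, input);              /* no bounds check              */
        printf("got: %s\n", buf);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            vulnerable(argv[1]);         /* attacker-controlled input    */
        return 0;
    }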

