Computer security
From Wikipedia, the free encyclopedia
- This article describes how security can be achieved through design and engineering. See the computer insecurity article for an alternative approach that describes the battlefield of computer security exploits and defenses.
Computer security is an application of information security to both theoretical and actual computer systems. For simplicity, questions of privacy and of the legitimate grounds for collecting information are treated under the subject of information privacy rights. For the purposes of this article, computer security is a form of risk management in computer science, trading off the confidentiality, integrity and availability of electronically structured information that is processed on or stored in computer systems.
Structure plan:
A) The issue in a nutshell:
- The risk of inappropriate data disclosure, alteration or destruction.
- The risk of application subversion: replacement, alteration or dysfunction.
- The balance of risks needed for practical computing systems.
B) Theoretical computer systems and their computer security models.
C) Applied computer security, by platform.
D) A brief summary of commercially available security offerings.
The traditional approach to this challenge is to create computing platforms, languages, and applications that enforce restrictions such that agents (such as users or programs) can only perform actions that have been allowed according to some specified security policy. Computer security can be seen as a subfield of security engineering, which looks at broader security issues in addition to computer security.
A secure system should still permit authorized users to carry out legitimate and useful tasks. It is possible to secure a computer completely against misuse only through extreme measures:
"The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards - and even then I have my doubts."
It is important to distinguish the techniques used to increase a system's security from the question of that system's actual security status. In particular, systems which contain fundamental flaws in their security design cannot be made secure without compromising their usability. Consequently, most existing computer systems cannot be made secure even after the application of extensive "computer security" measures. Furthermore, where systems are made more secure, ease of use often decreases.
[edit] Secure Operating System
One use of the term computer security refers to technology to implement a secure operating system. Much of this technology is based on science developed in the 1980s and used to produce what may be some of the most impenetrable operating systems ever. Though still valid, the technology is almost inactive today, perhaps because it is complex or not widely understood. Such ultra-strong secure operating systems are based on operating system kernel technology that can guarantee that certain security policies are absolutely enforced in an operating environment. An example of such a security policy is the Bell-LaPadula model. The strategy is based on a coupling of special microprocessor hardware features, often involving the memory management unit, to a correctly implemented operating system kernel. This forms the foundation for a secure operating system which, if certain critical parts are designed and implemented correctly, can ensure that it is physically impossible for hostile or subversive applications to violate the security policy. This capability is enabled because the operating system not only imposes a security policy but also completely protects itself from corruption. Ordinary operating systems lack this completeness property. The design methodology to produce such secure systems is not an ad hoc best-effort activity, but one that is precise, deterministic and logical.
Systems designed with such methodology represent the state of the art of computer security, and the capability to produce them is not widely known. In sharp contrast to most kinds of software, they meet specifications with verifiable certainty comparable to specifications for size, weight and power. Secure operating systems designed this way are used primarily to protect national security information and military secrets. These are very powerful security tools, and very few secure operating systems have been certified at the highest level (Orange Book A1) to operate over the range of Top Secret to unclassified (including Honeywell SCOMP, USAF SACDIN, NSA Blacker and Boeing MLS LAN). The assurance of security depends not only on the soundness of the design strategy, but also on the assurance of correctness of the implementation, and therefore there are degrees of security strength defined for COMPUSEC. The Common Criteria quantifies the security strength of products in terms of two components, security functionality (as a Protection Profile) and assurance level (as EAL levels). None of these ultra-high-assurance secure general-purpose operating systems has been produced for decades or certified under the Common Criteria.
[edit] Computer Security By Design
Computer security is a logic-based technology. There is no universal standard notion of what secure behavior is. “Security” is a property that is unique to each situation and so must be overtly defined by a Security Policy, if it is to be seriously enforced. Security is not an ancillary function of a computer application, but often what the application doesn’t do. Unless the application is just trusted to ‘be secure,’ security can only be imposed as a constraint on the application’s behavior from outside of the application. There are several approaches to security in computing, sometimes a combination of approaches is valid:
1. Trust all the software to abide by a security policy, but the software is not trustworthy (this is computer insecurity).
2. Trust all the software to abide by a security policy, and the software is validated as trustworthy (by tedious branch and path analysis, for example).
3. Trust no software, but enforce a security policy with mechanisms that are not trustworthy (again, this is computer insecurity).
4. Trust no software, but enforce a security policy with trustworthy mechanisms.
Many systems unintentionally follow approach 1. Approaches 1 and 3 lead to failure. Since approach 2 is expensive and non-deterministic, its use is very limited. Because approach 4 often relies on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more practical. Combinations of approaches 2 and 4 are often used in a layered architecture, with thin layers of 2 and thick layers of 4.
There are strategies and techniques used to design security in. There are few, if any, strategies to add security on after design.
One technique enforces the principle of least privilege to a great extent: each entity has only the privileges that are needed for its function. That way, even if an attacker subverts one part of the system, fine-grained security ensures that it is just as difficult to subvert the rest.
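The principle can be sketched in a few lines of Python. This is purely illustrative (the class and names are invented), and Python cannot rigorously enforce such a boundary, but the shape of the idea is the same:

```python
# Illustrative sketch of least privilege: a component is handed a wrapper
# exposing only the operations it needs, not the whole data store.
class ReadOnlyHandle:
    def __init__(self, store):
        self._store = store

    def read(self, key):
        return self._store[key]
    # No write or delete methods: even a compromised holder of this
    # handle has no sanctioned way to modify the underlying data.

store = {"config": "timeout=30"}
viewer = ReadOnlyHandle(store)      # grant read access only
print(viewer.read("config"))        # timeout=30
```

A handle like this grants exactly one authority; write access, if needed elsewhere, would be a second, separately granted handle.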
Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed-form solution to security that works well when only a single well-characterized property can be isolated as critical, and that property is also amenable to mathematical analysis. Not surprisingly, it is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of code review and unit testing represent a best-effort approach to make modules secure.
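For a single, small, well-characterized property, the correctness claim can even be checked mechanically over a large input range, which conveys the spirit (though not the rigor) of formal verification. A minimal Python sketch, with a hypothetical length-clamping function:

```python
def clamp_length(n, limit=256):
    """Return a length guaranteed to lie in [0, limit]."""
    return max(0, min(n, limit))

# Because the property is simple and isolated, it can be checked
# exhaustively over a wide range -- the kind of closed-form guarantee
# described above, in miniature.
for n in range(-1000, 1000):
    assert 0 <= clamp_length(n) <= 256
```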
The design should use "defense in depth", where more than one subsystem needs to be compromised to compromise the security of the system and the information it holds. Defense in depth works when subverting one hurdle does not provide a platform that facilitates subverting another. The cascading principle also acknowledges that several low hurdles do not make a high hurdle: cascading several weak mechanisms does not provide the strength of a single stronger mechanism.
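The layering idea can be illustrated with a short Python sketch. All rules here are hypothetical placeholders for real network, authentication and authorization checks:

```python
# Sketch of defense in depth: each layer is an independent check,
# and a request is served only if every layer passes.
def firewall_ok(request):
    return request["port"] == 443            # hypothetical network rule

def authenticated(request):
    return request.get("token") == "valid"   # placeholder for real auth

def authorized(request):
    return request["user"] in {"alice"}      # per-resource access check

LAYERS = [firewall_ok, authenticated, authorized]

def handle(request):
    return all(layer(request) for layer in LAYERS)

# An attacker who defeats one layer still faces the others:
print(handle({"port": 443, "token": "valid", "user": "alice"}))    # True
print(handle({"port": 443, "token": "valid", "user": "mallory"}))  # False
```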
Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than "fail insecure" (see fail safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure. What constitutes such a decision and what authorities are legitimate is controversial.
In addition, security should not be an all or nothing issue. The designers and operators of systems should assume that security breaches are inevitable in the long term. Full audit trails should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs are found the "window of vulnerability" is kept as short as possible.
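An append-only, tamper-evident audit trail can be approximated in software by hash chaining, where each entry commits to the one before it; genuinely append-only storage still requires remote or write-once media, and chaining only detects (not prevents) tampering. A minimal sketch using only the Python standard library:

```python
import hashlib

def append_entry(log, message):
    # Each entry's digest covers the previous digest, forming a chain.
    prev = log[-1][0] if log else "0" * 64
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    log.append((digest, message))

def verify(log):
    # Recompute the chain; any edited entry breaks every later link.
    prev = "0" * 64
    for digest, message in log:
        if digest != hashlib.sha256((prev + message).encode()).hexdigest():
            return False
        prev = digest
    return True

log = []
append_entry(log, "alice logged in")
append_entry(log, "alice read /payroll")
print(verify(log))                          # True
log[1] = (log[1][0], "alice read /motd")    # intruder edits an entry...
print(verify(log))                          # False: tampering is detected
```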
[edit] Early History of Security By Design
The early Multics operating system was notable for its early emphasis on computer security by design, and Multics was possibly the very first operating system to be designed as a secure system from the ground up. In spite of this, Multics' security was broken, not once, but repeatedly. The strategy was known as 'penetrate and test' and has become widely known as a non-terminating process that fails to produce computer security. This led to further work on computer security that prefigured modern security engineering techniques producing closed form processes that terminate.
[edit] Secure Coding
The majority of software vulnerabilities result from a few known kinds of coding defects. Common software defects include buffer overflows, format string vulnerabilities, integer overflow, and code/command injection.
Some common languages such as C and C++ are vulnerable to all of these defects (see Seacord, "Secure Coding in C and C++"). Other languages, such as Java, are immune to some of these defects, but are still prone to code/command injection and other software defects which lead to software vulnerabilities.
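Code/command injection, in particular, is usually prevented by keeping untrusted input strictly in the role of data. A hedged sketch using Python's built-in sqlite3 module (the table and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
rows_bad = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'").fetchall()

# Safer: a parameterized query treats the input strictly as data.
rows_good = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()

print(len(rows_bad), len(rows_good))   # 1 0
```

The concatenated query matches every row because the injected `OR '1'='1'` is executed as SQL; the parameterized query matches nothing because no user is literally named `nobody' OR '1'='1`.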
[edit] Techniques for Creating Secure Systems
The following techniques can be used in engineering secure systems. These techniques, whilst useful, do not of themselves ensure security. One security maxim is that "a security system is no stronger than its weakest link."
- Automated theorem proving and other verification tools can enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.
- Thus, simple microkernels can be written so small that it becomes feasible to verify their freedom from whole classes of bugs: e.g. EROS and Coyotos.
A bigger OS, capable of providing a standard API like POSIX, can be built on a microkernel using small API servers running as normal programs. If one of these API servers has a bug, the kernel and the other servers are not affected: e.g. Hurd.
- Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified.
- Strong authentication techniques can be used to ensure that communication end-points are who they say they are.
- Secure cryptoprocessors can be used to leverage physical security techniques into protecting the security of the computer system.
- Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
- Mandatory access control can be used to ensure that privileged access is withdrawn when privileges are revoked. For example, deleting a user account should also stop any processes that are running with that user's privileges.
- Capability and access control list techniques can be used to ensure privilege separation and mandatory access control. The next sections discuss their use.
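As one concrete illustration of the authentication techniques above, a keyed message-authentication code lets a receiver check both the origin and the integrity of a message. A sketch using Python's standard hmac module (the key and messages are placeholders; real systems derive high-entropy keys and manage them carefully):

```python
import hashlib
import hmac

key = b"shared-secret"   # placeholder; use a properly derived key in practice

def sign(message):
    # The tag can only be produced by a holder of the key.
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message, tag):
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer 10 units to bob"
tag = sign(msg)
print(verify(msg, tag))                                # True
print(verify(b"transfer 9999 units to mallory", tag))  # False: altered message
```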
Some of the following items may belong to the computer insecurity article:
- Do not run an application with known security flaws. Either leave it turned off until it can be patched or otherwise fixed, or delete it and replace it with some other application. Publicly known flaws are the main entry used by worms to automatically break into a system and then spread to other systems connected to it. The security website Secunia provides a search tool for unpatched known flaws in popular products.

- Backups are a way of securing information; they are another copy of all the important computer files kept in another location. These files are kept on hard disks, CD-Rs, CD-RWs, and tapes. Suggested locations for backups are a fireproof, waterproof, and heatproof safe, or a separate, offsite location from that in which the original files are kept. Some individuals and companies also keep their backups in safe deposit boxes inside bank vaults. There is also a fourth option, which involves using one of the file hosting services that back up files over the Internet for both businesses and individuals.
- Backups are also important for reasons other than security. Natural disasters, such as earthquakes, hurricanes, or tornadoes, may strike the building where the computer is located. The building can be on fire, or an explosion may occur. There needs to be a recent backup at an alternate secure location, in case of such kind of disaster. The backup needs to be moved between the geographic sites in a secure manner, so as to prevent it from being stolen.
- Anti-virus software consists of computer programs that attempt to identify, thwart and eliminate computer viruses and other malicious software (malware).
- Firewalls are systems which help protect computers and computer networks from attack and subsequent intrusion by restricting the network traffic which can pass through them, based on a set of system administrator defined rules.
- Access authorization restricts access to a computer to a group of users through the use of authentication systems. These systems can protect either the whole computer - such as through an interactive logon screen - or individual services, such as an FTP server. There are many methods for identifying and authenticating users, such as passwords, identification cards, and, more recently, smart cards and biometric systems.
- Encryption is used to protect messages from the eyes of others. It can be done in several ways: transposing characters, substituting them with others, or even removing characters from the message. Such techniques have to be used in combination to make the encryption secure enough, that is to say, sufficiently difficult to crack. Public-key encryption is a refined and practical way of doing encryption. It allows, for example, anyone to write a message for a list of recipients such that only those recipients will be able to read it.
- Intrusion-detection systems can scan a network for people or activity that should not be there, for example someone trying a lot of passwords to gain access to the network.
- Social engineering awareness - Keeping employees aware of the dangers of social engineering and/or having a policy in place to prevent social engineering can reduce successful breaches of the network and servers.
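A toy version of the intrusion-detection idea above, flagging repeated failed logins as possible password guessing, can be sketched as follows (the threshold and address are arbitrary illustrations):

```python
from collections import Counter

# Toy intrusion-detection heuristic: flag a source that fails
# authentication more than a threshold number of times.
FAIL_THRESHOLD = 3
failures = Counter()

def record_login(source, success):
    if success:
        failures.pop(source, None)   # a success resets the counter
        return "ok"
    failures[source] += 1
    if failures[source] > FAIL_THRESHOLD:
        return "alert: possible password-guessing from " + source
    return "failed"

for _ in range(4):
    status = record_login("10.0.0.5", success=False)
print(status)   # alert: possible password-guessing from 10.0.0.5
```

Real systems combine many such signals (signatures, anomaly scores, rate limits) rather than a single counter.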
[edit] Capabilities vs. ACLs
Within computer systems, the two fundamental means of enforcing privilege separation are access control lists (ACLs) and capabilities. The semantics of ACLs have been proven to be insecure in many situations (e.g. the confused deputy problem). It has also been shown that an ACL's promise of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all ACL-based systems, only that the designers of certain utilities must take responsibility for ensuring that they do not introduce flaws.
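The contrast can be made concrete in a short Python sketch. The ACL-style deputy decides based on its own identity, so a user-chosen path can trick it into misusing its greater authority; a capability is bound to one object at creation and cannot be redirected. File names and principals here are invented, and a Python closure only approximates a truly unforgeable capability:

```python
FILES = {"/tmp/out": "", "/billing": "sensitive"}
ACL = {"/tmp/out": {"user"}, "/billing": {"compiler"}}

def acl_write(principal, path, data):
    # ACL model: check "who is asking?" against the list for the object.
    # A deputy running as "compiler" passes this check for /billing even
    # when the path came from an untrusted user (the confused deputy).
    if principal in ACL[path]:
        FILES[path] = data

user_supplied_path = "/billing"          # attacker's choice
acl_write("compiler", user_supplied_path, "corrupted")

def make_write_cap(path):
    # Capability model: the returned function designates one specific
    # object and cannot be retargeted after creation.
    def write(data):
        FILES[path] = data
    return write

cap = make_write_cap("/tmp/out")         # deputy is handed only this
cap("results")                           # can write /tmp/out, nothing else
print(FILES["/billing"], FILES["/tmp/out"])   # corrupted results
```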
Unfortunately, for various historical reasons, capabilities have been mostly restricted to research operating systems and commercial OSs still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open source project in the area is the E language [2].
The Cambridge CAP computer demonstrated the use of capabilities, both in hardware and software, in the 1970s, so this technology is hardly new. A reason for the lack of adoption of capabilities may be that ACLs appeared to offer a 'quick fix' for security without pervasive redesign of the operating system and hardware.
The most secure computers are those not connected to the Internet and shielded from any interference. In the real world, the most security comes from operating systems where security is not an add-on, such as OS/400 from IBM. OS/400 almost never shows up in lists of vulnerabilities, for good reason: years may elapse between one problem needing remediation and the next.
A good example of a secure system is EROS. But see also the article on secure operating systems. TrustedBSD is an example of an open source project with a goal, among other things, of building capability functionality into the FreeBSD operating system. Much of the work is already done.
[edit] Other Uses of the Term "trusted"
The term "trusted" is often applied to operating systems that meet different levels of the Common Criteria, some of which are discussed above as techniques for creating secure systems.
A computer industry group led by Microsoft has used the term "trusted system" to include making computer hardware that could impose restrictions on how people use their computers. The project is called the Trusted Computing Group (TCG). See also Next-Generation Secure Computing Base.
[edit] Notable Persons in Computer Security
[edit] See also
- Attack tree
- Authentication
- Authorization
- Cryptography
- Computer security model
- Differentiated security
- Internet Firewalls
- Network security
- Data security
- Formal methods
- Identity management
- Internet privacy
- Cyber security standards
- Wireless LAN Security
- Timeline of hacker history
- Information Leak Prevention
[edit] References
- Ross J. Anderson: Security Engineering: A Guide to Building Dependable Distributed Systems, ISBN 0-471-38922-6
- Bruce Schneier: Secrets & Lies: Digital Security in a Networked World, ISBN 0-471-25311-1
- Robert C. Seacord: Secure Coding in C and C++. Addison Wesley, September, 2005. ISBN 0-321-33572-4
- Paul A. Karger, Roger R. Schell: Thirty Years Later: Lessons from the Multics Security Evaluation, IBM white paper.
- Clifford Stoll: Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage, Pocket Books, ISBN 0-7434-1146-3
- Stephen Haag, Maeve Cummings, Donald McCubbrey, Alain Pinsonneault, Richard Donovan: Management Information Systems for the information age, ISBN 0-07-091120-7
- Peter G. Neumann: Principled Assuredly Trustworthy Composable Architectures 2004
- Morrie Gasser: Building a secure computer system ISBN 0-442-23022-2 1988
- E. Stewart Lee: Essays about Computer Security Cambridge, 1999
[edit] Free textbooks on this topic
This e-primer provides a comprehensive review of the digital and information and communications technology revolutions and how they are changing the economy and society. The primer also addresses the challenges arising from the widening digital divide.
[edit] External links
- Hacker-Challenge - challenges to entertain the mind in computer security
- The Open Web Application Security Project - tools, documentation, community
- The Center for Education and Research in Information Assurance and Security
- SANS Institute - Computer security training and free resources.
- InfoSec Institute
- CERT
- Planet Security in German
- Comodo
- securegg.com - An IT Security News Aggregator
- MySecureCyberspace: a resource for home users created by Carnegie Mellon CyLab
- Computer Security News
- Computer Security Threats Be realistic about PC security.