For software entities that act as users (e.g., proxy agents, Web services, peer processes), the ability to record and track security-relevant actions of the software-as-user, with attribution of responsibility.
Identification and elimination of ambiguities in the software architecture and design due to ambiguous requirements or insufficiently specified architecture and design.
A high-level evaluation of a software system involving (1) characterization of the system to clearly understand its nature; (2) the identification of potential threats to the system; (3) an assessment of the system’s vulnerability to attack; (4) an estimate of the likelihood of potential threats; (5) identification of the assets at risk and the potential impact if threats are realized; and (6) risk mitigation planning.
A structured set of arguments and a corresponding body of evidence demonstrating that a system satisfies specific claims with respect to its security, safety, or reliability properties.
A pattern abstraction describing common approaches that attackers might use to attack certain kinds of software for a certain purpose. It is used to capture and represent the attacker’s perspective in software security engineering.
The ability to recover from failures that result from successful attacks by resuming operation at or above some predefined minimum acceptable level of service in a timely manner.
The ability of software to prevent an attacker from executing an attack against it.
The process of examining software architecture and design for common weaknesses that might lead to vulnerabilities and for susceptibility to common attack patterns.
The set of ways (functionalities, APIs, interfaces, resources, data stores, etc.) in which an attacker can attempt to enter and potentially cause damage to a system. The larger the attack surface, the more insecure the system [Manadhata 2007].
The ability of software to “tolerate” errors and failures that result from successful attacks and, in effect, continue to operate as if the attacks had not occurred.
A representation of the ways that an attacker could cause an event to occur that could significantly harm a system’s mission. Each path through an attack tree represents a unique intrusion.
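An attack tree can be represented as a simple goal/sub-goal structure, with each root-to-leaf path corresponding to one candidate intrusion. The sketch below is only an illustration; the node names are invented, not drawn from any real system.

```python
# Minimal attack-tree sketch: each key is a goal, its children are sub-goals.
# Leaf nodes are concrete attacker actions; every root-to-leaf path is one
# candidate intrusion. Node names are hypothetical examples.
def paths(tree, node, prefix=()):
    """Enumerate every root-to-leaf path through the attack tree."""
    prefix = prefix + (node,)
    children = tree.get(node, [])
    if not children:
        yield prefix
    for child in children:
        yield from paths(tree, child, prefix)

ATTACK_TREE = {
    "steal customer data": ["compromise web app", "bribe insider"],
    "compromise web app": ["SQL injection", "stolen admin password"],
}

intrusions = list(paths(ATTACK_TREE, "steal customer data"))
# Three distinct paths, i.e., three distinct intrusions to mitigate.
```

Enumerating the paths this way is what makes the representation useful: each path is a separate intrusion to analyze and mitigate.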
The process of determining whether someone or something (such as a computer or software process) is, in fact, who or what it is declared to be. Methods for human authentication typically include something you know (a password), something you have (a token), or something you are (fingerprint).
The extent to which a software component, product, or system is operational and accessible to its intended, authorized users (humans and processes) whenever it is needed. When availability is considered as a property of software security, it additionally requires that the software's functionality and privileges be inaccessible to unauthorized users (humans and processes) at all times.
The most efficient (least amount of effort) and effective (best results) way of accomplishing a task, based on repeatable procedures that have proven themselves over time for large numbers of people [http://en.wikipedia.org/wiki/Best_practice].
Software testing using methods that do not require access to source code. Such testing usually focuses on the externally visible behavior of the software, such as requirements, protocol specifications, and interfaces.
A number of Internet computers that, although their owners are unaware of it, have rogue software that forwards transmissions (including spam or viruses) to other computers on the Internet. Any such computer is referred to as a zombie—in effect, it is a computer “robot” or “bot” that serves the wishes of some master spam or virus originator [http://searchsecurity.techtarget.com].
An attack that targets improper or missing bounds checking on buffer operations, typically triggered by input injected by an attacker. As a consequence, an attacker is able to write past the boundaries of allocated buffer regions in memory, causing a program crash or redirection of execution [http://capec.mitre.org/data/definitions/100.html].
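The defect pattern behind this attack, a copy with no bounds check, can be sketched even in a memory-safe language. In the Python sketch below the overrun cannot corrupt adjacent memory as it would in C; slice assignment silently grows the buffer instead, which makes the missing check visible as a length change. The function names are illustrative.

```python
def copy_unchecked(buf: bytearray, data: bytes) -> None:
    # Defect pattern: input length is never validated against the buffer
    # size. In C this write would clobber adjacent memory; in Python the
    # slice assignment grows the buffer, exposing the overrun as a length
    # change rather than memory corruption.
    buf[0:len(data)] = data

def copy_checked(buf: bytearray, data: bytes) -> None:
    # Mitigation: bounds-check the attacker-controllable input first.
    if len(data) > len(buf):
        raise ValueError("input exceeds buffer size")
    buf[0:len(data)] = data

buf = bytearray(8)
copy_unchecked(buf, b"A" * 16)  # 16 bytes "written" into an 8-byte buffer
```

The fix is the same in any language: validate the length of attacker-controllable input against the destination's capacity before every buffer operation.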
A software security defect that is introduced during software implementation and can be detected locally through static and manual analysis [BSI 48].
A technique that tricks a domain name server (DNS server) into believing it has received authentic information when, in reality, it has not. If the server does not correctly validate DNS responses to ensure that they have come from an authoritative source, the server will end up caching the incorrect entries locally and serve them to users that make the same request [http://en.wikipedia.org/wiki/DNS_cache_poisoning].
The extent to which the characteristics of a software component, product, or system—including its relationships with its execution environment and its users, its managed assets, and its content—are obscured or hidden from unauthorized entities.
Planned, systematic, and multidisciplinary activities that ensure software components, products, and systems conform to requirements and applicable standards and procedures for specified uses.
The property of software behaving exactly as specified.
An attack in which an attacker embeds malicious scripts in content that will be served to Web browsers. The goal of the attack is for the target software (i.e., the client-side browser) to execute the script with the user’s privilege level [http://capec.mitre.org/data/definitions/63.html].
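The core defense is to escape untrusted content before it is interpolated into HTML served to the browser. A minimal sketch using Python's standard-library `html.escape` (the attacker payload and page template are invented for illustration):

```python
import html

# Hypothetical attacker-supplied value, e.g., from a form field.
attacker_input = "<script>alert(document.cookie)</script>"

# Vulnerable pattern: untrusted text inserted verbatim into the page,
# so the browser executes it as script with the user's privilege level.
unsafe_page = "<p>Hello, " + attacker_input + "</p>"

# Mitigation: escape untrusted content so the browser renders it as text.
safe_page = "<p>Hello, " + html.escape(attacker_input) + "</p>"
```

Real applications would typically rely on a templating engine that escapes by default rather than hand-escaping each interpolation.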
A software fault, typically either a bug or a flaw.
Using multiple types and layers of security to defend an application or system so as to avoid having a single point of failure.
An attempt to make a computer resource unavailable to its intended users, usually to prevent an Internet site or service from functioning efficiently or at all, either temporarily or indefinitely [http://en.wikipedia.org/wiki/Denial_of_service].
The property of software that ensures the software always operates as intended.
Analysis of the vulnerabilities and associated risk present in the underlying software platforms, operating systems, frameworks, and libraries that the software under analysis relies on in its operational environment. The software you are writing almost never exists in total isolation.
See cache poisoning.
A situation in which a user obtains privileges that he or she is not authorized to have.
For a software system, an internal state leading to failure if the system does not handle the situation correctly.
A piece of software, a chunk of data, or sequence of commands that takes advantage of a bug, glitch, or vulnerability in an effort to cause unintended or unanticipated behavior to occur [http://en.wikipedia.org/wiki/Exploit_%28computer_security%29].
A structured process set forth by M. E. Fagan [Fagan 1999] for trying to find defects in development documents such as programming code, specifications, designs, and others during various phases of the software development life cycle [http://en.wikipedia.org/wiki/Fagan_inspection].
For a software system, a situation in which the system does not deliver its expected service as specified or desired. Such a failure is externally observable.
The cause of an error, which may lead to a failure.
A software security defect that originates at the architecture or design level and is instantiated in the code [BSI 48].
Securing a system to defend it against attackers by, for example, removing unnecessary usernames or logins and removing or disabling unnecessary services [http://en.wikipedia.org/wiki/Hardening].
A situation in which one person or program successfully masquerades as another by falsifying data and thereby gains an illegitimate advantage [http://en.wikipedia.org/wiki/Spoofing_attack].
A situation in which a software function returns a pointer to memory outside the bounds of the buffer to be searched. This can occur when an attacker controls the contents of the buffer to be searched or an attacker controls the value for which to search [http://www.owasp.org/index.php/Illegal_Pointer_Value].
An attack that forces an integer variable to go out of range, leading to unexpected program behavior and possibly execution of malware by the attacker [CAPEC 2007].
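A classic instance is a size calculation that wraps, so a huge allocation request looks tiny and a later bounds check passes. Python integers do not wrap, so the sketch below simulates 32-bit arithmetic with a mask; the function names and values are illustrative.

```python
U32_MAX = 0xFFFFFFFF

def alloc_size_unchecked(count: int, item_size: int) -> int:
    # Defect pattern (simulating 32-bit C arithmetic): the product wraps,
    # so an attacker-chosen count yields a tiny size and downstream code
    # allocates far less memory than it will later write.
    return (count * item_size) & U32_MAX

def alloc_size_checked(count: int, item_size: int) -> int:
    # Mitigation: detect the out-of-range result before using it.
    total = count * item_size
    if total > U32_MAX:
        raise OverflowError("size calculation out of range")
    return total

# Attacker picks a count so that count * 4 wraps around 2**32.
wrapped = alloc_size_unchecked(0x40000001, 4)  # wraps to 4
```

In C, the equivalent guard is to check `count > SIZE_MAX / item_size` before multiplying.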
The extent to which the code, managed assets, configuration, and behavior of a software component, product, or system are resistant and resilient to unauthorized modification by authorized entities or any modification by unauthorized entities.
Malicious software (e.g., viruses, worms, and Trojan horses) that is created to do intentional harm to a computer system.
Descriptive statements of the undesirable, nonstandard conditions that software is likely to face during its operation from either unintentional misuse or intentional and malicious misuse or abuse.
An action that can be taken to reduce the likelihood and/or impact of a risk to a software system.
For software entities that act as users (e.g., proxy agents, Web services, peer processes), the ability to prevent the software-as-user from disproving or denying responsibility for actions it has performed.
An attempt to acquire sensitive information criminally and fraudulently, such as usernames, passwords, and credit card details, by masquerading as a trustworthy entity in an electronic communication [http://en.wikipedia.org/wiki/Phishing].
Justifiable confidence that the software, when executed, functions as intended. The ability of malicious input to alter the execution or outcome in a way favorable to the attacker is significantly reduced or eliminated.
In the Common Criteria, a set of security requirements that a product can be evaluated and certified against.
The amount of assurance needed that security requirements have been met given a specific perceived threat, the consequences of a security breach, and the costs of security measures.
As a property that can be used to measure software security, the ability of a software component or system to identify known attack patterns.
As a property that can be used to measure software security, the ability of a software component or system to isolate, contain, and limit the damage resulting from any failures caused by attack-triggered faults that it was unable to resist or tolerate and to resume operation as quickly as possible.
In the software security engineering context, red teaming is creative software penetration testing in which the test team takes a defined adversarial role and uses doctrine, tactics, techniques, and procedures appropriate to that role.
A form of network attack in which a valid data transmission is maliciously or fraudulently repeated or delayed. It is carried out either by the originator or by an adversary who intercepts the data and retransmits it [http://en.wikipedia.org/wiki/Replay_attack].
The ability of an attacker to deny performing some malicious activity because the system does not have sufficient proof otherwise [Howard 2002].
As a property that can be used to measure software security, the ability of a software component or system to prevent an attacker from executing an attack against it.
A framework that enables the interoperability of security features such as access control, permissions, and cryptography and integrates them with the broader software architecture.
Directing and controlling an organization to establish and sustain a culture of security in the organization’s conduct and treating adequate security as a non-negotiable requirement of being in business.
The degree to which software meets its security requirements.
A collection of techniques used to manipulate people into performing actions or divulging confidential information, typically for information gathering or computer system access [http://en.wikipedia.org/wiki/Social_engineering_%28security%29].
The level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle, and that the software functions in the intended manner [CNSS 2006].
The probability of failure-free (or otherwise satisfactory) software operation for a specified/expected period/interval of time, or for a specified/expected number of operations, in a specified/expected environment under specified/expected operating conditions [Goertzel 2006, 94].
Persistence of dependability in the face of accidents or mishaps—that is, unplanned events that result in death, injury, illness, damage to or loss of property, or environmental harm [Goertzel 2006, 94].
Engineering software so that it is as vulnerability- and defect-free as possible and continues to function correctly in spite of attack or misuse.
An attack in which the identity of a person or resource is impersonated.
An attack exploiting software that constructs SQL statements based on user input. An attacker crafts input strings so that when the target software constructs SQL statements based on the input, the resulting SQL statement performs actions other than those the application intended [http://capec.mitre.org/data/definitions/66.html].
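The defect and its standard fix can both be shown with Python's standard-library `sqlite3` driver (the table, data, and payload below are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Hypothetical attacker-supplied value for a "name" field.
payload = "nobody' OR '1'='1"

# Vulnerable pattern: the SQL statement is built by string concatenation,
# so the input rewrites the query to: ... WHERE name = 'nobody' OR '1'='1'
leaked = conn.execute(
    "SELECT * FROM users WHERE name = '" + payload + "'"
).fetchall()   # returns every row, though no user is named "nobody"

# Mitigation: parameterized query; the driver treats the whole payload
# as a literal value, never as SQL syntax.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)
).fetchall()   # returns no rows
```

Parameterized statements (or an ORM that uses them) are the standard defense; escaping input by hand is error prone.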
Causing a stack in a computer application or operating system to overflow, which makes it possible to subvert the program or system or cause it to crash. The stack is a form of buffer that holds the intermediate results of an operation or data that is awaiting processing. If the stack receives more data than it can hold, the excess data is lost [http://searchsecurity.techtarget.com].
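The exhaustion half of this failure mode can be illustrated safely in Python, which traps call-stack exhaustion as a `RecursionError` instead of corrupting memory as an unchecked overflow would in C. This sketch shows only stack exhaustion via unbounded recursion, not the buffer-corruption variant described above.

```python
def recurse(depth: int = 0) -> int:
    # Each call consumes one stack frame; with no termination condition,
    # attacker-influenced recursion depth exhausts the stack.
    return recurse(depth + 1)

try:
    recurse()
    overflowed = False
except RecursionError:
    # Python detects the overflow and raises rather than crashing,
    # illustrating why depth limits on recursion matter.
    overflowed = True
```

In languages without such a guard, the equivalent defense is an explicit depth or size limit on any recursion or stack allocation driven by untrusted input.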
Modification of data within a system to achieve a malicious goal [Howard 2002].
An actor or agent that is a source of danger, capable of violating the confidentiality, integrity, and availability of information assets and security policy.
The identification of relevant threats for a specific architecture, functionality, and configuration.
Combining software characterization, threat analysis, vulnerability analysis, and likely attack analysis to develop a gestalt picture of the risk posed to the software under analysis by anticipated threats.
As a property that can be used to measure software security, the ability of a software component or system to withstand the errors and failures that result from successful attacks and, in effect, to continue to operate as if the attacks had not occurred.
Lightweight software security best practice activities that are applied to various software artifacts, such as requirements and code [McGraw 2006].
The boundaries between system zones of trust (areas of the system that share a common level and management mechanism of privilege: Internet, DMZ, hosting LAN, host system, application server, database host, and so forth). These trust boundaries are often rife with vulnerabilities because systems fail to properly segregate and manage differing levels of privilege.
A situation in which the number of exploitable vulnerabilities in a software product is intentionally minimized to the greatest extent possible. The goal is no exploitable vulnerabilities.
In requirements elicitation, a description of a complete transaction between one or more actors and a system in normal, expected use of the system.
A software defect that an attacker can exploit.
Performing security analysis of software, including its deployed environment, with knowledge of the architecture, design, and implementation of the software.
A list of all known good inputs that a system is permitted to accept.
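A whitelist check is the inverse of a blacklist: it accepts only inputs known to be good and rejects everything else by default. A minimal sketch, with a hypothetical set of permitted commands:

```python
# Hypothetical set of known-good inputs; everything else is rejected.
ALLOWED_COMMANDS = {"status", "start", "stop"}

def is_permitted(user_input: str) -> bool:
    # Normalize, then accept only exact members of the whitelist.
    # Default-deny: unknown or malformed input is rejected.
    return user_input.strip().lower() in ALLOWED_COMMANDS
```

Whitelisting is generally preferred over blacklisting because the set of good inputs is enumerable, whereas the set of malicious inputs is open ended.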
Elements of a software system that share a specific level and management mechanism of privilege.