Sunday, December 9, 2012

Security: Security Architecture and Design

   Trusted computing base (TCB) is defined as the total combination of protection mechanisms within a computer system. The TCB includes hardware, software, and firmware. These components enforce the security policy and must not violate it.

    If the TCB is enabled, then the system has a trusted path, a trusted shell, and system integrity-checking capabilities. The TCB provides protection resources to ensure the trusted path (the channel between the user, or program, and the kernel) cannot be compromised in any way. A trusted shell means that a user working within it cannot "bust out of it" and that other processes cannot "bust into it."

    The four basic functions of the TCB are process activation, execution domain switching, memory protection, and I/O operations.
Execution domain switching refers to when the CPU has to go from executing instructions in user mode to privileged mode and back.

The reference monitor
is an abstract machine that mediates all access subjects have to objects, both to ensure that the subjects have the necessary access rights and to protect the objects from unauthorized access and destructive modification.

The security kernel is made up of hardware, software, and firmware components that fall within the TCB, and it implements and enforces the reference monitor concept.

The reference monitor is a concept in which an abstract machine mediates all access to objects by subjects. The security kernel is the hardware, firmware, and software of a TCB that implements this concept. The TCB is the totality of protection mechanisms within a computer system that work together to enforce a security policy. The TCB contains the security kernel and all other security protection mechanisms.

Security model
Models such as the Bell-LaPadula model enforce rules to provide confidentiality protection. Other models, such as the Biba model, enforce rules to provide integrity protection. Formal security models, such as Bell-LaPadula and Biba, are used to provide high assurance in security. Informal models, such as Clark-Wilson, are used more as a framework to describe how security policies should be expressed and executed.

The Bell-LaPadula model 

is a state machine model that enforces the confidentiality aspects of access control. A matrix and security levels are used to determine if subjects can
access different objects. The subject's clearance is compared to the object's classification and then specific rules are applied to control how subject-to-object interactions can take place. It is a subject-to-object model.

  • The simple security rule states that a subject at a given security level cannot read data that reside at a higher security level.
  • The *-property rule (star property rule) states that a subject in a given security level cannot write information to a lower security level. 
  • The simple security rule is referred to as the "no read up" rule, and the *-property rule is referred to as the "no write down" rule. 
  • The third rule, the strong star property rule, states that a subject that has read and write capabilities can only perform those functions at the same security level, nothing higher and nothing lower.
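The three rules above can be sketched in a few lines of Python. This is an illustrative toy only, not a real MAC implementation; security levels are modeled as plain integers, with higher numbers meaning more sensitive:

```python
# Toy sketch of the Bell-LaPadula access rules.
# Levels are integers: e.g. 0 = Unclassified, 1 = Confidential, 2 = Secret.

def can_read(subject_level: int, object_level: int) -> bool:
    """Simple security rule: no read up."""
    return subject_level >= object_level

def can_write(subject_level: int, object_level: int) -> bool:
    """*-property rule: no write down."""
    return subject_level <= object_level

def can_read_write(subject_level: int, object_level: int) -> bool:
    """Strong star property rule: read and write only at the same level."""
    return subject_level == object_level

# A Secret (2) subject may read Confidential (1) data but may not
# write to it, since writing down could leak Secret information.
assert can_read(2, 1) and not can_write(2, 1)
```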
The model also relies on the Basic Security Theorem used in computer science, which states that if a system initializes in a secure state and all allowed state transitions are secure, then every subsequent state will be secure no matter what inputs occur.

The tranquility principle, which is also used in this model, means that subjects and objects cannot change their security levels once they have been instantiated.

Mandatory access control (MAC) systems are based on the Bell-LaPadula model, because it allows for multilevel security to be integrated into the code.

Bell-LaPadula has a rule called the Discretionary Security Property (ds-property). It specifies that specific permissions allow a subject to pass on permissions at its own discretion. These permissions are stored in an access matrix. This implies that mandatory and discretionary access control mechanisms can be implemented in one operating system.

Biba Model

The Biba model addresses the integrity of data within applications. It is not concerned with security levels and confidentiality, so it does not base access decisions upon that type of lattice; instead, it uses a lattice of integrity levels. The Biba model prevents data at any integrity level from flowing to a higher integrity level. Biba has three main rules:
  • *-integrity axiom A subject cannot write data to an object at a higher integrity level (referred to as "no write up").
  • Simple integrity axiom A subject cannot read data from a lower integrity level (referred to as "no read down").
  • Invocation property A subject cannot request service from (invoke) a subject at a higher integrity level.
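Biba's three rules can be sketched the same way as Bell-LaPadula's, with the read and write directions inverted. Again a toy illustration with integer integrity levels, higher meaning more trustworthy:

```python
# Toy sketch of the Biba integrity rules (integers = integrity levels).

def biba_can_read(subject_integrity: int, object_integrity: int) -> bool:
    """Simple integrity axiom: no read down."""
    return object_integrity >= subject_integrity

def biba_can_write(subject_integrity: int, object_integrity: int) -> bool:
    """*-integrity axiom: no write up."""
    return object_integrity <= subject_integrity

def biba_can_invoke(subject_integrity: int, target_integrity: int) -> bool:
    """Invocation property: no invoking subjects of higher integrity."""
    return target_integrity <= subject_integrity

# A high-integrity subject (2) may write down to level 1,
# but must not read that less-trustworthy data.
assert biba_can_write(2, 1) and not biba_can_read(2, 1)
```

Note how the mnemonic from these notes holds: the "simple" axiom governs reading, the "*" axiom governs writing.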

Bell-LaPadula vs. Biba

The Bell-LaPadula and Biba models are information flow models, but the Bell-LaPadula model is used to provide confidentiality while the Biba model is used to provide integrity.
Bell-LaPadula uses security levels, and Biba uses integrity levels.
Their rules sound very similar: simple and * rules—one writing one way and one reading another way. A tip for how to remember them is that if the word "simple" is used, the rule is talking about reading. If the rule uses * or "star," it is talking about writing.

The Clark-Wilson Model

The Clark-Wilson model was developed after Biba and takes some different approaches to protecting the integrity of information. This model uses the following elements:
Users Active agents
Transformation procedures (TPs) Programmed abstract operations, such as read, write, and modify
Constrained data items (CDIs) Can be manipulated only by TPs
Unconstrained data items (UDIs) Can be manipulated by users via primitive read and write operations
Integrity verification procedures (IVPs) Check the consistency of CDIs with external reality
When an application uses the Clark-Wilson model, it separates data into one subset that needs to be highly protected, which is referred to as a constrained data item (CDI), and another subset that does not require a high level of protection, which is called an unconstrained data item (UDI). Users cannot modify critical data (CDI) directly. Instead, the subject (user) must be authenticated to a piece of software, and the software procedures (TPs) will carry out the operations on behalf of the user.
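The Clark-Wilson access triple (subject, transformation procedure, CDI) can be sketched as a small gatekeeper. The user names, TP names, and CDI names below are hypothetical, invented purely for illustration:

```python
# Toy sketch of Clark-Wilson: users touch CDIs only through TPs,
# and only the (subject, TP, CDI) triples on the list are permitted.

ALLOWED_TRIPLES = {
    ("alice", "post_journal_entry", "general_ledger"),
    ("bob",   "approve_entry",      "general_ledger"),
}

def run_tp(user: str, tp: str, cdi: str) -> str:
    """Execute a transformation procedure on behalf of a user."""
    if (user, tp, cdi) not in ALLOWED_TRIPLES:
        raise PermissionError(f"{user} may not run {tp} on {cdi}")
    # In a real system the TP would also be audited here
    # (Clark-Wilson requires well-formed, logged transactions).
    return f"{tp} applied to {cdi}"

# alice can post entries, but not approve them (separation of duties).
run_tp("alice", "post_journal_entry", "general_ledger")
```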

Goals of Integrity Models
The following are the three main goals of integrity models:
  • Prevent unauthorized users from making modifications
  • Prevent authorized users from making improper modifications (separation of duties)
  • Maintain internal and external consistency (well-formed transaction)

Clark-Wilson addresses each of these goals in its model. Biba only addresses the first goal.

Covert channels are of two types: storage and timing. In a covert storage channel, processes are able to communicate through some type of storage space on the system. In a covert timing channel, one process relays information to another by modulating its use of system resources.
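A covert timing channel can be simulated in a few lines: the sender modulates how long an operation appears to take, and the receiver recovers bits by thresholding the observed delays. This is a deterministic toy (no actual process scheduling or sleeping is involved); the delay values are arbitrary choices for illustration:

```python
# Toy simulation of a covert timing channel.
# Bit 1 -> a slow operation, bit 0 -> a fast one.

SLOW, FAST = 0.10, 0.01   # seconds (hypothetical delay values)

def encode(bits):
    """Sender: choose a delay per bit."""
    return [SLOW if b else FAST for b in bits]

def decode(delays, threshold=0.05):
    """Receiver: recover bits from the observed delays."""
    return [1 if d > threshold else 0 for d in delays]

assert decode(encode([1, 0, 1, 1])) == [1, 0, 1, 1]
```

The point of the sketch is that no explicit data is ever transferred; the information leaks entirely through resource-usage timing, which is why such channels evade normal access controls.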

The Noninterference Model

This concept is implemented to ensure any actions that take place at a higher security level do not affect, or interfere with, actions that take place at a lower level.
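One common way to state this formally is that the system state visible to a low-level subject must be identical whether or not high-level actions occurred. A toy check of that property, with an invented event-trace representation:

```python
# Toy noninterference check: purging high-level events from a trace
# must not change what a low-level observer sees.

def low_view(events):
    """What a low-clearance observer can see of an event trace."""
    return [e for lvl, e in events if lvl == "low"]

trace_with_high = [("low", "a"), ("high", "secret op"), ("low", "b")]
trace_without   = [("low", "a"), ("low", "b")]

# The high-level action does not interfere with the low-level view.
assert low_view(trace_with_high) == low_view(trace_without)
```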

The Lattice Model

A lattice is a mathematical construct that is built upon the notion of a group. The most common definition of the lattice model is "a structure consisting of a finite partially ordered set together with least upper and greatest lower bound operators on the set."
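For labels built from compartment sets, the bound operators have a natural reading: the least upper bound is the union of compartments and the greatest lower bound is their intersection. A minimal sketch under that (simplifying) assumption, ignoring hierarchical levels:

```python
# Toy lattice operations over compartment sets.
# Partial order: A dominates B iff B is a subset of A.

def lub(a: frozenset, b: frozenset) -> frozenset:
    """Least upper bound: smallest label dominating both."""
    return a | b

def glb(a: frozenset, b: frozenset) -> frozenset:
    """Greatest lower bound: largest label dominated by both."""
    return a & b

nato = frozenset({"NATO"})
nuclear = frozenset({"NUCLEAR"})

# The only label that dominates both compartments is their union;
# the only label both dominate is the empty (system-low) label.
assert lub(nato, nuclear) == frozenset({"NATO", "NUCLEAR"})
assert glb(nato, nuclear) == frozenset()
```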

The Brewer and Nash model

Also called the Chinese Wall model, it was created to provide access controls that can change dynamically depending upon a user's previous actions. The main goal of the model is to protect against conflicts of interest caused by users' access attempts.
Sort of like multi-tenancy.
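The dynamic nature of the model is easy to see in code: whether an access is allowed depends on the user's history, not on a static label. The company and conflict-class names below are hypothetical:

```python
# Toy Chinese Wall check: a user may not access a company that sits in
# the same conflict-of-interest class as one they have already accessed.

CONFLICT_CLASSES = {
    "BankA": "banks",
    "BankB": "banks",
    "OilA":  "oil",
}

def may_access(history: set, company: str) -> bool:
    """Allowed unless a different company in the same class was touched."""
    cls = CONFLICT_CLASSES[company]
    return all(
        CONFLICT_CLASSES[c] != cls or c == company
        for c in history
    )

history = {"BankA"}
assert not may_access(history, "BankB")  # same conflict class: denied
assert may_access(history, "OilA")       # different class: allowed
assert may_access(history, "BankA")      # revisiting is allowed
```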

Graham-Denning model 

Bell-LaPadula and Biba don't define how the security and integrity ratings are defined and modified, nor do they provide a way to delegate or transfer access rights. G-D model addresses these issues and defines a set of basic rights in terms of commands that a specific subject can execute on an object. This model has eight rules:
  • How to securely create an object or a subject (2 rules)
  • How to securely delete an object or a subject (2 rules)
  • How to securely provide the read, grant, delete, and transfer access rights (4 rules)

The Harrison-Ruzzo-Ullman Model

The Harrison-Ruzzo-Ullman (HRU) model deals with access rights of subjects and the integrity of those rights. A subject can carry out only a finite set of operations on an object.

The Access Control Matrix Model 

This is a model in which access decisions are based on objects' ACLs and
subjects' capability tables.
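The relationship between the matrix, ACLs, and capability tables is worth making concrete: a column of the matrix (one object, all subjects) is that object's ACL, while a row (one subject, all objects) is that subject's capability table. A sketch with invented subjects and objects:

```python
# Toy access control matrix stored as a sparse dict:
# key = (subject, object), value = set of rights.

MATRIX = {
    ("alice", "file1"): {"read", "write"},
    ("bob",   "file1"): {"read"},
    ("alice", "file2"): {"read"},
}

def capability_table(subject):
    """A row of the matrix: everything one subject can do."""
    return {o: r for (s, o), r in MATRIX.items() if s == subject}

def acl(obj):
    """A column of the matrix: everyone who can touch one object."""
    return {s: r for (s, o), r in MATRIX.items() if o == obj}

assert capability_table("alice") == {"file1": {"read", "write"},
                                     "file2": {"read"}}
assert acl("file1") == {"alice": {"read", "write"}, "bob": {"read"}}
```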

Security Modes 

  • Dedicated Security Mode: All users must have …
    • Proper clearance for all information on the system
    • Formal access approval for all information on the system
    • A signed NDA for all information on the system
    • A valid need to know for all information on the system
    • All users can access all data.
  • System High-Security Mode: All users must have …
    • Proper clearance for all information on the system
    • Formal access approval for all information on the system
    • A signed NDA for all information on the system
    • A valid need to know for some information on the system
    • All users can access some data, based on their need to know.
  • Compartmented Security Mode: All users must have …
    • Proper clearance for the highest level of data classification on the system
    • Formal access approval for all information they will access on the system
    • A signed NDA for all information they will access on the system
    • A valid need to know for some of the information on the system
    • All users can access some data, based on their need to know and formal access approval.
  • Multilevel Security Mode: All users must have …
    • Proper clearance for all information they will access on the system
    • Formal access approval for all information they will access on the system
    • A signed NDA for all information they will access on the system
    • A valid need to know for some of the information on the system
    • All users can access some data, based on their need to know, clearance, and formal access approval.
In the United States, the National Computer Security Center (NCSC) is an organization within the National Security Agency (NSA) that is responsible for evaluating computer systems and products. It has a group, called the Trusted Product Evaluation Program (TPEP), that oversees the testing by approved evaluation entities of commercial products against a specific set of criteria.
The product is assigned an assurance rating based on the Orange Book (moving toward the Common Criteria).

The Orange Book

The U.S. Department of Defense developed the Trusted Computer System Evaluation Criteria (TCSEC), which is used to evaluate operating systems, applications, and different products.

TCSEC provides a classification system that is divided into hierarchical divisions of assurance levels:
  • A. Verified protection
  • B. Mandatory protection
  • C. Discretionary protection
  • D. Minimal security

Each division can have one or more numbered classes with a corresponding set of requirements that must be met for a system to achieve that particular rating. The classes with higher numbers offer a greater degree of trust and assurance. So
B2 would offer more trust than B1, and C2 would offer more trust than C1.

Seven different areas:
  • Security policy The policy must be explicit and well defined and enforced by the mechanisms within the system.
  • Identification Individual subjects must be uniquely identified.
  • Labels Access control labels must be associated properly with objects.
  • Documentation Documentation must be provided, including test, design, and specification documents, user guides, and manuals.
  • Accountability Audit data must be captured and protected to enforce accountability.
  • Life-cycle assurance Software, hardware, and firmware must be able to be tested individually to ensure that each enforces the security policy in an effective manner throughout their lifetimes.
  • Continuous protection The security mechanisms and the system as a whole must perform predictably and acceptably in different situations continuously.
These categories are evaluated independently, but the rating is a sum total of these items.

Security Levels
  • Division D: Minimal Protection
Systems that have been evaluated but fail to meet the criteria and requirements of the higher divisions.
  • Division C: Discretionary Protection
    • C1: Discretionary Security Protection
      • separation of users and information
      • access control
      • protected execution domain
      • validating the system's operational integrity. 
      • design documentation, test documentation, a facility manual, and user manuals.
    • C2: Controlled Access Protection
      • Users need to be identified individually
      • Security relevant events are audited, and records protected
      • provide resource, or object, isolation
      • object reuse protection (storage must be cleared before it is reused)
      • cannot guarantee the system will not be compromised, but makes compromise harder
      • C2, overall, is seen as the most reasonable class for commercial applications, but the level of protection is still relatively weak.
  • Division B: Mandatory Protection
Mandatory access control is enforced by the use of security labels. The architecture is based on the Bell-LaPadula security model, and evidence of reference monitor enforcement must be available.
    • B1: Labeled Security
      • data object must contain a classification label and each subject must have a clearance label. 
      • system must compare the subject's and object's security labels 
      • The security policy is based on an informal statement
      • design specifications are reviewed and verified.
      • This rating is intended for environments that require systems to handle classified data.
    • B2: Structured Protection
      • security policy is clearly defined and documented
      • system design and implementation reviewed/tested 
      • requires more stringent authentication mechanisms 
      • Subjects and devices require labels, and the system must not allow covert channels. 
      • trusted path for logon and authentication
      • The type of environment that would require B2 systems is one that processes sensitive data that require a higher degree of security. 
    • B3: Security Domains
Note: Security labels are not required until security rating B; thus, C2 does not require security labels but B1 does.
      • more granularity is provided in each protection mechanism
      • design and implementation should not provide too much complexity
      • reference monitor components must be small enough to test properly and be tamperproof. 
      • security administrator role is clearly defined, and the system must be able to recover from failures, including system startup, without its security level being compromised
      • The type of environment that requires B3 systems is a highly secured environment that processes very sensitive information. It requires systems that are highly resistant to penetration.
  • Division A: Verified Protection
    • The security mechanisms between B3 and A1 are not very different, but the way the system was designed and developed is evaluated in a much more structured and stringent procedure.
    • A1: Verified Design
      • assurance of an A1 system is higher than a B3 system because of the formality in the way the A1 system was designed
      • The type of environment that would require A1 systems is the most secure of secured environments. This type of environment deals with top-secret information and cannot adequately trust anyone using the systems without strict authentication, restrictions, and auditing.
TCSEC addresses confidentiality, but not integrity. Functionality of the security mechanisms and the assurance of those mechanisms are not evaluated separately, but rather are combined and rated as a whole.

The Orange Book focuses on the operating system and the rainbow series on the other aspects.

The Trusted Network Interpretation (TNI), also called the Red Book because of the color of its cover, addresses security evaluation topics for networks and
network components.

Information Technology Security Evaluation Criteria (ITSEC)

ITSEC was developed in Europe as an alternative to TCSEC (both have since been superseded by the Common Criteria). ITSEC evaluates two main attributes of a system's protection mechanisms:
  • Functionality
The services that are provided to the subjects (access control mechanisms, auditing, authentication, and so on) are examined and measured. When functionality is evaluated, it is tested to see if the system's protection mechanisms deliver what its vendor says they deliver.
  • Assurance 
is the degree of confidence in the protection mechanisms, and their effectiveness and capability to perform consistently. Assurance is generally tested by examining development practices, documentation, configuration management, and testing mechanisms.

Differences between ITSEC and TCSEC:
  • TCSEC bundles functionality and assurance into one rating, whereas ITSEC evaluates these two attributes separately. 
  • ITSEC was developed to provide more flexibility than TCSEC, and ITSEC addresses integrity, availability, and confidentiality, whereas TCSEC addresses only confidentiality. 
  • ITSEC also addresses networked systems, whereas TCSEC deals with stand-alone systems.
Mapping:

E0 = D
F1 + E1 = C1
F2 + E2 = C2
F3 + E3 = B1
F4 + E4 = B2
F5 + E5 = B3
F5 + E6 = A1
F6 = Systems that provide high integrity
F7 = Systems that provide high availability
F8 = Systems that provide data integrity during communication
F9 = Systems that provide high confidentiality (like cryptographic devices)
F10 = Networks with high demands on confidentiality and integrity

The Common Criteria 

Provides more flexibility by evaluating a product against a protection profile, which is structured to address a real-world security need. So while the Orange Book says, "Everyone march in this direction in this form using this path," the Common Criteria asks, "Okay, what are the threats we are facing today and what are the best ways of battling them?"
  • assigned an Evaluation Assurance Level (EAL). 
    • EAL1 Functionally tested
    • EAL2 Structurally tested
    • EAL3 Methodically tested and checked
    • EAL4 Methodically designed, tested, and reviewed
    • EAL5 Semiformally designed and tested
    • EAL6 Semiformally verified design and tested
    • EAL7 Formally verified design and tested
The protection profile provides a means for a consumer, or others, to identify specific security needs; this is the security problem to be conquered. The protection profile goes on to provide the necessary goals and protection mechanisms to achieve the required level of security.


Like other evaluation criteria before it, the Common Criteria works to answer two basic questions about products being evaluated: what do its security mechanisms do (functionality), and how sure are you of that (assurance)?

Certification is the comprehensive technical evaluation of the security components and their compliance for the purpose of accreditation. The goal of a certification process is to ensure that a system, product, or network is right for the customer's purposes.

Accreditation is the formal acceptance of the adequacy of a system's overall security and functionality by management. The certification information is presented to management, and once management is satisfied with the system's overall security, it makes a formal accreditation statement.

How to set up a security program:

  • Plan and Organize
    • Establish management commitment
    • Establish oversight steering committee
    • Assess business drivers
    • Carry out a threat profile on the organization
    • Carry out a risk assessment
    • Develop security architectures at an organizational, application, network, and component level
    • Identify solutions per architecture level
    • Obtain management approval to move forward
  • Implement
    • Assign roles and responsibilities
    • Develop and implement security policies, procedures, standards, baselines, and guidelines
    • Identify sensitive data at rest and in transit
    • Implement the following blueprints:
      • Asset identification and management
      • Risk management
      • Vulnerability management
      • Compliance
      • Identity management and access control
      • Change control
      • Software development life cycle
      • Business continuity planning
      • Awareness and training
      • Physical security
      • Incident response
    • Implement solutions (administrative, technical, physical) per blueprint
      • Develop auditing and monitoring solutions per blueprint
      • Establish goals, service level agreements (SLAs), and metrics per blueprint
  • Operate and Maintain
    • Follow procedures to ensure all baselines are met in each implemented blueprint
    • Carry out internal and external audits
    • Carry out tasks outlined per blueprint
    • Manage service level agreements per blueprint
  • Monitor and Evaluate
    • Review logs, audit results, collected metric values, and SLAs per blueprint
    • Assess goal accomplishments per blueprint
    • Carry out quarterly meetings with steering committee
    • Develop improvement steps and integrate into the Plan and Organize phase

The Zachman framework is a two-dimensional model that uses six basic communication interrogatives (What, How, Where,Who, When, and Why) intersecting with different levels (Planner, Owner, Designer, Builder, Implementer, and Worker) to give a holistic view of the enterprise.

Here is a helpful hint for row Descriptions ( = means is equal to :-) ):
  • Scope=Contextual = Planner
  • Business Model= Conceptual = Owner
  • System Model = Logical = Designer
  • Technology Model = Physical = Builder
  • Detailed Representations = Detailed = Sub Contractor
Column Descriptions

What(data), Where(Network), When(time), Why(Motivation), Who(People) and How(Function).

See my Enterprise Architecture blog post for a discussion of the frameworks including Zachman.

When an enterprise security architecture is being developed, the following items must be understood and followed:
  • strategic alignment, process enhancement, business enablement, and security effectiveness.
Strategic alignment means the business drivers and the regulatory and legal requirements are being met by the security architecture.



SABSA
A group developed the Sherwood Applied Business Security Architecture (SABSA), which is based on the Zachman framework. When building a security architecture, you can visit www.sabsainstitute.org/home.aspx to learn more about this approach.

A Few Threats

  • Maintenance Hooks
  • Time-of-Check/Time-of-Use Attacks
Specific attacks can take advantage of the way a system processes requests and performs tasks. A time-of-check/time-of-use (TOC/TOU) attack deals with the sequence of steps a system uses to complete a task. This type of attack takes
advantage of the dependency on the timing of events that take place in a multitasking operating system. An example of a TOC/TOU attack is if process 1 validates the authorization of a user to open a noncritical text file and
process 2 carries out the open command. If the attacker can change out this noncritical text file with a password file while process 1 is carrying out its task, she has just obtained access to this critical file. (It is a flaw within the code that allows this type of compromise to take place.)

  • Buffer Overflows


Tips
- Two systems can have the exact same hardware, software components, and applications, but provide different levels of protection because of the different security policies and security models the two systems were built upon.
-  A CPU contains a control unit, which controls the timing of the execution of instructions and data, and an ALU, which performs mathematical functions and logical operations.
-  Most systems use protection rings. The more privileged processes run in the lower-numbered rings and have access to all or most of the system resources. Applications run in higher-numbered rings and have access to a smaller amount of resources.
-  Operating system processes are executed in privileged or supervisor mode, and applications are executed in user mode, also known as "problem state."
-  Secondary storage is nonvolatile and can be a hard drive, CD-ROM drive, floppy drive, tape backup, or a jump drive.
-  Virtual storage combines RAM and secondary storage so the system seems to have a larger bank of memory.
-  A deadlock occurs when two processes each hold a resource the other needs and each waits for the other to release it, so neither can proceed.
-  Security mechanisms can focus on different issues, work at different layers, and vary in complexity.
-  The more complex a security mechanism is, the less assurance it can usually provide.
-  Not all system components fall under the trusted computing base (TCB), which includes only those system components that enforce the security policy directly and protect the system. These components are within the security perimeter.
-  Components that make up the TCB are hardware, software, and firmware that provide some type of security protection.
-  A security perimeter is an imaginary boundary that has trusted components within it (those that make up the TCB) and untrusted components outside it.
-  The reference monitor concept is an abstract machine that ensures all subjects have the necessary access rights before accessing objects. Therefore, it mediates all accesses to objects by subjects.
-  The security kernel is the mechanism that actually enforces the rules of the reference monitor concept.
-  The security kernel must isolate processes carrying out the reference monitor concept, must be tamperproof, must be invoked for each access attempt, and must be small enough to be properly tested.
-  A security domain is all the objects available to a subject.
-  Processes need to be isolated, which can be done through segmented memory addressing, encapsulation of objects, time multiplexing of shared resources, naming distinctions, and virtual mapping.
-  The level of security a system provides depends upon how well it enforces the security policy.
-  A multilevel security system processes data at different classifications (security levels), and users with different clearances (security levels) can use the system.
-  Processes should be assigned least privilege so they have just enough system privileges to fulfill their tasks and no more.
-  Some systems provide security at different layers of their architectures, which is called layering. This separates the processes and provides more protection for them individually.
-  Data hiding occurs when processes work at different layers and have layers of access control between them. Processes need to know how to communicate only with each other's interfaces.
-  A security model maps the abstract goals of a security policy to computer system terms and concepts. It gives the
security policy structure and provides a framework for the system.
-  A closed system is often proprietary to the manufacturer or vendor, whereas the open system allows for more interoperability.
-  The Bell-LaPadula model deals only with confidentiality, while the Biba and Clark-Wilson models deal only with integrity.
-  A state machine model deals with the different states a system can enter. If a system starts in a secure state, all state transitions take place securely, and the system shuts down and fails securely, the system will never end up in an insecure state.
-  A lattice model provides an upper bound and a lower bound of authorized access for subjects.
-  An information flow security model does not permit data to flow to an object in an insecure manner.
-  The Bell-LaPadula model has a simple security rule, which means a subject cannot read data from a higher level (no read up). The *-property rule means a subject cannot write to an object at a lower level (no write down). The strong star property rule dictates that a subject can read and write to objects at its own security level.
-  The Biba model does not let subjects write to objects at a higher integrity level (no write up), and it does not let subjects read data at a lower integrity level (no read down). This is done to protect the integrity of the data.
-  The Bell-LaPadula model is used mainly in military systems. The Biba and Clark-Wilson models are used in the commercial sector.
-  The Clark-Wilson model dictates that subjects can only access objects through applications. This model also illustrates how to provide functionality for separation of duties and requires auditing tasks within software.
-  If a system is working in a dedicated security mode, it only deals with one level of data classification, and all users must have this level of clearance to be able to use the system.
-  Compartmented and multilevel security modes enable the system to process data classified at different classification levels.
-  Trust means that a system uses all of its protection mechanisms properly to process sensitive data for many types of users. Assurance is the level of confidence you have in this trust and that the protection mechanisms behave properly in all circumstances predictably.
-  The Orange Book, also called Trusted Computer System Evaluation Criteria (TCSEC), was developed to evaluate systems built to be used mainly by the military. Its use was expanded to evaluate other types of products.
-  In the Orange Book, D classification means a system provides minimal protection and is used for systems that were evaluated but failed to meet the criteria of higher divisions.
-  In the Orange Book, the C division deals with discretionary protection, and the B division deals with mandatory protection (security labels).
-  In the Orange Book, the A classification means the system's design and level of protection are verifiable and provide the highest level of assurance and trust.
-  In the Orange Book, C2 requires object reuse protection and auditing.
-  In the Orange Book, B1 is the first rating that requires security labels.
-  In the Orange Book, B2 requires security labels for all subjects and devices, the existence of a trusted path, routine covert channel analysis, and the provision of separate administrator functionality.
-  The Orange Book deals mainly with stand-alone systems, so a range of books were written to cover many other topics in security. These books are called the Rainbow Series.
-  ITSEC evaluates the assurance and functionality of a system's protection mechanisms separately, whereas TCSEC combines the two into one rating.
-  The Common Criteria was developed to provide globally recognized evaluation criteria and is in use today. It combines sections of TCSEC, ITSEC, CTCPEC, and the Federal Criteria.
-  The Common Criteria uses protection profiles and ratings from EAL1 to EAL7.
-  Certification is the technical evaluation of a system or product and its security components. Accreditation is management's formal approval and acceptance of the security provided by a system.
-  A covert channel is an unintended communication path that transfers data in a way that violates the security policy. There are two types: timing and storage covert channels.
-  A covert timing channel enables a process to relay information to another process by modulating its use of system resources.
-  A covert storage channel enables a process to write data to a storage medium so another process can read it.
-  A maintenance hook is developed to let a programmer into the application quickly for maintenance. This should be removed before the application goes into production or it can cause a serious security risk.
-  An execution domain is where instructions are executed by the CPU. The operating system's instructions are executed in a privileged mode, and applications' instructions are executed in user mode.
-  Process isolation ensures that multiple processes can run concurrently and the processes will not interfere with each other or affect each other's memory segments.
-  The only processes that need complete system privileges are located in the system's kernel.
-  TOC/TOU stands for time-of-check/time-of-use. This is a class of asynchronous attacks.
-  The Biba model addresses the first goal of integrity, which is to prevent unauthorized users from making modifications.
-  The Clark-Wilson model addresses all three integrity goals: prevent unauthorized users from making modifications, prevent authorized users from making improper modifications, and maintain internal and external consistency.
-  In the Clark-Wilson model, users can only access and manipulate objects through programs. It uses the access triple, which is subject-program-object.


These are notes collected from various sources.