November 1, 2004
Slide #1-1
Chapter 1: Introduction
Components of computer security
Threats
Policies and mechanisms
The role of trust
Assurance
Operational Issues
Human Issues
Introduction to Computer Security 2004 Matt Bishop Slide #1-2
Basic Components
Confidentiality
Keeping data and resources hidden
Integrity
Data integrity (integrity)
Origin integrity (authentication)
Availability
Enabling access to data and resources
Classes of Threats
Disclosure
Snooping
Deception
Modification, spoofing, repudiation of origin, denial of receipt
Disruption
Modification
Usurpation
Modification, spoofing, delay, denial of service
Goals of Security
Prevention
Prevent attackers from violating security policy
Detection
Detect attackers' violations of security policy
Recovery
Stop attack, assess and repair damage
Continue to function correctly even if attack succeeds
Mechanisms
Assumed to enforce policy
Supporting mechanisms assumed to work correctly
Types of Mechanisms
secure (all reachable states are secure)
precise (the reachable states are exactly the secure states)
broad (some reachable states are not secure)
Assurance
Specification
Requirements analysis
Statement of desired functionality
Design
How system will meet specification
Implementation
Programs/systems that carry out design
Operational Issues
Cost-Benefit Analysis
Is it cheaper to prevent or recover?
Risk Analysis
Should we protect something?
How much should we protect this thing?
Human Issues
Organizational Problems
Power and responsibility
Financial benefits
People problems
Outsiders and insiders
Social engineering
Tying Together
Threats → Policy → Specification → Design → Implementation → Operation
Key Points
Policy defines security, and mechanisms enforce security
Confidentiality
Integrity
Availability
Overview
Protection state of system
Describes current settings, values of system relevant to protection
Description
[Figure: access control matrix — rows are subjects s1, …, sn; columns are objects (entities) o1, …, om and subjects s1, …, sn]
Subjects S = { s1, …, sn }
Objects O = { o1, …, om }
Rights R = { r1, …, rk }
Entries A[si, oj] ⊆ R
A[si, oj] = { rx, …, ry } means subject si has rights rx, …, ry over object oj
Example 1
Processes p, q
Files f, g
Rights r, w, x, a, o

      f     g     p      q
p    rwo    r    rwxo    w
q     a    ro     r     rwxo
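The Example 1 matrix can be represented directly in code. A minimal sketch (not from the slides) storing the ACM as a dict of dicts of right sets:

```python
# A minimal sketch of the access control matrix from Example 1,
# stored as a nested dict: acm[subject][object] = set of rights.
acm = {
    "p": {"f": set("rwo"), "g": set("r"),  "p": set("rwxo"), "q": set("w")},
    "q": {"f": set("a"),   "g": set("ro"), "p": set("r"),    "q": set("rwxo")},
}

def has_right(subject, obj, right):
    """True iff `right` appears in A[subject, obj]."""
    return right in acm.get(subject, {}).get(obj, set())

print(has_right("p", "f", "w"))  # True: p can write f
print(has_right("q", "g", "w"))  # False: q cannot write g
```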
Example 2
Procedures inc_ctr, dec_ctr, manage
Variable counter
Rights +, −, call

          counter  inc_ctr  dec_ctr  manage
inc_ctr      +
dec_ctr      −
manage                call     call    call
State Transitions
Change the protection state of system; ⊢ represents transition
Xi ⊢ Xi+1: command moves system from state Xi to Xi+1
Xi ⊢* Xi+1: a sequence of commands moves system from state Xi to Xi+1
Primitive Operations
create subject s; create object o
Creates new row and column in ACM; creates new column only
enter r into A[s, o]; delete r from A[s, o]
Adds right r to, or removes right r from, entry A[s, o]
destroy subject s; destroy object o
Deletes row and column from ACM; deletes column only
Creating File
Process p creates file f with r and w permission
command create_file(p, f)
    create object f;
    enter own into A[p, f];
    enter r into A[p, f];
    enter w into A[p, f];
end
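A hedged sketch of how the create and enter primitives compose into the create_file command; the helper names (create_subject, enter, and so on) and the dict representation are illustrative, not from the slides:

```python
# Sketch: primitive operations on an ACM stored as a dict of dicts of
# right sets, plus the create_file command built from them.
def create_subject(A, s):
    A[s] = {}                      # new row...
    for row in A.values():
        row.setdefault(s, set())   # ...and new column for the subject

def create_object(A, o):
    for row in A.values():
        row.setdefault(o, set())   # new column only

def enter(A, r, s, o):
    A[s].setdefault(o, set()).add(r)   # add right r to A[s, o]

def create_file(A, p, f):
    """The create_file(p, f) command: p becomes owner, gets r and w."""
    create_object(A, f)
    enter(A, "own", p, f)
    enter(A, "r", p, f)
    enter(A, "w", p, f)

A = {}
create_subject(A, "p")
create_file(A, "p", "f")
print(sorted(A["p"]["f"]))  # ['own', 'r', 'w']
```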
Mono-Operational Commands
Make process p the owner of file g
command make_owner(p, g)
    enter own into A[p, g];
end
Mono-operational command
Single primitive operation in this command
Conditional Commands
Let p give q r rights over f, if p owns f
command grant_read_file_1(p, f, q)
    if own in A[p, f]
    then
        enter r into A[q, f];
end
Mono-conditional command
Single condition in this command
Multiple Conditions
Let p give q r and w rights over f, if p owns f and p has c rights over q
command grant_read_file_2(p, f, q)
    if own in A[p, f] and c in A[p, q]
    then
        enter r into A[q, f];
        enter w into A[q, f];
end
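The multi-conditional command can be sketched the same way; the dict-based ACM and the helper name are illustrative assumptions:

```python
def grant_read_file_2(A, p, f, q):
    # Conditional command: both conditions must hold before any
    # primitive operation runs (conditions are conjoined, never negated).
    if "own" in A[p].get(f, set()) and "c" in A[p].get(q, set()):
        A[q].setdefault(f, set()).add("r")
        A[q].setdefault(f, set()).add("w")

A = {"p": {"f": {"own"}, "q": {"c"}}, "q": {}}
grant_read_file_2(A, "p", "f", "q")
print(sorted(A["q"]["f"]))  # ['r', 'w']
```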
Key Points
Access control matrix is the simplest abstraction for representing protection state
Transitions alter the protection state
6 primitive operations alter the matrix
Transitions can be expressed as commands composed of these operations and, possibly, conditions
Overview
Safety Question HRU Model
What Is Secure?
Adding a generic right r where there was not one is leaking
If a system S, beginning in initial state s0, cannot leak right r, it is safe with respect to the right r
Safety Question
Does there exist an algorithm for determining whether a protection system S with initial state s0 is safe with respect to a generic right r?
Here, safe = secure for an abstract model
Mono-Operational Commands
Answer: yes Sketch of proof:
Consider a minimal sequence of commands c1, …, ck that leaks the right
Can omit delete, destroy
Can merge all creates into one
Worst case: insert every right into every entry; with s subjects and o objects initially, and n rights, upper bound is k ≤ n(s+1)(o+1)
General Case
Answer: no Sketch of proof:
Reduce halting problem to safety problem
Turing Machine review:
Infinite tape in one direction
States K, symbols M; distinguished blank b
Transition function δ(k, m) = (k′, m′, L) means in state k, symbol m on the tape location is replaced by symbol m′, the head moves left one square, and the machine enters state k′
Halting state is qf; TM halts when it enters this state
Mapping
Tape cells 1-4 hold A B C D; head is on cell 3 in state k

      s1    s2    s3    s4
s1    A    own
s2          B    own
s3               C k   own
s4                     D end
Mapping
Tape cells 1-4 now hold A B X D; head is on cell 4 in state k1

      s1    s2    s3    s4
s1    A    own
s2          B    own
s3                X    own
s4                     D k1 end

After δ(k, C) = (k1, X, R), where k is the current state and k1 the next state
Command Mapping
δ(k, C) = (k1, X, R) at an intermediate cell becomes

command c_{k,C}(s3, s4)
    if own in A[s3, s4] and k in A[s3, s3] and C in A[s3, s3]
    then
        delete k from A[s3, s3];
        delete C from A[s3, s3];
        enter X into A[s3, s3];
        enter k1 into A[s4, s4];
end
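One way to see the construction concretely is to apply such a command to a small matrix; this toy sketch (names and the dict representation are assumptions, not from the slides) performs the intermediate-cell step:

```python
# Sketch of one Turing-machine transition delta(k, C) = (k1, X, R) as an
# ACM command: rights on the diagonal encode tape symbols and the current
# state; the `own` right links cell s_i to its right neighbor s_{i+1}.
def step_command(A, si, sj, k, C, k1, X):
    """Apply c_{k,C}(si, sj) if its conditions hold (head moves right)."""
    if "own" in A[si][sj] and k in A[si][si] and C in A[si][si]:
        A[si][si] -= {k, C}        # erase old state and symbol
        A[si][si].add(X)           # write new symbol
        A[sj][sj].add(k1)          # move head right, enter new state

# Tape ... C D ..., head on cell 3 in state k:
A = {"s3": {"s3": {"C", "k"}, "s4": {"own"}},
     "s4": {"s4": {"D", "end"}}}
step_command(A, "s3", "s4", "k", "C", "k1", "X")
print(sorted(A["s3"]["s3"]), sorted(A["s4"]["s4"]))  # ['X'] ['D', 'end', 'k1']
```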
Mapping
Tape cells 1-5 hold A B X Y b; head is on cell 5 in state k2

      s1    s2    s3    s4    s5
s1    A    own
s2          B    own
s3                X    own
s4                      Y    own
s5                           b k2 end

After δ(k1, D) = (k2, Y, R), where k1 is the current state and k2 the next state
Command Mapping
δ(k1, D) = (k2, Y, R) at the rightmost cell becomes

command crightmost_{k1,D}(s4, s5)
    if end in A[s4, s4] and k1 in A[s4, s4] and D in A[s4, s4]
    then
        delete end from A[s4, s4];
        create subject s5;
        enter own into A[s4, s5];
        enter end into A[s5, s5];
        delete k1 from A[s4, s4];
        delete D from A[s4, s4];
        enter Y into A[s4, s4];
        enter k2 into A[s5, s5];
end
Rest of Proof
Protection system exactly simulates a TM
Exactly 1 end right in ACM
1 right in entries corresponds to state
Thus, at most 1 applicable command
If TM enters state qf, then right has leaked
If safety question decidable, then represent TM as above and determine if qf leaks
This implies the halting problem is decidable, a contradiction
Other Results
Set of unsafe systems is recursively enumerable
Delete the create primitive; then the safety question is PSPACE-complete
Delete the destroy and delete primitives; then the safety question is still undecidable
Such systems are monotonic
Safety question for monoconditional, monotonic protection systems is decidable
Safety question for monoconditional protection systems with create, enter, delete (and no destroy) is decidable
Key Points
Safety problem is undecidable in general
Limiting the scope of systems can make the problem decidable
Underlying both
Trust
Overview
Policies
Trust
Nature of Security Mechanisms
Example Policy
Security Policy
Policy partitions system states into:
Authorized (secure)
These are states the system can enter
Unauthorized (nonsecure)
If the system enters any of these states, it's a security violation
Secure system
Starts in authorized state
Never enters unauthorized state
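This definition suggests a simple check, sketched here with illustrative names (not from the slides): explore all reachable states and verify each is authorized.

```python
from collections import deque

# Toy sketch: a system is secure if, starting from the initial state,
# every state reachable through its transitions is authorized.
def is_secure(start, transitions, authorized):
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state not in authorized:
            return False           # reached an unauthorized state
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

transitions = {"s0": ["s1"], "s1": ["s2"], "s2": []}
print(is_secure("s0", transitions, {"s0", "s1", "s2"}))  # True
print(is_secure("s0", transitions, {"s0", "s1"}))        # False
```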
Confidentiality
X set of entities, I information
I has confidentiality property with respect to X if no x ∈ X can obtain information from I
I can be disclosed to others
Example:
X set of students, I final exam answer key
I is confidential with respect to X if students cannot obtain the final exam answer key
Integrity
X set of entities, I information
I has integrity property with respect to X if all x ∈ X trust information in I
Types of integrity:
trust I, its conveyance and protection (data integrity)
I information about origin of something or an identity (origin integrity, authentication)
I resource: means resource functions as it should (assurance)
Availability
X set of entities, I resource
I has availability property with respect to X if all x ∈ X can access I
Types of availability:
traditional: x gets access or not
quality of service: promised a level of access (for example, a specific level of bandwidth), and that level is not met, even though some access is achieved
Policy Models
Abstract description of a policy or class of policies Focus on points of interest in policies
Security levels in multilevel security models
Separation of duty in Clark-Wilson model
Conflict of interest in Chinese Wall model
Confidentiality policy
Policy protecting only confidentiality
Integrity policy
Policy protecting only integrity
Trust
Administrator installs patch
1. Trusts patch came from vendor, not tampered with in transit
2. Trusts vendor tested patch thoroughly
3. Trusts vendor's test environment corresponds to local environment
4. Trusts patch is installed correctly
2. Preconditions hold in environment in which S is to be used
3. S transformed into executable S′ whose actions follow the source code
Compiler bugs, linker/loader/library problems
Hardware bugs (Pentium f00f bug, for example)
Question
Policy disallows cheating
Includes copying homework, with or without permission
CS class has students do homework on computer
Anne forgets to read-protect her homework file
Bill copies it
Who cheated?
Anne, Bill, or both?
Answer Part 1
Bill cheated
Policy forbids copying homework assignment
Bill did it
System entered unauthorized state (Bill having a copy of Anne's assignment)
Answer Part 2
Anne didn't protect her homework
Not required by security policy
She didn't breach security
If policy said students had to read-protect homework files, then Anne did breach security: she didn't do this
Mechanisms
Entity or procedure that enforces some part of the security policy
Access controls (like bits to prevent someone from reading a homework file)
Disallowing people from bringing CDs and floppy disks into a computer facility to control what is placed on systems
Summary
Warns that electronic mail is not private
Can be read during normal system administration
Can be forged, altered, and forwarded
Summary
What users should and should not do
Think before you send
Be courteous, respectful of others
Don't interfere with others' use of e-mail
Full Policy
Context
Does not apply to Dept. of Energy labs run by the university
Does not apply to printed copies of e-mail
Other policies apply here
Uses of E-mail
Anonymity allowed
Exception: if it violates laws or other policies
Security of E-mail
University can read e-mail
Won't go out of its way to do so
Allowed for legitimate business purposes
Allowed to keep e-mail robust, reliable
Implementation
Adds campus-specific requirements and procedures
Example: incidental personal use not allowed if it benefits a non-university organization
Allows implementation to take into account differences between campuses, such as self-governance by the Academic Senate
Key Points
Policies describe what is allowed
Mechanisms control how policies are enforced
Trust underlies everything
Bell-LaPadula Model
General idea
Informal description of rules
Overview
Goals of Confidentiality Model
Bell-LaPadula Model
Informally
Example
Instantiation
Confidentiality Policy
Goal: prevent the unauthorized disclosure of information
Deals with information flow
Integrity incidental
Example
security level    subject    object
Top Secret        Tamara     Personnel Files
Secret            Samuel     E-Mail Files
Confidential      Claire     Activity Logs
Unclassified      Ulaley     Telephone Lists
Tamara can read all files
Claire cannot read Personnel or E-Mail Files
Ulaley can only read Telephone Lists
Reading Information
Information flows up, not down
Reads up disallowed, reads down allowed
Writing Information
Information flows up, not down
Writes up allowed, writes down disallowed
*-Property (Step 1)
Subject s can write object o iff L(s) ≤ L(o) and s has permission to write o
Note: combines mandatory control (relationship of security levels) and discretionary control (the required permission)
Let C be the set of classifications, K the set of categories. The set of security levels L = C × K and the relation dom form a lattice
lub(L) = (max(A), C)
glb(L) = (min(A), ∅)
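The dom relation and the lattice operations on levels (classification, category set) can be sketched as follows; the rank table and function names are illustrative assumptions:

```python
# Sketch of the dom relation and lattice operations on security levels,
# each a (classification, category set) pair.
RANK = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def dom(l1, l2):
    """l1 dominates l2: higher-or-equal classification, superset categories."""
    (c1, k1), (c2, k2) = l1, l2
    return RANK[c1] >= RANK[c2] and k1 >= k2

def lub(l1, l2):
    (c1, k1), (c2, k2) = l1, l2
    return (max(c1, c2, key=RANK.get), k1 | k2)   # higher class, union

def glb(l1, l2):
    (c1, k1), (c2, k2) = l1, l2
    return (min(c1, c2, key=RANK.get), k1 & k2)   # lower class, intersection

a = ("Secret", frozenset({"NUC", "EUR"}))
b = ("Top Secret", frozenset({"NUC"}))
print(dom(a, b), dom(b, a))            # False False  (incomparable)
print(lub(a, b)[0], sorted(lub(a, b)[1]))  # Top Secret ['EUR', 'NUC']
```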
Reading Information
Information flows up, not down
Reads up disallowed, reads down allowed
Writing Information
Information flows up, not down
Writes up allowed, writes down disallowed
*-Property (Step 2)
Subject s can write object o iff L(o) dom L(s) and s has permission to write o
Note: combines mandatory control (relationship of security levels) and discretionary control (the required permission)
Problem
Colonel has (Secret, {NUC, EUR}) clearance
Major has (Secret, {EUR}) clearance
Major can talk to colonel (write up or read down)
Colonel cannot talk to major (read up or write down)
Clearly absurd!
Solution
Define maximum, current levels for subjects
maxlevel(s) dom curlevel(s)
Example
Treat Major as an object (Colonel is writing to him/her)
Colonel has maxlevel (Secret, {NUC, EUR})
Colonel sets curlevel to (Secret, {EUR})
Now L(Major) dom curlevel(Colonel)
Colonel can write to Major without violating no writes down
DG/UX System
Provides mandatory access controls
MAC label identifies security level
Default labels, but can define others
Initially
Subjects assigned MAC label of parent
Initial label assigned to user, kept in Authorization and Authentication database
MAC Regions
[Figure: MAC regions, from highest hierarchy level to lowest]
Administrative Region: A&A database, audit
User Region: user data and applications
Virus Prevention Region:
  VP-1: site executables
  VP-2: trusted data
  VP-3: executables not part of the TCB
  VP-4: executables part of the TCB
  VP-5: reserved for future use
IMPL_HI is maximum (least upper bound) of all levels IMPL_LO is minimum (greatest lower bound) of all levels
Directory Problem
Process p at MAC_A tries to create file /tmp/x
/tmp/x exists but has MAC label MAC_B
Assume MAC_B dom MAC_A
Create fails
Now p knows a file named x with a higher label exists
Fix: only programs with same MAC label as directory can create files in the directory
Now compilation won't work, mail can't be delivered
Multilevel Directory
Directory with a set of subdirectories, one per label
Not normally visible to user
p creating /tmp/x actually creates /tmp/d/x, where d is the directory corresponding to MAC_A
All of p's references to /tmp go to /tmp/d
Object Labels
Requirement: every file system object must have a MAC label
1. Roots of file systems have explicit MAC labels
2. If a mounted file system has no label, it gets the label of the mount point
Object Labels
Problem: object has two names
/x/y/z and /a/b/c refer to the same object
y has explicit label IMPL_HI
b has explicit label IMPL_B
Case 1: hard link created while the file system is on a DG/UX system, so:
3. Creating a hard link requires an explicit label
If implicit, the label is made explicit
Moving a file makes its label explicit
Object Labels
Case 2: hard link exists when file system mounted
No objects on paths have explicit labels: paths have same implicit labels
An object on a path acquires an explicit label: implicit label of child must be preserved, so:
4. Change to a directory label makes child labels explicit before the change
Object Labels
Symbolic links are files, and treated as such, so 5. When resolving symbolic link, label of object is label of target of the link
System needs access to the symbolic link itself
MAC Tuples
Up to 3 MAC ranges (one per region)
MAC range is a set of labels with an upper and lower bound
Upper bound must dominate lower bound of range
Examples
1. [(Secret, {NUC}), (Top Secret, {NUC})]
2. [(Secret, ∅), (Top Secret, {NUC, EUR, ASI})]
3. [(Confidential, {ASI}), (Secret, {NUC, ASI})]
MAC Ranges
1. [(Secret, {NUC}), (Top Secret, {NUC})]
2. [(Secret, ∅), (Top Secret, {NUC, EUR, ASI})]
3. [(Confidential, {ASI}), (Secret, {NUC, ASI})]

(Top Secret, {NUC}) is in ranges 1, 2
(Secret, {NUC, ASI}) is in ranges 2, 3
[(Secret, {ASI}), (Top Secret, {EUR})] is not a valid range, as (Top Secret, {EUR}) ¬dom (Secret, {ASI})
Example
Paper has MAC range: [(Secret, {EUR}), (Top Secret, {NUC, EUR})]
MAC Tuples
Process can read object when:
Object MAC range (lr, hr); process MAC label pl
Requires pl dom hr
Process MAC label grants read access to upper bound of range
Example
Peter, with label (Secret, {EUR}), cannot read paper
(Secret, {EUR}) ¬dom (Top Secret, {NUC, EUR})
Paul, with label (Top Secret, {NUC, EUR, ASI}) can read paper
(Top Secret, {NUC, EUR, ASI}) dom (Top Secret, {NUC, EUR})
MAC Tuples
Process can write object when:
Object MAC range (lr, hr); process MAC label pl
Requires pl ∈ [lr, hr], that is, hr dom pl and pl dom lr
Process MAC label grants write access to any label in range
Example
Peter, with label (Secret, {EUR}), can write paper
(Top Secret, {NUC, EUR}) dom (Secret, {EUR}) and (Secret, {EUR}) dom (Secret, {EUR})
Paul, with label (Top Secret, {NUC, EUR, ASI}), cannot write paper
(Top Secret, {NUC, EUR, ASI}) dom (Top Secret, {NUC, EUR})
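The read and write checks for MAC tuples can be sketched together, reproducing the Peter and Paul examples; all names and the representation are illustrative, not the DG/UX implementation:

```python
# Sketch of the DG/UX-style checks: reading requires pl dom hr; writing
# requires pl to lie within the range [lr, hr] (hr dom pl and pl dom lr).
RANK = {"Confidential": 0, "Secret": 1, "Top Secret": 2}

def dom(l1, l2):
    (c1, k1), (c2, k2) = l1, l2
    return RANK[c1] >= RANK[c2] and k1 >= k2

def can_read(pl, rng):
    lr, hr = rng
    return dom(pl, hr)             # process label must dominate upper bound

def can_write(pl, rng):
    lr, hr = rng
    return dom(hr, pl) and dom(pl, lr)   # label must fall inside the range

paper = (("Secret", {"EUR"}), ("Top Secret", {"NUC", "EUR"}))
peter = ("Secret", {"EUR"})
paul = ("Top Secret", {"NUC", "EUR", "ASI"})
print(can_read(peter, paper), can_read(paul, paper))    # False True
print(can_write(peter, paper), can_write(paul, paper))  # True False
```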
Key Points
Confidentiality models restrict flow of information
Bell-LaPadula models multilevel security
Cornerstone of much work in computer security
Overview
Requirements
Very different from confidentiality policies
Requirements of Policies
1. Users will not write their own programs, but will use existing production programs and databases.
2. Programmers will develop and test programs on a non-production system; if they need access to actual data, they will be given production data via a special process, but will use it on their development system.
3. A special process must be followed to install a program from the development system onto the production system.
4. The special process in requirement 3 must be controlled and audited.
5. The managers and auditors must have access to both the system state and the system logs that are generated.
Note relationship between integrity and trustworthiness
Important point: integrity levels are not security levels
Biba's Model
Similar to Bell-LaPadula model
1. s ∈ S can read o ∈ O iff i(s) ≤ i(o)
2. s ∈ S can write to o ∈ O iff i(o) ≤ i(s)
3. s1 ∈ S can execute s2 ∈ S iff i(s2) ≤ i(s1)
Add compartments and discretionary controls to get full dual of Bell-LaPadula model Information flow result holds
Different proof, though
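Biba's three rules invert the Bell-LaPadula directions; a toy sketch with assumed numeric integrity levels (entity names are illustrative):

```python
# Sketch of Biba's strict integrity policy; i() maps entities to
# integrity levels (a higher number means more trustworthy).
i = {"s": 2, "clean_doc": 3, "web_page": 1, "s2": 1}

def can_read(s, o):   return i[s] <= i[o]    # no read down
def can_write(s, o):  return i[o] <= i[s]    # no write up
def can_exec(s1, s2): return i[s2] <= i[s1]  # execute only at-or-below

print(can_read("s", "clean_doc"))   # True: reading up is allowed
print(can_write("s", "clean_doc"))  # False: cannot write up
print(can_read("s", "web_page"))    # False: cannot read down
```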
Example: Bank
D today's deposits, W withdrawals, YB yesterday's balance, TB today's balance
Integrity constraint: D + YB − W = TB
Well-formed transactions move system from one consistent state to another
Issue: who examines and certifies that transactions are done correctly?
Entities
CDIs: constrained data items
Data subject to integrity controls
Logging
CR4 All TPs must append enough information to reconstruct the operation to an append-only CDI.
This CDI is the log Auditor needs to be able to determine what happened during reviews of transactions
Comparison to Biba
Biba
No notion of certification rules; trusted subjects ensure actions obey rules
Untrusted data examined before being made trusted
Clark-Wilson
Explicit requirements that actions must meet
Trusted entity must certify method to upgrade untrusted data (and not certify the data itself)
Key Points
Integrity policies deal with trust
As trust is hard to quantify, these policies are hard to evaluate completely
Look for assumptions and trusted users to find possible weak points in their implementation
Biba based on multilevel integrity
Clark-Wilson focuses on separation of duty and transactions
Overview
Chinese Wall Model
Focuses on conflict of interest
CISS Policy
Combines integrity and confidentiality
ORCON
Combines mandatory, discretionary access controls
RBAC
Base controls on job function
A conflict of interest arises if he accepts, because his advice for either bank would affect his advice to the other bank
Organization
Organize entities into conflict of interest classes
Control subject accesses to each class
Control writing to all classes to ensure information is not passed along in violation of rules
Allow sanitized data to be viewed by everyone
Definitions
Objects: items of information related to a company
Company dataset (CD): contains objects related to a single company
Written CD(O)
Conflict of interest (COI) class: contains datasets of companies in competition
Written COI(O)
Example
Bank COI Class: Bank of America, Citibank, Bank of the West
Gasoline Company COI Class: Shell Oil, Union 76, Standard Oil, ARCO
Temporal Element
If Anthony reads any CD in a COI, he can never read another CD in that COI
Possible that information learned earlier may allow him to make decisions later
Let PR(s) be the set of objects that s has already read
CW-Simple Security Condition: s can read o iff either:
1. There is an o′ such that s has read o′ and CD(o′) = CD(o); or
2. For all o′ ∈ PR(s), COI(o′) ≠ COI(o)
Ignores sanitized data (see below)
Initially, PR(s) = ∅, so initial read request granted
Sanitization
Public information may belong to a CD
As it is publicly available, no conflicts of interest arise
So it should not affect the ability of analysts to read
Typically, all sensitive data is removed from such information before it is released publicly (this is called sanitization)
Writing
Anthony, Susan work in same trading house
Anthony can read Bank 1's CD and the Gas CD
Susan can read Bank 2's CD and the Gas CD
If Anthony could write to the Gas CD, Susan can read it
Hence, indirectly, she can read information from Bank 1's CD, a clear conflict of interest
CW-*-Property
s can write to o iff both of the following hold:
1. The CW-simple security condition permits s to read o; and
2. For all unsanitized objects o′, if s can read o′, then CD(o′) = CD(o)
Says that s can write to an object iff all the (unsanitized) objects it can read are in the same dataset
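Both conditions rest on the subject's read history PR(s). A minimal sketch of the CW-simple security condition, with read history tracked per subject (company names and the representation are illustrative assumptions):

```python
# Sketch of the Chinese Wall read check. COI and CD are functions of the
# object; PR tracks each subject's read history (sanitized data ignored).
COI = {"bofa": "banks", "citi": "banks", "shell": "oil"}
CD = {"bofa": "bofa", "citi": "citi", "shell": "shell"}
PR = {"anthony": set()}

def cw_simple(s, o):
    """CW-simple security condition: same CD as before, or a fresh COI."""
    return all(CD[p] == CD[o] or COI[p] != COI[o] for p in PR[s])

def read(s, o):
    if cw_simple(s, o):
        PR[s].add(o)   # record the access; it constrains future reads
        return True
    return False

print(read("anthony", "bofa"))   # True: history empty
print(read("anthony", "shell"))  # True: different COI class
print(read("anthony", "citi"))   # False: same COI, different CD
```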
Compare to Bell-LaPadula
Fundamentally different
CW has no security labels, B-LP does
CW has notion of past accesses, B-LP does not
Subjects assigned clearance for compartments without multiple categories corresponding to CDs in same COI class
Compare to Bell-LaPadula
Bell-LaPadula cannot track changes over time
Susan becomes ill, Anna needs to take over
C-W history lets Anna know if she can No way for Bell-LaPadula to capture this
Compare to Clark-Wilson
Clark-Wilson Model covers integrity, so consider only access control aspects If subjects and processes are interchangeable, a single person could use multiple processes to violate CW-simple security condition
Would still comply with Clark-Wilson Model
If subject is a specific person and includes all processes the subject executes, then consistent with Clark-Wilson Model
Entities:
Patient: subject of medical records (or agent)
Personal health information: data about patient's health or treatment enabling identification of patient
Clinician: health-care professional with access to personal health information while doing job
Principles derived from medical ethics of various societies, and from practicing clinicians
Access
Principle 1: Each medical record has an access control list naming the individuals or groups who may read and append information to the record. The system must restrict access to those identified on the access control list.
Idea is that clinicians need access, but no one else. Auditors get access to copies, so they cannot alter records
Access
Principle 2: One of the clinicians on the access control list must have the right to add other clinicians to the access control list.
Called the responsible clinician
Access
Principle 3: The responsible clinician must notify the patient of the names on the access control list whenever the patient's medical record is opened. Except for situations given in statutes, or in cases of emergency, the responsible clinician must obtain the patient's consent.
Patient must consent to all treatment, and must know of violations of security
Access
Principle 4: The name of the clinician, the date, and the time of the access of a medical record must be recorded. Similar information must be kept for deletions.
This is for auditing. Don't delete information; update it (the last part is for deletion of records after death, for example, or deletion of information when required by statute). Record information about all accesses.
Creation
Principle: A clinician may open a record, with the clinician and the patient on the access control list. If a record is opened as a result of a referral, the referring clinician may also be on the access control list.
Creating clinician needs access, and patient should get it. If created from a referral, referring clinician needs access to get results of referral.
Deletion
Principle: Clinical information cannot be deleted from a medical record until the appropriate time has passed.
This varies with circumstances.
Confinement
Principle: Information from one medical record may be appended to a different medical record if and only if the access control list of the second record is a subset of the access control list of the first.
This keeps information from leaking to unauthorized users. All users have to be on the access control list.
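The Confinement Principle reduces to a subset test on the two access control lists; a minimal sketch with illustrative names:

```python
# Sketch of the Confinement Principle: appending information from one
# record to another is allowed only if the destination record's ACL is
# a subset of the source record's ACL.
def may_append(acl_source, acl_dest):
    return set(acl_dest) <= set(acl_source)

acl_a = {"dr_jones", "patient_x", "dr_smith"}
acl_b = {"dr_jones", "patient_x"}
print(may_append(acl_a, acl_b))  # True: everyone on B's list is on A's
print(may_append(acl_b, acl_a))  # False: dr_smith could learn B's data
```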
Aggregation
Principle: Measures for preventing aggregation of patient data must be effective. In particular, a patient must be notified if anyone is to be added to the access control list for the patient's record and if that person has access to a large number of medical records.
Fear here is that a corrupt investigator may obtain access to a large number of records, correlate them, and discover private information about individuals which can then be used for nefarious purposes (such as blackmail)
Enforcement
Principle: Any computer system that handles medical records must have a subsystem that enforces the preceding principles. The effectiveness of this enforcement must be subject to evaluation by independent auditors.
This policy has to be enforced, and the enforcement mechanisms must be auditable (and audited)
Compare to Bell-LaPadula
Confinement Principle imposes lattice structure on entities in model
Similar to Bell-LaPadula
CISS focuses on objects being accessed; BLP on the subjects accessing the objects
May matter when looking for insiders in the medical environment
Compare to Clark-Wilson
CDIs are medical records TPs are functions updating records, access control lists IVPs certify:
A person identified as a clinician is a clinician
A clinician validates, or has validated, information in the medical record
When someone is to be notified of an event, such notification occurs
When someone must give consent, the operation cannot proceed until the consent is obtained
Auditing (CR4) requirement: make all records append-only, notify patient when access control list changed
ORCON
Problem: organization creating document wants to control its dissemination
Example: Secretary of Agriculture writes a memo for distribution to her immediate subordinates, and she must give permission for it to be disseminated further. This is originator controlled (here, the originator is a person).
Requirements
Subject s S marks object o O as ORCON on behalf of organization X. X allows o to be disclosed to subjects acting on behalf of organization Y with the following restrictions:
1. o cannot be released to subjects acting on behalf of other organizations without X's permission; and
2. Any copies of o must have the same restrictions placed on them.
DAC Fails
Owner can set any desired permissions
This makes requirement 2 unenforceable
MAC Fails
First problem: category explosion
Category C contains o, X, Y, and nothing else. If a subject y ∈ Y wants to read o, some x ∈ X makes a copy o′. Note o′ has category C. If y wants to give z ∈ Z a copy, z must be in Y; by definition, it's not. If x wants to let w ∈ W see the document, a new category C′ containing o, X, W is needed.
Combine Them
The owner of an object cannot change the access controls of the object. When an object is copied, the access control restrictions of that source are copied and bound to the target of the copy.
These are MAC (owner can't control them)
The creator (originator) can alter the access control restrictions on a per-subject and per-object basis.
This is DAC (owner can control it)
RBAC
Access depends on function, not identity
Example:
Allison, bookkeeper for the Math Dept, has access to financial records. She leaves. Betty is hired as the new bookkeeper, so she now has access to those records
The role of bookkeeper dictates access, not the identity of the individual.
Definitions
Role r: collection of job functions
trans(r): set of authorized transactions for r
Axioms
Let S be the set of subjects and T the set of transactions.
Rule of role assignment: (∀s ∈ S)(∀t ∈ T) [canexec(s, t) → actr(s) ≠ ∅]
If s can execute a transaction, it has a role. This ties transactions to roles
Axiom
Rule of transaction authorization: (∀s ∈ S)(∀t ∈ T) [canexec(s, t) → t ∈ trans(actr(s))]
If a subject s can execute a transaction, then the transaction is an authorized one for the role s has assumed
Containment of Roles
Trainer can do all transactions that trainee can do (and then some). This means role r contains role r′ (written r > r′). So:
(∀s ∈ S) [ r′ ∈ authr(s) ∧ r > r′ → r ∈ authr(s) ]
Separation of Duty
Let r be a role, and let s be a subject such that r ∈ authr(s). Then the predicate meauth(r) (for mutually exclusive authorizations) is the set of roles that s cannot assume because of the separation of duty requirement. Separation of duty:
(∀r1, r2 ∈ R) [ r2 ∈ meauth(r1) → [ (∀s ∈ S) [ r1 ∈ authr(s) → r2 ∉ authr(s) ] ] ]
Key Points
Hybrid policies deal with both confidentiality and integrity
Different combinations of these
Overview
Classical Cryptography
Cæsar cipher, Vigenère cipher, DES
Cryptographic Checksums
HMAC
Cryptosystem
Quintuple (E, D, M, K, C)
M: set of plaintexts
K: set of keys
C: set of ciphertexts
E: set of encryption functions e: M × K → C
D: set of decryption functions d: C × K → M
Example
Example: Cæsar cipher
M = { sequences of letters }
K = { i | i is an integer and 0 ≤ i ≤ 25 }
E = { Ek | k ∈ K and for all letters m, Ek(m) = (m + k) mod 26 }
D = { Dk | k ∈ K and for all letters c, Dk(c) = (26 + c − k) mod 26 }
C = M
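A minimal sketch of this cryptosystem in Python (not from the slides), restricted to uppercase letters:

```python
# Caesar cipher as defined above: E_k(m) = (m + k) mod 26,
# D_k(c) = (26 + c - k) mod 26, letters encoded as A=0 .. Z=25.
def caesar_encipher(msg: str, k: int) -> str:
    return "".join(chr((ord(ch) - 65 + k) % 26 + 65) for ch in msg)

def caesar_decipher(ct: str, k: int) -> str:
    return "".join(chr((26 + ord(ch) - 65 - k) % 26 + 65) for ch in ct)

# With key 3, HELLOWORLD enciphers to KHOORZRUOG (spaces dropped),
# matching the substitution-cipher example later in the chapter.
```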
Attacks
Opponent whose goal is to break cryptosystem is the adversary
Assume adversary knows algorithm used, but not key
Statistical attacks
Make assumptions about the distribution of letters, pairs of letters (digrams), triplets of letters (trigrams), etc.
Called models of the language
Classical Cryptography
Sender, receiver share common key
Keys may be the same, or trivial to derive from one another. Sometimes called symmetric cryptography
Transposition Cipher
Rearrange letters in plaintext to produce ciphertext. Example (Rail-Fence Cipher):
Plaintext is HELLO WORLD
Rearrange as
HLOOL
ELWRD
Ciphertext is HLOOL ELWRD
Example
Ciphertext: HLOOL ELWRD
Frequencies of 2-grams beginning with H:
HE 0.0305; HO 0.0043; HL, HW, HR, HD < 0.0010
Implies E follows H
Example
Arrange so the H and E are adjacent
HE LL OW OR LD
Substitution Ciphers
Change characters in plaintext to produce ciphertext. Example (Cæsar cipher):
Plaintext is HELLO WORLD
Change each letter to the third letter following it (X goes to A, Y to B, Z to C)
Key is 3, usually written as letter D
Ciphertext is KHOOR ZRUOG
Statistical analysis
Compare to 1-gram model of English
Statistical Attack
Compute frequency of each letter in ciphertext:
G 0.1  H 0.1  K 0.1  O 0.3  R 0.2  U 0.1  Z 0.1
Character Frequencies (1-gram model of English):
a 0.080  b 0.015  c 0.030  d 0.040  e 0.130  f 0.020  g 0.015
h 0.060  i 0.065  j 0.005  k 0.005  l 0.035  m 0.030  n 0.070
o 0.080  p 0.020  q 0.002  r 0.065  s 0.060  t 0.090  u 0.030
v 0.010  w 0.015  x 0.005  y 0.020  z 0.002
Statistical Analysis
f(c): frequency of character c in ciphertext
φ(i): correlation of the frequency of letters in the ciphertext with the corresponding letters in English, assuming the key is i:
φ(i) = Σ 0 ≤ c ≤ 25 f(c) p(c − i); so here,
φ(i) = 0.1p(6 − i) + 0.1p(7 − i) + 0.1p(10 − i) + 0.3p(14 − i) + 0.2p(17 − i) + 0.1p(20 − i) + 0.1p(25 − i)
where p(x) is the frequency of character x in English
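A sketch of this correlation computation in Python (not from the slides). The full table of English 1-gram frequencies used here is the standard one; only some of its values are visible in the slides, so treat it as an assumption:

```python
# phi(i) = sum over c of f(c) * p((c - i) mod 26), where f comes from the
# ciphertext and p is a (standard, assumed) English letter-frequency table.
from collections import Counter

p = [0.080, 0.015, 0.030, 0.040, 0.130, 0.020, 0.015, 0.060, 0.065, 0.005,
     0.005, 0.035, 0.030, 0.070, 0.080, 0.020, 0.002, 0.065, 0.060, 0.090,
     0.030, 0.010, 0.015, 0.005, 0.020, 0.002]

def phi(ciphertext: str, i: int) -> float:
    counts = Counter(ord(ch) - 65 for ch in ciphertext)
    n = len(ciphertext)
    return sum((cnt / n) * p[(c - i) % 26] for c, cnt in counts.items())
```

For the ciphertext KHOORZRUOG this reproduces the slide's values: φ(6) = 0.0660 and φ(3) = 0.0575, and deciphering with key 3 yields HELLOWORLD.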
The Result
Most probable keys, based on φ:
i = 6, φ(i) = 0.0660; plaintext EBIIL TLOIA
i = 3, φ(i) = 0.0575; plaintext HELLO WORLD
Cæsar's Problem
Key is too short
Can be found by exhaustive search. Statistical frequencies not concealed well
They look too much like regular English letter frequencies
So make it longer
Multiple letters in key. Idea is to smooth the statistical frequencies to make cryptanalysis harder
Vigenère Cipher
Like the Cæsar cipher, but use a phrase. Example:
Message THE BOY HAS THE BALL, key VIG. Encipher using a Cæsar cipher for each letter:
key    VIGVIGVIGVIGVIGV
plain  THEBOYHASTHEBALL
cipher OPKWWECIYOPKWIRG
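A short Python sketch of this encipherment (not from the slides): each plaintext letter is Cæsar-shifted by the corresponding key letter:

```python
# Vigenere encipherment: c_i = (p_i + k_(i mod len(key))) mod 26.
def vigenere(plain: str, key: str) -> str:
    return "".join(
        chr((ord(p) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
        for i, p in enumerate(plain))

# vigenere("THEBOYHASTHEBALL", "VIG") reproduces the slide's ciphertext.
```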
Useful Terms
period: length of key
In earlier example, period is 3
Establish Period
Kasiski: repetitions in the ciphertext occur when characters of the key appear over the same characters in the plaintext. Example:
key    VIGVIGVIGVIGVIGV
plain  THEBOYHASTHEBALL
cipher OPKWWECIYOPKWIRG
Note the key and plaintext line up over the repetitions (OPK appears twice). As the distance between the repetitions is 9, the period is a factor of 9 (that is, 1, 3, or 9)
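The repetition search can be sketched in Python (an illustration, not from the slides):

```python
# Kasiski test: record the distances between repeated substrings of a given
# length; the period likely divides most of these distances.
def kasiski_distances(ct: str, length: int = 3):
    seen, dists = {}, []
    for i in range(len(ct) - length + 1):
        gram = ct[i:i + length]
        if gram in seen:
            dists.append(i - seen[gram])
        seen[gram] = i
    return dists

# In OPKWWECIYOPKWIRG the trigrams OPK and PKW each repeat at distance 9,
# so the period divides 9.
```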
Repetitions in Example
Letters  Start  Distance  Factors
MI       5      10        2, 5
OO       22     5         5
OEQOOG   24     30        2, 3, 5
FV       39     24        2, 2, 2, 3
AA       43     44        2, 2, 11
MOC      50     72        2, 2, 2, 3, 3
QO       56     49        7, 7
PC       69     48        2, 2, 2, 2, 3
NE       77     6         2, 3
SV       94     3         3
CH       118    6         2, 3
Estimate of Period
OEQOOG is probably not a coincidence
It's too long for that. Period may be 1, 2, 3, 5, 6, 10, 15, or 30
Most of the other distances (7/10) have 2 in their factors; almost as many (6/10) have 3 in their factors. Begin with a period of 2 × 3 = 6
Check on Period
Index of coincidence is probability that two randomly chosen letters from ciphertext will be the same Tabulated for different periods:
Period:  1      2      3      4      5      10     Large
IC:      0.066  0.052  0.047  0.045  0.044  0.041  0.038
Compute IC
IC = [n(n − 1)]⁻¹ Σ 0 ≤ i ≤ 25 Fi(Fi − 1)
where n is the length of the ciphertext and Fi the number of times character i occurs in the ciphertext
Here, IC = 0.043
Indicates a key of slightly more than 5. A statistical measure, so it can be in error, but it agrees with the previous estimate (which was 6)
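The formula translates directly into Python (a sketch, not from the slides):

```python
# Index of coincidence: probability that two randomly chosen letters of the
# ciphertext are equal, IC = sum_i F_i(F_i - 1) / (n(n - 1)).
from collections import Counter

def index_of_coincidence(ct: str) -> float:
    n = len(ct)
    counts = Counter(ct)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))
```

English-like text scores near 0.066; text drawn uniformly from 26 letters scores near 1/26 ≈ 0.038, matching the table above.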
Frequency Examination
Counts per alphabet (columns A through Z):
            ABCDEFGHIJKLMNOPQRSTUVWXYZ
alphabet 1: 31004011301001300112000000
alphabet 2: 10022210013010000010404000
alphabet 3: 12000000201140004013021000
alphabet 4: 21102201000010431000000211
alphabet 5: 10500021200000500030020000
alphabet 6: 01110022311012100000030101
Letter frequencies in English are (H high, M medium, L low):
HMMMHMMHHMMMMHHMLHHHMLLLLL
Begin Decryption
The first alphabet matches the characteristics of an unshifted alphabet; the third matches if I shifted to A; the sixth matches if V shifted to A. Substitute into the ciphertext:
ADIYS RIUKB OCKKL MIGHK AZOTO EIOOL IFTAG PAUEF VATAS CIITW EOCNO EIOOL BMTFV EGGOP CNEKI HSSEW NECSE DDAAA RWCXS ANSNP HHEUL QONOF EEGOS WLPCM AJEOC MIUAX
Next Alphabet
MICAX in the last line suggests "mical" (a common ending for an adjective), meaning the fourth alphabet maps O into A:
ALIMS RICKP OCKSL AIGHS ANOTO MICOL INTOG PACET VATIS QIITE ECCNO MICOL BUTTV EGOOD CNESI VSSEE NSCSE LDOAA RECLS ANAND HHECL EONON ESGOS ELDCM ARECC MICAL
Got It!
QI means that U maps into I, as Q is almost always followed by U:
ALIME RICKP ACKSL AUGHS ANATO MICAL INTOS PACET HATIS QUITE ECONO MICAL BUTTH EGOOD ONESI VESEE NSOSE LDOMA RECLE ANAND THECL EANON ESSOS ELDOM ARECO MICAL
One-Time Pad
A Vigenre cipher with a random key at least as long as the message
Provably unbreakable. Why? Look at ciphertext DXQR: it is equally likely to correspond to plaintext DOIT (key AJIY), to plaintext DONT (key AJDY), and to any other 4 letters
Warning: keys must be random, or you can attack the cipher by trying to regenerate the key
Approximations, such as using pseudorandom number generators to generate keys, are not random
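A byte-level sketch in Python (not from the slides); `os.urandom` stands in for a truly random source, which is itself an approximation the warning above applies to:

```python
# One-time pad over bytes: XOR the message with an equally long random key.
# Enciphering and deciphering are the same operation, since XOR is its own
# inverse.
import os

def otp_xor(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data), "pad must be as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"HELLO WORLD"
key = os.urandom(len(msg))
ct = otp_xor(msg, key)
assert otp_xor(ct, key) == msg
```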
A product cipher
basic unit is the bit performs both substitution and transposition (permutation) on the bits
Cipher consists of 16 rounds (iterations) each with a round key generated from the user-supplied key
[Figure: DES key schedule — the halves C0 and D0 are rotated left (LSH) each round, and PC-2 selects 48 bits from them to form round keys K1 through K16]
Encipherment
[Figure: DES encipherment — the input passes through IP; each round computes Li = Ri−1 and Ri = Li−1 ⊕ f(Ri−1, Ki), e.g. R1 = L0 ⊕ f(R0, K1); after L16 = R15 the result passes through IP⁻¹ to give the output]
The f Function
[Figure: the f function — Ri−1 (32 bits) is expanded by E to 48 bits, XORed with Ki (48 bits), then fed through the S-boxes (S1, S2, …) to produce 32 bits]
Controversy
Considered too weak
Diffie and Hellman said that in a few years technology would allow DES to be broken in days
A design using 1999 technology was published
Undesirable Properties
4 weak keys
They are their own inverses
12 semi-weak keys
Each has another semi-weak key as inverse
Complementation property
DESk(m) = c implies DESk′(m′) = c′, where x′ denotes the bitwise complement of x
Differential Cryptanalysis
A chosen plaintext attack
Requires 2^47 (plaintext, ciphertext) pairs
DES Modes
Electronic Code Book Mode (ECB)
Encipher each block independently
Encrypt-Decrypt-Encrypt Mode (EDE; two keys k, k′)
c = DESk(DESk′⁻¹(DESk(m)))
[Figures: CBC mode encipherment and decipherment — each plaintext block m1, m2, … is chained through DES to produce ciphertext blocks c1, c2, … which are sent; decipherment reverses the process]
Self-Healing Property
Initial message
3231343336353837 3231343336353837 3231343336353837 3231343336353837
Which decrypts to
efca61e19f4836f1 3231333336353837 3231343336353837 3231343336353837
(an error in the first ciphertext block garbles the first plaintext block entirely, alters one digit of the second, and leaves the rest correct)
Idea
Confidentiality: encipher using public key, decipher using private key
Integrity/authentication: encipher using private key, decipher using public one
Requirements
1. It must be computationally easy to encipher or decipher a message given the appropriate key
2. It must be computationally infeasible to derive the private key from the public key
3. It must be computationally infeasible to determine the private key from a chosen plaintext attack
RSA
Exponentiation cipher Relies on the difficulty of determining the number of numbers relatively prime to a large integer n
Background
Totient function φ(n)
Number of positive integers less than n and relatively prime to n
Relatively prime means with no factors in common with n
Example: φ(10) = 4
1, 3, 7, 9 are relatively prime to 10
Example: φ(21) = 12
1, 2, 4, 5, 8, 10, 11, 13, 16, 17, 19, 20 are relatively prime to 21
Algorithm
Choose two large prime numbers p, q
Let n = pq; then φ(n) = (p − 1)(q − 1)
Choose e < n such that e is relatively prime to φ(n); compute d such that ed mod φ(n) = 1
Public key: (e, n); private key: d
Encipher: c = m^e mod n
Decipher: m = c^d mod n
Example: Confidentiality
Take p = 7, q = 11, so n = 77 and φ(n) = 60
Alice chooses e = 17, making d = 53
Bob wants to send Alice secret message HELLO (07 04 11 11 14):
07^17 mod 77 = 28
04^17 mod 77 = 16
11^17 mod 77 = 44
11^17 mod 77 = 44
14^17 mod 77 = 42
Bob sends 28 16 44 44 42
Example
Alice receives 28 16 44 44 42
Alice uses private key, d = 53, to decrypt the message:
28^53 mod 77 = 07
16^53 mod 77 = 04
44^53 mod 77 = 11
44^53 mod 77 = 11
42^53 mod 77 = 14
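The worked example can be checked directly with Python's three-argument `pow` (modular exponentiation); this is a toy illustration, not a real RSA implementation:

```python
# Toy RSA with p = 7, q = 11, e = 17, d = 53, as in the slides.
p, q = 7, 11
n, phi_n = p * q, (p - 1) * (q - 1)    # 77, 60
e, d = 17, 53
assert (e * d) % phi_n == 1            # d is e's inverse mod phi(n)

msg = [7, 4, 11, 11, 14]               # HELLO
ct = [pow(m, e, n) for m in msg]       # encipher with the public key
pt = [pow(c, d, n) for c in ct]        # decipher with the private key
assert ct == [28, 16, 44, 44, 42]
assert pt == msg
```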
Example: Integrity/Authentication
Take p = 7, q = 11, so n = 77 and φ(n) = 60
Alice chooses e = 17, making d = 53
Alice wants to send Bob message HELLO (07 04 11 11 14) so Bob knows it is what Alice sent (no changes in transit, and authenticated):
07^53 mod 77 = 35
04^53 mod 77 = 09
11^53 mod 77 = 44
11^53 mod 77 = 44
14^53 mod 77 = 49
Alice sends 35 09 44 44 49
Example
Bob receives 35 09 44 44 49
Bob uses Alice's public key, e = 17, n = 77, to decrypt the message:
35^17 mod 77 = 07
09^17 mod 77 = 04
44^17 mod 77 = 11
44^17 mod 77 = 11
49^17 mod 77 = 14
Example: Both
Alice wants to send Bob message HELLO both enciphered and authenticated (integrity-checked)
Alice's keys: public (17, 77); private: 53
Bob's keys: public (37, 77); private: 13
Alice sends 07 37 44 44 14
Security Services
Confidentiality
Only the owner of the private key knows it, so text enciphered with public key cannot be read by anyone except the owner of the private key
Authentication
Only the owner of the private key knows it, so text enciphered with private key must have been generated by the owner
Non-Repudiation
Message enciphered with private key came from someone who knew it
Warnings
Encipher message in blocks considerably larger than the examples here
If 1 character per block, RSA can be broken using statistical attacks (just like classical cryptosystems) Attacker cannot alter letters, but can rearrange them and alter message meaning
Example: reverse enciphered message of text ON to get NO
Cryptographic Checksums
Mathematical function to generate a set of k bits from a set of n bits (where k ≤ n).
k is smaller than n except in unusual circumstances
Example Use
Bob receives the bits 10111101.
If the sender is using even parity: six 1 bits, so the character was received correctly
Note: it could still be garbled, but 2 bits would need to have been changed to preserve parity
If the sender is using odd parity: an even number of 1 bits, so the character was not received correctly
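The check itself is a one-liner; a Python sketch (not from the slides):

```python
# Parity as a (weak, non-cryptographic) checksum: one bit covering 8 bits.
def even_parity_ok(bits: str) -> bool:
    return bits.count("1") % 2 == 0

# 10111101 has six 1 bits: it passes an even-parity check but would fail
# an odd-parity check. Any two-bit error also passes, which is the point
# of the "Note" above.
```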
Definition
Cryptographic checksum h: A → B:
1. For any x ∈ A, h(x) is easy to compute
2. For any y ∈ B, it is computationally infeasible to find x ∈ A such that h(x) = y
3. It is computationally infeasible to find two inputs x, x′ ∈ A such that x ≠ x′ and h(x) = h(x′)
Alternate form (stronger): given any x ∈ A, it is computationally infeasible to find a different x′ ∈ A such that h(x) = h(x′)
Collisions
If x ≠ x′ and h(x) = h(x′), then x and x′ are a collision
Pigeonhole principle: if there are n containers for n+1 objects, then at least one container will have 2 objects in it. Application: if there are 32 files and 8 possible cryptographic checksum values, at least one value corresponds to at least 4 files
Keys
Keyed cryptographic checksum: requires cryptographic key
DES in chaining mode: encipher message, use last n bits. Requires a key to encipher, so it is a keyed cryptographic checksum.
HMAC
Make keyed cryptographic checksums from keyless cryptographic checksums
h: keyless cryptographic checksum function that takes data in blocks of b bytes and outputs blocks of l bytes; k is a cryptographic key of length b bytes
If shorter, pad with 0 bytes; if longer, hash to length b
ipad is 00110110 repeated b times; opad is 01011100 repeated b times
HMAC-h(k, m) = h(k ⊕ opad || h(k ⊕ ipad || m))
⊕ exclusive or, || concatenation
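The construction written out in Python with SHA-256 as the keyless checksum h (block size b = 64 bytes); Python's standard `hmac` module implements the same definition, so the two agree:

```python
# HMAC-h(k, m) = h((k XOR opad) || h((k XOR ipad) || m)),
# with ipad byte 0x36 = 00110110 and opad byte 0x5c = 01011100.
import hashlib
import hmac

def hmac_sha256(k: bytes, m: bytes) -> bytes:
    b = 64                               # SHA-256 block size in bytes
    if len(k) > b:
        k = hashlib.sha256(k).digest()   # long keys are hashed to length <= b
    k = k.ljust(b, b"\x00")              # short keys are padded with 0 bytes
    ipad = bytes(x ^ 0x36 for x in k)
    opad = bytes(x ^ 0x5C for x in k)
    inner = hashlib.sha256(ipad + m).digest()
    return hashlib.sha256(opad + inner).digest()
```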
Key Points
Two main types of cryptosystems: classical and public key Classical cryptosystems encipher and decipher using the same key
Or one key is easily derived from the other
Overview
Key exchange
Session vs. interchange keys Classical, public key methods
Key storage
Key revocation
Digital signatures
Notation
X → Y : { Z || W } kX,Y
X sends Y the message produced by concatenating Z and W, enciphered by key kX,Y, which is shared by users X and Y
A → T : { Z } kA || { W } kA,T
A sends T a message consisting of the concatenation of Z enciphered using kA, A's key, and W enciphered using kA,T, the key shared by A and T
kB enciphers all session keys Alice uses to communicate with Bob; called an interchange key
Alice sends { m } ks || { ks } kB
Benefits
Limits amount of traffic enciphered with single key
Standard practice, to decrease the amount of traffic an attacker can obtain
Alice, Bob may trust third party All cryptosystems, protocols publicly known
Only secret data is the keys, ancillary information known only to Alice and Bob needed to derive keys Anything transmitted is assumed known to attacker
Simple Protocol
Alice → Cathy : { request for session key to Bob } kA
Cathy → Alice : { ks } kA || { ks } kB
Alice → Bob : { ks } kB
Problems
How does Bob know he is talking to Alice?
Replay attack: Eve records a message from Alice to Bob and later replays it; Bob may think he's talking to Alice, but he isn't
Session key reuse: Eve replays a message from Alice to Bob, so Bob re-uses the session key
Needham-Schroeder
Alice → Cathy : Alice || Bob || r1
Cathy → Alice : { Alice || Bob || r1 || ks || { Alice || ks } kB } kA
Alice → Bob : { Alice || ks } kB
Bob → Alice : { r2 } ks
Alice → Bob : { r2 − 1 } ks
Third message
Alice knows only Bob can read it
As only Bob can derive session key from message
Fourth message
Uses session key to determine if it is replay from Eve
If not, Alice will respond correctly in the fifth message; if so, Eve can't decipher r2 and so can't respond, or responds incorrectly
Denning-Sacco Modification
Assumption: all keys are secret Question: suppose Eve can obtain session key. How does that affect protocol?
In what follows, Eve knows ks:
Eve → Bob : { Alice || ks } kB
Bob → Alice : { r2 } ks (intercepted by Eve)
Eve → Bob : { r2 − 1 } ks
Solution
In the protocol above, Eve impersonates Alice. Problem: replay in the third step
(the first message on the previous slide)
Solution: use a time stamp T to detect replay. Weakness: if clocks are not synchronized, valid messages may be rejected or replays accepted
Parties with either slow or fast clocks are vulnerable to replay; resetting the clock does not eliminate the vulnerability
Otway-Rees Protocol
Corrects problem
That is, Eve replaying the third message in the protocol
The Protocol
Alice → Bob : n || Alice || Bob || { r1 || n || Alice || Bob } kA
Bob → Cathy : n || Alice || Bob || { r1 || n || Alice || Bob } kA || { r2 || n || Alice || Bob } kB
Cathy → Bob : n || { r1 || ks } kA || { r2 || ks } kB
Bob → Alice : n || { r1 || ks } kA
Replay Attack
Eve acquires an old ks and the message in the third step:
n || { r1 || ks } kA || { r2 || ks } kB
Kerberos
Authentication system
Based on Needham-Schroeder with Denning-Sacco modification Central server plays role of trusted third party (Cathy)
Ticket
Issuer vouches for identity of requester of service
Authenticator
Identifies sender
Idea
User u authenticates to Kerberos server
Obtains ticket Tu,TGS for ticket granting service (TGS)
Details follow
Ticket
Credential saying the issuer has identified the ticket requester. Example ticket issued to user u for service s:
Tu,s = s || { u || u's address || valid time || ku,s } ks
where ku,s is the session key for the user and the service, valid time is the interval for which the ticket is valid, and u's address may be an IP address or something else
Note: there are more fields, but they are not relevant here
Authenticator
Credential containing identity of sender of ticket
Used to confirm sender is entity to which ticket was issued
Protocol
user → Cathy : user || TGS
Cathy → user : { ku,TGS } ku || Tu,TGS
user → TGS : service || Au,TGS || Tu,TGS
TGS → user : user || { ku,s } ku,TGS || Tu,s
user → service : Au,s || Tu,s
service → user : { t + 1 } ku,s
Analysis
First two steps get user ticket to use TGS
User u can obtain session key only if u knows key shared with Cathy
Next four steps show how u gets and uses ticket for service s
Service s validates request by checking sender (using Au,s) is same as entity ticket issued to Step 6 optional; used when u requests confirmation
Problems
Relies on synchronized clocks
If clocks are not synchronized, and old tickets and authenticators are not cached, replay is possible
Simple Protocol
ks is the desired session key
Alice → Bob : { ks } eB
Signed version:
Alice → Bob : { { ks } dA } eB
Notes
Can include a message enciphered with ks. Assumes Bob has Alice's public key, and vice versa
If not, each must get it from a public server
If keys are not bound to the identity of the owner, attacker Eve can launch a man-in-the-middle attack (next slide; Cathy is a public server providing public keys)
Solution to this (binding identity to keys) discussed later as public key infrastructure (PKI)
Man-in-the-Middle Attack
Alice → Cathy : send Bob's public key (Eve intercepts)
Eve → Cathy : send Bob's public key
Cathy → Eve : eB
Eve → Alice : eE
Alice → Bob : { ks } eE (Eve intercepts)
Eve → Bob : { ks } eB
Certificates
Create token (message) containing
Identity of principal (here, Alice) Corresponding public key Timestamp (when issued) Other information (perhaps identity of signer)
Use
Bob gets Alice's certificate
If he knows Cathy's public key, he can decipher the certificate
When was certificate issued? Is the principal Alice?
Validate
Obtain issuers public key Decipher enciphered hash Recompute hash from certificate and compare
X.509 Chains
Some certificate components in X.509v3:
Version; serial number; signature algorithm identifier (hash algorithm); issuer's name (uniquely identifies issuer); interval of validity; subject's name (uniquely identifies subject); subject's public key; signature (enciphered hash)
Decipher signature
Gives hash of certificate
Issuers
Certification Authority (CA): entity that issues certificates
Multiple issuers pose a validation problem: Alice's CA is Cathy, Bob's CA is Don; how can Alice validate Bob's certificate? Have Cathy and Don cross-certify
Each issues a certificate for the other
PGP Chains
OpenPGP certificates structured into packets
One public key packet Zero or more signature packets
Signing
Single certificate may have multiple signatures Notion of trust embedded in each signature
Range from untrusted to ultimate trust Signer defines meaning of trust level (no standards!)
Validating Certificates
Alice needs to validate Bob's OpenPGP cert
Does not know Fred, Giselle, or Ellen
[Figure: signature graph among Jack, Henry, Irene, Ellen, Giselle, Fred, and Bob; arrows show signatures, self-signatures not shown]
Storing Keys
Multi-user or networked systems: attackers may defeat access control mechanisms
Encipher file containing key
Attacker can monitor keystrokes to decipher files Key will be resident in memory that attacker may be able to read
Key Revocation
Certificates invalidated before expiration
Usually due to compromised key May be due to change in circumstance (e.g., someone leaving company)
Problems
Entity revoking certificate authorized to do so Revocation information circulates to everyone fast enough
Network delays, infrastructure problems may delay information
CRLs
Certificate revocation list lists certificates that are revoked X.509: only certificate issuer can revoke certificate
Added to CRL
PGP: signers can revoke signatures; owners can revoke certificates, or allow others to do so
Revocation message placed in PGP packet and signed Flag marks it as revocation message
Digital Signature
Construct that authenticates the origin and contents of a message in a manner provable to a disinterested third party (a judge). The sender cannot deny having sent the message (the service is nonrepudiation)
Limited to technical proofs
Inability to deny that one's cryptographic key was used to sign
Common Error
Classical: Alice, Bob share key k
Alice sends m || { m } k to Bob
To resolve a dispute, the judge gets { m } kAlice, { m } kBob, and has Cathy decipher them; if the messages matched, the contract was signed
Key points:
Never sign random documents; when signing, always sign the hash, never the document
Mathematical properties can be turned against signer
Attack #1
Example: Alice, Bob communicating
nA = 95, eA = 59, dA = 11 nB = 77, eB = 53, dB = 17
26 contracts, numbered 00 to 25
Alice has Bob sign 05 and 17: 05^17 mod 77 = 03 and 17^17 mod 77 = 19. Alice computes 05 × 17 mod 77 = 08; the corresponding signature is 03 × 19 mod 77 = 57; she claims Bob signed 08. The judge computes c^eB mod nB = 57^53 mod 77 = 08
Signature validated; Bob is toast
Key Points
Key management critical to effective use of cryptosystems
Different levels of keys (session vs. interchange)
Overview
Problems
What can go wrong if you naively use ciphers
Cipher types
Stream or block ciphers?
Networks
Link vs end-to-end use
Examples
Privacy-Enhanced Electronic Mail (PEM) Security at the Network Layer (IPsec)
Problems
Using cipher requires knowledge of environment, and threats in the environment, in which cipher will be used
Is the set of possible messages small? Do the messages exhibit regularities that remain after encipherment? Can an active wiretapper rearrange or change parts of the message?
Example
Cathy knows Alice will send Bob one of two messages: enciphered BUY, or enciphered SELL Using public key eBob, Cathy precomputes m1 = { BUY } eBob, m2 = { SELL } eBob Cathy sees Alice send Bob m2 Cathy knows Alice sent SELL
Misordered Blocks
Alice sends Bob message
nBob = 77, eBob = 17, dBob = 53 Message is LIVE (11 08 21 04) Enciphered message is 44 57 21 16
Notes
Digitally signing each block won't stop this attack. Two approaches:
Cryptographically hash the entire message and sign it
Place sequence numbers in each block of the message, so the recipient can tell the intended order; then sign each block
Statistical Regularities
If plaintext repeats, ciphertext may too Example using DES:
input (in hex):
3231 3433 3635 3837 3231 3433 3635 3837
Block cipher
Ek(m) = Ek(b1) Ek(b2) …
Stream cipher
k = k1 k2 …; Ek(m) = Ek1(b1) Ek2(b2) …
If k1 k2 … repeats itself, the cipher is periodic, and the length of its period is one cycle of k1 k2 …
Examples
Vigenre cipher
bi = 1 character, k = k1k2 where ki = 1 character Each bi enciphered using ki mod length(k) Stream cipher
DES
bi = 64 bits, k = 56 bits Each bi enciphered separately using k Block cipher
Stream Ciphers
Often (try to) implement a one-time pad by XORing each bit of the key with one bit of the message
Example:
m = 00101
k = 10010
c = 10111
Operation
n-bit register r = r0 … rn−1 with tap sequence t = t0 … tn−1. Each step outputs key bit ki = rn−1, computes the feedback bit r0t0 ⊕ … ⊕ rn−1tn−1, and shifts right (ri = ri−1 for 0 < i ≤ n−1); the feedback bit becomes the new r0
Example
4-stage LFSR; t = 1001
r      ki   new bit computation           new r
0010   0    0·1 ⊕ 0·0 ⊕ 1·0 ⊕ 0·1 = 0    0001
0001   1    0·1 ⊕ 0·0 ⊕ 0·0 ⊕ 1·1 = 1    1000
1000   0    1·1 ⊕ 0·0 ⊕ 0·0 ⊕ 0·1 = 1    1100
1100   0    1·1 ⊕ 1·0 ⊕ 0·0 ⊕ 0·1 = 1    1110
1110   0    1·1 ⊕ 1·0 ⊕ 1·0 ⊕ 0·1 = 1    1111
1111   1    1·1 ⊕ 1·0 ⊕ 1·0 ⊕ 1·1 = 0    0111
0111   1    0·1 ⊕ 1·0 ⊕ 1·0 ⊕ 1·1 = 1    1011
Key sequence has period of 15 (010001111010110)
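The table can be reproduced with a short Python sketch (not from the slides):

```python
# LFSR keystream: output r_{n-1}, compute the feedback bit as the XOR of the
# tapped register bits, then shift right with the feedback bit as the new r0.
def lfsr_keystream(r, t, nbits):
    r, t = list(r), list(t)
    out = []
    for _ in range(nbits):
        out.append(r[-1])                               # key bit k_i = r_{n-1}
        new = sum(ri & ti for ri, ti in zip(r, t)) % 2  # r0*t0 ^ ... ^ r3*t3
        r = [new] + r[:-1]                              # shift right
    return out

# The 4-stage LFSR above (r = 0010, t = 1001) yields the period-15 sequence.
```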
NLFSR
n-stage Non-Linear Feedback Shift Register: consists of an n-bit register r = r0 … rn−1. Use:
Use rn−1 as the key bit
Compute x = f(r0, …, rn−1); f is any function
Shift r one bit to the right, dropping rn−1; x becomes r0
Note: same operation as the LFSR, but with a more general bit replacement function
Example
4-stage NLFSR; f(r0, r1, r2, r3) = (r0 & r2) | r3
r      ki   new r
1100   0    0110
0110   0    0011
0011   1    1001
1001   1    1100
1100   0    0110
0110   0    0011
0011   1    1001
Eliminating Linearity
NLFSRs not common
No body of theory about how to design them to have long period
Variant: use a counter that is incremented for each encipherment rather than a register
Take rightmost bit of Ek(i), where i is number of encipherment
Problem:
Statistical regularities in plaintext show in key Once you get any part of the message, you can decipher more
Another Example
Take the key from the ciphertext (autokey). Example: Vigenère, key drawn from the ciphertext:
key    XQXBCQOVVNGNRTT
plain  THEBOYHASTHEBALL
cipher QXBCQOVVNGNRTTM
Problem:
Attacker gets key along with ciphertext, so deciphering is trivial
Variant
Cipher feedback mode: 1 bit of ciphertext is fed into an n-bit register
Self-healing property: if a ciphertext bit is received incorrectly, it and the next n bits decipher incorrectly; but after that, the ciphertext bits decipher correctly. Need to know k and E to decipher the ciphertext
[Figure: CFB mode — the register r is enciphered with key k; a bit of Ek(r) is XORed with message bit mi to produce ciphertext bit ci, which is fed back into r]
Block Ciphers
Encipher, decipher multiple bits at once Each block enciphered independently Problem: identical plaintext blocks produce identical ciphertext blocks
Example: two database records
ME M BER: H OLLY INCO M E $100,000 ME M BER: HEIDI INCOM E $100,000
Encipherment:
ABCQZRME GHQMRSIB CTXUVYSS RMGRPFQN
ABCQZRME ORMPABRZ CTXUVYSS RMGRPFQN
Solutions
Insert information about block's position into the plaintext block, then encipher
Cipher block chaining:
Exclusive-or current plaintext block with previous ciphertext block:
c0 = Ek(m0 ⊕ I), where I is the initialization vector
ci = Ek(mi ⊕ ci−1) for i > 0
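The chaining can be shown with a toy, invertible "block cipher" (XOR with a key-derived pad; illustrative only, not secure). The point is that chaining makes identical plaintext blocks produce different ciphertext blocks:

```python
# CBC sketch: c_i = E_k(m_i XOR c_{i-1}), with a toy stand-in for E_k.

import hashlib

BLOCK = 8

def toy_E(key, block):                       # invertible toy cipher, NOT secure
    pad = hashlib.sha256(key).digest()[:BLOCK]
    return bytes(b ^ p for b, p in zip(block, pad))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key, iv, plaintext):         # len(plaintext) % BLOCK == 0
    prev, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        c = toy_E(key, xor(plaintext[i:i+BLOCK], prev))  # c_i = E_k(m_i ^ c_{i-1})
        out.append(c)
        prev = c
    return b"".join(out)

msg = b"SAMEBLOKSAMEBLOK"                    # two identical 8-byte blocks
ct = cbc_encrypt(b"key", b"\x00" * BLOCK, msg)
print(ct[:BLOCK] != ct[BLOCK:])              # True: chaining hides the repetition
```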
Multiple Encryption
Double encipherment: c = Ek′(Ek(m))
Effective key length is 2n, if k, k′ are each of length n
Problem: breaking it requires 2^(n+1) encipherments, not 2^(2n)
Triple encipherment, EDE mode: c = Ek(Dk′(Ek(m)))
Problem: chosen-plaintext attack takes O(2^n) time using 2^n ciphertexts
Best known attack requires O(2^(2n)) time, O(2^n) memory
Encryption
Link encryption
Each host enciphers message so host at next hop can read it Message can be read at intermediate hosts
End-to-end encryption
Host enciphers message so host at other end of communication can read it Message cannot be read at intermediate hosts
Examples
TELNET protocol
Messages between client, server enciphered, and encipherment, decipherment occur only at these hosts End-to-end protocol
Link protocol
Cryptographic Considerations
Link encryption
Each host shares key with neighbor Can be set on per-host or per-host-pair basis
Per-host: windsor, stripe, seaview each have their own key
Per-host-pair: one key for (windsor, stripe); one for (stripe, seaview); one for (windsor, seaview)
End-to-end
Each host shares key with destination Can be set on per-host or per-host-pair basis Message cannot be read at intermediate nodes
Traffic Analysis
Link encryption
Can protect headers of packets Possible to hide source and destination
Note: may be able to deduce this from traffic flows
End-to-end encryption
Cannot hide packet headers
Intermediate nodes need to route packet
Example Protocols
Privacy-Enhanced Electronic Mail (PEM)
Applications layer protocol
IP Security (IPSec)
Network layer protocol
Goals of PEM
1. Confidentiality: only sender and recipient(s) can read the message
2. Origin authentication: identify the sender precisely
3. Data integrity: any changes in the message are easy to detect
4. Non-repudiation of origin: whenever possible
[Diagram: message relayed through a sequence of mail transfer agents (MTAs)]
Design Principles
Do not change related existing protocols
Cannot alter SMTP
Non-repudiation: if kA is Alice's private key, this establishes that Alice's private key was used to sign the message
Practical Considerations
Limits of SMTP
Only ASCII characters, limited length lines
1. Map message to a canonical format
2. Compute and encipher MIC over the canonical format; encipher message if needed
3. Map each 6 bits of result into a character; insert newline after every 64th character
4. Add delimiters around this ASCII message
Problem
Recipient without PEM-compliant software cannot read it
If only integrity and authentication used, should be able to read it
IPsec
Network layer security
Provides confidentiality, integrity, authentication of endpoints, replay detection
Transport Mode
Encapsulate IP packet data area
Use IP to send IPsec-wrapped data packet
Note: IP header not protected
Tunnel Mode
Encapsulate IP packet (IP header and IP data)
Use IP to send IPsec-wrapped packet
Note: IP header protected
IPsec Protocols
Authentication Header (AH)
Message integrity Origin authentication Anti-replay
IPsec Architecture
Security Policy Database (SPD)
Says how to handle messages (discard them, add security services, forward message unchanged) SPD associated with network interface SPD determines appropriate entry from packet attributes
Including source, destination, transport protocol
Example
Goals
Discard SMTP packets from host 192.168.2.9 Forward packets from 192.168.19.7 without change
SPD entries
src 192.168.2.9, dest 10.1.2.3 to 10.1.2.103, port 25, discard
src 192.168.19.7, dest 10.1.2.3 to 10.1.2.103, port 25, bypass
dest 10.1.2.3 to 10.1.2.103, port 25, apply IPsec
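First-match lookup over entries like these can be sketched as follows (the rule format and function name are illustrative, not from any real IPsec implementation):

```python
# Hypothetical SPD lookup: the first matching entry decides how a
# packet is handled; no match falls through to a fail-safe discard.

from ipaddress import ip_address, ip_network

SPD = [  # (src network, dest address range, port, action) -- checked in order
    ("192.168.2.9/32",  ("10.1.2.3", "10.1.2.103"), 25, "discard"),
    ("192.168.19.7/32", ("10.1.2.3", "10.1.2.103"), 25, "bypass"),
    ("0.0.0.0/0",       ("10.1.2.3", "10.1.2.103"), 25, "apply IPsec"),
]

def spd_action(src, dest, port):
    for net, (lo, hi), p, action in SPD:
        if (ip_address(src) in ip_network(net)
                and ip_address(lo) <= ip_address(dest) <= ip_address(hi)
                and port == p):
            return action
    return "discard"                     # fail-safe default

print(spd_action("192.168.2.9", "10.1.2.50", 25))   # discard
print(spd_action("192.168.19.7", "10.1.2.50", 25))  # bypass
print(spd_action("10.2.3.4", "10.1.2.50", 25))      # apply IPsec
```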
IPsec Architecture
Security Association (SA)
Association between peers for security services
Identified uniquely by dest address, security protocol (AH or ESP), unique 32-bit number (security parameter index, or SPI)
Unidirectional
Can apply different services in either direction
SA Database (SAD)
Entry describes SA; some fields for all packets:
AH algorithm identifier, keys
When SA uses AH
SA lifetime (time for deletion or max byte count) IPsec mode (tunnel, transport, either)
SAD Fields
Antireplay (inbound only)
When SA uses antireplay feature
Aging variables
Used to detect time-outs
IPsec Architecture
Packet arrives Look in SPD
Find appropriate entry Get dest address, security protocol, SPI
Inner tunnel: a SA between the hosts of the two groups Outer tunnel: the SA between the two gateways
Example: Systems
gwA.A.org
hostA.A.org
hostB.B.org
gwB.B.org
Example: Packets
IP header (gwA) | AH header (gwA) | ESP header (gwA) | IP header (hostA) | AH header (hostA) | ESP header (hostA) | IP header (hostA) | transport-layer headers, data
Packet generated on hostA Encapsulated by hostAs IPsec mechanisms Again encapsulated by gwAs IPsec mechanisms
Above diagram shows headers, but as you go left, everything to the right would be enciphered and authenticated, etc.
AH Protocol
Parameters in AH header
Length of header SPI of SA applying protocol Sequence number (anti-replay) Integrity value check
Two steps
Check that replay is not occurring Check authentication data
Sender
Check sequence number will not cycle Increment sequence number Compute IVC of packet
Includes IP header, AH header, packet data
IP header: include all fields that will not change in transit; assume all others are 0 AH header: authentication data field set to 0 for this Packet data includes encapsulated data, higher level protocol data
Recipient
Assume AH header found Get SPI, destination address Find associated SA in SAD
If no associated SA, discard packet
AH Miscellany
All implementations must support: HMAC_MD5 HMAC_SHA-1 May support other algorithms
ESP Protocol
Parameters in ESP header
SPI of SA applying protocol Sequence number (anti-replay) Generic payload data field Padding and length of padding
Contents depends on ESP services enabled; may be an initialization vector for a chaining cipher, for example Used also to pad packet to length required by cipher
Sender
Add ESP header
Includes whatever padding needed
Encipher result
Do not encipher SPI, sequence numbers
If authentication desired, compute as for AH protocol except over ESP header, payload and not encapsulating IP header
Recipient
Assume ESP header found Get SPI, destination address Find associated SA in SAD
If no associated SA, discard packet
If authentication used
Do IVC, antireplay verification as for AH
Only ESP, payload are considered; not IP header Note authentication data inserted after encipherment, so no deciphering need be done
Recipient
If confidentiality used
Decipher enciphered portion of ESP header
Process padding
Decipher payload
If SA is transport mode, IP header and payload treated as original IP packet
If SA is tunnel mode, payload is an encapsulated IP packet and so is treated as original IP packet
ESP Miscellany
Must use at least one of confidentiality, authentication services Synchronization material must be in payload
Packets may not arrive in order, so if not, packets following a missing packet may not be decipherable
If endpoint is host, IPsec sufficient; if endpoint is user, application layer mechanism such as PEM needed
Key Points
Key management critical to effective use of cryptosystems
Different levels of keys (session vs. interchange)
Overview
Basics Passwords
Storage Selection Breaking them
Basics
Authentication: binding of identity to subject
Identity is that of external entity (my identity, Matt, etc.) Subject is computer entity (process, etc.)
Establishing Identity
One or more of the following
What entity knows (e.g., password)
What entity has (e.g., badge, smart card)
What entity is (e.g., fingerprints, retinal characteristics)
Where entity is (e.g., in front of a particular terminal)
Authentication System
(A, C, F, L, S)
A: information that proves identity
C: information stored on computer and used to validate authentication information
F: complementation functions; f: A → C
L: authentication functions that prove identity
S: selection functions enabling an entity to create or alter information in A or C
Example
Password system, with passwords stored on line in clear text
A: set of strings making up passwords
C = A
F: singleton set of the identity function, { I }
L: single equality test function, { eq }
S: functions to set or change password
Passwords
Sequence of characters
Examples: 10 digits, a string of letters, etc. Generated randomly, by user, by computer with user input
Sequence of words
Examples: pass-phrases
Algorithms
Examples: challenge-response, one-time passwords
Storage
Store as cleartext
If password file compromised, all passwords revealed
Encipher file
Need to have decipherment, encipherment keys in memory Reduces to previous problem
Example
UNIX system standard hash function
Hashes password into 11 char string using one of 4096 hash functions
As authentication system:
A = { strings of 8 chars or less } C = { 2 char hash id || 11 char hash } F = { 4096 versions of modified DES } L = { login, su, } S = { passwd, nispasswd, passwd+, }
Anatomy of Attacking
Goal: find a ∈ A such that:
For some f ∈ F, f(a) = c ∈ C
c is associated with entity
Preventing Attacks
How to prevent this:
Hide one of a, f, or c
Prevents obvious attack from above Example: UNIX/Linux shadow password files
Hides cs
Dictionary Attacks
Trial-and-error from a list of potential passwords
Off-line: know f and the cs, and repeatedly try different guesses g ∈ A until the list is done or passwords are guessed
Examples: crack, john-the-ripper
On-line: have access to functions in L and try guesses g until some l(g) succeeds
Examples: trying to log in by guessing a password
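A minimal off-line attack looks like this (hypothetical data; real tools such as crack or john-the-ripper add wordlist mangling and fast hash code):

```python
# Off-line dictionary attack sketch: f and a stolen complement c are
# known; try guesses g until f(g) = c. SHA-256 stands in for f.

import hashlib

def f(password):                       # the known complementation function
    return hashlib.sha256(password.encode()).hexdigest()

stolen = f("sunshine")                 # complement c from a stolen file
wordlist = ["password", "letmein", "sunshine", "dragon"]

found = next((g for g in wordlist if f(g) == stolen), None)
print(found)  # sunshine
```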
Using Time
Anderson's formula:
P: probability of guessing a password in specified period of time
G: number of guesses tested in 1 time unit
T: number of time units
N: number of possible passwords (|A|)
Then P ≥ TG/N
Example
Goal
Passwords drawn from a 96-char alphabet
Can test 10^4 guesses per second
Probability of a success to be 0.5 over a 365-day period
What is minimum password length?
Solution
N ≥ TG/P = (365 × 24 × 60 × 60) × 10^4 / 0.5 ≈ 6.31 × 10^11
Choose s such that Σ_{j=0}^{s} 96^j ≥ N
So s ≥ 6, meaning passwords must be at least 6 chars long
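The calculation above, done directly:

```python
# Anderson's formula: find smallest s with sum_{j=0}^{s} 96**j >= N,
# where N >= TG/P.

T = 365 * 24 * 60 * 60      # time units (seconds in a year)
G = 10**4                   # guesses per second
P = 0.5                     # desired bound on success probability
N = T * G / P               # need at least this many possible passwords
print(f"{N:.2e}")           # 6.31e+11

s = 0
while sum(96**j for j in range(s + 1)) < N:
    s += 1
print(s)                    # 6
```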
Pronounceable Passwords
Generate phonemes randomly
Phoneme is a unit of sound, e.g., cv, vc, cvc, vcv
Examples: helgoret, juttelon are pronounceable; przbqxdfl, zxrptglfn are not
User Selection
Problem: people pick easy to guess passwords
Based on account names, user names, computer names, place names
Dictionary words (also reversed, odd capitalizations, control characters, "elite-speak", conjugations or declensions, swear words, Torah/Bible/Koran words)
Too short, digits only, letters only
License plates, acronyms, social security numbers
Personal characteristics or foibles (pet names, nicknames, job characteristics, etc.)
OoHeO/FSK
Second letter of each word of length 4 or more in the third line of the third verse of The Star-Spangled Banner, followed by /, followed by the author's initials
Example: OPUS
Goal: check passwords against large dictionaries quickly
Run each word of dictionary through k different hash functions h1, …, hk producing values less than n
Set bits h1, …, hk in the OPUS bit vector
To check new proposed word, generate its bit vector and see if all corresponding bits are set
If so, word is in one of the dictionaries to some degree of probability If not, it is not in the dictionaries
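The OPUS idea is a Bloom filter over dictionary words; this sketch uses illustrative parameters (vector size, number of hashes, and the hash construction are assumptions, not OPUS's actual choices):

```python
# Bloom-filter sketch: K hash functions set bits in an N_BITS-bit
# vector; a candidate whose bits are all set is probably in the
# dictionary (false positives possible, false negatives not).

import hashlib

N_BITS, K = 1 << 16, 4

def bit_positions(word):
    # derive K positions from independently-keyed hashes of the word
    return [int(hashlib.sha256(f"{i}:{word}".encode()).hexdigest(), 16) % N_BITS
            for i in range(K)]

bits = bytearray(N_BITS // 8)

def add(word):
    for p in bit_positions(word):
        bits[p // 8] |= 1 << (p % 8)

def probably_in(word):
    return all(bits[p // 8] & (1 << (p % 8)) for p in bit_positions(word))

for w in ["password", "dragon", "sunshine"]:   # the "dictionary"
    add(w)

print(probably_in("password"))   # True -- reject this proposed password
print(probably_in("q7#Vx!92p"))  # False with high probability
```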
Example: passwd+
Provides a little language to describe proactive checking
test length($p) < 6
If password under 6 characters, reject it
Salting
Goal: slow dictionary attacks Method: perturb hash function so that:
Parameter controls which hash function is used
Parameter differs for each password
So given n password hashes, and therefore n salts, each guess must be hashed n times
Examples
Vanilla UNIX method
Use DES to encipher 0 message with password as key; iterate 25 times Perturb E table in DES in one of 4096 ways
12-bit salt flips entries 1–11 with entries 25–36
Alternate methods
Use salt as first part of input to hash function
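That alternate method can be sketched directly: a per-password random salt is prepended to the hash input, and (salt, hash) is stored together.

```python
# Salted hashing sketch: SHA-256 over salt || password. Because each
# stored hash has its own salt, a dictionary guess must be re-hashed
# once per salt, slowing a bulk dictionary attack.

import hashlib, os

def store(password):
    salt = os.urandom(12)
    return salt, hashlib.sha256(salt + password.encode()).digest()

def check(password, salt, digest):
    return hashlib.sha256(salt + password.encode()).digest() == digest

salt, digest = store("correct horse")
print(check("correct horse", salt, digest))   # True
print(check("wrong guess", salt, digest))     # False
```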
Guessing Through L
Cannot prevent these
Otherwise, legitimate users cannot log in
Jailing
Allow in, but restrict activities
Password Aging
Force users to change passwords after some time has expired
How do you force users not to re-use passwords?
Record previous passwords Block changes for a period of time
Challenge-Response
User, system share a secret function f (in practice, f is a known function with unknown parameters, such as a cryptographic key)
user → system: request to authenticate
system → user: random message r (the challenge)
user → system: f(r) (the response)
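One common instantiation of the exchange above uses f = HMAC under a shared key (the slide's f is any shared secret function; HMAC-SHA256 here is an assumption for the sketch):

```python
# Challenge-response sketch: system sends random r, user returns f(r),
# system recomputes f(r) and compares.

import hmac, hashlib, os

shared_key = b"shared secret"                  # known to user and system

def f(r):                                      # the shared function f
    return hmac.new(shared_key, r, hashlib.sha256).digest()

r = os.urandom(16)                             # system's challenge
response = f(r)                                # user's response
print(hmac.compare_digest(response, f(r)))     # system verifies: True
```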
Pass Algorithms
Challenge-response with the function f itself a secret
Example:
Challenge is a random string of characters such as abcdefg, ageksido Response is some function of that string such as bdf, gkip
One-Time Passwords
Password that can be used exactly once
After use, it is immediately invalidated
Challenge-response mechanism
Challenge is number of authentications; response is password for that particular number
Problems
Synchronization of user, system Generation of good random passwords Password distribution problem
S/Key
One-time password scheme based on an idea of Lamport
h: one-way hash function (MD5 or SHA-1, for example)
User chooses initial seed k
System calculates:
h(k) = k1, h(k1) = k2, …, h(kn−1) = kn
Passwords, in order of use, are p1 = kn−1, p2 = kn−2, …, pn−1 = k1, pn = k
S/Key Protocol
System stores maximum number of authentications n, number of next authentication i, and last correctly supplied password pi−1

user → system: { name }
system → user: { i }
user → system: { pi }
System computes h(pi) = h(kn−i) = kn−i+1 = pi−1. If this matches what is stored, system replaces pi−1 with pi and increments i.
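The chain and the verification step can be sketched as follows (SHA-256 standing in for MD5/SHA-1; seed and n are illustrative):

```python
# S/Key sketch: build the hash chain k1 = h(k), ..., kn = h(k_{n-1});
# passwords are used in reverse order (p_i = k_{n-i}), so the system
# checks h(p_i) against the previously stored value.

import hashlib

def h(x):
    return hashlib.sha256(x).digest()

n, k = 5, b"user-chosen seed"
chain = [k]
for _ in range(n):
    chain.append(h(chain[-1]))         # chain[j] == k_j, chain[0] == k

stored = chain[n]                      # system initially stores kn
for i in range(1, n + 1):
    p_i = chain[n - i]                 # user supplies p_i = k_{n-i}
    assert h(p_i) == stored            # system checks h(p_i) == p_{i-1}
    stored = p_i                       # then replaces stored value with p_i

print("all", n, "one-time passwords verified")
```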
Hardware Support
Token-based
Used to compute response to challenge
May encipher or hash challenge May require PIN from user
Temporally-based
Every minute (or so) different number shown
Computer knows what number to expect when
EKE Protocol
Alice → Bob: Alice || Es(p)
Bob → Alice: Es(Ep(k))
Now Alice and Bob share a randomly generated secret session key k
Biometrics
Automated measurement of biological, behavioral features that identify a person
Fingerprints: optical or electrical techniques
Maps fingerprint into a graph, then compares with database Measurements imprecise, so approximate matching algorithms used
Other Characteristics
Can use several other characteristics
Eyes: patterns in irises unique
Measure patterns, determine if differences are random; or correlate images using statistical tests
Cautions
These can be fooled!
Assumes the biometric device is accurate in the environment in which it is used!
Assumes transmission of data to the validator is tamperproof and correct
Location
If you know where user is, validate identity by seeing if person is where the user is
Requires special-purpose hardware to locate user
GPS (global positioning system) device gives location signature of entity Host uses LSS (location signature sensor) to get signature for entity
Multiple Methods
Example: where you are also requires entity to have LSS and GPS, so also what you have Can assign different methods to different tasks
As users perform more and more sensitive tasks, must authenticate in more and more ways (presumably, more stringently) File describes authentication required
Also includes controls on access (time of day, etc.), resources, and requests to change passwords
PAM
Idea: when program needs to authenticate, it checks central repository for methods to use Library call: pam_authenticate
Accesses file with name of program in /etc/pam.d
For ftp:
1. If user is anonymous, return okay; if not, set PAM_AUTHTOK to password, PAM_RUSER to name, and fail
2. Now check that password in PAM_AUTHTOK belongs to that of user in PAM_RUSER; if not, fail
3. Now see if user in PAM_RUSER is named in /etc/ftpusers; if so, fail; if error or not found, succeed
Key Points
Authentication is not cryptography
You have to consider system components
Overview
Simplicity
Less to go wrong Fewer possible inconsistencies Easy to understand
Restriction
Minimize access Inhibit communication
Least Privilege
A subject should be given only those privileges necessary to complete its task
Function, not identity, controls Rights added as needed, discarded after use Minimal protection domain
Fail-Safe Defaults
Default action is to deny access If action fails, system as secure as when action began
Economy of Mechanism
Keep it as simple as possible
KISS Principle
Complete Mediation
Check every access Usually done once, on first action
UNIX: access checked on open, not checked thereafter
Open Design
Security should not depend on secrecy of design or implementation
Popularly misunderstood to mean that source code should be public Security through obscurity Does not apply to information such as passwords or cryptographic keys
Separation of Privilege
Require multiple conditions to grant privilege
Separation of duty Defense in depth
Isolation
Virtual machines Sandboxes
Psychological Acceptability
Security mechanisms should not add to difficulty of accessing resource
Hide complexity introduced by security mechanisms Ease of installation, configuration, use Human factors critical here
Key Points
Principles of secure design underlie all security-related mechanisms Require:
Good understanding of goal of mechanism and environment in which it is to be used Careful analysis and design Careful implementation
Overview
Files and objects Users, groups, and roles Certificates and names Hosts and domains State and cookies Anonymity
Identity
Principal: a unique entity Identity: specifies a principal Authentication: binding of a principal to a representation of identity internal to the system
All access, resource allocation decisions assume binding is correct
More Names
Different names for one context
Human: aliases, relative vs. absolute path names Kernel: deleting a file identified by name can mean two things:
Delete the object that the name identifies Delete the name given, and do not delete actual object until all names have been deleted
Users
Exact representation tied to system Example: UNIX systems
Login name: used to log in to system
Logging usually uses this name
Multiple Identities
UNIX systems again
Real UID: user identity at login, but changeable Effective UID: user identity used for access control
Setuid changes effective UID
Groups
Used to share access privileges First model: alias for set of principals
Processes assigned to groups Processes stay in those groups for their lifetime
Roles
Group with membership tied to function
Rights given are consistent with rights needed to perform function
Disambiguating Identity
Include ancillary information in names
Enough to identify principal uniquely X.509v3 Distinguished Names do this
Will Certs-from-Us issue this Matt Bishop a certificate once he is suitably authenticated?
CAs issuance policy says to which principals the CA will issue certificates
Example
University of Valmont issues certificates to students, staff
Students must present valid reg cards (considered low assurance) Staff must present proof of employment and fingerprints, which are compared to those taken when staff member hired (considered high assurance)
student
staff
Certificate Differences
Student, staff certificates signed using different private keys (for different CAs)
Students': signed by key corresponding to low-assurance certificate signed by first PCA
Staff's: signed by key corresponding to high-assurance certificate signed by second PCA
Types of Certificates
Organizational certificate
Issued based on principal's affiliation with organization
Example Distinguished Name: /O=University of Valmont/OU=Computer Science Department/CN=Marsha Merteuille/
Residential certificate
Issued based on where principal lives No affiliation with organization implied Example Distinguished Name /C=US/SP=Louisiana/L=Valmont/PA=1 Express Way/CN=Marsha Merteuille/
Distinguished Name: /O=University of Valmont/OU=Office of the Big Bucks/RN=Comptroller, where RN is the role name; note the individual using the certificate is not named, so there is no CN
Meaning of Identity
Authentication validates identity
CA specifies type of authentication If incorrect, CA may misidentify entity unintentionally
Persona Certificate
Certificate with meaningless Distinguished Name
If DN is /C=US/O=Microsoft Corp./CN=Bill Gates/ the real subject may not (or may) be Mr. Gates Issued by CAs with persona policies under a PCA with policy that supports this
Example
Government requires all citizens with gene X to register
Anecdotal evidence suggests people with this gene become criminals with probability 0.5
Law to be made quietly, as no scientific evidence supports this and the government wants no civil-rights fuss
Example
Employee gets persona certificate, sends copy of plan to media
Media knows message unchanged during transit, but not who sent it Government denies plan, changes it
Trust
Goal of certificate: bind correct identity to DN Question: what is degree of assurance? X.509v3, certificate hierarchy
Depends on policy of CA issuing certificate
Depends on how well CA follows that policy
Depends on how easily the required authentication can be spoofed
PGP Certificates
Level of trust in signature field Four levels
Generic (no trust assertions made) Persona (no verification) Casual (some verification) Positive (substantial verification)
Host Identity
Bound up to networking
Not connected: pick any name Connected: one or more names depending on interfaces, network structure, context
Example
Layered network
MAC layer
Ethernet address: 00:05:02:6B:A8:21 AppleTalk address: network 51, node 235
Network layer
IP address: 192.168.35.89
Transport layer
Host name: cherry.orchard.chekhov.ru
Danger!
Attacker spoofs identity of another host
Protocols at, above the identity being spoofed will fail They rely on spoofed, and hence faulty, information
Weak authentication
Not cryptographically based Various techniques used, such as reverse domain name lookup
Dynamic Identifiers
Assigned to principals for a limited time
Server maintains pool of identifiers Client contacts server using local identifier
Only client, server need to know this identifier
Example: DHCP
DHCP server has pool of IP addresses Laptop sends DHCP server its MAC address, requests IP address
MAC address is local identifier IP address is global identifier
Laptop accepts IP address, uses that to communicate with hosts other than server
Example: Gateways
Laptop wants to access host on another network
Laptop's address is 10.1.3.241
Weak Authentication
Static: host/name binding fixed over time Dynamic: host/name binding varies over time
Must update reverse records in DNS
Otherwise, the reverse lookup technique fails
Cannot rely on binding remaining fixed unless you know the period of time over which the binding persists
Attacks
Change records on server Add extra record to response, giving incorrect name/IP address association
Called cache poisoning
Cookies
Token containing information about state of transaction on network
Usual use: refers to state of interaction between web browser and server
Idea is to minimize storage requirements of servers, and put information on clients
Example
Caroline puts 2 books in shopping cart at books.com
Cookie: name=bought; value=BK=234&BK=8753; domain=.books.com
Example: anon.penet.fi
Offered anonymous email service
Sender sends letter to it, naming another destination Anonymizer strips headers, forwards message
Assigns an ID (say, 1234) to sender, records real sender and ID in database Letter delivered as if from anon1234@anon.penet.fi
Problem
Anonymizer knows who sender, recipient really are Called pseudo-anonymous remailer or pseudonymous remailer
Keeps mappings of anonymous identities and associated identities
If you can get the mappings, you can figure out who sent what
More anon.penet.fi
Material claimed to be copyrighted sent through site Finnish court directed owner to reveal mapping so plaintiffs could determine sender Owner appealed, subsequently shut down site
Cypherpunk Remailer
Remailer that deletes header of incoming message, forwards body to destination Also called Type I Remailer No record kept of association between sender address, remailers user name
Prevents tracing, as happened with anon.penet.fi
Encipher message Add destination header Add header for remailer n Add header for remailer 2
Weaknesses
Attacker monitoring entire network
Observes in, out flows of remailers Goal is to associate incoming, outgoing messages
Attacks
If remailer forwards message before next message arrives, attacker can match them up
Hold messages for some period of time, greater than the message interarrival time Randomize order of sending messages, waiting until at least n messages are ready to be forwarded
Note: attacker can force this by sending n−1 messages into the queue
Attacks
As messages forwarded, headers stripped so message size decreases
Pad message with garbage at each step, instructing next remailer to discard it
Mixmaster Remailer
Cypherpunk remailer that handles only enciphered mail and pads (or fragments) messages to fixed size before sending them
Also called Type II Remailer Designed to hinder attacks on Cypherpunk remailers
Messages uniquely numbered Fragments reassembled only at last remailer for sending to recipient
Anonymity Itself
Some purposes for anonymity
Removes personalities from debate With appropriate choice of pseudonym, shapes course of debate by implication Prevents retaliation
Privacy
Anonymity protects privacy by obstructing amalgamation of individual records Important, because amalgamation poses 3 risks:
Incorrect conclusions from misinterpreted data Harm from erroneous information Not being let alone
Also hinders monitoring to deter or prevent crime Conclusion: anonymity can be used for good or ill
Right to remain anonymous entails responsibility to use that right wisely
Key Points
Identity specifies a principal (unique entity)
Same principal may have many different identities
Function (role) Associated principals (group) Individual (user/host)
Overview
Access control lists Capability lists Locks and keys Rings-based access control Propagated access control lists
Default Permissions
Normal: if not named, no rights over file
Principle of Fail-Safe Defaults
Abbreviations
ACLs can be long so combine users
UNIX: 3 classes of users: owner, group, rest
Permission triplets rwx rwx rwx apply to owner, group, and rest, respectively
Ownership assigned based on creating process
Some systems: if directory has setgid permission, file group owned by group of directory (SunOS, Solaris)
ACLs + Abbreviations
Augment abbreviated lists with ACLs
Intent is to shorten ACL
ACL Modification
Who can do this?
The creator is given an own right that allows this. System R provides a grant modifier (like a copy flag) allowing a right to be transferred, so ownership is not needed
Transferring right to another modifies ACL
Privileged Users
Do ACLs apply to privileged users (root)?
Solaris: abbreviated lists do not, but full-blown ACL entries do Other vendors: varies
The line above adds write permission for heidi when in that group. UNICOS:
holly : gleep : r
user holly in group gleep can read file
holly : * : r
user holly in any group can read file
* : gleep : r
any user in group gleep can read file
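A toy Python sketch of this matching rule (the function name and entry layout are assumptions, not UNICOS code):

```python
# UNICOS-style ACL matching with "*" wildcards (illustrative sketch only).
def acl_rights(acl, user, group):
    """Return the rights granted to (user, group) by the first matching entry."""
    for entry_user, entry_group, rights in acl:
        if entry_user in (user, "*") and entry_group in (group, "*"):
            return rights
    return ""  # unnamed: no rights (principle of fail-safe defaults)

acl = [("holly", "gleep", "r"),   # holly : gleep : r
       ("holly", "*",     "r"),   # holly : *     : r
       ("*",     "gleep", "r")]   # *     : gleep : r
```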
Conflicts
Deny access if any entry would deny access
AIX: if any entry denies access, regardless of rights given so far, access is denied
Revocation Question
How do you remove a subject's rights to a file?
Owner deletes the subject's entries from the ACL, or rights from the subject's entry in the ACL
Windows NT ACLs
Different sets of rights
Basic: read, write, execute, delete, change permission, take ownership
Generic: no access, read (read/execute), change (read/write/execute/delete), full control (all), special access (assign any of the basics)
Directory: no access, read (read/execute files in directory), list, add, add and read, change (create, add, read, execute, write files; delete subdirectories), full control, special access
Accessing Files
User not in file's ACL nor in any group named in file's ACL: deny access
An ACL entry denies the user access: deny access
Otherwise, take the union of rights of all ACL entries giving the user access: the user has this set of rights over the file
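A minimal Python sketch of these three rules (the entry layout and names are my own, not the Windows API):

```python
# acl entries are (principal, "allow" | "deny", set_of_rights); a sketch.
def nt_access_rights(acl, user, groups):
    principals = {user} | set(groups)
    matching = [(kind, rights) for who, kind, rights in acl if who in principals]
    if not matching:
        return set()                     # rule 1: not named anywhere, deny
    if any(kind == "deny" for kind, _ in matching):
        return set()                     # rule 2: any deny entry denies access
    result = set()
    for _, rights in matching:
        result |= rights                 # rule 3: union of granting entries
    return result

acl = [("alice", "allow", {"read"}),
       ("staff", "allow", {"write"}),
       ("bob",   "deny",  set())]
```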
Capability Lists
Rows of the access control matrix:

         file1   file2   file3
Andy     rx      r       rwo
Betty    rwxo    r
Charlie  rx      rwo     w

C-Lists:
Andy: { (file1, rx), (file2, r), (file3, rwo) }
Betty: { (file1, rwxo), (file2, r) }
Charlie: { (file1, rx), (file2, rwo), (file3, w) }
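The rows-to-C-lists correspondence can be shown directly (plain Python data, nothing system-specific):

```python
# The access control matrix above, rows indexed by subject.
matrix = {
    "Andy":    {"file1": "rx",   "file2": "r",   "file3": "rwo"},
    "Betty":   {"file1": "rwxo", "file2": "r"},
    "Charlie": {"file1": "rx",   "file2": "rwo", "file3": "w"},
}

# A capability list is just a subject's row of the matrix.
clists = {subject: sorted(row.items()) for subject, row in matrix.items()}
```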
Semantics
Like a bus ticket
Mere possession indicates rights that subject has over object Object identified by capability (as part of the token)
Name may be a reference, location, or something else
Implementation
Tagged architecture
Bits protect individual words
B5700: tag was 3 bits and indicated how word was to be treated (pointer, type, descriptor, etc.)
Paging/segmentation protections
Like tags, but put capabilities in a read-only segment or page
CAP system did this
Implementation (cont)
Cryptography
Associate with each capability a cryptographic checksum enciphered using a key known to OS When process presents capability, OS validates checksum Example: Amoeba, a distributed capability-based system
Capability is (name, creating_server, rights, check_field) and is given to owner of object check_field is 48-bit random number; also stored in table corresponding to creating_server To validate, system compares check_field of capability with that stored in creating_server table Vulnerable if capability disclosed to another process
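A toy sketch of the Amoeba-style check described above (helper names and table layout are assumptions; `secrets.randbits` stands in for the server's random number source):

```python
import secrets

server_table = {}   # (object name, creating server) -> stored check field

def create_capability(name, server, rights):
    check = secrets.randbits(48)           # 48-bit random check field
    server_table[(name, server)] = check
    return (name, server, rights, check)   # the capability given to the owner

def validate(cap):
    """Compare the capability's check field with the server's stored copy."""
    name, server, _rights, check = cap
    return server_table.get((name, server)) == check
```

As the slide notes, anyone who sees the capability can replay it; possession is the whole proof.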
Amplifying
Allows temporary increase of privileges Needed for modular programming
Module pushes, pops data onto stack
module stack … endmodule.
Examples
HYDRA: templates
Associated with each procedure, function in module Adds rights to process capability while the procedure or function is being executed Rights deleted on exit
Revocation
Scan all C-lists, remove relevant capabilities
Far too expensive!
Use indirection
Each object has entry in a global object table Names in capabilities name the entry, not the object
To revoke, zap the entry in the table Can have multiple entries for a single object to allow control of different sets of rights and/or groups of users for each object
Example: Amoeba: owner requests server change random number in server table
All capabilities for that object now invalid
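The indirection idea can be sketched as (names assumed):

```python
# Revocation by indirection: a capability names a global-table entry,
# not the object itself (illustrative sketch).
object_table = {17: "lough"}          # entry number -> object

def deref(entry):
    return object_table.get(entry)    # None once the entry is zapped

def revoke(entry):
    object_table.pop(entry, None)     # one deletion invalidates every copy

cap = 17                              # every copy of the capability names entry 17
```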
Limits
Problems if you don't control copying of capabilities
[Diagram: Heidi (High) has C-List entry r*lough; Lou (Low) has C-List entry rw*lough; file lough is Low. After the copy, Heidi's C-List also contains rw*lough.]
The capability to write file lough is Low, and Heidi is High, so she reads (copies) the capability; now she can write to a Low file, violating the *-property!
Remedies
Label capability itself
Rights in capability depends on relation between its compartment and that of object to which it refers
In the example, as the capability is copied to High, and High dominates the object's compartment (Low), the write right is removed
Suggested that the second question (given an object, which subjects can access it?), which in the past has been of most interest, is the reason ACL-based systems are more common than capability-based systems
As the first question (given a subject, which objects can it access?) becomes more important (in incident response, for example), this may change
Cryptographic Implementation
Enciphering key is lock; deciphering key is key
Encipher object o; store Ek(o)
Use subject's key k to compute Dk(Ek(o))
Any of n can access o: store o′ = (E1(o), …, En(o))
Requires consent of all n to access o: store o′ = E1(E2(…(En(o))…))
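A toy illustration of the all-n construction, using a throwaway XOR stream cipher in place of a real cipher (E, D, and the keys are stand-ins, not a secure scheme):

```python
import hashlib

def E(key, data):
    """Toy XOR stream cipher: XOR data with a SHA-256-derived keystream."""
    keystream = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(data, keystream))

D = E   # for an XOR stream cipher, deciphering is the same operation

o = b"payroll record"
keys = [b"k1", b"k2", b"k3"]

c = o
for k in reversed(keys):     # store o' = E1(E2(E3(o)))
    c = E(k, c)

p = c
for k in keys:               # all three keys, in order, recover o
    p = D(k, p)
```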
Example: IBM
IBM 370: process gets access key; pages get storage key and fetch bit
Fetch bit clear: read access only
Fetch bit set, access key 0: process can write to (any) page
Fetch bit set, access key matches storage key: process can write to page
Fetch bit set, access key non-zero and does not match storage key: no access allowed
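These four rules transcribe directly into a function (a sketch; the names are mine, the rules come from the slide):

```python
def page_access(fetch_bit, access_key, storage_key):
    """Accesses a process with access_key has to a page, per the rules above."""
    if not fetch_bit:
        return {"read"}                  # fetch bit clear: read only
    if access_key == 0:
        return {"read", "write"}         # access key 0 may write any page
    if access_key == storage_key:
        return {"read", "write"}         # matching keys: write allowed
    return set()                         # non-zero, non-matching: no access
```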
Type Checking
Lock is type, key is operation
Example: UNIX system call write can't work on a directory object but does work on a file. Example: split I&D space of PDP-11. Example: countering buffer overflow attacks on the stack by putting the stack on non-executable pages/segments
Then code uploaded to the buffer won't execute. Does not stop other forms of this attack, though
More Examples
LOCK system:
Compiler produces data. Trusted process must change this type to executable before the program can be executed
Sidewinder firewall
Subjects assigned domain, objects assigned type
Example: ingress packets get one type, egress packets another
All actions controlled by type, so ingress packets cannot masquerade as egress packets (and vice versa)
[Ring diagram: concentric rings numbered 0, 1, …, with privileges increasing toward ring 0.]
Reading/Writing/Appending
Procedure executing in ring r Data segment with access bracket (a1, a2) Mandatory access rule
r ≤ a1: allow access
a1 < r ≤ a2: allow r access; not w, a access
a2 < r: deny all access
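The rule as a function (a sketch; r is the ring of the executing procedure):

```python
def data_segment_access(r, a1, a2):
    """Mandatory access rule for a data segment with access bracket (a1, a2)."""
    if r <= a1:
        return {"r", "w", "a"}   # full access
    if r <= a2:
        return {"r"}             # read only: no write or append
    return set()                 # beyond the bracket: no access
```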
Executing
Procedure executing in ring r Call procedure in segment with access bracket (a1, a2) and call bracket (a2, a3)
Often written (a1, a2, a3)
Versions
Multics
8 rings (from 0 to 7)
Older systems
2 levels of privilege: user, supervisor
PACLs
Propagated Access Control List
Implements ORCON (originator-controlled access)
Notation: PACLs is the PACL of creator s; PACL(e) is the PACL associated with entity e
Multiple Creators
Betty reads Ann's file dates
PACL(Betty) = PACLBetty ∩ PACL(dates) = PACLBetty ∩ PACLAnn
Anything Betty now creates (e.g., her file dc) carries this PACL
PACLBetty allows Char to access objects, but PACLAnn does not; both allow June to access objects
June can read dc Char cannot read dc
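As set intersection (the membership lists are invented for the example):

```python
# PACLs as sets of subjects permitted to access the creator's data.
PACL_Betty = {"Betty", "Char", "June"}   # allows Char and June
PACL_Ann   = {"Ann", "June"}             # allows June, not Char

# Data derived from both creators carries the intersection of their PACLs.
PACL_dc = PACL_Betty & PACL_Ann
```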
Key Points
Access control mechanisms provide controls for users accessing files Many different forms
ACLs, capabilities, locks and keys
Type checking too
Overview
Basics and background Compiler-based mechanisms Execution-based mechanisms Examples
Security Pipeline Interface Secure Network Server Mail Guard
Basics
Bell-LaPadula Model embodies information flow policy
Given compartments A, B, info can flow from A to B iff B dom A
Information Flow
Idea: info flows from x to y as a result of a sequence of commands c if you can deduce information about x before c from the value in y after c
Example 1
Command is x := y + z; where:
0 ≤ y ≤ 7, with equal probability; z = 1 with prob. 1/2, z = 2 or 3 with prob. 1/4 each
If you know final value of x, initial value of y can have at most 3 values, so information flows from y to x
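A brute-force check of the counting argument (the probabilities do not matter for the counting):

```python
# For each possible final x = y + z, collect the initial y values
# consistent with it.
candidates = {}
for y in range(8):               # 0 <= y <= 7
    for z in (1, 2, 3):
        candidates.setdefault(y + z, set()).add(y)
```

Every observable final x is consistent with at most 3 of the 8 initial values of y, so seeing x narrows y: information flows from y to x.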
Example 2
Command is
if x = 1 then y := 0 else y := 1;
where:
x, y equally likely to be either 0 or 1
But if x = 1 then y = 0, and vice versa, so value of y depends on x So information flowed from x to y
Notation
x means class of x
In Bell-LaPadula based system, same as label of security compartment to which x belongs
Compiler-Based Mechanisms
Detect unauthorized information flows in a program during compilation Analysis not precise, but secure
If a flow could violate policy (but may not), it is unauthorized No unauthorized path along which information could flow remains undetected
Set of statements certified with respect to information flow policy if flows in set of statements do not violate that policy
Example
if x = 1 then y := a; else y := b;

Info flows from x and a to y, or from x and b to y
Certified only if x ≤ y and a ≤ y and b ≤ y
Note flows for both branches must be true unless compiler can determine that one branch will never be taken
Declarations
Notation:
x: int class { A, B }
means x is an integer variable with security class at least lub{ A, B }, so lub{ A, B } ≤ x
Distinguished classes Low, High
Constants are always Low
Input Parameters
Parameters through which data passed into procedure Class of parameter is class of actual argument
ip: type class { ip }
Output Parameters
Parameters through which data passed out of procedure
If data passed in, called input/output parameter
As information can flow from input parameters to output parameters, the class must include this:
op: type class { r1, …, rn }
where ri is the class of the ith input or input/output argument
Example
proc sum(x: int class { A };
         var out: int class { A, B });
begin
  out := out + x;
end;
Array Elements
Information flowing out:
… := a[i]
Value of i, a[i] both affect result, so class is lub{ a[i], i } Information flowing in:
a[i] := …
Assignment Statements
x := y + z;
Compound Statements
x := y + z; a := b * c - x;

First statement: lub{ y, z } ≤ x
Second statement: lub{ b, c, x } ≤ a
So, both must hold (i.e., be secure)
More generally:
S1; … ; Sn (each Si must be secure)
Conditional Statements
if x + y < z then a := b else d := b * c - x; end

The statement executed reveals information about x, y, z, so lub{ x, y, z } ≤ glb{ a, d }
More generally:
if f(x1, …, xn) then S1 else S2; end
S1, S2 must be secure
lub{ x1, …, xn } ≤ glb{ y | y target of assignment in S1, S2 }
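The certification condition can be checked mechanically; a sketch on a two-point lattice Low < High (the lattice and the class assignments are assumptions for illustration):

```python
LEVEL = {"Low": 0, "High": 1}

def lub(classes): return max(classes, key=LEVEL.__getitem__)
def glb(classes): return min(classes, key=LEVEL.__getitem__)
def leq(a, b):    return LEVEL[a] <= LEVEL[b]

def certify_if(guard_classes, target_classes):
    """lub of the guard's variables must be dominated by the glb of all
    variables assigned in either branch."""
    return leq(lub(guard_classes), glb(target_classes))
```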
Iterative Statements
while i < n do begin
  a[i] := b[i];
  i := i + 1;
end

Loop must terminate; S must be secure; lub{ x1, …, xn } ≤ glb{ y | y target of assignment in S }
Goto Statements
No assignments
Hence no explicit flows
Need to detect implicit flows Basic block is sequence of statements that have one entry point and one exit point
Control in block always flows from entry point to exit point
Example Program
proc tm(x: array[1..10][1..10] of int class { x };
        var y: array[1..10][1..10] of int class { y });
var i, j: int { i };
begin
b1    i := 1;
b2    L2: if i > 10 goto L7;
b3    j := 1;
b4    L4: if j > 10 then goto L6;
b5    y[j][i] := x[i][j]; j := j + 1; goto L4;
b6    L6: i := i + 1; goto L2;
b7    L7: end;
Flow of Control
[Control-flow graph: b1 → b2; from b2, i > n leads to b7 and i ≤ n leads to b3; b3 → b4; from b4, j > n leads to b6 and j ≤ n leads to b5; b5 → b4; b6 → b2.]
IFDs
Idea: when two paths out of basic block, implicit flow occurs
Because information says which path to take
Immediate forward dominator of basic block b (written IFD(b)) is first basic block lying on all paths of execution passing through b
IFD Example
In previous procedure:
IFD(b1) = b2 (one path)
IFD(b2) = b7 (b2→b7, or b2→b3→…→b6→b2→b7)
IFD(b3) = b4 (one path)
IFD(b4) = b6 (b4→b6, or b4→b5→b6)
IFD(b5) = b4 (one path)
IFD(b6) = b2 (one path)
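These IFDs can be computed as immediate postdominators with the standard iterative dataflow method; a sketch using the graph above:

```python
# Successor map for the example control-flow graph; b7 is the exit.
succ = {"b1": ["b2"], "b2": ["b3", "b7"], "b3": ["b4"],
        "b4": ["b5", "b6"], "b5": ["b4"], "b6": ["b2"], "b7": []}
nodes = list(succ)

# pdom[n]: blocks on every path from n to the exit (postdominators of n).
pdom = {n: set(nodes) for n in nodes}
pdom["b7"] = {"b7"}                     # the exit postdominates only itself
changed = True
while changed:
    changed = False
    for n in nodes:
        if n == "b7":
            continue
        new = {n} | set.intersection(*(pdom[s] for s in succ[n]))
        if new != pdom[n]:
            pdom[n], changed = new, True

def ifd(n):
    # The IFD is the strict postdominator closest to n: the candidate
    # whose own postdominator set is largest.
    return max(pdom[n] - {n}, key=lambda c: len(pdom[c]))
```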
Requirements
Bi is set of basic blocks along an execution path from bi to IFD(bi)
Analogous to statements in conditional statement
xi1, , xin variables in expression selecting which execution path containing basic blocks in Bi used
Analogous to conditional expression
Example of Requirements
Within each basic block:
b1: Low ≤ i
b3: Low ≤ j
b6: lub{ Low, i } ≤ i
b5: lub{ x[i][j], i, j } ≤ y[j][i]; lub{ Low, j } ≤ j
Combining: lub{ x[i][j], i, j } ≤ y[j][i]
From the declarations, true when lub{ x, i } ≤ y
Example (continued)
B4 = { b5 }
Assignments to j, y[j][i]; the selecting conditional is j > 10, so requires j ≤ glb{ j, y[j][i] }
From the declarations, this means i ≤ y
Result:
Combine lub{ x, i } ≤ y; i ≤ y; i ≤ y
Requirement is lub{ x, i } ≤ y
Procedure Calls
tm(a, b);

From previous slides, to be secure, lub{ x, i } ≤ y must hold
In the call, x corresponds to a, y to b
Means that lub{ a, i } ≤ b, or a ≤ b
More generally:
proc pn(i1, …, im: int; var o1, …, on: int);
begin S end;
S must be secure
For all j and k, if information can flow from ij to ok, then xj ≤ yk
For all j and k, if information can flow from oj to ok, then yj ≤ yk
Exceptions
proc copy(x: int class { x }; var y: int class Low);
var sum: int class { x };
    z: int class Low;
begin
  y := z := sum := 0;
  while z = 0 do begin
    sum := sum + x;
    y := y + 1;
  end
end
Exceptions (cont)
When sum overflows, an integer overflow trap occurs
Procedure exits; the value of x is MAXINT/y, so information flows from x to y, but the requirement x ≤ y is never checked
If instead the trap handler terminates the loop by setting z to 1, info flows from sum to z, meaning sum ≤ z must hold. This is false (sum = { x } dominates z = Low)
Infinite Loops
proc copy(x: int 0..1 class { x }; var y: int 0..1 class Low);
begin
  y := 0;
  while x = 0 do
    (* nothing *);
  y := 1;
end
If x = 0 initially, infinite loop If x = 1 initially, terminates with y set to 1 No explicit flows, but implicit flow from x to y
Semaphores
Use these constructs:
wait(x): if x = 0 then block until x > 0; x := x - 1;
signal(x): x := x + 1;
Consider statement
wait(sem); x := x + 1;
Flow Requirements
Semaphores in signal irrelevant
Don't affect information flow in that process
Statement S is a wait
shared(S): set of shared variables read
Idea: information flows out of variables in shared(S)
Example
begin
  x := y + z;      (* S1 *)
  wait(sem);       (* S2 *)
  a := b * c - x;  (* S3 *)
end
Requirements:
lub{ y, z } ≤ x
lub{ b, c, x } ≤ a
sem ≤ a
Because fglb(S2) = a and shared(S2) = { sem }
Concurrent Loops
Similar, but wait in loop affects all statements in loop
Because if flow of control loops, statements in loop before wait may be executed after wait
Requirements
Loop terminates
All statements S1, …, Sn in loop secure
lub{ shared(S1), …, shared(Sn) } ≤ glb{ t1, …, tm }
Where t1, , tm are variables assigned to in loop
Loop Example
while i < n do begin
  a[i] := item;   (* S1 *)
  wait(sem);      (* S2 *)
  i := i + 1;     (* S3 *)
end
cobegin/coend
cobegin
  x := y + z;      (* S1 *)
  a := b * c - y;  (* S2 *)
coend
Soundness
Above exposition intuitive Can be made rigorous:
Express flows as types Equate certification to correct use of types Checking for valid information flows same as checking types conform to semantics imposed by security policy
Execution-Based Mechanisms
Detect and stop flows of information that violate policy
Done at run time, not compile time
Instruction Description
skip means instruction not executed push(x, x) means push variable x and its security class x onto program stack pop(x, x) means pop top value and security class from program stack, assign them to variable x and its security class x respectively
Instructions
x := x + 1 (increment)
Same as: if PC ≤ x then x := x + 1 else skip
if x = 0 then goto n else x := x - 1 (branch and save PC on stack)
Same as: if x = 0 then begin push(PC, PC); PC := lub{PC, x}; PC := n; end else if PC ≤ x then x := x - 1 else skip;
More Instructions
if x = 0 then goto n else x := x - 1 (branch; PC not saved on stack)
More Instructions
return
Same as: pop(PC, PC);
halt (stop)
Same as: if program stack empty then halt
Note: stack must be empty to prevent the user obtaining information from it after halting
Example Program
1  if x = 0 then goto 4 else x := x - 1
2  if z = 0 then goto 6 else z := z - 1
3  halt
4  z := z - 1
5  return
6  y := y - 1
7  return

Initially x = 0 or x = 1, y = 0, z = 0
Program copies value of x to y
Example Execution
x    y    z    PC   PC class   stack      check
1    0    0    1    Low
0    0    0    2    Low                   Low ≤ x
0    0    0    6    z          (3, Low)
0    1    0    7    z          (3, Low)   PC ≤ y
0    1    0    3    Low
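Ignoring the security-class bookkeeping, the program's control flow can be simulated directly (a sketch; 0/1 arithmetic is taken mod 2, which matches the table's y going from 0 to 1):

```python
def run(x):
    """Simulate the 7-instruction example program; returns the final y."""
    y, z, pc, stack = 0, 0, 1, []
    while True:
        if pc == 1:                          # if x = 0 goto 4 else x := x - 1
            if x == 0: stack.append(2); pc = 4
            else: x = (x - 1) % 2; pc = 2
        elif pc == 2:                        # if z = 0 goto 6 else z := z - 1
            if z == 0: stack.append(3); pc = 6
            else: z = (z - 1) % 2; pc = 3
        elif pc == 3:                        # halt
            return y
        elif pc == 4:
            z = (z - 1) % 2; pc = 5
        elif pc == 5:                        # return
            pc = stack.pop()
        elif pc == 6:
            y = (y - 1) % 2; pc = 7
        elif pc == 7:                        # return
            pc = stack.pop()
```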
Handling Errors
Ignore statement that causes error, but continue execution
If aborted or a visible exception taken, user could deduce information Means errors cannot be reported unless user has clearance at least equal to that of the information causing the error
Variable Classes
Up to now, classes fixed
Check relationships on assignment, etc.
Example Program
(* Copy value from x to y. Initially, x is 0 or 1 *)
proc copy(x: int class { x }; var y: int class { y });
var z: int class variable { Low };
begin
  y := 0;
  z := 0;
  if x = 0 then z := 1;
  if z = 0 then y := 1;
end;
Analysis of Example
x = 0:
  z := 0 sets z's class to Low
  if x = 0 then z := 1 sets z to 1 and z's class to x
  The second conditional fails, so on exit, y = 0
x = 1:
  z := 0 sets z's class to Low
  if z = 0 then y := 1 sets y to 1 and checks that lub{ Low, z } ≤ y
  So on exit, y = 1
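A sketch of why run-time class tracking misses this implicit flow, on a two-level lattice with x High and y taken as Low (both choices are mine, made to expose the leak):

```python
LEVEL = {"Low": 0, "High": 1}
def lub(a, b): return a if LEVEL[a] >= LEVEL[b] else b

def copy(x):
    y = 0                            # y's class taken as Low (assumption)
    z, z_class = 0, "Low"            # z's class varies at run time
    if x == 0:
        z, z_class = 1, lub("Low", "High")   # assignment raises z's class
    if z == 0:
        # the only check ever made, and only on this branch
        assert LEVEL[z_class] <= LEVEL["Low"]
        y = 1
    return y
```

In both runs y ends up equal to x, yet no check ever compared x's class to y's: the implicit flow escapes run-time checking.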
Use
Store files on first disk Store corresponding crypto checksums on second disk Host requests file from first disk
SPI retrieves file, computes crypto checksum. SPI retrieves file's crypto checksum from second disk. If they match, file is fine and forwarded to host. If there is a discrepancy, file is compromised and host notified
[Diagram: Secure Network Server Mail Guard. An MTA on the guard filters mail flowing in and out of the UNCLASSIFIED computer.]
Key Points
Both amount of information, direction of flow important
Flows can be explicit or implicit
Compiler-based checks flows at compile time Execution-based checks flows at run time
Overview
The confinement problem Isolating entities
Virtual machines Sandboxes
Covert channels
Detecting them Analyzing them Mitigating them
Example Problem
Server balances bank accounts for clients Server security issues:
Record correctly who used it Send only balancing info to client
Generalization
Client sends request, data to server Server performs some function on data Server returns result to client Access controls:
Server must ensure the resources it accesses on behalf of the client include only resources the client is authorized to access. Server must ensure it does not reveal the client's data to any entity not authorized to see the client's data
Confinement Problem
Problem of preventing a server from leaking information that the user of the service considers confidential
Total Isolation
Process cannot communicate with any other process Process cannot be observed Impossible for this process to leak information
Not practical as process uses observable resources such as CPU, secondary storage, networks, etc.
Example
Processes p, q not allowed to communicate
But they share a file system!
Communications protocol:
p sends a bit by creating a file called 0 or 1, then a second file called send
p waits until send is deleted before repeating to send another bit
q waits until file send exists, then looks for file 0 or 1; whichever exists is the bit
q then deletes 0, 1, and send and waits until send is recreated before repeating to read another bit
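A single-threaded simulation of the protocol, with the shared file system modeled as a set of names (no real concurrency, just the alternation between p and q):

```python
fs = set()                       # the shared "file system"

def p_send(bit):                 # p: create "0" or "1", then "send"
    fs.add(str(bit))
    fs.add("send")

def q_receive():                 # q: read the bit, then delete all three
    assert "send" in fs
    bit = 1 if "1" in fs else 0
    fs.difference_update({"0", "1", "send"})
    return bit

message = [1, 0, 1, 1, 0]
received = []
for b in message:
    p_send(b)
    received.append(q_receive())
```

The file system carries the whole message even though p and q never "communicate": that is the covert storage channel.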
Covert Channel
A path of communication not designed to be used for communication In example, file system is a (storage) covert channel
Lipner's Notes
All processes can obtain rough idea of time
Read system clock or wall clock time Determine number of instructions executed
Kocher's Attack
This computes x = a^z mod n, where the bits of z are z0 z1 … zk-1
x := 1; atmp := a;
for i := 0 to k-1 do begin
  if z_i = 1 then
    x := (x * atmp) mod n;
  atmp := (atmp * atmp) mod n;
end;
result := x;
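A Python version using the multiplication count as a stand-in for running time (a sketch; bits are given low-order first, matching the loop above):

```python
def modexp_ops(a, z_bits, n):
    """Right-to-left square-and-multiply; returns (result, multiply count)."""
    x, atmp, mults = 1, a % n, 0
    for zi in z_bits:                 # z = z0 z1 ... z(k-1), low bit first
        if zi == 1:
            x = (x * atmp) % n
            mults += 1                # extra multiply only for 1 bits
        atmp = (atmp * atmp) % n
    return x, mults
```

More 1 bits in the secret exponent means more multiplies, hence measurably longer runtime: timing leaks information about z.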
Isolation
Virtual machines
Emulate computer Process cannot access underlying computer system, anything not part of that computer system
Sandboxing
Does not emulate computer Alters interface between computer, process
Example: KVM/370
Security-enhanced version of IBM VM/370 VMM Goals
Provide virtual machines for users Prevent VMs of different security classes from communicating
Sandbox
Environment in which actions of process are restricted according to security policy
Can add extra security-checking mechanisms to libraries, kernel
Program to be executed is not altered
Java VM
Restricts set of files that applet can access and hosts to which applet can connect
Two components
Framework does run-time checking Modules determine which accesses allowed
Janus Implementation
System calls to be monitored defined in modules On system call, Janus framework invoked
Validates that the system call with those specific parameters is allowed. If not, sets process environment to indicate call failed. If okay, framework gives control back to process; on return, framework invoked to update state
Covert Channels
Channel using shared resources as a communication path Covert storage channel uses attribute of shared resource Covert timing channel uses temporal or ordering relationship among accesses to shared resource
Disk scheduler uses SCAN algorithm Low process seeks to cylinder 150 and relinquishes CPU
Now we know where the disk head is
Example (cont)
High wants to send a bit
To send a 1 bit, High seeks to cylinder 140 and relinquishes the CPU. To send a 0 bit, High seeks to cylinder 160 and relinquishes the CPU
Covert timing channel: uses ordering relationship among accesses to transmit information
Noise
Noiseless covert channel uses shared resource available to sender, receiver only. Noisy covert channel uses shared resource available to sender, receiver, and others
Need to minimize interference enough so that message can be read in spite of others use of channel
Key Properties
Existence
Determining whether the covert channel exists
Bandwidth
Determining how much information can be sent over the channel
Detection
Covert channels require sharing Manner of sharing controls which subjects can send, which subjects can receive information using that shared resource Porras, Kemmerer: model flow of information through shared resources with a tree
Called covert flow trees
Constructing Tree
Example: files in file system have 3 attributes
locked: true when file locked isopen: true when file opened inuse: set containing PID of processes having file open
Functions:
read_access(p, f): true if p has read rights over file f empty(s): true if set s is empty random: returns one of its arguments chosen at random
Tree Construction
This is for attribute locked
Goal state: covert storage channel via attribute locked Type of goal controls construction
First Step
Covert storage channel via attribute locked Modification of attribute locked Recognition of attribute locked
Put and node under goal Put children under and node
Second Step
Modification of attribute locked +
Lockfile
Unlockfile
Third Step
Recognition of attribute locked + Direct recognition of attribute locked + Indirect recognition of attribute locked + Infer attribute locked via attribute inuse
Recognition had direct, inferred recognition children Direct recognition child: and node with Filelocked child
Filelocked returns value of locked
Filelocked
Fourth Step
Infer attribute locked via attribute inuse Openfile Recognition of attribute inuse
Fifth Step
Recognition of attribute inuse + Direct recognition of attribute inuse + Indirect recognition of attribute inuse +
Recognize-new-state node
Direct recognition node: or child, Fileopened node beneath (recognizes change in inuse directly) Inferred recognition node: or child, FALSE node beneath (nothing recognizes change in inuse indirectly)
Fileopened
FALSE
Final Tree
Mitigation
Goal: obscure amount of resources a process uses
Receiver cannot determine what part sender is using and what part is obfuscated
How to do this?
Devote uniform, fixed amount of resources to each process Inject randomness into allocation, use of resources
Example: Pump
[Diagram: the pump. A communications buffer holding n items sits between a Low buffer, connected to the Low process, and a High buffer, connected to the High process.]
How to Fix
Assume: Low process, pump can process messages faster than High process Case 1: High process handles messages more quickly than Low process gets acknowledgements
Pump artificially delaying ACKs
Low process waits for ACK regardless of whether buffer is full
Conclusion: pump substantially reduces capacity of covert channel between High, Low processes when compared with direct connection
Key Points
Confinement problem: prevent leakage of information
Solution: separation and/or isolation
Shared resources offer paths along which information can be transferred Covert channels difficult if not impossible to eliminate
Bandwidth can be greatly reduced, however!
Overview
Trust Problems from lack of assurance Types of assurance Life cycle and assurance Waterfall life cycle model Other life cycle models Adding security afterwards
Trust
Trustworthy entity has sufficient credible evidence leading one to believe that the system will meet a set of requirements Trust is a measure of trustworthiness relying on the evidence Assurance is confidence that an entity meets its security requirements based on evidence provided by applying assurance techniques
Relationships
Policy: statement of requirements that explicitly defines the security expectations of the mechanism(s)
Assurance: provides justification that the mechanism meets policy through assurance evidence and approvals based on evidence
Mechanisms: executable entities that are designed and implemented to meet the requirements of the policy
Problem Sources
1. Requirements definitions, omissions, and mistakes
2. System design flaws
3. Hardware implementation flaws, such as wiring and chip flaws
4. Software implementation errors, program bugs, and compiler bugs
5. System use and operation errors and inadvertent mistakes
6. Willful system misuse
7. Hardware, communication, or other equipment malfunction
8. Environmental problems, natural causes, and acts of God
9. Evolution, maintenance, faulty upgrades, and decommissions
Examples
Challenger explosion
Sensors removed from booster rockets to meet accelerated launch schedule
Role of Requirements
Requirements are statements of goals that must be met
Vary from high-level, generic issues to low-level, concrete issues
Security objectives are high-level security issues Security requirements are specific, concrete issues
Types of Assurance
Policy assurance is evidence establishing security requirements in policy is complete, consistent, technically sound Design assurance is evidence establishing design sufficient to meet requirements of security policy Implementation assurance is evidence establishing implementation consistent with security requirements of security policy
Types of Assurance
Operational assurance is evidence establishing system sustains the security policy requirements during installation, configuration, and day-to-day operation
Also called administrative assurance
Life Cycle
[Diagram: security requirements feed design, and design feeds implementation; design and implementation refinement feeds back, and assurance justification ties the requirements to the design and implementation.]
Life Cycle
Conception
Manufacture
Deployment
Fielded Product Life
Conception
Idea
Decisions to pursue it
Proof of concept
See if idea has merit
Manufacture
Develop detailed plans for each group involved
May depend on use; internal product requires no sales
Deployment
Delivery
Assure that correct masters are delivered to production and protected
Distribute to customers, sales organizations
Relationship of Stages
Requirements definition and analysis
System and software design
Implementation and unit testing
Integration and system testing
Operation and maintenance
Models
Exploratory programming
Develop working system quickly
Used when detailed requirements specification cannot be formulated in advance, and adequacy is the goal
No requirements or design specification, so low assurance
Prototyping
Objective is to establish system requirements Future iterations (after first) allow assurance techniques
Models
Formal transformation
Create formal specification
Translate it into program using correctness-preserving transformations
Very conducive to assurance methods
Models
Extreme programming
Rapid prototyping and best practices
Project driven by business decisions
Requirements open until project complete
Programmers work in teams
Components tested, integrated several times a day
Objective is to get system into production as quickly as possible, then enhance it
Evidence adduced after development needed for assurance
Examples
Security kernel combines hardware and software to implement reference monitor Trusted computing base (TCB) is all protection mechanisms within a system responsible for enforcing security policy
Includes hardware and software Generalizes notion of security kernel
Adding On Security
Key to problem: analysis and testing
Designing in mechanisms allows assurance at all levels
Too many features add complexity, complicate analysis
Example
2 AT&T products added mandatory access controls (MAC) to the UNIX system:
SV/MLS: add MAC to UNIX System V Release 3.2
SVR4.1ES: re-architect UNIX system to support MAC
Comparison
Architecting of System
SV/MLS: used existing kernel modular structure; no implementation of least privilege
SVR4.1ES: restructured kernel to make it highly modular and incorporated least privilege
Comparison
File Attributes (inodes)
SV/MLS added separate table for MAC labels, DAC permissions
UNIX inodes have no space for labels; pointer to table added
Problem: 2 accesses needed to check permissions
Problem: possible inconsistency when permissions changed
Corrupted table causes corrupted permissions
Key Points
Assurance critical for determining trustworthiness of systems
Different levels of assurance, from informal evidence to rigorous mathematical evidence
Assurance needed at all stages of system life cycle
Building security in is more effective than adding it later
Overview
Goals
Why evaluate?
Evaluation criteria
TCSEC (aka Orange Book)
FIPS 140
Common Criteria
SSE-CMM
Goals
Show that a system meets specific security requirements under specific conditions
Called a trusted system
Based on specific assurance evidence
Evaluation Methodology
Provides set of requirements defining security functionality for system
Provides set of assurance requirements delineating steps for establishing that system meets its functional requirements
Provides methodology for determining that system meets functional requirements based on analysis of assurance evidence
Provides measure of result indicating how trustworthy system is with respect to security functional requirements
Called level of trust
Why Evaluate?
Provides an independent assessment, and measure of assurance, by experts
Includes assessment of requirements to see if they are consistent, complete, technically sound, sufficient to counter threats
Includes assessment of administrative, user, installation, and other documentation that provides information on proper configuration, administration, and use of system
Independence critical
Experts bring fresh perspectives, eyes to assessment
Bit of History
Government, military drove early evaluation processes
Their desire to use commercial products led to businesses developing methodologies for evaluating security, trustworthiness of systems
TCSEC: 1983-1999
Trusted Computer System Evaluation Criteria
Also known as the Orange Book
Series that expanded on Orange Book in specific areas was called Rainbow Series
Developed by National Computer Security Center, US Dept. of Defense
Heavily influenced by Bell-LaPadula model and reference monitor concept
Emphasizes confidentiality
Integrity addressed by *-property
Functional Requirements
Discretionary access control requirements
Control sharing of named objects
Address propagation of access rights, ACLs, granularity of controls
Functional Requirements
Mandatory access control requirements (B1 up)
Simple security condition, *-property
Description of hierarchy of labels
Functional Requirements
Audit requirements
Define what audit records contain, events to be recorded; set increases as other requirements increase
Functional Requirements
Trusted facility management (B2 up)
Separation of operator, administrator roles
Assurance Requirements
Configuration management requirements (B2 up)
Identify configuration items, consistent mappings among documentation and code, tools for generating TCB
Assurance Requirements
Design specification, verification requirements
B1: informal security policy model shown to be consistent with its axioms
B2: formal security policy model proven to be consistent with its axioms, descriptive top-level specification (DTLS)
B3: DTLS shown to be consistent with security policy model
A1: formal top-level specification (FTLS) shown consistent with security policy model using approved formal methods; mapping between FTLS, source code
Assurance Requirements
Testing requirements
Address conformance with claims, resistance to penetration, correction of flaws
Requires searching for covert channels for some classes
Evaluation Process
Run by government, no fee to vendor
3 stages
Application: request for evaluation
May be denied if government didn't need the product
Evaluation phase
Evaluation Phase
3 parts; results of each presented to a technical review board composed of senior evaluators not on the evaluating team; the board must approve each part before the evaluation moves on to the next
Design analysis: review design based on documentation provided; develop initial product assessment report
Source code not reviewed
RAMP
Ratings Maintenance Program goal: maintain assurance for new version of evaluated product
Vendor would update assurance evidence
Technical review board reviewed vendor's report and, on approval, assigned evaluation rating to new version of product
Note: major changes (structural, addition of some new functions) could be rejected here and a full new evaluation required
Impact
New approach to evaluating security
Based on analyzing design, implementation, documentation, procedures
Introduced evaluation classes, assurance requirements, assurance-based evaluation
High technical standards for evaluation
Technical depth in evaluation procedures
Some problems
Evaluation process difficult, lacking in resources
Mixed assurance, functionality together
Evaluations only recognized in US
Scope Limitations
Written for operating systems
NCSC introduced interpretations for other things such as networks (Trusted Network Interpretation, the Red Book), databases (Trusted Database Interpretation, the Purple or Lavender Book)
Process Limitations
Criteria creep (expansion of requirements defining classes)
Criteria interpreted for specific product types
Sometimes strengthened basic requirements over time
Good for community (learned more about security), but inconsistent over time
Contributions
Heightened awareness in commercial sector to computer security needs
Commercial firms could not use it for their products
Did not cover networks, applications
Led to wave of new approaches to evaluation
Some commercial firms began offering certifications
Basis for several other schemes, such as Federal Criteria, Common Criteria
Requirements
Four increasing levels of security
FIPS 140-1 covers basic design, documentation, roles, cryptographic key management, testing, physical security (from electromagnetic interference), etc.
FIPS 140-2 covers specification, ports & interfaces; finite state model; physical security; mitigation of other attacks; etc.
Security Level 1
Encryption algorithm must be a FIPS-approved algorithm
Software, firmware components may be executed on general-purpose system using unevaluated OS
No physical security beyond use of production-grade equipment required
Security Level 2
More physical security
Tamper-proof coatings or seals, or pick-resistant locks
Role-based authentication
Module must authenticate that operator is authorized to assume specific role and perform specific services
Software, firmware components may be executed on multiuser system with OS evaluated at EAL2 or better under Common Criteria
Must use one of specified set of protection profiles
Security Level 3
Enhanced physical security
Enough to prevent intruders from accessing critical security parameters within module
Identity-based authentication
Strong requirements for reading, altering critical security parameters
Software, firmware components require OS to have EAL3 evaluation, trusted path, informal security policy model
Can use equivalent evaluated trusted OS instead
Security Level 4
Envelope of protection around module that detects, responds to all unauthorized attempts at physical access
Includes protection against environmental conditions or fluctuations outside the module's normal range of voltage and temperature
Software, firmware components require OS meet functional requirements for Security Level 3, and assurance requirements for EAL4
Equivalent trusted operating system may be used
Impact
By 2002, 164 modules, 332 algorithms tested
About 50% of modules had security flaws
More than 95% of modules had documentation errors
About 25% of algorithms had security flaws
More than 65% had documentation errors
Evaluation Methodology
CC documents
Overview of methodology, functional requirements, assurance requirements
CC Terms
Target of Evaluation (TOE): system or product being evaluated
TOE Security Policy (TSP): set of rules regulating how assets are managed, protected, distributed within TOE
TOE Security Functions (TSF): set consisting of all hardware, software, firmware of TOE that must be relied on for correct enforcement of TSP
Generalization of TCB
Protection Profiles
CC Protection Profile (PP): implementation-independent set of security requirements for category of products or systems meeting specific consumer needs
Includes functional requirements
Chosen from CC functional requirements by PP author
PPs for firewalls, desktop systems, etc.
Evolved from ideas in earlier criteria
Form of PP
1. Introduction
PP Identification and PP Overview
Form of PP (cont)
4. Security Objectives
Trace security objectives for product back to aspects of identified threats and/or policies
Trace security objectives for environment back to threats not completely countered by product or system, and/or policies or assumptions not completely met by product or system
5. IT Security Requirements
Security functional requirements drawn from CC Security assurance requirements based on an EAL
Form of PP (cont)
6. Rationale
Security Objectives Rationale demonstrates stated objectives traceable to all assumptions, threats, policies
Security Requirements Rationale demonstrates requirements for product or system and for environment traceable to objectives and meet them
This section provides assurance evidence that PP is complete, consistent, technically sound
Security Target
CC Security Target (ST): set of security requirements and specifications to be used as basis for evaluation of identified product or system
Can be derived from a PP, or directly from CC
If from PP, ST can reference PP directly
How It Works
Find appropriate PP and develop appropriate ST based upon it
If no PP, use CC to develop ST directly
Form of ST
1. Introduction
ST Identification, ST Overview
CC Conformance Claim
Part 2 (or part 3) conformant if all requirements are drawn from part 2 (or part 3) of CC
Part 2 (or part 3) extended if extended requirements defined by the vendor are used as well
Form of ST (cont)
3. Product or System Family Security Environment
4. Security Objectives
5. IT Security Requirements
These are the same as for a PP
Form of ST (cont)
6. Product or System Summary Specification
Statement of security functions, description of how these meet functional requirements
Statement of assurance measures specifying how assurance requirements met
7. PP Claims
Claims of conformance to (one or more) PP requirements
Form of ST (cont)
8. Rationale
Security objectives rationale demonstrates stated objectives traceable to assumptions, threats, policies
Security requirements rationale demonstrates requirements for TOE and environment traceable to objectives and meet them
TOE summary specification rationale demonstrates how TOE security functions and assurance measures meet security requirements
Rationale for not meeting all dependencies
PP claims rationale explains differences between the ST objectives and requirements and those of any PP to which conformance is claimed
CC Requirements
Both functional and assurance requirements
EALs built from assurance requirements
Requirements divided into classes based on common purpose
Classes broken into smaller groups (families)
Families composed of components, or sets of definitions of detailed requirements, dependent requirements, and definition of hierarchy of requirements
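The class/family/component hierarchy can be pictured as a small nested structure. The slice below is purely illustrative: the class and component names follow CC part 2 conventions, but the selection is mine and the real catalogue is far larger.

```python
# A tiny slice of the CC functional-requirement hierarchy:
# classes contain families, and families contain components.
cc_functional = {
    "FAU (Security Audit)": {
        "FAU_GEN (security audit data generation)": ["FAU_GEN.1", "FAU_GEN.2"],
    },
    "FIA (Identification and Authentication)": {
        "FIA_UAU (user authentication)": ["FIA_UAU.1", "FIA_UAU.2"],
    },
}

def components(hierarchy):
    """Flatten the hierarchy into (class, family, component) triples."""
    return [(cls, fam, comp)
            for cls, families in hierarchy.items()
            for fam, comps in families.items()
            for comp in comps]
```

A PP or ST author picks components from such a catalogue; dependencies between components (not modeled here) constrain which selections are coherent.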
SSE-CMM: 1997-Present
Based on Software Engineering Capability Maturity Model (SE-CMM or just CMM)
Defines requirements for process of developing secure systems, not for systems themselves
Provides maturity levels, not levels of trust
Used to evaluate an organization's security engineering
SSE-CMM Model
Process capability: range of expected results that can be achieved by following process
Predictor of future project outcomes
Process performance: measure of actual results
Process maturity: extent to which a process is explicitly defined, managed, measured, controlled, and effective
Divides process into 11 areas, and 11 more for project and organizational practices
Each process area contains a goal, set of base processes
Process Areas
Process areas:
Administer security controls
Assess impact, security risk, threat, vulnerability
Build assurance argument
Coordinate security
Monitor system security posture
Provide security input
Specify security needs
Verify, validate security
Practices:
Ensure quality
Manage configuration, project risk
Monitor, control technical effort
Plan technical effort
Define, improve organization's systems engineering process
Manage product line evolution
Provide ongoing skills, knowledge
Coordinate with suppliers
Key Points
First public, widely used evaluation methodology was TCSEC (Orange Book)
Criticisms led to research and development of other methodologies
Evolved into Common Criteria
Other methodologies used for special environments
Overview
Defining malicious logic
Types
Trojan horses
Computer viruses and worms
Other types
Defenses
Properties of malicious logic
Trust
Malicious Logic
Set of instructions that cause site security policy to be violated
Example
Shell script on a UNIX system:
cp /bin/sh /tmp/.xyzzy
chmod u+s,o+x /tmp/.xyzzy
rm ./ls
ls $*
Place in program called ls and trick someone into executing it You now have a setuid-to-them shell!
Trojan Horse
Program with an overt purpose (known to user) and a covert purpose (unknown to user)
Often called a Trojan
Named by Dan Edwards in Anderson Report
Example: NetBus
Designed for Windows NT system
Victim uploads and installs this
Usually disguised as a game program, or in one
Hard to detect
1976: Karger and Schell suggested modifying compiler to include Trojan horse that copied itself into specific programs, including later versions of the compiler
1980s: Thompson implements this
Thompson's Compiler
1. Modify the compiler so that when it compiles login, login accepts the user's correct password or a fixed password (the same one for all users)
2. Then modify the compiler again, so when it compiles a new version of the compiler, the extra code to do the first step is automatically inserted
3. Recompile the compiler
4. Delete the source containing the modification and put the undoctored source back
[Diagram: the doctored login executable accepts either the user's correct password or the fixed magic password; in both cases the user is logged in.]
The Compiler
[Diagram: the correct compiler turns clean login and compiler source into correct executables; the doctored compiler turns the same clean sources into doctored executables.]
Comments
Great pains taken to ensure second version of compiler never released
Finally deleted when a new compiler executable from a different system overwrote the doctored compiler
The point: no amount of source-level verification or scrutiny will protect you from using untrusted code
Also: having source code helps, but does not ensure you're safe
Computer Virus
Program that inserts itself into one or more files and performs some action
Insertion phase is inserting itself into file
Execution phase is performing some (possibly null) action
Pseudocode
beginvirus:
    if spread-condition then begin
        for some set of target files do begin
            if target is not infected then begin
                determine where to place virus instructions
                copy instructions from beginvirus to endvirus into target
                alter target to execute added instructions
            end;
        end;
    end;
    perform some action(s)
    goto beginning of infected program
endvirus:
Is the virus a Trojan horse? No:
Overt purpose = virus actions (infect, execute)
Covert purpose = none
History
Programmers for Apple II wrote some
Not called viruses; very experimental
Fred Cohen
Graduate student who described them Teacher (Adleman) named it computer virus Tested idea on UNIX systems and UNIVAC 1108 system
Cohen's Experiments
UNIX systems: goal was to get superuser privileges
Max time 60m, min time 5m, average 30m
Virus small, so no degrading of response time
Virus tagged, so it could be removed quickly
First Reports
Brain (Pakistani) virus (1986)
Written for IBM PCs
Alters boot sectors of floppies, spreads to other floppies
More Reports
Duff's experiments (1987)
Small virus placed on UNIX system, spread to 46 systems in 8 days
Wrote a Bourne shell script virus
Types of Viruses
Boot sector infectors
Executable infectors
Multipartite viruses
TSR viruses
Stealth viruses
Encrypted viruses
Polymorphic viruses
Macro viruses
Executable Infectors
[Diagram: executable file layout before and after infection. Before: header at 0, executable code and data from 100 to 1000. After: virus code inserted at 100-200, original code shifted to 200-1100, and the first program instruction to be executed points into the virus code.]
Checks date
If not 1987 or Friday 13th, set up to respond to clock interrupts and then run program Otherwise, set destructive flag; will delete, not infect, files
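The trigger check described above can be sketched as follows. This is a hypothetical reconstruction of the decision logic only (the actual virus was DOS machine code that hooked interrupts); the function name and return values are mine.

```python
from datetime import date

def trigger(today: date) -> str:
    """Decide which branch the infector takes on a given date.

    Destructive branch: Friday the 13th in any year other than 1987.
    Otherwise: hook the clock interrupt and run the host program.
    """
    if today.year != 1987 and today.day == 13 and today.weekday() == 4:
        return "destructive"   # delete files rather than infect them
    return "resident"          # stay resident, run the host program
```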
Multipartite Viruses
A virus that can infect either boot sectors or executables
Typically, two parts
One part boot sector infector Other part executable infector
TSR Viruses
A virus that stays active in memory after the application (or bootstrapping, or disk mounting) is completed
TSR is Terminate and Stay Resident
Stealth Viruses
A virus that conceals infection of files Example: IDF virus modifies DOS service interrupt handler as follows:
Request for file length: return length of uninfected file
Request to open file: temporarily disinfect file, and reinfect on closing
Request to load file for execution: load infected file
Encrypted Viruses
A virus that is enciphered except for a small deciphering routine
Detecting virus by signature now much harder as most of virus is enciphered
Deciphering routine
Virus code
Example
(* Decryption code of the 1260 virus *)
(* initialize the registers with the keys *)
rA = k1; rB = k2;
(* initialize rC with the virus; starts at sov, ends at eov *)
rC = sov;
(* the encipherment loop *)
while (rC != eov) do begin
    (* encipher the byte of the message *)
    (*rC) = (*rC) xor rA xor rB;
    (* advance all the counters *)
    rC = rC + 1;
    rA = rA + 1;
end
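A sketch of the same loop in Python shows why the enciphered body defeats signature scanning: the routine is its own inverse, and different key pairs produce different byte patterns for the same virus body. Register width and key handling here are simplifying assumptions for illustration.

```python
def xor_crypt(body: bytes, k1: int, k2: int) -> bytes:
    """1260-style loop: two key registers XORed over the body,
    one register incremented per byte. Running the routine twice
    with the same keys restores the original bytes."""
    rA, rB = k1, k2
    out = bytearray()
    for b in body:
        out.append(b ^ (rA & 0xFF) ^ (rB & 0xFF))
        rA += 1                # advance the counter, as in the loop above
    return bytes(out)

body = b"example virus body"
enc1 = xor_crypt(body, 0x12, 0x34)
enc2 = xor_crypt(body, 0x56, 0x78)
# enc1 != enc2: same body, different keys, different ciphertext,
# so no fixed byte signature exists over the enciphered body
```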
Polymorphic Viruses
A virus that changes its form each time it inserts itself into another program
Idea is to prevent signature detection by changing the signature or instructions used for deciphering routine
At instruction level: substitute instructions
At algorithm level: different algorithms to achieve the same purpose
Toolkits to make these exist (Mutation Engine, Trident Polymorphic Engine)
Example
These are different instructions (with different bit patterns) but have the same effect:
add 0 to register
subtract 0 from register
xor 0 with register
no-op
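In Python terms the equivalence is easy to see; on a real CPU each variant assembles to different opcode bytes, which is exactly why a byte-level signature over one form of the deciphering routine misses the others.

```python
# Four semantically equivalent operations on a "register" value r.
variants = [
    lambda r: r + 0,    # add 0 to register
    lambda r: r - 0,    # subtract 0 from register
    lambda r: r ^ 0,    # xor 0 with register
    lambda r: r,        # no-op
]
# all four leave the register value unchanged
assert all(f(0x2A) == 0x2A for f in variants)
```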
Macro Viruses
A virus composed of a sequence of instructions that are interpreted rather than executed directly
Can infect either executables (Duff's shell virus) or data files (Highland's Lotus 1-2-3 spreadsheet virus)
Independent of machine architecture
But their effects may be machine dependent
Example
Melissa
Infected Microsoft Word 97 and Word 98 documents
Windows and Macintosh systems
Invoked when program opens infected file Installs itself as open macro and copies itself into Normal template
This way, infects any files that are opened in future
Computer Worms
A program that copies itself from one computer to another
Origins: distributed computations
Schoch and Hupp: animations, broadcast messages
Segment: part of program copied onto workstation
Segment processes data, communicates with worm's controller
Any activity on workstation caused segment to shut down
Analysts had to disassemble it to uncover function Disabled several thousand systems in 6 or so hours
Rabbits, Bacteria
A program that absorbs all of some class of resources
Example: for UNIX system, shell commands:
while true
do
    mkdir x
    chdir x
done
Logic Bombs
A program that performs an action that violates the site security policy when some external event occurs
Example: program that deletes company's payroll records when one particular record is deleted
The particular record is usually that of the person writing the logic bomb
Idea is if (when) he or she is fired, and the payroll record deleted, the company loses all those records
Defenses
Distinguish between data, instructions
Limit objects accessible to processes
Inhibit sharing
Detect altering of files
Detect actions beyond specifications
Analyze statistical characteristics
Approach: treat data and instructions as separate types, and require certifying authority to approve conversion
Key assumptions: the certifying authority will not make mistakes, and the tools and supporting infrastructure used in the certifying process are not corrupt
Example: LOCK
Logical Coprocessor Kernel
Designed to be certified at TCSEC A1 level
Limiting Accessibility
Basis: a user (unknowingly) executes malicious logic, which then executes with all that users privileges
Limiting accessibility of objects should limit spread of malicious logic and effects of its actions
Example
Anne: VA = 3; Bill, Cathy: VB = VC = 2
Anne creates program P containing virus
Bill executes P
P tries to write to Bills program Q
Works, as fd(P) = 0, so fd(Q) = 1 < VB
Cathy executes Q
Q tries to write to Cathys program R
Fails, as fd(Q) = 1, so fd(R) would be 2
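The checks in the example can be modeled directly. This is an illustrative sketch of the flow-distance rule as used above (a write propagates information one hop further, and is allowed only while the new distance stays below the executing user's threshold V); the function name is mine.

```python
def try_write(fd_source: int, user_threshold: int):
    """Return (allowed, new_fd): writing from an object with flow
    distance fd_source succeeds only if fd_source + 1 < V."""
    new_fd = fd_source + 1
    if new_fd < user_threshold:
        return True, new_fd
    return False, None

# Bill (V_B = 2) runs Anne's P (fd = 0), which writes Q: allowed, fd(Q) = 1
# Cathy (V_C = 2) runs Q (fd = 1), which tries to write R: denied, since
# fd(R) would be 2, which is not < 2
```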
Implementation Issues
Metric associated with information, not objects
You can tag files with metric, but how do you tag the information in them?
This inhibits sharing
s1 needs to run p2
p2 contains Trojan horse
So s1 needs to ensure p12 (subject created when s1 runs p2) can't write to f3
In practice, p12 inherits s1's rights, which is bad!
Note s1 does not own f3, so it can't change its rights over f3
Problem: how do you decide what should be in your authorization denial subset?
Kargers Scheme
Base it on attribute of subject, object
Interpose a knowledge-based subsystem to determine if requested file access reasonable
Sits between kernel and application
4. Ask user. If yes, effective UID/GID controls allowing access; if no, deny access
Example
Assembler invoked from compiler
as x.s /tmp/ctm2345
Trusted Programs
No VALs applied here
UNIX command interpreters
csh, sh
Guardians, Watchdogs
System intercepts request to open file
Program invoked to determine if access is to be allowed
These are guardians or watchdogs
Trust
Trust the user to take explicit actions to limit their process protection domain sufficiently
That is, enforce least privilege correctly
Trust mechanisms to describe programs' expected actions sufficiently for descriptions to be applied, and to handle commands without such descriptions properly
Trust specific programs and kernel
Problem: these are usually the first programs malicious logic attack
Sandboxing
Sandboxes, virtual machines also restrict rights
Modify program by inserting instructions to cause traps when violation of policy
Replace dynamic load libraries with instrumented routines
Inhibit Sharing
Use separation implicit in integrity policies Example: LOCK keeps single copy of shared procedure in memory
Master directory associates unique owner with each procedure, and with each user a list of other users the first trusts
Before executing any procedure, system checks that user executing procedure trusts procedure owner
Multilevel Policies
Put programs at the lowest security level, all subjects at higher levels
By *-property, nothing can write to those programs
By ss-property, anything can read (and execute) those programs
Example: tripwire
Signature consists of file attributes, cryptographic checksums chosen from among MD4, MD5, HAVAL, SHS, CRC-16, CRC-32, etc.
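A minimal sketch of the idea, using SHA-256 in place of the checksum menu listed above; tripwire's actual database format and attribute set are richer than this, and the function names here are mine.

```python
import hashlib
import os

def snapshot(paths):
    """Build a baseline: a per-file signature of (size, checksum)."""
    db = {}
    for p in paths:
        with open(p, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        db[p] = (os.path.getsize(p), digest)
    return db

def changed(baseline, paths):
    """Report files whose current signature differs from the baseline."""
    current = snapshot(paths)
    return [p for p in paths if baseline.get(p) != current[p]]
```

Run snapshot once on a known-clean system and store the result offline; later comparisons detect alteration, but only under the assumption (made explicit on the next slide) that the baseline was taken before any infection.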
Assumptions
Files do not contain malicious logic when original signature block generated
Pozzo & Grey: implement Biba's model on LOCUS to make assumption explicit
Credibility ratings assign trustworthiness numbers from 0 (untrusted) to n (signed, fully trusted)
Subjects have risk levels
Subjects can execute programs with credibility ratings ≥ risk level
If credibility rating < risk level, must use special command to run program
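The execution rule above reduces to a single comparison. A sketch (names are mine, not from the Pozzo & Grey implementation):

```python
def may_execute(credibility: int, risk_level: int,
                used_special_command: bool = False) -> bool:
    """A subject may run a program whose credibility rating is at
    least the subject's risk level; otherwise the run must go
    through the explicit special command."""
    return credibility >= risk_level or used_special_command
```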
Antivirus Programs
Look for specific sequences of bytes (called virus signature) in file
If found, warn user and/or disinfect file
Each agent must look for known set of viruses
Cannot deal with viruses not yet analyzed
Due in part to undecidability of whether a generic program is a virus
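A file scanner in miniature: the signature bytes below are made up for illustration, and real scanners use far more than plain substring search (wildcards, emulation, heuristics).

```python
# hypothetical signature database: name -> byte sequence
SIGNATURES = {
    "demo-virus-a": bytes.fromhex("deadbeefcafe"),
    "demo-virus-b": b"\x90\x90\xcc\xcc\x90",
}

def scan(data: bytes):
    """Return the names of known signatures appearing in the data.
    A virus not yet analyzed has no entry here, so it is missed."""
    return [name for name, sig in SIGNATURES.items() if sig in data]
```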
N-Version Programming
Implement several different versions of algorithm
Run them concurrently
Check intermediate results periodically
If disagreement, majority wins
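A sketch of the voting step; the three "versions" here are trivial stand-ins for independently written implementations, one of which has been subverted.

```python
from collections import Counter

def n_version(implementations, x):
    """Run every version on the same input and return the majority
    answer; raise if no strict majority exists."""
    results = [f(x) for f in implementations]
    answer, votes = Counter(results).most_common(1)[0]
    if votes * 2 <= len(results):
        raise RuntimeError("no majority -- result cannot be trusted")
    return answer

versions = [
    lambda x: x * x,       # version 1: correct
    lambda x: x ** 2,      # version 2: correct, different expression
    lambda x: 0,           # version 3: subverted
]
```

The single subverted version is outvoted; this only works under the assumptions on the next slide (a majority uninfected, comparable intermediate results).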
Assumptions
Majority of programs not infected
Underlying operating system secure
Different algorithms with enough equal intermediate results may be infeasible
Especially for malicious logic, where you would check file accesses
Proof-Carrying Code
Code consumer (user) specifies safety requirement
Code producer (author) generates proof that code meets this requirement
Proof integrated with executable code
Changing the code invalidates proof
Binary (code + proof) delivered to consumer
Consumer validates proof
Example statistics on Berkeley Packet Filter: proofs 300-900 bytes, validated in 0.3-1.3 ms
Startup cost higher, runtime cost considerably shorter
Key Points
A perplexing problem
How do you tell that what the user asked for is not what the user intended?
Strong typing leads to separating data, instructions
File scanners most popular anti-virus agents
Must be updated as new viruses come out
Overview
What is a vulnerability? Penetration studies
Flaw Hypothesis Methodology Examples
Definitions
Vulnerability, security flaw: failure of security policies, procedures, and controls that allows a subject to commit an action that violates the security policy
Subject is called an attacker Using the failure to violate the policy is exploiting the vulnerability or breaking in
Formal Verification
Mathematically verifying that a system satisfies certain constraints
Preconditions state assumptions about the system
Postconditions are the result of applying system operations to preconditions, inputs
Required: postconditions satisfy constraints
Penetration Testing
Testing to verify that a system satisfies certain constraints
Hypothesis stating system characteristics, environment, and state relevant to vulnerability
Result is compromised system state
Apply tests to try to move system from state in hypothesis to compromised system state
Notes
Penetration testing is a testing technique, not a verification technique
It can prove the presence of vulnerabilities, but not the absence of vulnerabilities
For formal verification to prove absence, proof and preconditions must include all external factors
Realistically, formal verification proves absence of flaws within a particular program, design, or environment and not the absence of flaws in a computer system (think incorrect configurations, etc.)
Penetration Studies
Test for evaluating the strengths and effectiveness of all security controls on system
Also called tiger team attack or red team attack
Goal: violate site security policy
Not a replacement for careful design, implementation, and structured testing
Tests system in toto, once it is in place
Includes procedural, operational controls as well as technological ones
Goals
Attempt to violate specific constraints in security and/or integrity policy
Implies metric for determining success Must be well-defined
Example: subsystem designed to allow owner to require others to give password before accessing file (i.e., password protect files)
Goal: test this control
Metric: did testers get access either without a password or by gaining unauthorized access to a password?
Goals
Find some number of vulnerabilities, or vulnerabilities within a period of time
If vulnerabilities categorized and studied, can draw conclusions about care taken in design, implementation, and operation
Otherwise, the list is helpful in closing holes but not more
Example: vendor gets confidential documents, 30 days later publishes them on web
Goal: obtain access to such a file; you have 30 days
Alternate goal: gain access to files; no time limit (a Trojan horse would give access for over 30 days)
Layering of Tests
1. External attacker with no knowledge of system
Locate system, learn enough to be able to access it
Methodology
Usefulness of penetration study comes from documentation, conclusions
Indicates whether flaws are endemic or not
It does not come from success or failure of attempted penetration
2. Flaw hypothesis
Draw on knowledge to hypothesize vulnerabilities
3. Flaw testing
Test them out
4. Flaw generalization
Generalize vulnerability to find others like it
Information Gathering
Devise model of system and/or components
Look for discrepancies in components
Consider interfaces among components
Flaw Hypothesizing
Examine policies, procedures
May be inconsistencies to exploit
May be consistent, but inconsistent with design or implementation
May not be followed
Examine implementations
Use models of vulnerabilities to help locate potential problems
Use manuals; try exceeding limits and restrictions; try omitting steps in procedures
Flaw Testing
Figure out order to test potential flaws
Priority is function of goals
Example: to find major design or implementation problems, focus on potential system-critical flaws
Example: to find vulnerability to outside attackers, focus on external access protocols and programs
Procedure
Back up system
Verify system configured to allow exploit
Take notes of requirements for detecting flaw
Flaw Generalization
As tests succeed, classes of flaws emerge
Example: programs read input into buffer on stack, leading to buffer overflow attack; others copy command line arguments into a buffer on the stack, so these are vulnerable too
Flaw Elimination
Usually not included, as testers are not the best people to fix flaws
Designers and implementers are
Requires understanding of context, details of flaw including environment, and possibly exploit
Design flaw uncovered during development can be corrected and parts of implementation redone
Don't need to know how exploit works
Design flaw uncovered at production site may not be corrected fast enough to prevent exploitation
So need to know how exploit works
Focus on segment 5
Run in user mode when user code being executed
User code issues system call, which in turn issues supervisor call
(Diagrams: layout of the three-word parameter list at addresses X, X+1, X+2 in the user segment)
Setup:
Set address for storing line number to be address of line length
Step 5: Execution
System routine validated all parameter addresses
All were indeed in user segment
When line read, line length written into the location whose address was in the parameter list
So it overwrote value in segment 5
Testers realized that privilege level in segment 5 controlled ability to issue supervisor calls (as opposed to system calls)
And one such call turned off hardware protection for segments 0-4
Effect: this flaw allowed attackers to alter anything in memory, thereby completely controlling computer
Burroughs B6700
System architecture: based on strict file typing
Entities: ordinary users, privileged users, privileged programs, OS tasks
Ordinary users tightly restricted
Other 3 can access file data without restriction but are constrained from compromising integrity of system
No assemblers; compilers output executable code
Data files, executable files have different types
Only compilers can produce executables
Writing to executable or its attributes changes its type to data
Reinstall program as a new compiler
Write new subroutine, compile it normally, and change machine code to give privileges to anyone calling it (this makes it data, of course)
Now use new compiler to change its type from data to executable
Success
Penetrating a System
Goal: gain access to system
We know its network address and nothing else
First step: scan network ports of system
Protocols on ports 79, 111, 512, 513, 514, and 540 are typically run on UNIX systems
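The port scan in this first step can be sketched as a plain connect() probe: a completed TCP handshake means something is listening. This is a blocking sketch with no timeout; a real scanner would use non-blocking sockets:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

/* Probe one TCP port on host ip (dotted decimal).
   Returns 1 if a connection succeeds (port open), else 0. */
int port_open(const char *ip, uint16_t port)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return 0;
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, ip, &sa.sin_addr);
    int open = connect(s, (struct sockaddr *)&sa, sizeof sa) == 0;
    close(s);
    return open;
}
```

Looping this over ports 79, 111, 512, 513, 514, and 540 would reveal whether the UNIX-typical services above are present.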
Output of sendmail
220 zzz.com sendmail 3.1/zzz.3.9, Dallas, Texas, ready at Wed, 2 Apr 97 22:07:31 CST
Version 3.1 has the wiz vulnerability that recognizes the shell command, so let's try it. Start off by identifying yourself:
helo xxx.org
250 zzz.com Hello xxx.org, pleased to meet you
Now see if the wiz command works; if it says command unrecognized, we're out of luck:
wiz
250 Enter, O mighty wizard!
It does! And we didn't need a password. So get a shell:
shell
#
And we have full privileges as the superuser, root
loadmodule
Validates module as being a dynamic load module
Invokes dynamic loader ld.so to do actual load; also calls arch to determine system architecture (chip set)
Check, but only privileged user can call ld.so
First Try
Set environment to look in local directory, write own version of ld.so, and put it in local directory
This version will print effective UID, to demonstrate we succeeded
Set search path to look in current working directory before system directories
Then run loadmodule
Nothing is printed. Darn! Somehow changing the environment did not affect execution of subprograms. Why not?
What Happened
Look in executable to see how ld.so, arch invoked
Invocations are /bin/ld.so, /bin/arch
Changing search path didn't matter, as it was never used
Second Try
Change value of IFS to include /
Change name of our version of ld.so to bin
Search path still has current directory as first place to look for commands
Run loadmodule
Prints that its effective UID is 0 (root)
Success!
Generalization
Process did not clean out environment before invoking subprocess, which inherited environment
So, trusted program working with untrusted environment (input); result should be untrusted, but is trusted!
First Try
Probe for easy-to-guess passwords
Find system administrator has password Admin
Now have administrator (full) privileges on local system
Next Step
Domain administrator installed service running with domain admin privileges on local system
Get program that dumps local security authority database
This gives us service account password
We use it to get domain admin privileges, and can access any system in domain
Generalization
Sensitive account had an easy-to-guess password
Possible procedural problem
Look for weak passwords on other systems, accounts
Review company security policies, as well as education of system administrators and mechanisms for publicizing the policies
Debate
How valid are these tests?
Not a substitute for good, thorough specification, rigorous design, careful and correct implementation, meticulous testing
Very valuable a posteriori testing technique
Ideally unnecessary, but in practice very necessary
Problems
Flaw Hypothesis Methodology depends on caliber of testers to hypothesize and generalize flaws
Flaw Hypothesis Methodology does not provide a way to examine system systematically
Vulnerability classification schemes help here
Vulnerability Classification
Describe flaws from differing perspectives
Exploit-oriented Hardware, software, interface-oriented
Example Flaws
Use these to compare classification schemes
First one: race condition (xterm)
Second one: buffer overflow on stack leading to execution of injected code (fingerd)
Both are very well known, and fixes available!
And should be installed everywhere
File Exists
Check that user can write to file requires special system call
Because root can append to any file, check in open will always succeed
/* Check that user can write to file /usr/tom/X */
if (access("/usr/tom/X", W_OK) == 0) {
    /* Open /usr/tom/X to append log entries */
    if ((fd = open("/usr/tom/X", O_WRONLY|O_APPEND)) < 0) {
        /* handle error: cannot open file */
    }
}
Problem
Binding of file name /usr/tom/X to file object can change between first and second lines
(a) is at access; (b) is at open
Note: file opened is not file checked
(Diagrams: directory trees before and after the attack; at (a) /usr/tom/X names the attacker's data file, and by (b) it has been re-linked to /etc/passwd)
finger client sends request for information to server fingerd (finger daemon)
Request is name of at most 512 chars What happens if you send more?
Buffer Overflow
Extra chars overwrite rest of stack, as shown
Can make those chars change return address to point to beginning of buffer
If buffer contains small program to spawn shell, attacker gets shell on target system
(Diagram: stack before and after the message; before: gets' local variables, other return state info, return address of main, parameter to gets, input buffer, main's local variables; after: the return address is the address of the input buffer, which now holds a program to invoke a shell)
Frameworks
Goals dictate structure of classification scheme
Guide development of attack tool: focus is on steps needed to exploit vulnerability
Aid software development process: focus is on design and programming errors causing vulnerabilities
Following schemes classify vulnerability as n-tuple, each element of the n-tuple being the classes into which the vulnerability falls
Some have 1 axis; others have multiple axes
Classification Scheme
Incomplete parameter validation
Inconsistent parameter validation
Implicit sharing of privileged/confidential data
Asynchronous validation/inadequate serialization
Inadequate identification/authentication/authorization
Violable prohibition/limit
Exploitable logic error
Check for type, format, range of values, access rights, presence (or absence)
Position guess for password so page fault occurred between 1st, 2nd char
If no page fault, 1st char was wrong; if page fault, it was right
Legacy of RISOS
First funded project examining vulnerabilities Valuable insight into nature of flaws
Security is a function of site requirements and threats
Small number of fundamental flaws recurring in many contexts
OS security not critical factor in design of OSes
Classification Scheme
Improper protection domain initialization and enforcement
Improper choice of initial protection domain
Improper isolation of implementation detail
Improper change
Improper naming
Improper deallocation or deletion
Improper Change
Data is inconsistent over a period of time
Example: xterm flaw
Meaning of /usr/tom/X changes between access and open
Example: parameter is validated, then accessed; but parameter is changed between validation and access
Burroughs B6700 allowed this
Improper Naming
Multiple objects with same name
Example: Trojan horse
loadmodule attack discussed earlier; bin could be a directory or a program
Improper Validation
Inadequate checking of bounds, type, or other attributes or values
Example: fingerd's failure to check input length
Improper Indivisibility
Interrupting operations that should be uninterruptable
Often: interrupting atomic operations
Improper Sequencing
Required order of operations not enforced Example: one-time password scheme
System runs multiple copies of its server
Two users try to access same account
Server 1 reads password from file
Server 2 reads password from file
Both validate typed password, allow user to log in
Server 1 writes new password to file
Server 2 writes new password to file
Should have every read of the file followed by a write, and vice versa; not two reads or two writes in a row
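One way to enforce the read-then-write pairing is to make the pair indivisible under an exclusive lock, so two server copies cannot interleave. A sketch, where `update` stands in for the real read/validate/write logic (names are illustrative):

```c
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

/* Advance the one-time password file atomically: acquire an exclusive
   advisory lock, run the caller's read+validate+write step, release.
   A second server copy blocks at flock() until the pair completes. */
int advance_password(const char *path, int (*update)(int fd))
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;
    if (flock(fd, LOCK_EX) < 0) {   /* block until we hold the lock */
        close(fd);
        return -1;
    }
    int rc = update(fd);            /* read old + write new, indivisibly */
    flock(fd, LOCK_UN);
    close(fd);
    return rc;
}

/* Trivial stand-in used for demonstration only. */
static int demo_update(int fd) { (void)fd; return 0; }
```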
Legacy
First to explore automatic detection of security flaws in programs and systems Methods developed but not widely used
Parts of procedure could not be automated
Complexity
Procedures for obtaining system-independent patterns describing flaws not complete
NRL Taxonomy
Goals:
Determine how flaws entered system
Determine when flaws entered system
Determine where flaws are manifested in system
Genesis of Flaws
Intentional
  Malicious: Trojan horse (nonreplicating, replicating), trapdoor, logic/time bomb
  Nonmalicious: covert channel (storage, timing), other
Inadvertent (unintentional) flaws classified using RISOS categories; not shown above
If most inadvertent, better design/coding reviews needed
If most intentional, need to hire more trustworthy developers and do more security-related testing
Time of Flaws
Time of introduction:
  Development: requirement/specification/design, source code, object code
  Maintenance
  Operation
Development phase: all activities up to release of initial version of software
Maintenance phase: all activities leading to changes in software performed under configuration control
Operation phase: all activities involving patching and not under configuration control
Location of Flaw
Location:
  Software
    Operating system: system initialization, memory management, process management/scheduling, device management, file management, identification/authentication, other/unknown
    Support: privileged utilities, unprivileged utilities
    Application
  Hardware
Focus effort on locations where most flaws occur, or where most serious flaws occur
Legacy
Analyzed 50 flaws Concluded that, with a large enough sample size, an analyst could study relationships between pairs of classes
This would help developers focus on most likely places, times, and causes of flaws
Aslam's Model
Goal: treat vulnerabilities as faults and develop scheme based on fault trees
Focuses specifically on UNIX flaws
Classifications unique and unambiguous
Organized as a binary tree, with a question at each node; answer determines branch you take
Leaf node gives you classification
Top Level
Coding faults: introduced during software development
Example: fingerd's failure to check length of input string before storing it in buffer
Coding Faults
Synchronization errors: improper serialization of operations, timing window between two operations creates flaw
Example: xterm flaw
Condition validation errors: bounds not checked, access rights ignored, input not validated, authentication and identification fails
Example: fingerd flaw
Emergent Faults
Configuration errors: program installed incorrectly
Example: tftp daemon installed so it can access any file; then anyone can copy any file
Legacy
Tied security flaws to software faults Introduced a precise classification scheme
Each vulnerability belongs to exactly 1 class of security flaws
Decision procedure well-defined, unambiguous
Levels of abstraction
How does flaw appear at different levels?
Levels are abstract, design, implementation, etc.
Genesis: ambiguous
If intentional:
Lowest level: inadvertent flaw of serialization/aliasing
If unintentional:
Lowest level: nonmalicious: other
Note: in absence of explicit decision procedure, all could go into class race condition
The Point
The schemes lead to ambiguity
Different researchers may classify the same vulnerability differently for the same classification scheme
Not true for Aslam's, but it misses connections between different classifications
xterm is race condition as well as others; Aslam does not show this
Consider even higher level of abstraction, where security-related value in memory is changing and data executed that should not be executable
operating system: improper choice of initial protection domain
Consider even higher level of abstraction, where security-related value in memory is changing and data executed that should not be executable
operating system: inadequate identification/authentication/authorization
Genesis: ambiguous
Known to be inadvertent flaw Parallels that of RISOS
Summary
Classification schemes requirements
Decision procedure for classifying vulnerability Each vulnerability should have unique classification
Key Points
Given large numbers of non-secure systems in use now, unrealistic to expect less vulnerable systems to replace them
Penetration studies are effective tests of systems provided the test goals are known and tests are structured well
Vulnerability classification schemes aid in flaw generalization and hypothesis
What is Auditing?
Logging
Recording events or statistics to provide information about system use and performance
Auditing
Analysis of log records to present information about the system in a clear, understandable manner
Uses
Describe security state
Determine if system enters unauthorized state
Problems
What do you log?
Hint: looking for violations of a policy, so record at least what will show such violations
Analyzer
Analyzes logged information looking for something
Notifier
Reports results of analysis
Logger
Type, quantity of information recorded controlled by system or program configuration parameters
May be human-readable or not
If not, usually viewing tools supplied
Space available, portability influence storage format
Example: RACF
Security enhancement package for IBM's MVS/VM
Logs failed access attempts, use of privilege to change security levels, and (if desired) RACF interactions
View events with LISTUSERS commands
Example: Windows NT
Different logs for different types of events
System event logs record system crashes, component failures, and other system events
Application event logs record events that applications request be recorded
Security event log records security-critical events such as logging in and out, system file accesses, and other events
Logs are binary; use event viewer to see them
If log full, can have system shut down, logging disabled, or logs overwritten
Description: A new process has been created:
New Process ID: 2216594592
Image File Name: \Program Files\Internet Explorer\IEXPLORE.EXE
Creator Process ID: 2217918496
User Name: Administrator
Domain: WINDSOR
Logon ID: (0x0,0x14B4c4)
Analyzer
Analyzes one or more logs
Logs may come from multiple systems, or a single system
May lead to changes in logging
May lead to a report of an event
Examples
Using swatch to find instances of telnet from tcpd logs:
/telnet/&!/localhost/&!/*.site.com/
Notifier
Informs analyst, other entities of results of analysis
May reconfigure logging and/or analysis on basis of results
Examples
Using swatch to notify of telnets
/telnet/&!/localhost/&!/*.site.com/ mail staff
Example: Bell-LaPadula
Simple security condition and *-property
S reads O: L(S) ≥ L(O); S writes O: L(S) ≤ L(O)
To check for violations, on each read and write, must log L(S), L(O), action (read, write), and result (success, failure)
Note: need not record S, O!
In practice, done to identify the object of the (attempted) violation and the user attempting the violation
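The audit test itself is a simple comparison on the logged levels. A sketch, with levels as integers (higher = more sensitive); names are illustrative:

```c
/* Check one Bell-LaPadula log record for a policy violation: a
   successful read with L(S) < L(O) breaks the simple security
   condition, and a successful write with L(S) > L(O) breaks the
   *-property.  Failed attempts are not violations. */
typedef enum { OP_READ, OP_WRITE } op_t;

int violates_blp(int l_subject, int l_object, op_t op, int succeeded)
{
    if (!succeeded)
        return 0;
    if (op == OP_READ)
        return l_subject < l_object;   /* simple security condition */
    return l_subject > l_object;       /* *-property */
}
```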
Implementation Issues
Show non-security or find violations?
Former requires logging initial state as well as changes
Defining violations
Does write include append and create directory?
Syntactic Issues
Data that is logged may be ambiguous
BSM: two optional text fields followed by two mandatory text fields
If three fields, which of the optional fields is omitted?
Example
entry : date host prog [bad] user [from host] to user on tty
date : daytime
host : string
prog : string ":"
bad : FAILED
user : string
tty : /dev/ string
Log file entry format defined unambiguously Audit mechanism could scan, interpret entries without confusion
Log Sanitization
U set of users, P policy defining set of information C(U) that U cannot see; log sanitized when all information in C(U) deleted from log Two types of P
C(U) can't leave site
People inside site are trusted and information not sensitive to them
Logging Organization
(Diagrams: two arrangements; in one, the logging system writes the log and the sanitizer sits between log and users; in the other, the sanitizer sits between the logging system and the log, so the stored log is already sanitized)
Reconstruction
Anonymizing sanitizer cannot be undone
No way to recover data from this
Importance
Suppose security analysis requires access to information that was sanitized?
Issue
Key: sanitization must preserve properties needed for security analysis
If new properties added (because analysis changes), may have to resanitize information
This requires pseudonymous sanitization or the original log
Example
Company wants to keep its IP addresses secret, but wants a consultant to analyze logs for an address scanning attack
Connections to port 25 on IP addresses 10.163.5.10, 10.163.5.11, 10.163.5.12, 10.163.5.13, 10.163.5.14, 10.163.5.15
Sanitize with random IP addresses
Cannot see sweep through consecutive IP addresses
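A sanitizer can hide the addresses yet keep the sweep visible by using an order-preserving pseudonym. A sketch of one possible scheme (not the only one): XOR the network bits with a secret and leave the host byte's ordering intact, so consecutive real addresses map to consecutive pseudonyms:

```c
#include <stdint.h>

/* Pseudonymize an IPv4 address (host byte order) while preserving
   ordering within its /24: the network part is masked by a secret,
   the host part is carried through unchanged. */
uint32_t pseudonymize(uint32_t addr, uint32_t net_secret)
{
    uint32_t net  = addr & 0xFFFFFF00u;   /* /24 network part, hidden */
    uint32_t host = addr & 0x000000FFu;   /* host part, order kept */
    return (net ^ (net_secret & 0xFFFFFF00u)) | host;
}
```

With this, the consultant still sees a run of consecutive pseudonymous addresses and can spot the scan.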
Generation of Pseudonyms
1. Devise set of pseudonyms to replace sensitive information
Replace data with pseudonyms Maintain table mapping pseudonyms to data
2. Use random key to encipher sensitive datum and use secret sharing scheme to share key
Used when insiders cannot see unsanitized data, but outsiders (law enforcement) need to Requires t out of n people to read data
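Method 1 above (pseudonyms plus a mapping table) can be sketched as follows; the fixed table sizes and the host-N naming are illustrative:

```c
#include <stdio.h>
#include <string.h>

/* First occurrence of a sensitive string gets the next pseudonym
   ("host-1", "host-2", ...); later occurrences reuse the same one,
   and the table lets an authorized analyst reverse the mapping. */
#define MAXP 64
static char real_name[MAXP][64];
static char alias_name[MAXP][16];
static int  nmap = 0;

const char *pseudonym(const char *sensitive)
{
    for (int i = 0; i < nmap; i++)
        if (strcmp(real_name[i], sensitive) == 0)
            return alias_name[i];          /* already mapped */
    snprintf(real_name[nmap], sizeof real_name[0], "%s", sensitive);
    snprintf(alias_name[nmap], sizeof alias_name[0], "host-%d", nmap + 1);
    return alias_name[nmap++];
}
```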
Application Logging
Application logs are made by applications
Applications control what is logged Typically use high-level abstractions such as:
su: bishop to root on /dev/ttyp0
Does not include detailed, system call level information such as results, parameters, etc.
System Logging
Log system events such as kernel actions
Typically use low-level events
3876 ktrace  CALL  execve(0xbfbff0c0,0xbfbff5cc,0xbfbff5d8)
3876 ktrace  NAMI  "/usr/bin/su"
3876 ktrace  NAMI  "/usr/libexec/ld-elf.so.1"
3876 su      RET   execve 0
3876 su      CALL  __sysctl(0xbfbff47c,0x2,0x2805c928,0xbfbff478,0,0)
3876 su      RET   __sysctl 0
3876 su      CALL  mmap(0,0x8000,0x3,0x1002,0xffffffff,0,0,0)
3876 su      RET   mmap 671473664/0x2805e000
3876 su      CALL  geteuid
3876 su      RET   geteuid 0
Does not include high-level abstractions such as loading libraries (as above)
Contrast
Differ in focus
Application logging focuses on application events, like failure to supply proper password, and the broad operation (what was the reason for the access attempt?)
System logging focuses on system events, like memory mapping or file accesses, and the underlying causes (why did access fail?)
System logs usually much bigger than application logs
Can do both, try to correlate them
Design
A posteriori design
Need to design auditing mechanism for system not built with security in mind
Goal of auditing
Detect any violation of a stated policy
Focus is on policy and actions designed to violate policy; specific actions may not be known
Transition-based auditing
Look at actions that transition system from one state to another
State-Based Auditing
Log information about state and determine if state allowed
Assumption: you can get a snapshot of system state
Snapshot needs to be consistent
Non-distributed system needs to be quiescent
Distributed system can use Chandy-Lamport algorithm, or some other algorithm, to obtain this
Example
File system auditing tools
Thought of as analyzing single state (snapshot)
In reality, analyze many slices of different state unless file system quiescent
Potential problem: if test at end depends on result of test at beginning, relevant parts of system state may have changed between the first test and the last
Classic TOCTTOU flaw
Transition-Based Auditing
Log information about action, and examine current state and proposed transition to determine if new state would be disallowed
Note: just analyzing the transition may not be enough; you may need the initial state
Tend to use this when specific transitions always require analysis (for example, change of privilege)
Example
TCP access control mechanism intercepts TCP connections and checks against a list of connections to be blocked
Obtains IP address of source of connection
Logs IP address, port, and result (allowed/blocked) in log file
Purely transition-based (current state not analyzed at all)
Example
Land attack
Consider 3-way handshake to initiate TCP connection (next slide)
What happens if source, destination ports and addresses are the same?
Host expects ACK(t+1), but gets ACK(s+1). RFC ambiguous:
p. 36 of RFC: send RST to terminate connection
p. 69 of RFC: reply with empty packet having current sequence number t+1 and ACK number s+1; but it receives this packet, and the ACK number is incorrect, so it repeats this. The system hangs or runs very slowly, depending on whether interrupts are disabled
(Diagram: source sends SYN(s); destination replies SYN(t) ACK(s+1); source sends ACK(t+1))
Normal:
1. srcseq = s, expects ACK s+1
2. destseq = t, expects ACK t+1; src gets ACK s+1
3. srcseq = s+1, destseq = t+1; dest gets ACK t+1
Land:
1. srcseq = destseq = s, expects ACK s+1
2. srcseq = destseq = t, expects ACK t+1 but gets ACK s+1
3. Never reached; recovery from error in 2 attempted
Detection
Must spot initial Land packet with source, destination addresses the same Logging requirement:
source port number, IP address destination port number, IP address
Auditing requirement:
If source port number = destination port number and source IP address = destination IP address, packet is part of a Land attack
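The auditing requirement reduces to a single predicate over the logged fields; a minimal sketch (field names are illustrative):

```python
def is_land_packet(src_ip, src_port, dst_ip, dst_port):
    # Audit rule from above: identical source and destination
    # IP address and port mark the packet as part of a Land attack.
    return src_ip == dst_ip and src_port == dst_port

print(is_land_packet("10.0.0.1", 139, "10.0.0.1", 139))   # True: Land packet
print(is_land_packet("10.0.0.1", 1234, "10.0.0.2", 139))  # False: normal
```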
Auditing Mechanisms
Systems use different mechanisms
Most common is to log all events by default, allow system administrator to disable logging that is unnecessary
Two examples
One audit system designed for a secure system One audit system designed for non-secure system
Secure Systems
Auditing mechanisms integrated into system design and implementation Security officer can configure reporting and logging:
To report specific events To monitor accesses by a subject To monitor accesses to an object
Kernel is layered
Logging done where events of interest occur Each layer audits accesses to objects it controls
Other Issues
Always logged
Programmer can request event be logged Any attempt to violate policy
Protection violations, login failures logged when they occur repeatedly Use of covert channels also logged
Log filling up
Audit logging process signaled to archive log when log is 75% full If not possible, system stops
Example 2: CMW
Compartmented Mode Workstation designed to allow processing at different levels of sensitivity
Auditing subsystem keeps table of auditable events. Entries indicate whether logging is turned on, and what type of logging to use. User-level command chaud allows user to control auditing and what is audited
If changes affect subjects, objects currently being logged, the logging completes and then the auditable events are changed
System Calls
On system call, if auditing on:
System call recorded First 3 parameters recorded (but pointers not followed)
CMW Auditing
Tool (redux) to analyze logged events Converts binary logs to printable format Redux allows user to constrain printing based on several criteria
Users Objects Security levels Events
Non-Secure Systems
Have some limited logging capabilities
Log accounting data, or data for non-security purposes Possibly limited security data like failed logins
Grouped into audit event classes based on events causing record generation
Before log created: tell system what to generate records for After log created: defined classes control which records given to analysis tools
Example Record
Logs are binary; this is from praudit
header,35,AUE_EXIT,Wed Sep 18 11:35:28 1991, + 570000 msec
process,bishop,root,root,daemon,1234
return,Error 0,5
trailer,35
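A small parser for praudit-style text records (one audit token per line, comma-separated fields) might look like this; a sketch only, since real BSM records contain many more token types:

```python
def parse_praudit(text):
    # Split a praudit text record into (token_name, fields) pairs,
    # one pair per line (header, process, return, trailer, ...).
    tokens = []
    for line in text.strip().splitlines():
        name, *fields = [f.strip() for f in line.split(",")]
        tokens.append((name, fields))
    return tokens

record = """header,35,AUE_EXIT,Wed Sep 18 11:35:28 1991, + 570000 msec
process,bishop,root,root,daemon,1234
return,Error 0,5
trailer,35"""
for name, fields in parse_praudit(record):
    print(name, fields)
```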
NFS Version 2
Mounting protocol
Client kernel contacts server's mount daemon. Daemon checks client is authorized to mount file system. Daemon returns file handle pointing to server mount point. Client creates entry in client file system corresponding to file handle. Access restrictions enforced
On client side: server not aware of these On server side: client not aware of these
Iterate above three steps until handle obtained for requested file
Or access denied by client
Site Policy
1. NFS servers respond only to authorized clients 2. UNIX access controls regulate access to server's exported file system 3. No client host can access a non-exported file system
Resulting Constraints
1. File access granted ⇒ client authorized to import file system, user can search all parent directories, user can access file as requested, file is descendant of server's file system mount point
From P1, P2, P3
More Constraints
3. Possession of file handle ⇒ file handle issued to user
From P1, P2; otherwise unauthorized client could access files in forbidden ways
NFS Operations
Transitions from secure to non-secure state can occur only when NFS command occurs Example commands:
MOUNT filesystem
Mount the named file system on the requesting client, if allowed
Logging Requirements
1.When file handle issued, server records handle, UID and GID of user requesting it, client host making request
Similar to allocating file descriptor when file opened; allows validation of later requests
2.When file handle used as parameter, server records UID, GID of user
Was the user using the file handle the one to whom that file handle was issued? Useful for detecting spoofs
Logging Requirements
3. When file handle issued, server records relevant attributes of containing object
On LOOKUP, attributes of containing directory show whether it can be searched
3. Check that directory has file system mount point as ancestor and user has search permission on directory
Obtained from constraint 1 Log requirements 2 (who is using handle), 3 (owner, group, type, permissions of object), 4 (result), 5 (reconstruct path name)
LAFS
File system that records user level activities Uses policy-based language to automate checks for violation of policies Implemented as extension to NFS
You create directory with lmkdir and attach policy with lattach:
lmkdir /usr/home/xyzzy/project policy
lattach /usr/home/xyzzy/project /lafs/xyzzy/project
LAFS Components
Name server File manager Configuration assistant
Sets up required protection modes; interacts with name server, underlying file protection mechanisms
Audit logger
Logs file accesses; invoked whenever process accesses file
Policy checker
Validates policies, checks logs conform to policy
How It Works
No changes to applications Each file has 3 associated virtual files
file%log: all accesses to file file%policy: access control policy for file file%audit: when accessed, triggers audit in which accesses are compared to policy for file
Example Policies
prohibit:0900-1700:*:*:wumpus:exec
Program make can read Makefile Owner can change Makefile using makedepend Owner, group member can create .o, .out files using gcc and ld Owner can modify .c, .h files using named editors up to Sep. 29, 2001
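One way to read the prohibit rule above is as six colon-separated fields. The sketch below assumes a layout of action:time-range:user:file:program:operation, which may not match the real LAFS syntax; it checks one log entry against one rule:

```python
def violates(rule, entry):
    # True if the entry matches a LAFS-style prohibit rule.
    # Field layout is an assumption drawn from the example rule.
    action, span, user, path, prog, op = rule.split(":")
    lo, hi = span.split("-")
    match = lambda pat, val: pat == "*" or pat == val
    return (action == "prohibit"
            and lo <= entry["time"] <= hi      # HHMM strings compare correctly
            and match(user, entry["user"])
            and match(path, entry["file"])
            and match(prog, entry["prog"])
            and match(op, entry["op"]))

entry = {"time": "1030", "user": "xyzzy", "file": "a.out",
         "prog": "wumpus", "op": "exec"}
print(violates("prohibit:0900-1700:*:*:wumpus:exec", entry))  # True
```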
Comparison
Security policy controls access
Goal is to detect, report violations Auditing mechanisms built in
Comparison
Users can specify policies in LAFS
Use %policy file
Which is better?
Depends on goal; LAFS is more flexible but easier to evade. Use both together, perhaps?
Audit Browsing
Goal of browser: present log information in a form easy to understand and use Several reasons to do this:
Audit mechanisms may miss problems that auditors will spot Mechanisms may be unsophisticated or make invalid assumptions about log format or meaning Logs usually not integrated; often different formats, syntax, etc.
Browsing Techniques
Text display
Does not indicate relationships between events
Hypertext display
Indicates local relationships between events Does not indicate global relationships clearly
Graphing
Nodes are entities, edges relationships Often too cluttered to show everything, so graphing selects subsets of events
Slicing
Show minimum set of log events affecting object Focuses on local relationships, not global ones
Movie Maker
Generates sequence of graphs, each event creating a new graph suitably modified
Hypertext Generator
Produces page per user, page per modified file, summary and index pages
Example Use
File changed
Use focused audit browser
Changed file is initial focus Edges show which processes have altered file
Tracking Attacker
Use hypertext generator to get all audit records with that UID
Now examine them for irregular activity Frame visualizer may help here Once found, work forward to reconstruct activity
Example: MieLog
Computes counts of single words, word pairs
Auditor defines threshold count MieLog colors data with counts higher than threshold
Example Use
Auditor notices unexpected gap in time information area
No log entries during that time!?!?
Color of words in entries helps auditor find similar entries elsewhere and reconstruct patterns
Key Points
Logging is collection and recording; audit is analysis Need to have clear goals when designing an audit system Auditing should be designed into system, not patched into system after it is implemented Browsing through logs helps auditors determine completeness of audit (and effectiveness of audit mechanisms!)
Example
Goal: insert a back door into a system
Intruder will modify system configuration file or program Requires privilege; attacker enters system as an unprivileged user and must acquire privilege
Nonprivileged user may not normally acquire privilege (violates #1) Attacker may break in using sequence of commands that violate security policy (violates #2) Attacker may cause program to act in ways that violate the program's specification (violates #3)
Detection
Rootkit configuration files cause ls, du, etc. to hide information
ls lists all files in a directory
Except those hidden by configuration file
Key Point
Rootkit does not alter kernel or file structures to conceal files, processes, and network connections
It alters the programs or system calls that interpret those structures Find some entry point for interpretation that rootkit did not alter The inconsistency is an anomaly (violates #1)
Denning's Model
Hypothesis: exploiting vulnerabilities requires abnormal use of normal commands or instructions
Includes deviation from usual actions Includes execution of actions leading to break-ins Includes actions inconsistent with specifications of privileged programs
Goals of IDS
Detect wide variety of intrusions
Previously known and unknown attacks Suggests need to learn/adapt to new attacks or changes in behavior
Goals of IDS
Present analysis in simple, easy-to-understand format
Ideally a binary indicator Usually more complex, allowing analyst to examine suspected attack User interface critical, especially when monitoring many systems
Be accurate
Minimize false positives, false negatives Minimize time spent verifying attacks, looking for them
Misuse detection
What is bad, is known What is not bad, is good
Specification-based detection
What is good, is known What is not good, is bad
Anomaly Detection
Analyzes a set of characteristics of system, and compares their values with expected values; report when computed statistics do not match expected statistics
Threshold metrics Statistical moments Markov model
Threshold Metrics
Counts number of events that occur
Between m and n events (inclusive) expected to occur If number falls outside this range, anomalous
Example
Windows: lock user out after k failed sequential login attempts. Range is (0, k−1).
k or more failed logins deemed anomalous
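The metric reduces to a range test; a minimal sketch of both the general form and the failed-login example:

```python
def threshold_anomalous(count, m, n):
    # Threshold metric: between m and n events (inclusive) is expected;
    # any count outside that range is anomalous.
    return not (m <= count <= n)

k = 3  # lock out after k failed sequential logins; normal range is 0..k-1
print(threshold_anomalous(2, 0, k - 1))  # False: still normal
print(threshold_anomalous(3, 0, k - 1))  # True: k failures, anomalous
```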
Difficulties
Appropriate threshold may depend on nonobvious factors
Typing skill of users If keyboards are US keyboards, and most users are French, typing errors very common
Dvorak vs. non-Dvorak within the US
Statistical Moments
Analyzer computes standard deviation (first two moments), other measures of correlation (higher moments)
If measured values fall outside expected interval for particular moments, anomalous
Potential problem
Profile may evolve over time; solution is to weigh data appropriately or alter rules to take changes into account
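A minimal sketch of the first two moments in use; the cutoff of three standard deviations is an assumption, not from the text, and a weighted mean (favoring recent data, as the slide suggests) would replace the plain mean here:

```python
import statistics

def moment_anomalous(history, value, d=3.0):
    # Flag a measurement more than d standard deviations from the
    # mean of past observations.
    mu = statistics.mean(history)
    sigma = statistics.pstdev(history)
    return abs(value - mu) > d * sigma

logins_per_day = [10, 11, 9, 10, 10]       # illustrative profile data
print(moment_anomalous(logins_per_day, 30))  # True: far outside profile
print(moment_anomalous(logins_per_day, 10))  # False: matches profile
```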
Example: IDES
Developed at SRI International to test Dennings model
Represent users, login sessions, other entities as ordered sequence of statistics <q0,j, …, qn,j>; qi,j (statistic i for day j) is a count or time interval. Weighting favors recent behavior over past behavior.
Ak,j is the sum of counts making up the metric of the kth statistic on the jth day; qk,l+1 = Ak,l+1 − Ak,l + 2^(−rt) qk,l, where t is the number of log entries/total time since start and r is a factor determined through experience
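The IDES update can be written directly; a sketch with illustrative variable names, implementing q_{k,l+1} = A_{k,l+1} − A_{k,l} + 2^(−rt) q_{k,l}:

```python
def ides_update(q_prev, A_prev, A_new, r, t):
    # One update of the decayed IDES statistic: the new delta of the
    # raw count is added, while the old statistic decays by 2**(-r*t),
    # so recent behavior dominates older behavior.
    return A_new - A_prev + 2 ** (-r * t) * q_prev

# With r*t = 1, yesterday's statistic is halved before the new delta is added.
print(ides_update(q_prev=4.0, A_prev=10, A_new=13, r=1, t=1))  # 5.0
```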
Potential Problems
Assumes behavior of processes and users can be modeled statistically
Ideal: matches a known distribution such as Gaussian or normal Otherwise, must use techniques like clustering to determine moments, characteristics that show anomalies, etc.
Markov Model
Past state affects current transition Anomalies based upon sequences of events, and not on occurrence of single event Problem: need to train system to establish valid sequences
Use known, training data that is not anomalous The more training data, the better the model Training data should cover all possible normal uses of system
Example: TIM
Time-based Inductive Learning Sequence of events is abcdedeabcabc TIM derives following rules:
R1: ab→c (1.0) R4: d→e (1.0) R2: c→d (0.5) R5: e→a (0.5) R3: c→a (0.5) R6: e→d (0.5)
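The rule probabilities can be reproduced by counting what follows each context in the sequence. A simplified sketch (TIM induces the contexts itself; here they are given as input):

```python
from collections import Counter, defaultdict

def tim_rules(seq, contexts):
    # Count successors of each context string, then turn the counts
    # into TIM-style probabilistic rules "context->next": probability.
    nxt = defaultdict(Counter)
    for ctx in contexts:
        k = len(ctx)
        for i in range(len(seq) - k):
            if seq[i:i + k] == ctx:
                nxt[ctx][seq[i + k]] += 1
    return {f"{ctx}->{sym}": n / sum(c.values())
            for ctx, c in nxt.items() for sym, n in c.items()}

rules = tim_rules("abcdedeabcabc", ["ab", "c", "d", "e"])
print(rules["ab->c"], rules["d->e"], rules["c->d"])  # 1.0 1.0 0.5
```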
Misuse Modeling
Determines whether a sequence of instructions being executed is known to violate the site security policy
Descriptions of known or potential exploits grouped into rule sets IDS matches data against rule sets; on success, potential attack found
Example: NFR
Built to make adding new rules easy Architecture:
Packet sucker: read packets from network Decision engine: uses filters to extract information Backend: write data generated by filters to disk
Query backend allows administrators to extract raw, postprocessed data from this file Query backend is separate from NFR process
N-Code Language
Filters written in this language Example: ignore all traffic not intended for 2 web servers:
# list of my web servers
my_web_servers = [ 10.237.100.189 10.237.55.93 ] ;
# we assume all HTTP traffic is on port 80
filter watch tcp ( client, dport:80 )
{
    if (ip.dest != my_web_servers)
        return;
    # now process the packet; we just write out packet info
    record system.time, ip.src, ip.dest to www_list;
}
www_list = recorder("log")
Specification Modeling
Determines whether execution of sequence of instructions violates specification Only need to check programs that alter protection state of system
System Traces
Notion of subtrace (subsequence of a trace) allows you to handle threads of a process, processes of a system. Notion of merge of traces U, V: traces U and V merged into a single trace. Filter p maps trace T to subtrace T′ such that, for all events ti ∈ T′, p(ti) is true
Anomaly detection: detects unusual events, but these are not necessarily security problems Specification-based vs. misuse: spec assumes if specifications followed, policy not violated; misuse assumes if policy as embodied in rulesets followed, policy not violated
IDS Architecture
Basically, a sophisticated audit system
Agent like logger; it gathers data for analysis Director like analyzer; it analyzes data obtained from the agents according to its internal rules Notifier obtains results from director, and takes some action
May simply notify security officer May reconfigure agents, director to alter collection, analysis methods May activate response mechanism
Agents
Obtains information and sends to director May put information into another form
Preprocessing of records to extract relevant parts
May delete unneeded information Director may request agent send other information
Example
IDS uses failed login attempts in its analysis Agent scans login log every 5 minutes, sends director for each new login attempt:
Time of failed login Account name and entered password
Director requests all records of login (failed or not) for particular user
Suspecting a brute-force cracking attempt
Host-Based Agent
Obtain information from logs
May use many logs as sources May be security-related or not May be virtual logs if agent is part of the kernel
Very non-portable
Network-Based Agents
Detects network-oriented attacks
Denial of service attack introduced by flooding a network
Monitor traffic for a large number of hosts Examine the contents of the traffic itself Agent must have same view of traffic as destination
TTL tricks, fragmentation may obscure this
Network Issues
Network architecture dictates agent placement
Ethernet or broadcast medium: one agent per subnet Point-to-point medium: one agent per connection, or agent at distribution/routing point
Aggregation of Information
Agents produce information at multiple layers of abstraction
Application-monitoring agents provide one view (usually one line) of an event System-monitoring agents provide a different view (usually many lines) of an event Network-monitoring agents provide yet another view (involving many network packets) of an event
Director
Reduces information from agents
Eliminates unnecessary, redundant records
Example
Jane logs in to perform system maintenance during the day She logs in at night to write reports One night she begins recompiling the kernel Agent #1 reports logins and logouts Agent #2 reports commands executed
Neither agent spots discrepancy Director correlates log, spots it at once
Adaptive Directors
Modify profiles, rule sets to adapt their analysis to changes in system
Usually use machine learning or planning to determine how to do this
Notifier
Accepts information from director Takes appropriate action
Notify system security officer Respond to attack
Often GUIs
Well-designed ones use visualization to convey information
GrIDS GUI
GrIDS interface showing the progress of a worm as it spreads through network Left is early in spread Right is later on
Other Examples
Courtney detected SATAN attacks
Added notification to system log Could be configured to send email or paging message to system administrator
Organization of an IDS
Monitoring network traffic for intrusions
NSM system
Problem
Too much data!
Solution: arrange data hierarchically into groups. Construct by folding axes of the matrix: a source group such as (S1) expands into (S1, D1) and (S1, D2), and each of those into per-protocol entries (S1, D1, SMTP), (S1, D1, FTP), (S1, D2, SMTP), (S1, D2, FTP)
Signatures
Analyst can write rule to look for specific occurrences in matrix
Repeated telnet connections lasting only as long as set-up indicate failed login attempts
Other
Graphical interface independent of the NSM matrix analyzer Detected many attacks
But false positives too
Top Layers
5. Network threats (combination of events in context)
Abuse (change to protection state) Misuse (violates policy, does not change state) Suspicious act (does not violate policy, but of interest)
Advantages
No single point of failure
All agents can act as director In effect, director distributed over all agents
Compromise of one agent does not affect others Agent monitors one resource
Small and simple
Disadvantages
Communications overhead higher, more scattered than for single director
Securing these can be very hard and expensive
As agent monitors one resource, need many agents to monitor multiple resources Distributed computation involved in detecting intrusions
This computation also must be secured
Example: AAFID
Host has set of agents and transceiver
Transceiver controls agent execution, collates information, forwards it to monitor (on local or remote system)
Monitors perform high-level correlation for multiple hosts. If multiple monitors interact with a transceiver, AAFID must ensure the transceiver receives consistent commands
Other
User interface interacts with monitors
Could be graphical or textual
Incident Prevention
Identify attack before it completes Prevent it from completing Jails useful for this
Attacker placed in a confined environment that looks like a full, unrestricted environment Attacker may download files, but gets bogus ones Can imitate a slow system, or an unreliable one Useful to figure out what attacker wants MLS systems provide natural jails
Intrusion Handling
Restoring system to satisfy site security policy Six phases
Preparation for attack (before attack detected)
Identification of attack
Containment of attack (confinement)
Eradication of attack (stop attack)
Recovery from attack (restore system to secure state)
Follow-up to attack (analysis and other actions)
Containment Phase
Goal: limit access of attacker to system resources Two methods
Passive monitoring Constraining access
Passive Monitoring
Records attacker's actions; does not interfere with attack
Idea is to find out what the attacker is after and/or methods the attacker is using
Example: type of operating system can be derived from settings of TCP and IP packets of incoming connections
Analyst draws conclusions about source of attack
Constraining Actions
Reduce protection domain of attacker Problem: if defenders do not know what attacker is after, reduced protection domain may contain what the attacker is after
Stoll created document that attacker downloaded Download took several hours, during which the phone call was traced to Germany
Deception
Deception Tool Kit
Creates false network interface Can present any network configuration to attackers When probed, can return wide range of vulnerabilities Attacker wastes time attacking non-existent systems while analyst collects and analyzes attacks to determine goals and abilities of attacker Experiments show deception is effective response to keep attackers from targeting real systems
Eradication Phase
Usual approach: deny or remove access to system, or terminate processes involved in attack Use wrappers to implement access control
Example: wrap system calls
On invocation, wrapper takes control of process Wrapper can log call, deny access, do intrusion detection Experiments focusing on intrusion detection used multiple wrappers to terminate suspicious processes
Firewalls
Mediate access to organization's network
Also mediate access out to the Internet
Neighbor is system directly connected IDIP domain is set of systems that can send messages to one another without messages passing through boundary controller
Protocol
IDIP protocol engine monitors connection passing through members of IDIP domains
If intrusion observed, engine reports it to neighbors Neighbors propagate information about attack Trace connection, datagrams to boundary controllers Boundary controllers coordinate responses
Usually, block attack, notify other controllers to block relevant communications
Example
C, D, W, X, Y, Z are boundary controllers; f launches flooding attack on A. Note: after X suppresses traffic intended for A, W begins accepting it, and A, b, a, and W can freely communicate again
Follow-Up Phase
Take action external to system against attacker
Thumbprinting: traceback at the connection level IP header marking: traceback at the packet level Counterattacking
Counterattacking
Use legal procedures
Collect chain of evidence so legal authorities can establish attack was real Check with lawyers for this
Rules of evidence very specific and detailed. If you don't follow them, expect case to be dropped
Technical attack
Goal is to damage attacker seriously enough to stop current attack and deter future attacks
Consequences
1. May harm innocent party
Attacker may have broken into source of attack or may be impersonating innocent party
Example: Counterworm
Counterworm given signature of real worm
Counterworm spreads rapidly, deleting all occurrences of original worm
Some issues
How can counterworm be set up to delete only targeted worm? What if infected system is gathering worms for research? How do originators of counterworm know it will not cause problems for any system?
And are they legally liable if it does?
Key Points
Intrusion detection is a form of auditing Anomaly detection looks for unexpected events Misuse detection looks for what is known to be bad Specification-based detection looks for what is known not to be good Intrusion response requires careful thought and planning
Introduction
Goal: apply concepts, principles, mechanisms discussed earlier to a particular situation
Focus here is on securing network Begin with description of company Proceed to define policy Show how policy drives organization
The Drib
Builds and sells dribbles Developing network infrastructure allowing it to connect to Internet to provide mail, web presence for consumers, suppliers, other partners
Specific Problems
Internet presence required
E-commerce, suppliers, partners Drib developers need access External users cannot access development sites
When customer supplies data to buy a dribble, only folks who fill the order can access that information
Company analysts may obtain statistics for planning
Policy Development
Policy: minimize threat of data being leaked to unauthorized entities Environment: 3 internal organizations
Customer Service Group (CSG)
Maintains customer data Interface between clients, other internal organizations
Private
CSG: customer info like credit card numbers CG: corporate info protected by attorney privilege DG: plans, prototypes for new products to determine if production is feasible before proposing them to CG
Data Classes
Public data (PD): available to all Development data for existing products (DDEP): available to CG, DG only Development data for future products (DDFP): available to DG only Corporate data (CpD): available to CG only Customer data (CuD): available to CSG only
CpD → PD: as privileged info becomes public through mergers, lawsuit filings, etc. Note: no provision for revealing CuD directly
This protects privacy of Drib's customers
User Classes
Outsiders (O): members of public
Access to public data Can also order, download drivers, send email to company
(Access control matrix showing the read (r) and write (w) rights of each user class over the data classes PD, DDEP, DDFP, CpD, and CuD.)
Type of Policy
Mandatory policy
Members of O, D, C, E cannot change permissions to allow members of another user class to access data
Discretionary component
Within each class, individuals may have control over access to files they own View this as an issue internal to each group and not of concern at corporate policy level
At corporate level, discretionary component is allow always
Reclassification of Data
Who must agree for each?
C, D must agree for DDFP → DDEP C, E must agree for DDEP → PD C can do CpD → PD
But two members of C must agree to this
Availability
Drib world-wide multinational corp
Does business on all continents
Corporate executives
Need to read, alter CpD, and read DDEP
Customer support
Need to read, alter CuD
(Extended access control matrix with the executives' and customer support rights added: r and w rights of each user class over PD, DDEP, DDFP, CpD, and CuD.)
Interpretation
From transitive closure:
Only way for data to flow into PD is by reclassification Key point of trust: members of C By rules for moving data out of DDEP, DDFP, someone other than member of C must also approve
Satisfies separation of privilege
Network Organization
Partition network into several subnets
Guards between them prevent leaks
Internet Outer firewall DMZ Web server INTERNAL Corporate data subnet Customer data subnet Mail serv er DNS server Inner firewall Internal DNS server Development subnet Internal mail server
DMZ
Portion of network separating purely internal network from external network
Allows control of accesses to some trusted systems inside the corporate perimeter. If DMZ systems breached, internal systems still safe. Can perform different types of checks at the boundary of the internal/DMZ networks and at the boundary of the DMZ/Internet networks
Firewalls
Host that mediates access to a network
Allows, disallows accesses based on configuration and type of access
Filtering Firewalls
Access control based on attributes of packets and packet headers
Such as destination address, port numbers, options, etc. Also called a packet filtering firewall Does not control access based on content Examples: routers, other infrastructure systems
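A packet filter's decision procedure can be sketched in a few lines; the rule format, addresses, and default-deny fallback are assumptions for illustration:

```python
def filter_packet(pkt, rules):
    # Packet-filter sketch: decisions use only header attributes
    # (never payload); first matching rule wins, default deny.
    for rule in rules:
        if all(pkt.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "deny"

rules = [
    {"match": {"dst_port": 25, "dst_ip": "203.0.113.10"}, "action": "allow"},
    {"match": {"dst_port": 80}, "action": "allow"},
]
print(filter_packet({"dst_ip": "203.0.113.10", "dst_port": 25}, rules))  # allow
print(filter_packet({"dst_ip": "203.0.113.10", "dst_port": 23}, rules))  # deny
```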
Proxy
Intermediate agent or server acting on behalf of endpoint without allowing a direct connection between the two endpoints
So each endpoint talks to proxy, thinking it is talking to other endpoint Proxy decides whether to forward messages, and whether to alter them
Proxy Firewall
Access control done with proxies
Usually bases access control on content as well as source, destination addresses, etc.
Also called an application-level firewall
Example: virus checking in electronic mail
Incoming mail goes to proxy firewall
Proxy firewall receives mail, scans it
If no virus, mail forwarded to destination
If virus, mail rejected or disinfected before forwarding
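The scan-then-forward flow above can be sketched as follows. The signature list and function names are illustrative assumptions, not a real scanner's interface.

```python
# Sketch of the mail-scanning proxy described above; signatures are assumed.
SIGNATURES = (b"X5O!P%@AP",)   # e.g. the start of the EICAR test string

def is_clean(message):
    """True when no known malicious-logic signature appears in the message."""
    return not any(sig in message for sig in SIGNATURES)

def proxy_mail(message, forward):
    """Receive mail, scan it, and forward only if no virus is found."""
    if is_clean(message):
        forward(message)
        return "forwarded"
    return "rejected"
```

Disinfection (altering the message before forwarding) would be a third branch; it is omitted to keep the sketch minimal.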
Views of a Firewall
Access control mechanism
Determines which traffic goes into, out of network
Audit mechanism
Analyzes packets that enter
Takes action based upon the analysis
Leads to traffic shaping, intrusion response, etc.
Implementation
Conceal all internal addresses
Use the private address blocks: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16
Inner firewall uses NAT to map addresses to the firewall's address
Email
Problem: DMZ mail server must know address in order to send mail to internal destination
Could simply be distinguished address that causes inner firewall to forward mail to internal mail server
Application of Principles
Least privilege
Containment of internal addresses
Complete mediation
Inner firewall mediates every access to DMZ
Separation of privilege
Going to Internet must pass through inner, outer firewalls and DMZ servers
Application of Principles
Least common mechanism
Inner, outer firewalls distinct; DMZ servers separate from internal servers
DMZ DNS violates this principle:
If it fails, multiple systems affected
Inner, outer firewall addresses fixed, so they do not depend on DMZ DNS
Details
Proxy firewall SMTP: mail assembled on firewall
Scanned for malicious logic; dropped if found
Otherwise forwarded to DMZ mail server
Attack Analysis
Three points of entry for attackers:
Web server ports: proxy checks for invalid, illegal HTTP, HTTPS requests, rejects them
Mail server port: proxy checks email for invalid, illegal SMTP requests, rejects them
Bypass low-level firewall checks by exploiting vulnerabilities in software, hardware
Firewall designed to be as simple as possible
Defense in depth
Defense in Depth
Form of separation of privilege
To attack system in DMZ by bypassing firewall checks, attacker must know internal addresses
Then can try to piggyback unauthorized messages onto authorized packets
More Configuration
Internal folks require email
SMTP proxy required
DMZ
Look at servers separately:
Web server: handles web requests with Internet
May have to send information to internal network
DNS
Used to provide addresses for systems DMZ servers talk to
Log server
DMZ systems log info here
Mail to Internet
Like mail from Internet with 2 changes:
Step 2: also scan for sensitive data (like proprietary markings or content, etc.)
Step 3: changed to rewrite all header lines containing host names, email addresses, and IP addresses of internal network
All are replaced by drib.org or IP address of external firewall
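The header rewriting in step 3 can be sketched as a substitution over outgoing header lines. The internal domain name here is a hypothetical stand-in; the source does not give Drib's internal host naming.

```python
import re

# Hypothetical internal naming convention (assumed for illustration only).
INTERNAL_HOST = re.compile(r"\b[\w-]+\.drib\.internal\b")

def rewrite_header(line, replacement="drib.org"):
    """Replace internal host names in an outgoing mail header line so the
    outside world sees only drib.org (or the external firewall's address)."""
    return INTERNAL_HOST.sub(replacement, line)
```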
Administrative Support
Runs SSH server
Configured to accept connections only from trusted administrative host in internal network
All public keys for that host fixed; no negotiation to obtain those keys allowed
Allows administrators to configure, maintain DMZ mail host remotely while minimizing exposure of host to compromise
Server is www.drib.org and uses IP address of outer firewall when it must supply one
Internet Ordering
Orders for Drib merchandise from Internet
Customer enters data, which is saved to a file
After user confirms order, web server checks format, content of file and then uses public key of system on internal customer subnet to encipher it
This file is placed in a spool area not accessible to the web server program
Original file deleted
Periodically, internal trusted administrative host uploads these files and forwards them to internal customer subnet system
Analysis
If attacker breaks into web server, cannot get order information
There is a small window during which customer information still on the system can be obtained
Attacker can get enciphered files, public key used to encipher them
Use of public key cryptography means it is computationally infeasible for attacker to determine private key from public key
Summary
Each server knows only what is needed to do its task
Compromise will restrict flow of information but not reveal info on internal network
Internal Network
Goal: guard against unauthorized access to information
read means fetching file, write means depositing file
For now, ignore email, updating of DMZ web server, internal trusted administrative host
Internal network organized into 3 subnets, each corresponding to a Drib group
Firewalls control access to subnets
WWW-clone
Provides staging area for web updates
All internal firewalls allow access to this
WWW-clone controls who can put and get what files, and where they can be put
All connections to DMZ through inner firewall must use this host
Exceptions: internal mail server, possibly DNS
Analysis
DMZ servers never communicate with internal servers
All communications done via inner firewall
Only client to DMZ that can come from internal network is SSH client from trusted administrative host
Authenticity established by public key authentication
Analysis
Only data from DMZ is customer orders and email
Customer orders already checked for potential errors, enciphered, and transferred in such a way that they cannot be executed
Email thoroughly checked before it is sent to internal mail server
Assumptions
Software, hardware does what it is supposed to
If software compromised, or hardware does not work right, defensive mechanisms fail
Reason separation of privilege is critical
If component A fails, other components provide additional defenses
Assurance is vital!
Availability
Access over Internet must be unimpeded
Context: flooding attacks, in which attackers try to overwhelm system resources
Intermediate Hosts
Use routers to divert, eliminate illegitimate traffic
Goal: only legitimate traffic reaches firewall
Example: Cisco routers try to establish connection with source (TCP intercept mode)
On success, router does same with intended destination, merges the two
On failure, short time-out protects router resources and target never sees flood
Intermediate Hosts
Use network monitor to track status of handshake
Example: synkill monitors traffic on network
Classifies IP addresses as not flooding (good), flooding (bad), unknown (new)
Checks IP address of SYN
If good, packet ignored
If bad, send RST to destination; ends handshake, releasing resources
If new, look for ACK or RST from same source; if seen, change to good; if not seen, change to bad
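The classification rules above form a small state machine per source address. This is a sketch of that logic only, not synkill's actual implementation; the class names and method names are assumptions.

```python
class SynMonitor:
    """Sketch of synkill-style classification: addresses start as 'new',
    become 'good' when an ACK or RST is seen, and 'bad' on timeout."""

    def __init__(self):
        self.state = {}   # ip -> "good" | "bad" | "new"

    def on_syn(self, ip):
        if self.state.setdefault(ip, "new") == "bad":
            return "send-RST"   # tear down the half-open connection
        return "ignore"         # good or new: let the handshake proceed

    def on_ack_or_rst(self, ip):
        self.state[ip] = "good"

    def on_timeout(self, ip):
        if self.state.get(ip) == "new":
            self.state[ip] = "bad"
```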
Intermediate Hosts
Problem: these don't solve the problem!
They move the locus of the problem to the intermediate system
In Drib's case, Drib does not control these systems
Endpoint Hosts
Control how TCP state is stored
When SYN received, entry in queue of pending connections created
Remains until an ACK received or time-out
In first case, entry moved to different queue
In second case, entry made available for next SYN
SYN Cookies
Source keeps state
Example: Linux 2.4.9 kernel
Embed state in sequence number
When SYN received, compute sequence number to be function of source, destination, counter, and random data
Use as reply SYN sequence number
When reply ACK arrives, validate it
Adaptive Time-Out
Change time-out time as space available for pending connections decreases
Example: modified SunOS kernel
Time-out period shortened from 75 to 15 sec
Formula for queueing pending connections changed:
b: maximum number of pending connections allowed on the port
a: number of completed connections awaiting the process
p: total number of pending connections
c: tunable parameter
Whenever a + p > cb, drop the current SYN message
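The admission rule above reduces to a one-line check; the value 0.75 for the tunable parameter c is an assumed example, not from the source.

```python
def accept_syn(a, p, b, c=0.75):
    """Modified-SunOS rule: drop the incoming SYN whenever a + p > c*b,
    where a = completed connections awaiting the process, p = pending
    connections, b = per-port limit, c = tunable parameter (assumed 0.75)."""
    return a + p <= c * b
```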
Anticipating Attacks
Drib realizes compromise may come through unanticipated means
Plans in place to handle this
Extensive logging
DMZ log server does intrusion detection on logs
In the DMZ
Very interested in attacks, successful or not
Means someone who has obtained access to DMZ launched attack:
Some trusted administrator shouldn't be trusted
Some server on outer firewall is compromised
Software on DMZ system not restrictive enough
IDS system on DMZ log server looks for misuse (known attacks) to detect this
Drib: So what?
Not sufficient personnel to handle all alerts
Focus is on what Drib cares most about:
Successful attacks, or failed attacks where there should be none
Key Points
Begin with policy
Craft network architecture and security measures from it
Assume failure will occur:
Try to minimize it
Defend in depth
Have plan to handle failures
Introduction
How does administering security affect a system? Focus on two systems
DMZ web server User system in development subnet
Assumptions
DMZ system: assume any user of trusted administrative host has authenticated to that system correctly and is a trusted user
Development system: standard UNIX or UNIX-like system which a set of developers can use
Policy
Web server policy discussed in Chapter 23
Focus on consequences
7. Implements services correctly, restricts access as much as possible
8. Public keys reside on web server
WC2
User access only to those with user access to trusted administrative host
Number of these users as small as possible
All actions attributed to individual account, not group or group account
WC4 WC5
Contains as few programs, as little software, configuration information, and other data as possible
Minimizes effects of successful attack
Development System
Development network (devnet) background
Firewall separating it from other subnets
DNS server
Logging server for all logs
File servers
User database information servers
Isolated system used to build base system configuration for deployment to user systems
User systems
DC3
Procedural Mechanisms
Some restrictions cannot be enforced by technology
Moving files between ISP workstation, devnet workstation using a floppy
No technological way to prevent this except by removing floppy drive
Infeasible due to nature of ISP workstations
Drib has made the procedures, and the consequences for violating them, very clear
Comparison
Spring from different roles
DMZ web server not a general-use computer Devnet workstation is
Networks
Both systems need appropriate network protections
Firewalls provide much of this, but separation of privilege says the systems should too
Note inner firewall prevents internal hosts from accessing DMZ web server (for now)
If changed, web server configuration will stay same
Note inner firewall prevents other internal hosts from accessing SSH server on this system
Not expected to change
Availability
Need to restart servers if they crash
Automated, to make restart quick
Script
#!/bin/sh
# Restart the web server whenever it exits; pause briefly between restarts.
echo $$ > /var/servers/webdwrapper.pid
while true
do
    /usr/local/bin/webd
    sleep 30
done
Devnet Workstation
Servers:
Mail (SMTP) server
Very simple: just forwards mail to central devnet mail server
Benefits
Minimizes number of services that devnet workstations have to run Minimizes number of systems that provide these services
Checking Security
Security officers scan network ports on systems
Compare to expected list of authorized systems and open ports
Discrepancies lead to questions
Comparison
Location
DMZ web server: all systems assumed hostile, so server replicates firewall restrictions
Devnet workstation: internal systems trusted, so workstation relies on firewall to block attacks from non-devnet systems
Use
DMZ web server: serve web pages, accept commercial transactions
Devnet workstation: many tasks, to provide pleasant development environment for developers
Users
What accounts are needed to run systems?
User accounts (users) Administrative accounts (sysadmins)
User Accounts
Web server account: webbie
Commerce server account: ecommie
CGI script (as webbie) creates file with ACL, in directory with same ACL:
(ecommie, { read, write })
Commerce server copies file into spooling area (enciphering it appropriately), then deletes original file
Note: webbie can no longer read, write, delete file
Sysadmin Accounts
One user account per system administrator
Ties actions to individual
Devnet Workstation
One user account per developer Administrative accounts as needed Groups correspond to projects All identities consistent across all devnet workstations
Example: trusted host protocols, in which a user authenticated to host A can log into host B without re-authenticating
Naming Problems
Host stokes trusts host navier
User Abraham has account abby on navier
Different user Abigail has account abby on stokes
Now Abraham can log into Abigail's account without authentication!
UINFO System
Central repository defining users, accounts
Uses NIS protocol
All systems on devnet, except firewall, use it
No user accounts on workstations
About NIS
NIS uses cleartext messages to send info
Violates requirement as no integrity checking
Not possible from inside, as devnet systems are secured
Not possible from outside, as firewall will block the messages
Comparison
Differences lie in use of systems
DMZ web server: in area accessible to untrusted users
Limiting number of users limits damage a successful attacker can do
User info on system, so don't need to worry about network attacks on that info
Few points of access
Authentication
Focus here is on techniques used All systems require some form
Devnet Workstation
Requires authentication as unauthorized people have access to physically secure area
Janitors, managers, etc.
Comparison
Both use strong authentication
All certificates installed by trusted sysadmins
Processes
What each system must run
Goal is to minimize the number of these
Commerce server
Enough privileges to copy files from web servers area to spool area; not enough to alter web pages
Potential Problem
UNIX systems: need privileges to bind to ports under 1024
Including port 80 (for web servers)
But web server is unprivileged!
Solution 1: server starts privileged, opens port, drops privileges
Solution 2: write wrapper to open port, drop privileges, invoke web server
The wrapper passes the open port to the web server
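The bind-then-drop pattern of both solutions can be sketched as follows. This is a sketch under assumed names (the `nobody` account, the function names); a real wrapper would then exec the server, handing it the open descriptor.

```python
import os
import pwd
import socket

def make_listener(port):
    """Step 1: open and bind the port while still privileged."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.listen(16)
    return sock

def drop_privileges(username="nobody"):
    """Step 2: irrevocably give up root; a no-op when not running as root."""
    if os.geteuid() != 0:
        return False
    entry = pwd.getpwnam(username)
    os.setgid(entry.pw_gid)   # group first, while still allowed to change it
    os.setuid(entry.pw_uid)
    return True

# Step 3 (wrapper variant) would exec the web server with the open socket,
# e.g. os.set_inheritable(sock.fileno(), True) followed by os.execv(...).
```

The order matters: dropping the group ID after the user ID would fail, since the now-unprivileged process may no longer change groups.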
File Access
Augment ACLs with something like capabilities
Change process's notion of root directory to limit access to files in file system
Example: web server needs to access page
Without change: /usr/Web/pages/index.html After change: /pages/index.html
Cannot refer to /usr/trans as cannot name it
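The renaming effect of changing the root directory can be sketched with a pure helper plus the actual (root-only) confinement call. The layout `/usr/Web` comes from the example; the function names are assumptions.

```python
import os

WEB_ROOT = "/usr/Web"   # layout from the example above

def confined_path(abs_path, new_root=WEB_ROOT):
    """Name by which a file is reachable after chroot(new_root), or None
    when it lies outside the new root and so cannot be named at all."""
    root = new_root.rstrip("/")
    if not abs_path.startswith(root + "/"):
        return None
    return abs_path[len(root):]

def enter_jail(new_root=WEB_ROOT):
    """Actually confine the process; requires root. Shown for completeness."""
    os.chroot(new_root)
    os.chdir("/")   # leave no descriptor pointing outside the jail
```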
Example
Web server changes root directory to /usr/Web
Commerce server changes root directory to /usr/trans
Note xdir accessible to both processes
[Figure: directory tree rooted at /usr, showing the commerce server's root (trans), the web server's pages directory, and the shared directory xdir]
Interprocess Communications
Web server needs to tell commerce server a file is ready
Use shared directory:
Web server places file with name trnsnnnn in directory (n is a digit)
Commerce server periodically checks directory for files of that name, operates on them
Alternative: web server signals commerce server to get file using signal mechanism
Devnet Workstation
Servers provide administrative info
Run with as few privileges as possible
Best: user nobody and group nogroup
Use master daemon to listen at ports, spawn less privileged servers to service requests
Servers change notion of root directory
Clients
NIS client to talk to UINFO system File server client to allow file server access
Devnet Workstation
Logging mechanism
Records OS calls, parameters, results
Saved locally; sent to central logging server
Intrusion detection done; can augment logging as needed
Initially, process start, end, audit and effective UIDs recorded
Disk space
If disk utilization over 95%, program scans local systems and deletes all temp files and editor backup files not in use
Meaning they have not been accessed in the last 3 days
Comparison
DMZ web server: only necessary processes
New software developed, compiled elsewhere
Processes run in very restrictive environment
Processes write to local log, directly to log server
Files
Protections differ due to differences in policies
Use physical limits whenever possible, as these cannot be corrupted
Use access controls otherwise
Example
Web server: user webbie
When running, root directory is root of web page directory, /mnt/www
CGI programs owned by root, located in directory (/mnt/www/cgi-bin) mounted from CD-ROM
Keys in /mnt/www/keys
No other programs
None to read mail or news, no batching, no web browsers, etc.
If question:
Stop web server
Transfer all remaining transaction files
Reboot system from CD-ROM
Reformat hard drive
Reload contents of user directories, web pages from WWW-clone
Restart servers
Devnet Workstation
Standard configuration for these
Provides folks with needed tools, configurations
Configuration is on bootable CD-ROM
Devnet Workstation
Logs on log server examined using intrusion detection systems
Security officers validate by analyzing 30 min worth of log entries and comparing result to reports from IDS
Scans of writable media look for files matching known patterns of intrusions
If found, reboot and wipe hard drive
Then do full check of file server
Comparison
Both use physical means to prevent system software from being compromised
Attackers can't alter CD-ROMs
Reloading systems
DMZ web server: save transaction files, regenerate system from WWW-clone
Actually, push files over to internal network system
Comparison
Devnet workstation: users trusted not to attack it
Any developer can use any devnet workstation
Developers may unintentionally introduce Trojan horses, etc.
Hence everything critical on read-only media
Key Points
Use security policy to derive security mechanisms
Apply basic principles, concepts of security:
Least privilege, separation of privilege (defense in depth), economy of mechanism (as few services as possible)
Identify who, what you are trusting
Policy
Assume user is on Drib development network
Policy usually highly informal and in the mind of the user
Access
U1: users must protect access to their accounts
Consider points of entry to accounts
Passwords
Theory: writing down passwords is BAD!
Reality: choosing passwords randomly makes them hard to remember
If you need passwords for many systems, assigning random passwords and not writing anything down won't work
Problem: someone can read the written password
Reality: degree of danger depends on environment and how you record the password
Isolated System
System used to create boot CD-ROM
In locked room; system can only be accessed from within that room
No networks, modems, etc.
Multiple Systems
Non-infrastructure systems: have users use same password
Done via centralized user database shared by all non-infrastructure systems
Infrastructure systems: users may have multiple accounts on single system, or may not use centralized database
Write down transformations of passwords
Infrastructure Passwords
Drib devnet has 10 infrastructure systems, 2 lead admins (Anne, Paul)
Both require privileged access to all systems
root, Administrator passwords chosen randomly
[Table: Anne's version and Paul's version of the ten written passwords; each admin writes down a different personal transformation of the actual passwords, so neither written list reveals them directly]
Non-Infrastructure Passwords
Users can pick
Proactive password checker vets proposed password
Analysis
Isolated system meets U1
Only authorized users can enter room, read password, access system
Login Procedure
User obtains a prompt at which to enter name
Then comes password prompt
Attacks:
Lack of mutual authentication
Reading password as it is entered
Untrustworthy trusted hosts
Simple approach: if name or password entered incorrectly, the prompt for retry differed
In UNIX Version 6, it said "Name" rather than "login"
More Complicated
Attack program feeds name, password to legitimate login program on behalf of user, so user is logged in without realizing attack program is an intermediary
Approach: trusted path
Example: to log in, user hits specified sequence of keys; this traps to kernel, which then performs login procedure
Key is that no application program can disable this feature, or intercept or modify data sent along this path
Analysis
Mutual authentication meets U1
Trusted path used when available; other times, system prints time, place of last login
Once authenticated, users must control access to their session until it ends
What to do when one goes to bathroom?
Walking Away
Procedures require user to lock monitor
Example: X window system: xlock
Only user, system administrator can unlock monitor
Modems
Terminates sessions when remote user hangs up
Problem: this is configurable; may have to set physical switch
If not done, next to call in connects to previous users session
Analysis
Procedures about walking away meet U1
Screen locking programs required, as is locking doors when leaving office; failure to do so involves disciplinary action
If screen locking password forgotten, system administrators can remotely access system and terminate program
Users manipulate system through devices, so their protection affects user protection as well
Policy components U1, U4 require this
Files
Often different ways to do one thing
UNIX systems: Pete wants to allow Deb to read file design, but no-one else to do so
If Pete, Deb have their own group, make file owned by that group and group readable but not readable by others
If Deb is the only member of a group, Pete can give group ownership of file to Deb and set permissions appropriately
Pete can set permissions of containing directory to allow himself and Deb's group search permission
Group Access
Provides set of users with same rights Advantage: use group as role
All folks working on Widget-NG product in group widgetng
All files for that product group readable, writable by widgetng
Membership changes require adding users to, dropping users from group
No changes to file permissions required
Group Access
Disadvantage: use group as abbreviation for set of users; changes to group may allow unauthorized access or deny authorized access
Maria wants Anne, Joan to be able to read movie
System administrator puts all in group maj
Later: sysadmin needs to create group with Maria, Anne, Joan, and Lorraine
Adds Lorraine to group maj
Now Lorraine can read movie even though Maria didn't want her to be able to do so
File Deletion
Is the name or the object deleted?
Terms:
File attribute table: contains information about file
File mapping table: contains information allowing OS to access disk blocks belonging to file
Direct alias: directory entry naming file
Indirect alias: directory entry naming special file containing name of target file
Generally false
File attribute table contains access permissions for each file
So users can use any alias; rights the same
Deletion
Removes directory entry of file
If no more directory entries, data blocks and table entries released too
Note: deleting directory entry does not mean file is deleted!
Example
Anna on UNIX wants to delete file x, setuid to herself
rm x works if no-one else has a direct alias to it
Sandra has one, so file not deleted (but Anna's directory entry is deleted)
File still is setuid to Anna
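Anna's situation can be reproduced directly with hard links; this sketch uses an ordinary file to stand in for the setuid file, and the names are illustrative.

```python
import os

def demo(directory):
    """rm removes a name, not the object: after Anna's rm, Sandra's alias
    still reaches the file with all its attributes intact."""
    orig = os.path.join(directory, "x")
    alias = os.path.join(directory, "sandra_x")
    with open(orig, "w") as f:
        f.write("setuid-to-anna")        # stands in for the setuid file
    os.link(orig, alias)                 # Sandra's direct alias (hard link)
    os.unlink(orig)                      # Anna's `rm x`: one directory entry gone
    with open(alias) as f:
        return os.path.exists(orig), f.read()
```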
Persistence
Disk blocks of deleted file returned to pool of unused disk blocks
When reassigned, new process may be able to read previous contents of disk blocks
Most systems offer a wipe or cleaning procedure that overwrites disk blocks with zeros or random bit patterns as part of file deletion
Useful when files being deleted contain sensitive data
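A minimal wipe can be sketched as overwrite-then-unlink. This is a sketch only: on journaling or copy-on-write filesystems, overwriting in place does not necessarily destroy the old blocks, which is why real tools work at a lower level.

```python
import os

def wipe(path, passes=1):
    """Overwrite a file's blocks before unlinking it (sketch; effectiveness
    depends on the filesystem actually reusing the same blocks)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())   # force the overwrite to disk
    os.remove(path)
```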
Analysis
Use of ACLs, umask meet U2
Both set to deny permission to other and group by default; user can add permissions back
Deletion meets U2
Procedures require sensitive files be wiped when deleted
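The default-deny umask mentioned above can be sketched as follows; the helper name and file name are illustrative.

```python
import os
import stat

def create_private(path, contents=""):
    """Create a file under a default-deny umask: group and other get no
    permissions; the owner may add permissions back later with chmod."""
    old = os.umask(0o077)    # deny group and other by default
    try:
        with open(path, "w") as f:
            f.write(contents)
    finally:
        os.umask(old)        # restore the caller's umask
    return stat.S_IMODE(os.stat(path).st_mode)
```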
Devices
Must be protected so user can control commands sent, others cannot see interactions:
Writable devices
Smart terminals
Monitors and window systems
Writable Devices
Restrict access to these as much as possible
Example: tapes
When process begins writing, ACL of device changes to prevent other processes from writing
Between mounting of media and process execution, another process can begin writing
Moral: write-protect all mounted media unless it is to be written to
Example: terminals
Write control sequence to erase screen; send repeatedly
Smart Terminals
Has built-in mechanism for performing special functions
Most important one: block send
The sequence of chars initiating block send does not appear on screen
Write Trojan horse to send command from user's terminal
Next slide: example in mail message sent to Craig
When Craig reads the letter, his startup file becomes world writable
Why So Dangerous?
With writable terminal, someone must trick user of that terminal into executing command; both attacker and user must enter commands
With smart terminal, only attacker need enter command; if user merely reads the wrong thing, the attacker's compromise occurs
Access Control
Use ACLs, C-Lists, etc.
Granularity varies by windowing system
X window system: host name or token
Host name: called xhost method
Manager determines host on which client runs
Checks ACL to see if host allowed to connect
X Windows Tokens
Called xauth method
X window manager given random number (magic cookie)
Stored in file .Xauthority in users home directory
Any client trying to connect to manager must supply this magic cookie to succeed
Local processes run by user can access this file
Remote processes require special set-up by user to work
Analysis
Writable devices meet U1, U4
Devnet users have default settings denying all write access to devices except the user
Process
Manipulate objects, including files
Policy component U3 requires users to be aware of how
Copying, moving files
Accidentally overwriting or erasing files
Encryption, keys, passwords
Start-up settings
Limiting privileges
Malicious logic
Copying Files
Duplicates contents
Semantics determines whether attributes duplicated
If not, may need to set them to prevent compromise
Moving Files
Semantics determines attributes Example: Mona Anne moves xyzzy to /tmp/plugh
If both on same file system, attributes unchanged
If on different file systems, semantically equivalent to:
cp xyzzy /tmp/plugh
rm xyzzy
Encryption
Must trust system
Cryptographic keys visible in kernel buffers, swap space, and/or memory
Anyone who can alter programs used to encrypt, decrypt can acquire keys and/or contents of encrypted files
Saving Passwords
Some systems allow users to put passwords for programs in files
May require file be read-protected but not use encryption
Start-Up Settings
When programs start, often take state info, commands from environment or start-up files
Order of access affects execution
Problem: if any of these files can be altered by untrusted user, sh may execute undesirable commands or enter undesirable state on start
Limiting Privileges
Users should know which of their programs grant privileges to others
Also the implications of granting these
Malicious Logic
Watch out for search paths
Example: Paula wants to see John's confidential designs
Paula creates a Trojan horse that copies design files to /tmp; calls it ls
Paula places copies of this in all directories she can write to
John changes to one of these directories, executes ls
John's search path begins with current working directory
Search Paths
Search path to locate program to execute
Search path to locate libraries to be dynamically loaded when program executes
Search path for configuration files
Analysis
Copying, moving files meets U3
Procedures are to warn users about potential problems
Analysis (cont)
Publicizing start-up procedures of programs meets U3
Start-up files created when account created have restrictive permissions
Electronic Communications
Checking for malicious content at firewall can make mistakes
Perfect detectors require solving undecidable problem
Users may unintentionally send out material they should not
Rapid saves often do not delete information, but rearrange pointers so information appears deleted
Analysis
Automated e-mail processing meets U4
All programs configured not to execute attachments, contents of letters
Key Points
Users have policies, although usually informal ones
Aspects of system use affect security even at the user level
System access issues
File and device issues
Process management issues
Electronic communications issues
Introduction
Goal: implement program that:
Verifies user's identity
Determines if change of account allowed
If so, places user in desired role
Why?
Eliminate password sharing problem
Role accounts under Linux are user accounts
If two or more people need access, both need role account's password
Requirements
1. Access to role account based on user, location, time of request
2. Settings of role account's environment replace corresponding settings of user's environment, but rest of user's environment preserved
3. Only root can alter access control information for access to role account
More Requirements
4. Mechanism provides restricted, unrestricted access to role account
Restricted: run only specified commands
Unrestricted: access command interpreter
5. Access to files, directories, objects owned by role account restricted to those authorized to use role account, users trusted to install system programs, root
Threats
Group 1: Unauthorized user (UU) accessing role accounts
1. UU accesses role account as though authorized user
2. Authorized user uses nonsecure channel to obtain access to role account, thereby revealing authentication information to UU
3. UU alters access control information to gain access to role account
4. Authorized user executes Trojan horse giving UU access to role account
Relationships
threat  requirement  notes
1       1, 5         Restricts who can access role account; protects access control data
2       1            Restricts location from where user can access role account
3       3            Restricts change to trusted users
4       2, 4, 5      User's search path restricted to own or role account; only trusted users, role account can manipulate executables
More Threats
Group 2: Authorized user (AU) accessing role accounts
5. AU obtains access to role account, performs unauthorized commands
6. AU executes command that performs functions that user not authorized to perform
7. AU changes restrictions on user's ability to obtain access to role account
Relationships
threat  requirement  notes
5       4            Allows user restricted access to role account, so user can run only specific commands
6       2, 5         Prevent introduction of Trojan horse
7       3            root users trusted; users with access to role account trusted
Design
Framework for hooking modules together
User interface
High-level design
User Interface
User wants unrestricted access or to run a specific command (restricted access)
Assume command line interface
Can add GUI, etc. as needed
Command
role role_account [command]
where
role_account: name of role account
command: command to be run (optional)
High-Level Design
1. Obtain role account, command, user, location, time of day
If command omitted, assume command interpreter (unrestricted access)
Ambiguity in Requirements
Requirements 1, 4 do not say whether command selection restricted by time, location
This design assumes it is
Backups may need to be run at 1AM and only 1AM
Alternate: assume restricted only by user, role; equally reasonable
Update requirement 4 to be: Mechanism provides restricted, unrestricted access to role account
Restricted: run only specified commands
Unrestricted: access command interpreter
Interface: controls how info passed between module, caller
Internal structure: how does module handle errors, access control data structures
Interface to Module
Minimize amount of information being passed through interface
Follow standard ideas of information hiding
Module can get user, time of day, location from system
So, need pass only command (if any), role account name
rname: name of role
cmd: command (empty if unrestricted access desired)
returns true if access granted, false if not (or error)
Internals of Module
Part 1: gather data to determine if access allowed
Part 2: retrieve access control information from storage
Part 3: compare two, determine if access allowed
Part 1
Required:
user ID: who is trying to access role account
time of day: when is access being attempted
From system call to operating system:
entry point: terminal or network connection
remote host: name of host from which user accessing local system (empty if on local system)
These make up location
Part 2
Obtain handle for access control file
May be called a descriptor
Part 3
Iterate through access control file
If no more records
Release handle Return failure
Check role
If not a match, skip record (go back to top)
Time Representation
Use ranges expressed (reasonably) normally
Mon-Thu 9AM-5PM: any time between 9AM and 5PM on Mon, Tue, Wed, or Thu
Mon 9AM-Thu 5PM: any time between 9AM Monday and 5PM Thursday
Apr 15 8AM-Sep 15 6PM: any time from 8AM on April 15 to 6PM on September 15, in any year
Commands
Command plus arguments shown
/bin/install *: execute /bin/install with any arguments
/bin/cp log /var/inst/log: copy file log to /var/inst/log
/usr/bin/id: run program id with no arguments
User need not supply path names, but commands used must be the ones with those path names
First-Level Refinement
Use pseudocode:
boolean accessok(role rname, command cmd);
stat ← false
user ← obtain user ID
timeday ← obtain time of day
entry ← obtain entry point (terminal line, remote host)
open access control file
repeat
    rec ← get next record from file; EOF if none
    if rec ≠ EOF then
        stat ← match(rec, rname, cmd, user, timeday, entry)
until rec = EOF or stat = true
close access control file
return stat
Check Sketch
Interface right
Stat (holds status of access control check) false until match made, then true
Get user, time of day, location (entry)
Iterates through access control records
Get next record
If there was one, sets stat to result of match
Drops out when stat true or no more records
Second-Level Refinement
Map pseudocode to particular language, system
We'll use C, Linux (UNIX-like system)
Role accounts same as user accounts
Interface decisions
User, role ID representation
Commands and arguments
Result
Decision: represent all user, role IDs as uid_t
Note: no design decision relied upon representation of user, role accounts, so no need to revisit any
Resulting Interface
int accessok(uid_t rname, char *cmd[]);
Second-Level Refinement
Obtaining user ID
Obtaining time of day
Obtaining location
Opening access control file
Processing records
Cleaning up
Obtaining User ID
Which identity?
Effective ID: identifies privileges of process
Must be 0 (root), so not this one
Obtaining Location
System dependent
So we defer, encapsulating it in a function to be written later:
entry = getlocation();
Processing Records
Internal record format not yet decided
Note use of functions to delay deciding this
do {
    acrec = getnextacrec(fp);
    if (acrec != NULL)
        stat = match(acrec, rname, cmd, user, timeday, entry);
} while (acrec != NULL && stat != 1);
Cleaning Up
Release handle by closing file
(void) fclose(fp);
return stat;
Getting Location
On login, Linux writes user name, terminal name, time, and name of remote host (if any) in file utmp
Every process may have associated terminal
To get location information:
Obtain associated process terminal name
Open utmp file
Find record for that terminal
Get associated remote host from that record
Security Problems
If any untrusted process can alter utmp file, contents cannot be trusted
Several security holes came from this
Process may have no associated terminal
Design decision: if either is true, return meaningless location
Unless location in access control file is the "any" wildcard, the match fails
Time Representation
Here, time is an interval
May 30 means any time on May 30, or May 30 12AM-May 31 12AM
Record Format
Here, commands is repeated once per command, and numcommands is number of commands fields
record
    role rname;
    string userlist;
    string location;
    string timeofday;
    string commands[];
    integer numcommands;
end record;
May be able to compute numcommands from record
Error Handling
Suppose syntax error or garbled record
Error cannot be ignored
Log it so system administrator can see it
Include access control file name, line or record number
Implementation
Concern: many common security-related programming problems
Present management and programming rules
Use framework for describing problems
NRL: our interest is technical modeling, not reason for or time of introduction
Aslam: want to look at multiple components of vulnerabilities
Use PA or RISOS; we choose PA
Process Privileges
Least privilege: no process has more privileges than needed, but each process has the privileges it needs
Implementation Rule 1:
Structure the process so that all sections requiring extra privileges are modules. The modules should be as small as possible and should perform only those tasks that require those privileges.
Basis
Reference monitor
Verifiable: here, modules are small and simple
Complete: here, access to privileged resource only possible through privileges, which require program to call module
Tamperproof: separate modules with well-defined interfaces minimize chances of other parts of program corrupting those modules
Note: this program, and these modules, are not reference monitors!
We're approximating reference monitors
Implementation Issues
Can we have privileged modules in our environment?
No; this is a function of the OS
Cannot acquire privileges after start, unless process started with those privileges
Privileges released
But they can be reacquired
Key points: privileges acquired only when needed, and relinquished once immediate task is complete
Permissions
Set these so only root can alter, move program, access control file
Implementation Rule 2:
Ensure that any assumptions in the program are validated. If this is not possible, document them for the installers and maintainers, so they know the assumptions that attackers will try to invalidate.
UNIX Implementation
Checking permissions: 3 steps
Check root owns file
Check no group write permission, or that root is single member of the group owner of file
Check list of members of that group first
Check password file next, to ensure no other users have primary GID the same as the group; these users need not be listed in group file to be group members
Memory Protection
Shared memory: if two processes have access, one can change data other relies upon, or read data other considers secret
Implementation Rule 3:
Ensure that the program does not share objects in memory with any other program, and that other programs cannot access the memory of a privileged process.
Memory Management
Don't let data be executed, or constants change
Declare constants in program as const
Turn off execute permission for data pages/segments
Do not use dynamic loading
Management Rule 3:
Configure memory to enforce the principle of least privilege. If a section of memory is not to contain executable instructions, turn execute permission off for that section of memory. If the contents of a section of memory are not to be altered, make that section read-only.
Trust
What does program trust?
System authentication mechanisms to authenticate users
UINFO to map users, roles into UIDs
Inability of unprivileged users to alter system clock
Management Rule 4:
Identify all system components on which the program depends. Check for errors whenever possible, and identify those components for which error checking will not work.
Implementation Rule 4:
The error status of every function must be checked. Do not try to recover unless the cause of the error, and its effects, do not affect any security considerations. The program should restore the state of the system to the state before the process began, and then terminate.
Improper Change
Data that changes unexpectedly or erroneously:
Memory
File contents
File/object bindings
Memory
Synchronize interactions with other processes
Implementation Rule 5:
If a process interacts with other processes, the interactions should be synchronized. In particular, all possible sequences of interactions must be known and, for all such interactions, the process must enforce the required security policy.
More Memory
Asynchronous exception handlers: may alter variables, state
Much like concurrent process
Implementation Rule 6:
Asynchronous exception handlers should not alter any variables except those that are local to the exception handling module. An exception handler should block all other exceptions when begun, and should not release the block until the handler completes execution, unless the handler has been designed to handle exceptions within itself (or calls an uninvoked exception handler).
Buffer Overflows
Overflow not the problem
Changes to variables, state caused by overflow is the problem
Example: fingerd, where overflow changes return address to return into stack
Fix at compiler level: put random number between buffer, return address; check before return address used
Example: login program that stored unhashed, hashed password in adjacent arrays
Enter any 8-char password, hit space 72 times, enter hash of that password, and system authenticates you!
Problem
Trusted data can be affected by untrusted data
Trusted data: return address, hash loaded from password file
Untrusted data: anything user reads
Implementation Rule 7:
Whenever possible, data that the process trusts and data that it receives from untrusted sources (such as input) should be kept in separate areas of memory. If data from a trusted source is overwritten with data from an untrusted source, a memory error will occur.
Our Program
No interaction except through exception handling
Implementation Rule 5 does not apply
Exception handling: disable further exception handling, log exception, terminate program
Meets Implementation Rule 6
Do not reuse variables used for data input; ensure no buffers overlap; check all array, pointer references; any out-of-bounds reference invokes exception handler
Meets Implementation Rule 7
File Contents
If access control file changes, either:
File permissions set wrong (Management Rule 2)
Multiple processes sharing file (Implementation Rule 5)
Dynamic loading: routines not part of executable, but loaded from libraries when program needs them
Note: these may not be the original routines
Implementation Rule 8:
Do not use components that may change between the time the program is created and the time it is run.
Race Conditions
Time-of-check-to-time-of-use (TOCTTOU) problem
Issue: don't want file to change after validation but before access
UNIX file locking advisory, so can't depend on it
Improper Naming
Ambiguity in identifying object
Names interpreted in context
Unique objects cannot share names within available context
Interchangeable objects can, provided they are truly interchangeable
Management Rule 5:
Unique objects require unique names. Interchangeable objects may share a name.
Contexts
Program must control context of interpretation of name
Otherwise, the name may not refer to the expected object
Example
Context includes:
Character set composing name
Process, file hierarchies
Network domains
Customizations such as search path
Anything else affecting interpretation of name
Implementation Rule 9:
The process must ensure that the context in which an object is named identifies the correct object.
Sanitize or Not?
Replace context with known, safe one on start-up
Program controls interpretation of names now
Host names
No domain part means local domain
Our Program
Cleartext password for user
Once hashed, overwritten with random bytes
Log file
Close log file before command interpreter overlaid
Same reasoning, but for writing
Improper Validation
Something not checked for consistency or correctness
Bounds checking
Type checking
Error checking
Checking for valid, not invalid, data
Checking input
Designing for validation
Bounds Checking
Indices: off-by-one, signed vs. unsigned
Pointers: no good way to check bounds automatically
Implementation Rule 11:
Ensure that all array references access existing elements of the array. If a function that manipulates arrays cannot ensure that only valid elements are referenced, do not use that function. Find one that does, write a new version, or create a wrapper.
Our Program
Use loops that check bounds in our code
Library functions: understand how they work
Example: copying strings
In C, string is sequence of chars followed by NUL byte (byte containing 0)
strcpy never checks bounds; too dangerous
strncpy checks bounds against parameter; danger is not appending terminal NUL byte
Type Checking
Ensure arguments, inputs, and such are of the right type
Interpreting floating point as integer, or shorts as longs
Compilers
Most compilers can do this
Declare functions before use; specify types of arguments, result so compiler can check
If compiler can't do this, other programs usually can; use them!
Management Rule 6:
When compiling programs, ensure that the compiler flags report inconsistencies in types. Investigate all such warnings and either fix the problem or document the warning and why it is spurious.
Error Checking
Always check return values of functions for errors
If function fails, and program accepts result as legitimate, program may act erroneously
Our Program
Every function call, library call, system call has return value checked unless return value doesn't matter
In some cases, return value of close doesn't matter, as program exits and file is closed
Here, only true on denial of access or error
On success, overlay another program, and files must be closed before that overlay occurs
Example
Program executed commands in very restrictive environment
Only programs from list could be executed
Scanned commands looking for metacharacters before passing them to shell for execution
Old shell: ` ordinary character
New shell: `x` means run program x, and replace `x` with the output of that program
Our Program
Checks that command being executed matches authorized command
Rejects anything else
Problem: can allow all users except a specific set to access a role (keyword not)
Added because on one key system, only system administrators and 1 or 2 trainees
Used on that system, but recommended against on all other systems
Handling Trade-Off
Decision that weakened security made to improve usability
Document it and say why
Management Rule 7:
If a trade-off between security and other factors results in a mechanism or procedure that can weaken security, document the reasons for the decision, the possible effects, and the situations in which the compromise method should be used. This informs others of the trade-off and the attendant risks.
Checking Input
Check all data from untrusted sources
Users are untrusted sources
Example
Setting variables while printing
i contains 2, j contains 21
printf("%d %d%n %d\n%n", i, j, &m, i, &n);
stores 4 in m and 7 in n
Design, implement data structures in such a way that they can be validated
Implementation Rule 16:
Create data structures and functions in such a way that they can be validated.
Improper Indivisibility
Operations that should be indivisible are divisible
TOCTTOU race conditions, for example
Exceptions can break single statements/function calls, etc. into 2 parts as well
Our Program
Validation, then open, of access control file
Method 1: do access check on file name, then open it
Problem: if attacker can write to directory in full path name of file, attacker can switch files after validation but before opening
Method 2 (program uses this): open file, then before reading from it do access check on file descriptor
As check is done on open file, and file descriptor cannot be switched to another file unless closed, this provides protection
Method 3 (not implemented): do it all in the kernel as part of the open system call!
Improper Sequencing
Operations performed in incorrect order
Implementation Rule 18:
Describe the legal sequences of operations on a resource or object. Check that all possible sequences of the program(s) involved match one (or more) legal sequences.
Our Program
Sequence of operations follow proper order:
User authenticated
Program checks access
If allowed:
New, safe environment set up
Command executed in it
When dropping privileges, note ordinary user cannot change groups, but root can
Change group to that of role account
Change user to that of role account
Assurance
Use assurance techniques
Document purpose, use of each function
Check algorithm, call
Management Rule 8:
Use software engineering and assurance techniques (such as documentation, design reviews, and code reviews) to ensure that operations and operands are appropriate.
Our Program
Granting Access
Only when entry matches all characteristics of current session
When characteristics match, verify access control module returns true
Check that when module returns true, program grants access, and when module returns false, denies access
Summary
Approach differs from using checklist of common vulnerabilities
Approach is design approach
Apply it at each level of refinement
Emphasizes documentation, analysis, understanding of program, interfaces, execution environment
Documentation will help other analysts, or folks moving program to new system with different environment
Testing
Informal validation of design, operation of program
Goal: show program meets stated requirements
If requirements drive design, implementation, then testing likely to uncover minor problems
If requirements ill posed, or change during development, testing may uncover major problems
In this case, do not add features to meet requirements! Redesign and reimplement
Process
Construct environment matching production environment
If range of environments, need to test in all
Usually considerable overlap, so not so bad
Steps
Begin with requirements
Appropriate? Does it solve the problem?
Proceed to design
Decomposition into modules allows testing of each module, with stubs to take place of uncompleted modules
Then to implementation
Test each module
Test interfaces (composition of modules)
Execute all possible paths of control
Compare results with expected results
In practice, infeasible
Analyze paths, order them in some way
Order depends on requirements
Testing Module
Goal: ensure module acts correctly
If it calls functions, correctly regardless of what functions return
Types of Tests
Normal data tests
Unexceptional data
Exercise as many paths of control as possible
Gets host name by mapping IP address using DNS
DNS has fake record: hi nobody; rm -rf *; true
When mail command executed, deletes files
Testing Program
Testers assemble program, documentation New tester follows instructions to install, configure program and tries it
This tester should not be associated with other testers, so can provide independent assessment of documentation, correctness of instructions
Distribution
Place program, documentation in repository where only authorized people can alter it and from where it can be sent to recipients
Several factors affect how this is done
Factors
Who can use this program?
Licensed to organization: tie each copy to the organization so it cannot be redistributed
Factors (cont)
How to protect integrity of master copy?
Attacker changing distribution copy can attack everyone who gets it
Example: tcp_wrappers altered at repository to include backdoor; 59 hosts compromised when they downloaded and installed it
Damages credibility of vendor
Customers may disbelieve vendors when warned
Key Points
Security in programming best done by mimicking high assurance techniques
Begin with requirements analysis and validation
Map requirements to design
Map design to implementation
Watch out for common vulnerabilities