
COMPUTER SCIENCE REVISION

CHAPTER 1 – COMPUTATIONAL THINKING

Computational Thinking

A problem-solving approach borrowing techniques from computer science such as abstraction, decomposition, and developing algorithms. It can be applied to a wide variety of problems outside the scope of computer systems.

How computational thinking can be applied:

 Looking at a problem to assess how difficult it is
 Using recursion to apply a simple solution repeatedly
 Reformulating a problem into something familiar we know how to solve
 Modelling a problem so that we can create a computer program
 Assessing a proposed solution for efficiency and elegance
 Building processes into solutions to limit and recover from errors
 Scaling a solution to cope with bigger similar problems

Decomposition

Decomposition is the breaking down of a problem into smaller parts that are
easier to solve. Smaller parts can sometimes be run recursively.

Problems can be decomposed into a hierarchical tree structure.

Structured programming includes the three constructs:


 Sequence
 Selection
 Iteration

Algorithms are defined as a finite set of instructions to perform a certain task, given an input or set of inputs, and producing some sort of output.

CHAPTER 2 – ELEMENTS OF COMPUTATIONAL THINKING


Memory and time are the most important limiting factors for solving a problem. Given enough memory and time, any problem that can be solved by one computer can be solved by any other computer.
Backtracking

Backtracking is an algorithmic approach to a problem. Partial solutions are built up incrementally as a pathway to follow. If the pathway fails at some point, the partial solutions are abandoned, and the algorithm continues from the last potentially successful point.
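Backtracking can be illustrated with a small Python sketch (a subset-sum search; the function and names are illustrative, not from the text): partial solutions are extended one value at a time and abandoned as soon as they can no longer succeed.

```python
def subset_sum(nums, target, chosen=None):
    """Backtracking: extend a partial solution; abandon it at a dead end."""
    if chosen is None:
        chosen = []
    total = sum(chosen)
    if total == target:
        return chosen               # a complete solution
    if total > target or not nums:
        return None                 # dead end: backtrack
    head, rest = nums[0], nums[1:]
    # try including the next number, then try excluding it
    return (subset_sum(rest, target, chosen + [head])
            or subset_sum(rest, target, chosen))

print(subset_sum([3, 9, 8, 4], 12))   # [3, 9]
```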

Data Mining

 Trawling through lots of data, often from many sources.
 Searching for relationships between data which are not immediately obvious to a person
 Data mining helps with pattern matching
 Data mining helps with anomaly detection
 Uses include business modelling, planning, and disease prediction

Performance Modelling

May be necessary when it is not feasible to test all possibilities for reasons such
as:
 Safety
 Time
 Expense

Mathematical considerations:
 Statistics – existing data should be taken into account in the model
 Randomisation – modelling uncertainty

Pipelining
 Increases efficiency, particularly in RISC processors where instructions take a uniform time (covered further in Chapter 10)

CHAPTER 6 – TYPES OF PROGRAMMING LANGUAGE

Paradigm

A paradigm is a way of thinking in computing.


 Imperative programming – this is ‘telling the computer what to do’. Procedural and object-oriented languages are of this paradigm
 Declarative programming – this is ‘telling the computer what qualities the solution has’ (i.e. what the goal is). Examples include SQL
Turing completeness

A programming language is described as Turing complete when it can solve any problem that can be solved by a computer (i.e. it can simulate a Turing machine).

Assembly language

Assembly language is a low level language with a 1:1 ratio to machine code.
Machine code is what the processor’s architecture reads. Machine code consists
of a binary opcode and operand, while assembly language uses the same format
in a more readable way (e.g. LDA 3).

Instruction set

Each processor has its own unique instruction set. Programs require machine code written for the correct instruction set. RISC and CISC are two aspects of processor architecture (covered later).

Little Man Computing

An assembly language using 11 opcodes:

 INP, OUT, STA, LDA, ADD, SUB, BRZ, BRP, BRA, HLT, DAT

Memory addressing

There are four main types of addressing data within the memory:

 Immediate Addressing – Operand is the actual value to be used
 Direct Addressing – Operand is the memory location of the value
 Indirect Addressing – Operand is the address of the location of the value
 Indexed Addressing – Incrementing the index register by a constant to iterate through a data structure.

Object-oriented programming

Objects are instances of constructs called classes, which contain:

 Attributes – variables common to all objects of the class, with specific values for each instance.
 Methods – functions or procedures that can be run with respect to the object.

Constructor
A special method that runs when an object is created, typically initialising its attributes. For example, Python uses the __init__() constructor.
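A minimal Python sketch of a class with attributes, a method, and a constructor (the class and names are illustrative):

```python
class Dog:
    def __init__(self, name, age):
        # the constructor runs once, when the object is created,
        # and initialises the instance's attributes
        self.name = name
        self.age = age

    def describe(self):
        # a method: a function run with respect to the object
        return f"{self.name} is {self.age}"

d = Dog("Rex", 3)       # __init__ is called here
print(d.describe())     # Rex is 3
```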

Inheritance

Inheritance occurs between classes. A subclass inherits all attributes and methods from a parent class / superclass. Overriding is the process of overwriting existing attributes or methods inherited from a superclass.

Polymorphism

Meaning ‘many forms’. Typically, when methods behave differently in certain contexts. A polymorphic array contains objects of different classes that all inherit from the same superclass.
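Inheritance, overriding, and a polymorphic list can be sketched together in Python (class names are illustrative):

```python
class Animal:
    def __init__(self, name):
        self.name = name

    def speak(self):
        return "..."

class Dog(Animal):          # Dog inherits name and speak from Animal
    def speak(self):        # overriding: replaces the inherited method
        return "Woof"

class Cat(Animal):
    def speak(self):
        return "Meow"

# a polymorphic list: objects of different classes, same superclass
pets = [Dog("Rex"), Cat("Tom")]
print([p.speak() for p in pets])   # each object uses its own speak()
```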

Encapsulation

This is the isolation of attributes and methods within a class so that they cannot be altered directly by other classes.

A private declaration is used to define such variables, limiting them to the scope of that class.
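Python has no true private declaration, but a double-underscore prefix gives a rough equivalent via name mangling. A sketch (names are illustrative):

```python
class Account:
    def __init__(self, balance):
        self.__balance = balance   # name-mangled: treated as private to this class

    def deposit(self, amount):     # controlled access via a method
        if amount > 0:
            self.__balance += amount

    def get_balance(self):
        return self.__balance

a = Account(100)
a.deposit(50)
print(a.get_balance())   # 150
```

Other code must go through `deposit()` and `get_balance()` rather than touching the balance directly.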

CHAPTER 7 – SOFTWARE

Applications

Applications are pieces of software allowing the user to perform a task or produce something.

Productivity applications include:

 Word processors
 Spreadsheet packages
 Presentation software
 Image/Video editors
 Web browsers

Utilities
Utilities are small software packages which generally have a specific purpose or
task.

Examples of utilities include:

 Anti-virus software
 Disk defragmentation
 Compression
 File managers
 Backup utilities

Operating Systems

Operating systems serve four main purposes:

 To manage the hardware of the system
 To manage programs being run and stored (software)
 To manage the security of the system
 To provide an interface between the computer and the user.

Software that manages the system and provides for the user.

Types of operating system:

 Multi-tasking – these operating systems allow more processes to be executed than there are CPU cores. The OS allocates processor time between programs; due to the speed at which processors work, the rapid switching between programs allows the computer to apparently process multiple tasks simultaneously.
 Multi-user – these operating systems allow more than one person/user to access the computer's resources simultaneously, such as in mainframe computers.
 Distributed – these operating systems control many systems and
coordinate them to appear as one system to the end user.
 Embedded – these operating systems are typically dedicated systems on a
small scale such as in smart TVs, cars, and printers.
 Real-time – these operating systems carry out processes within a
guaranteed amount of time, nearly instantly such as an aeroplane’s
autopilot.

Kernel

The kernel is the ‘core’ of the operating system. It manages system resources
including memory management and scheduling. Lies below the user interface.
Applications use the kernel to send and receive data to and from hardware
devices.

Memory Management
Memory management allows programs to be stored in the computer's memory safely and efficiently. From a safety point of view, it is important that the data of one program is protected from other programs, preventing possible malicious access or amendment of data. The OS's memory management deals with this.

Memory can be stored using segmentation, logically splitting the data into
modules or routines, or paging, where the data is split up in pages of the same
physical size.

Virtual Memory

Virtual memory uses the hard drive as an ‘extension’ of the RAM. If the system is running low on RAM, a dedicated partition of the hard drive is used: data is stored there temporarily to free the RAM for running processes, and swapped back in when needed. This process is therefore slower than just using physical memory.
memory. Disk thrashing occurs when the RAM is full, and pages are moved back
and forth between physical and virtual memory rapidly, significantly slowing the
system down.

Scheduling

Scheduling manages the amount of time different processes have in the CPU.
Scheduling allows multiple processes to run apparently simultaneously. Types of
scheduling algorithm:

 Round robin – each process is allocated a certain amount of time, and if the process is incomplete after this fixed amount of time, it goes to the back of the queue.

 First come first served – Processes are processed until complete in the
order they arrive in (like a shop queue). The downside to this is that if the
process takes a long time, no other processes can be worked on until it is
complete.

 Shortest job first – This scheduling algorithm picks the job that will take
the shortest amount of time and runs it until it finishes. The algorithm
needs to know how long each process will take beforehand.

 Shortest time remaining – The scheduler estimates the processing time for
each process and executes the quickest one. If a quicker process is added,
this will be executed instead until complete.

 Multi-level feedback queues – This algorithm uses a number of queues with different priorities. Higher priority queues are executed first, and processes are moved to different queues depending on their behaviour.

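The round-robin algorithm above can be sketched in Python (a simplified model: process names, remaining times, and the time slice are illustrative):

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: dict of name -> remaining time; returns completion order."""
    queue = deque(processes.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                 # the process gets one time slice
        if remaining > 0:
            queue.append((name, remaining))  # unfinished: back of the queue
        else:
            order.append(name)               # finished within its slice
    return order

print(round_robin({"A": 3, "B": 1, "C": 2}, quantum=1))   # ['B', 'C', 'A']
```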
Interrupts

Interrupts occur when a device sends a signal to the CPU requiring its attention.
Polling is when the CPU checks each peripheral to see if it needs attention at
certain intervals. Interrupts are more efficient because the CPU is only involved
when attention is needed.

Interrupt service routines (ISRs) determine what happens when a particular interrupt is raised. Interrupts all have a priority level. If the interrupt's priority is higher than that of the current process, then the process will be interrupted to deal with the peripheral. The processor checks for interrupts at the end of every fetch-decode-execute cycle.

If a higher priority interrupt is called:

 The contents of the program counter and other registers are copied to the
stack
 The relevant ISR is loaded by changing the program counter to the
corresponding memory location of the ISR.
 Once the ISR has completed, the previous values of the program counter and other registers are restored to the CPU and execution continues as before.

If an even higher priority interrupt is raised whilst an ISR is being run, the
same process applies – the current ISR is added to the stack in the same way
and loaded from the stack once the higher priority ISR is complete.

Device Drivers

Drivers are a software package, usually supplied with a device, to optimise compatibility by informing the system how to communicate with the device.

Virtual Machines

Virtual machines emulate a computer using software. A common use of virtual machines is to run other operating systems within a system. Network infrastructures are often virtualised, as multiple servers can be run on one physical machine.

Virtual machines can also be used to interpret intermediate code.

BIOS

The basic input/output system (BIOS) is loaded when a computer is booted up. The BIOS is stored in non-volatile memory (e.g. EEPROM) and points towards the operating system for start-up. The power-on self-test (POST) checks that the computer is functional (i.e. RAM is installed and accessible and the processor is working). The operating system's kernel is then loaded during the boot process.
Open and closed source software

Closed source software is software where the source code is not publicly available. These programs are generally protected by copyright; users are not able to edit the source code, as only the executable machine code is visible.

Open source software has its source code publicly available. Users can modify the code to suit their needs, and the development process is more community oriented.

 OSS is generally free, as the code is public.
 OSS is not always polished, as much of it is created by unpaid volunteers.
 More people work on open source software, so there is potential for more innovation.
 OSS can be updated more quickly, as more programmers are available.

CHAPTER 8 – APPLICATIONS GENERATION

Machine Code

Machine code is a low-level language which uses binary to represent opcodes and
operands. Different processors have different instruction sets.

Assembler

Assembly code is a way of writing machine code using three or four-letter codes
to represent the binary. Little man computing uses assembly language for
example, with commands such as LDA, STA and ADD. Assembly code has a one-
to-one relationship with machine code.

Assemblers convert assembly language into object/machine code.


Compilers and Interpreters

Both are types of translators.

Interpreters take each line of a high-level language program, convert it to machine code and run it. This makes for easy debugging, as the programmer will know exactly which line contains an error.

Compilers take the entire high-level language program and convert it into object code. Debugging is harder, as the whole program will simply fail to compile, with less to indicate where an error occurs.

Object Code

Object code is an intermediary step sometimes taken before pure machine code is produced. Object code makes up portions of code (libraries and subroutines) containing placeholders that mark where other code needs to be linked in. The linker links these and produces the final executable program.

How compilers work

A sequence of stages occurs during the compiling process, moving closer to the
machine code:

Lexical analysis
 This stage first removes all comments and whitespace
 Tokens are identified which include operators, variables, and constants.
 A symbol table keeps track of the variables and subroutines in the
program.

Syntax analysis
 The syntax of the code is checked to ensure it adheres to the programming language's rules.
 If any errors occur, a list of syntax errors will be communicated to the
programmer.
 An abstract syntax tree (AST) is created to represent the program.

Code generation
 The AST is converted into object code

Optimisation
 Lines of code which have no effect on the program are removed
 Some instructions may be replaced by a more efficient means.
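The lexical-analysis stage above can be sketched as a toy tokeniser in Python (the token categories and pattern are illustrative, not those of any real compiler): whitespace and comments are discarded, and the remaining text is emitted as tokens.

```python
import re

# a toy lexer: each alternative is a named group for one token category
TOKEN_SPEC = [
    ("NUMBER",   r"\d+"),
    ("IDENT",    r"[A-Za-z_]\w*"),
    ("OPERATOR", r"[+\-*/=]"),
    ("SKIP",     r"\s+|#.*"),      # whitespace and comments are discarded
]
PATTERN = re.compile("|".join(f"(?P<{k}>{v})" for k, v in TOKEN_SPEC))

def tokenise(source):
    tokens = []
    for match in PATTERN.finditer(source):
        if match.lastgroup != "SKIP":
            tokens.append((match.lastgroup, match.group()))
    return tokens

print(tokenise("total = price + 5  # add tax"))
```

A real lexer would also build the symbol table of variables and subroutines as it goes.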

Libraries
Libraries are pre-written and contain functions and procedures that can be used
by the programmer without having to re-write them.

Linkers and Loaders

Linkers ensure library code is connected to the places where it is used. Static linkers insert the library code into the correct position in the machine code, making the files larger in size as the code is effectively copied. With dynamic linking, the compiled library is stored on the PC and run when needed by the OS. A loader is responsible for loading a program into memory.

CHAPTER 9 – SOFTWARE DEVELOPMENT


Elements of Software development

Feasibility study

An evaluation as to whether a program is worthwhile to start in the first place.


Reasons for failing a feasibility test include:

 Budget may not be big enough
 Legal reasons including data protection and privacy
 Not technically feasible from a programming standpoint

Requirements specification

Understandable and measurable requirements from the end user’s needs. At the
end of a project, the initial requirements specification will be measured against
the final product to see if the developer has fulfilled the consumer’s
requirements.
Testing

 Testing should be carried out throughout development (after every module of code is written).
 Destructive testing is testing designed to break the program in order to understand how to make the system foolproof.
 Alpha testing is an early release of the product where other developers, usually within the same company, can assess the product.
 Beta testing is the next stage, where actual consumers (i.e. not developers) test the program, usually in a small group.
 Acceptance testing is when the consumer tests the product against the requirements specification.

Methodologies

The Waterfall Lifecycle

Requirements > Analysis > Design > Coding > Testing > Maintenance

Advantages – simplicity, easy to manage, clear responsibilities at each stage, easy to see whether on schedule.

Disadvantages – carries a risk; nothing tangible is created for the user to see
until the testing stage. If the task has been misunderstood at any stage, the
testing stage could show some significant misunderstandings and therefore a
waste of money and time.

Waterfall is therefore suited to less complex projects.

Rapid Application Development

RAD uses prototyping as a means of development. The user is presented with prototypes during the process and can direct the programmers to improve on them. There is something to show early on in the development stage. Once the prototypes stop improving and the user is satisfied, the final product is created.

Advantages – Works well where the requirements aren't very clear. Continuous feedback, excellent usability.

Disadvantages – RAD focuses on the final product rather than how it works, so
code may be inefficient. The client must be able to make a commitment to review
prototypes or RAD is ineffective.

Best for smaller projects.

Spiral Model

Spiral model is designed to take into account risks within the project.

1. Determine objectives
2. Identify and resolve risks
3. Develop and test
4. Plan the next iteration

Agile Programming

Designed to cope with changing requirements throughout the development process.

Extreme programming is like RAD, but the iterations of improvement occur more frequently and testing is carried out during development. Planning is carried out before each iteration of development.

Key practices of extreme programming:

 ‘The planning game’
 40-hour week
 Pair programming

Advantages – High emphasis on programming usually results in high quality code

Drawbacks – Programmers must collaborate well together and work in the same physical location. Clients need to commit a representative to work with the team.

CHAPTER 10
The Central Processing Unit

The CPU processes instructions passed on from the memory using transistors
which control binary circuits using logic gates.

Processors work using the fetch-decode-execute cycle

Clock speed – this is measured in hertz and represents the number of cycles per
second.

Registers

Program Counter (PC) – This register keeps track of the memory location of the
line of machine code being executed. It is incremented to point to the next
instruction. The PC can also be changed when branching to subsections of code.

Memory Data Register (MDR) – The MDR stores the data that has been fetched
from or stored in memory

Memory Address Register (MAR) – The MAR stores the address of the data or
instructions that are to be fetched from or sent to memory.

Current Instruction Register (CIR) – The CIR stores the most recently fetched
instruction, which is waiting to be decoded and executed.

Accumulator (ACC) – The accumulator stores the results of calculations and logical operations made by the ALU. In LMC, LDA loads data from a memory location into the ACC and STA stores data from the ACC to a memory location.

General Purpose Registers – Other registers are used to store intermediate data
rather than sending data all the way to the RAM.
Buses – Buses in the CPU are used as communication channels to transfer data between two locations. The USB is a type of bus.
 Data bus – this sends data between the processor and memory
 Address bus – this carries the memory addresses.
 Control bus – this sends control signals to and from the control unit.

Arithmetic logic unit (ALU) – The ALU carries out calculations and logical operations. The results are stored in the accumulator.

Control Unit (CU) – the CU coordinates the processor by controlling how data
moves around the CPU and communicating through the control bus. Instructions
are decoded in the control unit.

Registers and the fetch-decode-execute cycle

Fetch

 The PC will start at 0 at the beginning of a program.
 This value in the PC is loaded into the MAR.
 A fetch signal is sent along the control bus.
 The value in the MAR is sent along the address bus.
 The corresponding data is sent down the data bus and stored in the CIR.
 The PC is incremented by 1

Decode

 The contents of the CIR is sent to the CU, where it is decoded.
 Appropriate action is taken (e.g. load contents into the ACC)

Execute

 Instruction is executed appropriately…


Improving CPU performance

 Clock speed – increasing clock speed increases the number of cycles per
second, thus increasing processing performance.
 Cache memory – The CPU has different levels of cache memory inside (L1,
L2, L3 typically), where each vary in size, speed, and location. L1 is the
smallest, fastest, most expensive and closest to the core. L3 cache is often
shared between cores.
 Multiple cores – Multiple cores allow multitasking, where different cores run different applications; multiple cores can also work together on one task.
 Pipelining – pipelining is used in modern processors: while one instruction is being executed, the next instruction is being decoded, and the one after that is being fetched. ‘Flushing the pipes’ occurs when there is a branch in the program and the next instruction can't be predicted.

Graphics Processing Unit (GPU)

The GPU is designed to perform the calculations associated with graphics and
rendering. 3D graphics in games require real time rendering. GPUs have
dedicated instruction sets for graphics processing.

GPUs apply Single instruction multiple data (SIMD) to do the same calculation
for multiple points on the screen.

Graphics cards contain GPUs with their own memory, but GPUs can also be embedded on the main processor, sharing the system's memory.

GPUs are not limited to graphics rendering; similar calculations for graphics
processing are useful for:
 Modelling physical systems
 Audio processing
 Breaking passwords with brute force (number crunching)
 Machine learning

Input, output and storage devices

Input/Output devices
Storage Devices
Memory
 RAM
 ROM

Computer Architectures

Von Neumann Architecture

In the Von Neumann architecture, instructions and data are stored in the same memory. Therefore, data and instructions are sent along the same data bus, meaning instructions can't be fetched at the same time as data is being sent, which causes the ‘Von Neumann bottleneck’ (i.e. pipelining is not possible).

Harvard Architecture

Data and instructions are stored in separate memory locations. Pipelining is therefore possible. Harvard architecture tends to be used by RISC processors.

Parallel Processing
Parallel processing is when the computer carries out multiple computations
simultaneously to solve a given problem.

Single Instruction Multiple Data (SIMD) – The same operation is carried out on multiple pieces of data simultaneously

Multiple Instructions Multiple Data (MIMD) – Different instructions are carried out on different pieces of data simultaneously.

SIMD and MIMD make use of multiple cores, and MIMD takes place on a larger
scale on supercomputers. Supercomputers are expensive to buy and run and
therefore distributed computing is a more viable alternative to parallel
processing. Distributed computing uses the internet to split problems between
machines.

Using 100 cores to solve a problem does not necessarily mean the process will be
completed 100x faster, as some problems don’t lend themselves to parallelisation.

RISC vs CISC

Complex instruction set computers (CISC) have a wider range of instructions available, possibly matching the functionality of high-level languages. Programs require less memory as they can be implemented in fewer instructions.

Reduced instruction sets (RISC) have more streamlined instruction sets. RISC
tends to have fewer addressing modes and more general-purpose registers. All
instructions in a RISC system should execute in the same time, ideally one clock
cycle.

RISC processors involve fewer transistors, so they produce less heat, consume less power, and cost less. However, compilers have a harder job compiling to RISC instruction sets.

CHAPTER 11 – DATA TYPES


Important Data Types

Character – single letter, number, or symbol
String – A collection of alphanumeric characters
Boolean – A binary data type (True/False)
Integer – Whole numbers
Real/float – Decimal numbers

ASCII and Unicode


The American Standard Code for Information Interchange (ASCII) is a standard for storing characters. Standard ASCII uses 7 bits (128 characters); extended ASCII uses 8-bit binary, representing 256 characters. E.g. a string of 6 characters requires 6 bytes to store in extended ASCII.

Unicode uses 16 bits, which allows for more than 65,000 characters.

BINARY AND HEXADECIMAL

Denary to Binary

E.g. 147 = 10010011

Each binary digit represents a power of 2. From right to left: 2^0, 2^1, 2^2, etc.

Binary to Denary

01100101 = 64+32+4+1 = 101

Representing Negative numbers

Sign and Magnitude Representation

This uses a sign bit to represent the sign (+/–) of the number. This is the left
most bit, called the most significant bit (MSB). If the sign bit is 1, the number is
negative.

An 8-bit sign and magnitude number can represent –127 ≤ n ≤ +127.

01000011 = 67
11000011 = –67

Two’s Complement

In two’s complement, the most significant bit (still the leftmost bit) represents
the negative version of that power of two if the bit is 1. In an 8-bit number, if the
MSB is 1, that means that –128 should be added to the number.

01000011 = 67
11000011 = –61
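These two's-complement conversions can be checked with a short Python sketch (the function names are illustrative):

```python
def to_twos_complement(n, bits=8):
    """Encode a signed integer as a two's-complement bit string."""
    return format(n & (2**bits - 1), f"0{bits}b")

def from_twos_complement(bit_string):
    """Decode: the MSB carries a negative weight of -2^(bits-1)."""
    bits = len(bit_string)
    value = int(bit_string, 2)
    if bit_string[0] == "1":       # MSB set: subtract the full range
        value -= 2**bits
    return value

print(to_twos_complement(-61))         # 11000011
print(from_twos_complement("11000011"))  # -61
```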

Hexadecimal

Hexadecimal is base-16, meaning each digit represents a value between 0 and 15. 0-9 use the corresponding number, and 10-15 use the letters A-F.
Binary to Hexadecimal

1011 0010 can be split into two nibbles. Each binary nibble corresponds to one hexadecimal digit.

1011 = 11 = B
0010 = 2

Therefore 10110010 = B2
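A minimal Python sketch of the nibble-by-nibble conversion (the function name is illustrative):

```python
def binary_to_hex(binary):
    """Split into nibbles (4 bits) and map each to one hex digit."""
    digits = "0123456789ABCDEF"
    nibbles = [binary[i:i + 4] for i in range(0, len(binary), 4)]
    return "".join(digits[int(nibble, 2)] for nibble in nibbles)

print(binary_to_hex("10110010"))   # B2
```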

Images, sound and instructions

Simple black and white graphics can be represented by a matrix of binary numbers, where 0 represents a white pixel and 1 represents a black pixel.

1 bit = 2^1 colours = 2
2 bit = 2^2 colours = 4
8 bit = 2^8 colours = 256
16 bit = 2^16 colours = 65,536

Sample rate determines the quality of a digital sound. The more samples, the
closer to the analogue audio.

Metadata

This is information stored about the image/file, containing miscellaneous information as well as the properties needed to load the image, e.g. width, height, bit depth, resolution.

Representing Instructions

Instructions are comprised of an opcode (operator) and an operand. The opcode represents a machine-level instruction, such as LDA for loading to the accumulator. The operand is the data or address that the instruction operates on.

CHAPTER 12 – COMPUTER ARITHMETIC


Binary Addition
Easy

Binary Subtraction
Convert number to be subtracted into negative two’s complement form and add
as normal

CHAPTER 13 – DATA STRUCTURES


Records, lists, and tuples

Records are used to store data with attributes, typically in databases. The
attributes in records are not ordered but may sometimes be indexed.

Lists are ordered sets of data organised by indexes. Data is accessed through an
index, representing the position of the data within the list. Lists require less
setup than records as records need the attributes to be defined beforehand,
however identifying data by attribute is more user friendly than by index.

A tuple is an immutable list, meaning it is of fixed size and cannot be changed. It works the same way as a list – data is referenced by index – but you cannot append, modify, or remove values in a tuple. The fixed size means tuples save space.

Stacks and Queues

Stacks
A stack is a method for handling linear data structures (i.e. a list). Data is added
to and removed from the top of the stack. Adding data to a stack is ‘pushing’ and
removing is ‘popping’. The commands PUSH and POP are used in assembly language.

A stack is implemented using pointers.

Pushing to a stack:
 If full, return error
 Else, add 1 to the stack pointer
 Set stack(new pointer) to the data

Popping from a stack:

 If empty, return error
 Else, set data to stack(pointer)
 Remove 1 from stack pointer.
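The pointer-based push/pop steps above can be sketched in Python using a fixed-size array and a stack pointer (class and names are illustrative):

```python
class Stack:
    """A fixed-size stack: an array plus a pointer to the top item."""
    def __init__(self, size):
        self.data = [None] * size
        self.pointer = -1          # -1 means the stack is empty

    def push(self, value):
        if self.pointer == len(self.data) - 1:
            raise OverflowError("stack full")   # if full, return error
        self.pointer += 1                       # add 1 to the stack pointer
        self.data[self.pointer] = value         # set stack(new pointer) to the data

    def pop(self):
        if self.pointer == -1:
            raise IndexError("stack empty")     # if empty, return error
        value = self.data[self.pointer]         # set data to stack(pointer)
        self.pointer -= 1                       # remove 1 from stack pointer
        return value

s = Stack(3)
s.push("a"); s.push("b")
print(s.pop())   # b  (last in, first out)
```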

Queues
A queue uses first in first out (FIFO).

Popping removes the data at the start pointer, and the new start pointer
becomes the next value.

Pushing adds the data after the end pointer, and the new end pointer becomes
the location of the data just added.
Circular queues are created when the data ‘wraps around’ the queue so the end
pointer is less than the start pointer.
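A circular queue along these lines might be sketched in Python (one of several possible designs; here a count is kept instead of an explicit end pointer):

```python
class CircularQueue:
    """Fixed-size FIFO queue whose positions wrap around the array."""
    def __init__(self, size):
        self.data = [None] * size
        self.front = 0      # start pointer
        self.count = 0      # number of items currently stored

    def enqueue(self, value):
        if self.count == len(self.data):
            raise OverflowError("queue full")
        end = (self.front + self.count) % len(self.data)  # wrap around
        self.data[end] = value
        self.count += 1

    def dequeue(self):
        if self.count == 0:
            raise IndexError("queue empty")
        value = self.data[self.front]
        self.front = (self.front + 1) % len(self.data)    # wrap around
        self.count -= 1
        return value

q = CircularQueue(2)
q.enqueue(1); q.enqueue(2)
print(q.dequeue())   # 1  (first in, first out)
q.enqueue(3)         # reuses the slot freed by the dequeue
print(q.dequeue())   # 2
```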

Linked lists

Linked lists store data using pointers. Each piece of data is attached to a pointer, which points to the next piece of data in the list. Linked lists are also a linear, one-way data structure. There may be multiple sets of pointers, each representing an attribute/property of the data.

Adding to linked lists

The free storage pointer points to the first available free node.

 Store the added data in the free node being pointed to
 Move the free storage pointer on to the next available free node
 Change the pointer of the previous piece of data to point to the new node

Removing from linked lists

Simply remove by changing the pointer of the preceding piece of data, bypassing the data to be removed. The data is then deleted and its node marked as free.

Traversing a linked list

The list must be traversed in order of pointers. Searching for an item requires following the pointers and checking each item in turn until it is found.
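A minimal linked-list sketch in Python, including removal by bypassing a node (class and names are illustrative):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None   # pointer to the next node, or None at the end

def traverse(head):
    """Follow the pointers from the head, collecting each value in order."""
    values = []
    node = head
    while node is not None:
        values.append(node.value)
        node = node.next
    return values

# build a -> b -> c
head = Node("a")
head.next = Node("b")
head.next.next = Node("c")

# remove "b" by bypassing it: repoint its predecessor
head.next = head.next.next
print(traverse(head))   # ['a', 'c']
```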

Trees

Top of a tree is called the root node, and nodes are structured as parents and
children of related nodes.

Binary Trees
Binary trees are trees in which each node has at most two children. Each node contains:
 Data
 Left pointer
 Right pointer

Traversing a binary tree

Pre-order traversal:
 Visit the root node
 Traverse the left sub-tree
 Traverse the right sub-tree

Post-order traversal:
 Traverse the left sub-tree
 Traverse the right sub-tree
 Visit the root node

In-order traversal:
 Traverse left sub-tree
 Visit root node
 Traverse right sub-tree
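The three traversal orders can be sketched recursively in Python (class and function names are illustrative):

```python
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def pre_order(node):   # root, then left, then right
    if node is None:
        return []
    return [node.value] + pre_order(node.left) + pre_order(node.right)

def in_order(node):    # left, then root, then right
    if node is None:
        return []
    return in_order(node.left) + [node.value] + in_order(node.right)

def post_order(node):  # left, then right, then root
    if node is None:
        return []
    return post_order(node.left) + post_order(node.right) + [node.value]

#     2
#    / \
#   1   3
root = TreeNode(2, TreeNode(1), TreeNode(3))
print(pre_order(root), in_order(root), post_order(root))
# [2, 1, 3] [1, 2, 3] [1, 3, 2]
```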

Reverse Polish notation

 If the next symbol is an operand (number), push it to the stack.
 If the next symbol is an operator, pop the last two items from the stack, perform the operation, and push the result to the stack.

Post-order traversal of an expression tree converts infix notation into reverse Polish notation.
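The stack-based evaluation steps above can be sketched in Python (the function name is illustrative):

```python
def evaluate_rpn(tokens):
    """Evaluate reverse Polish notation using a stack."""
    stack = []
    operators = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for token in tokens:
        if token in operators:
            b = stack.pop()              # pop the last two operands
            a = stack.pop()
            stack.append(operators[token](a, b))   # push the result
        else:
            stack.append(int(token))     # operand: push to the stack
    return stack.pop()

# infix (3 + 4) * 2 is "3 4 + 2 *" in reverse Polish
print(evaluate_rpn("3 4 + 2 *".split()))   # 14
```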

Graphs

Graphs are made up of vertices/nodes and edges/arcs.

Traversal of graphs

Depth-first – Follow one path as deep as possible from the starting node, backtracking to explore other branches. Uses a stack.

Breadth-first – Visit all the nodes attached directly to a starting node first. Uses
a queue.
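Both traversals can be sketched in Python, one with a stack and one with a queue (the example graph and names are illustrative):

```python
from collections import deque

# adjacency list: each vertex maps to the vertices it connects to
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def depth_first(start):
    """Stack: follow one branch as deep as possible before backtracking."""
    visited, stack = [], [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.append(node)
            stack.extend(reversed(graph[node]))  # keep left-to-right order
    return visited

def breadth_first(start):
    """Queue: visit all direct neighbours before going deeper."""
    visited, queue = [], deque([start])
    while queue:
        node = queue.popleft()
        if node not in visited:
            visited.append(node)
            queue.extend(graph[node])
    return visited

print(depth_first("A"), breadth_first("A"))
# ['A', 'B', 'D', 'C'] ['A', 'B', 'C', 'D']
```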

Hash tables

NEED TO GO OVER

CHAPTER 14 – LOGIC GATES AND BOOLEAN ALGEBRA


Logic Gates

Truth tables represent all corresponding outputs for each combination of inputs
in the form of a table.

AND Gate
Output is TRUE if both inputs are TRUE
OR Gate
Output is TRUE if at least 1 input is TRUE

NOT Gate
Output is TRUE if input is FALSE

XOR Gate
Output is TRUE if exactly one input is TRUE

NAND and NOR Gates

A combination of an AND/OR gate with a NOT gate after the output, so the expected output of the basic gate is inverted.

De Morgan’s Rules
 (AB) =  AB  NOT(A OR B) = NOT A AND NOT B
 (AB) =  AB  NOT(A AND B) = NOT A OR NOT B

Adder Circuits

NEED TO GO OVER

Karnaugh Maps – DO QUESTIONS ON p181

These are used to represent a truth table optimised to enable pattern recognition
for identifying minimal logical expressions (i.e. simplifying them)

Patterns of 1 should be circled in a Karnaugh map as blocks.

Rules for using Karnaugh maps:


 No zeros allowed in blocks
 No diagonal blocks
 Groups as large as possible
 Groupings need to be of size 2^n
 Every 1 must be within a block
 Overlapping allowed
 Wrap around allowed
 Smallest possible number of groups
Flip-flop circuits

Truth table outputs depend on previous values, as the output from one gate acts as an input to another dependent gate.
Serial and sequential files

Serial – Records are organised one after the other. Structure of each record must
be the same. To search for a record, records must be examined one by one until a
match is found.

Sequential – Stored in the same way but records are ordered (e.g. alphabetically)

Sequential files can be searched more quickly by storing an index file.

1st Normal Form

Remove duplicate columns from the table
Create separate tables for each group of related data
Create primary keys for these tables
Ensure atomicity

2nd Normal Form

Check data is in 1NF
Remove any data sets that occur in multiple rows and transfer them to new tables
Create relationships with foreign keys

3rd Normal Form

Check data is in 2NF
Identify any data that does not depend on the primary key and separate it into new tables

CRUD

 Create
 Read
 Update
 Delete

SQL
INSERT, SELECT, FROM, WHERE, JOIN, INNER JOIN, DELETE, INTO

HTML
<body><head><br><div><link><li><script><html>
Levels of Database Views

 Physical View – How the data is recorded on the storage medium, in binary. Only relevant to the programmers of the DBMS

 Logical View – How the data is organised. Construction of tables, queries. Production of the data dictionary.

 User View – The appearance and functionality of the database, i.e. the interface.

Referential integrity is the idea that inconsistent references are not possible – e.g. a foreign key must always refer to an existing record.

ACID RULES

 Atomicity – A transaction should either be performed completely or not performed at all, leaving the database in a consistent state
 Consistency – A transaction should always take the database from one consistent state to another.
 Isolation – While a transaction is ongoing, nothing else can interfere. For example, locking records during transactions to prevent inconsistencies.
 Durability – Once a change has been made, it should be written to secondary storage (e.g. backing up the database).

Layers of a Network

 Application Layer
 Network Layer
 Physical Layer

OSI

Physical – Physical components such as Ethernet cables; the binary level
Data Link – MAC addressing, controlling access and error checking
Network – Transmission of packets and routing (logical addresses)
Transport – Keeping track of the segments of a network, checking transmission status and packetisation, e.g. TCP
Session – Responsible for starting, managing and ending connection sessions
Presentation – May involve data conversions, encryption/decryption
Application – How the data is delivered to the user from the presentation layer

TCP/IP Stack

 Application Layer – Production, communication and reception of data. Protocols operate at this level
 Transport Layer – Establishing and terminating sessions
 Internet Layer – Transmission of datagrams and packets. IP is used at this level, directing datagrams
 Link Layer – Links the datagrams to the physical network.
