Email addresses
irabashetti@gmail.com (P. S. Irabashetti), Anjaligawali8@gmail.com (Gawali Anjali B.),
Akshaybetkar74@gmail.com (Betkar Akshay S.)
Abstract
This paper reviews parallel processing in processor organization. As parallel processing becomes more widespread, the need
to improve parallel processing in processor organization becomes even more significant. Here we evaluate each multiple-processor
organization, 1) SISD: single instruction, single data; 2) SIMD: single instruction, multiple data; 3) MISD: multiple
instruction, single data; and 4) MIMD: multiple instruction, multiple data, along with vector/array processors,
symmetric multiprocessing (SMP), NUMA and clusters.
Keywords
Data, Instruction, IS: Instruction Set, CU: Control Unit, MU: Memory Unit, PU: Processing Unit, LM: Local Memory,
PE: Processing Element, SMP: Symmetric Multiprocessor, NUMA: Non-Uniform Memory Access
instructions and data streams present in the computer architecture. SISD can have concurrent processing
characteristics; instruction fetching and pipelined execution of instructions are common examples found in most
modern SISD computers.
• A serial (non-parallel) computer
• Single Instruction: Only one instruction stream is being acted on by the CPU during any one clock cycle
• Single Data: Only one data stream is being used as input during any one clock cycle
• Deterministic execution
• This is the oldest and, even today, the most common type of computer
• Examples: older-generation mainframes, minicomputers and workstations; most modern-day PCs.
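The SISD model described above can be illustrated with a plain scalar loop: a single instruction stream operates on a single data stream, one element per step. This is a minimal Python sketch added for illustration, not code from the paper:

```python
# SISD-style processing: one instruction stream, one data stream.
# Each loop iteration applies one operation to one data element,
# exactly as a serial (non-parallel) processor would.
def scale_serial(data, factor):
    result = []
    for x in data:          # one element per step
        result.append(x * factor)
    return result

print(scale_serial([1, 2, 3, 4], 10))  # [10, 20, 30, 40]
```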
C90, Fujitsu VP, NEC SX-2, Hitachi S820, ETA10
• Most modern computers, particularly those with graphics processing units (GPUs), employ SIMD instructions and
execution units.
• SIMD computers require a wide design effort, resulting in longer product development times. Since the underlying
serial processors change so rapidly, SIMD computers suffer from fast obsolescence. The irregular nature of many
applications also makes SIMD architectures less suitable.

techniques. Specifically, they allow better scaling and use of computational resources than MISD does. However, one
prominent example of MISD in computing is the Space Shuttle flight control computers.

hypercube or mesh interconnection schemes. A multi-core CPU is an MIMD machine.
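The SIMD idea, a single instruction broadcast to many data elements at once, can be sketched with NumPy, whose vectorized array operations typically map onto the CPU SIMD instructions mentioned above. This is an illustrative sketch assuming NumPy is available; the paper itself gives no code:

```python
import numpy as np

# SIMD-style processing: a single instruction (multiply) is applied
# to an entire data stream at once instead of element by element.
data = np.array([1, 2, 3, 4])
result = data * 10   # one vector operation, four data elements
print(result)        # [10 20 30 40]
```

Contrast this with the SISD loop: here the per-element work is expressed as one operation over the whole array, which is what lets hardware issue it to wide execution units.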
• Vector instructions access memory with known patterns, which allows multiple memory banks to supply operands
simultaneously.
• Less memory access = faster processing time.

2.7. Symmetric Multiprocessing (SMP)
In computing, symmetric multiprocessing or SMP involves a multiprocessor computer hardware architecture
where two or more identical processors are connected to a single shared main memory and are controlled by a single
OS instance. Most common multiprocessor systems today use an SMP architecture. In the case of multi-core
processors, the SMP architecture applies to the cores, treating them as separate processors. Processors may be
interconnected using buses, crossbar switches or an on-chip mesh network. The bottleneck in the scalability of SMP
using buses or crossbar switches is the bandwidth and power consumption of the interconnect among the various
processors, the memory, and the disk arrays. Mesh architectures avoid these bottlenecks and provide nearly
linear scalability to much higher processor counts, at the sacrifice of programmability.

NUMA removes the problem that arises when a number of processors access the shared memory: NUMA provides
separate memory for each processor, eliminating the demerits of tightly coupled systems. NUMA systems include
additional hardware or software to move data between memory banks. This operation slows the processors attached
to those banks, so the overall speed increase due to NUMA depends heavily on the nature of the running tasks.

Figure 8. NUMA.
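The shared-memory model that SMP exposes to software can be sketched with Python threads standing in for the identical processors: several workers, scheduled by a single OS instance, read and write one shared memory. This is a simplified illustration, not code from the paper; real SMP parallelism comes from the hardware and OS, and on a NUMA machine the placement of the shared data across memory banks would additionally affect speed:

```python
import threading

# SMP as seen by software: identical workers share one main memory
# (here, a single Python list) under one OS instance's scheduler.
shared = [1, 2, 3, 4, 5, 6, 7, 8]

def worker(start, end):
    # Each worker updates its own disjoint slice of the shared memory,
    # so no locking is needed for this particular access pattern.
    for i in range(start, end):
        shared[i] *= 2

threads = [threading.Thread(target=worker, args=(i * 4, (i + 1) * 4))
           for i in range(2)]          # two "identical processors"
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)  # [2, 4, 6, 8, 10, 12, 14, 16]
```

Overlapping slices would require a lock, which mirrors the paper's point about the shared-memory interconnect being the SMP bottleneck: contention for the one memory is what NUMA's per-processor banks try to relieve.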