
Advanced Microprocessors and Microcontrollers

by

A. Narendiran
Question 1
Briefly explain the following ...
a. Pipeline hazards.
b. Instruction level parallelism.
c. Virtual memory and paging.

Answer

Pipeline Hazards
There are situations, called hazards, that prevent the next instruction in the instruction stream from being executed
during its designated clock cycle. Hazards reduce the performance from the ideal speedup gained by pipelining. There
are three classes of hazards:

• Structural Hazards: They arise from resource conflicts when the hardware cannot support all possible combinations
of instructions in simultaneous overlapped execution.
• Data Hazards: They arise when an instruction depends on the result of a previous instruction in a way that is
exposed by the overlapping of instructions in the pipeline. Data hazards are divided into three categories (a short
example in C follows this list).

1. RAW (read after write) hazards - the current instruction must wait to read data until after a previous instruction
writes the correct data.
2. WAR (write after read) hazards - the current instruction must wait to write data until after a previous instruction
reads the old data.
3. WAW (write after write) hazards - the current instruction must wait to write data until after a previous instruction
writes to the same register. This hazard is more subtle in that neither instruction executes incorrectly; however,
subsequent instructions can be incorrect if the writes occur out of order.
Note that RAR (read after read) is not really a hazard because it makes no difference which order the same operand
is read.

• Control Hazards: In the ideal pipeline, we fetch instructions one after another in order. As long as the location of
the next instruction is known, this process can go forward. When a branch instruction is fetched, the next instruction
location is not known until the branch instruction finishes execution. Thus, we may have to wait until the correct
location of the next instruction is known before fetching more instructions. This is a control hazard.
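As a concrete illustration of the three data-hazard categories, consider the following C sketch, where each assignment
stands for one machine instruction and r1..r9 are stand-ins for processor registers:

    /* Each assignment stands for one machine instruction; r1..r9 are
       stand-ins for processor registers. */
    void hazards_demo(void)
    {
        int r2 = 1, r3 = 2, r5 = 3, r6 = 4, r7 = 5, r8 = 6, r9 = 7;
        int r1, r4;

        r1 = r2 + r3;   /* I1: writes r1                      */
        r4 = r1 + r5;   /* I2: reads r1  - RAW on r1 with I1  */
        r5 = r6 + r7;   /* I3: writes r5 - WAR on r5 with I2  */
        r1 = r8 + r9;   /* I4: writes r1 - WAW on r1 with I1  */

        (void)r4; (void)r1;  /* silence unused-variable warnings */
    }

If I3's write to r5 completed before I2's read, or I4's write to r1 completed before I1's, later instructions would
see wrong values. In a simple in-order pipeline WAR and WAW cannot occur, which is why RAW is the hazard most often
handled, typically by forwarding or stalls.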

Instruction Level Parallelism


Instruction-level parallelism (ILP) is a family of processor and compiler design techniques that speed up execution by
causing individual machine operations, such as memory loads and stores, integer additions, and floating-point
multiplications, to execute in parallel. The operations involved are normal RISC-style operations, and the system is
handed a single program written with a sequential processor in mind. VLIWs and superscalars are examples of processors
that derive their benefit from instruction-level parallelism.

ILP Execution

A typical ILP processor has the same type of execution hardware as a normal RISC machine. The differences between
a machine with ILP and one without are that there may be more of that hardware, for example several integer adders
instead of just one, and that the control will allow, and possibly arrange, simultaneous access to whatever execution
hardware is present.

ILP Architecture

• Sequential architectures: architectures for which the program is not expected to convey any explicit information
regarding parallelism. Superscalar processors are representative of ILP processor implementations for sequential
architectures.
• Dependence architectures: architectures for which the program explicitly indicates the dependences that exist
between operations. Dataflow processors ([29-31]) are representative of this class.
• Independence architectures: architectures for which the program provides information as to which operations are
independent of one another. Very Long Instruction Word (VLIW) processors ([12, 17, 18]) are examples of the
class of independence architectures.
Virtual Memory and Paging
Virtual memory addressing introduces a layer of abstraction between program code and physical memory. It allows
program code to be compiled as though each process will enjoy exclusive access to the entire memory address space.
However, in practice the virtual memory address space of each process is dynamically mapped to arbitrary physical mem-
ory pages during execution. All memory references are then dynamically translated from virtual memory addresses to the
correct arbitrary physical memory addresses just before each instruction is executed.

Most modern computers have special hardware called a memory management unit (MMU). This unit sits between
the CPU and the memory unit. Whenever the CPU wants to access memory, it sends the desired memory address to the
MMU, which translates it to another address before passing it on to the memory unit. The address generated by the CPU,
after any indexing or other addressing-mode arithmetic, is called a virtual address, and the address it gets translated
to by the MMU is called a physical address.

The operating system maintains data structures, called page tables, to support virtual-to-physical memory address
translation. The most recently used page table entries are cached in each CPU to optimize address translation. This
cache is commonly called a translation lookaside buffer, or TLB. To further optimize address translation, TLB lookups
are performed in hardware. A TLB miss must be resolved by reference to the page tables in main memory. This operation
is also performed by hardware in some cases.
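A minimal sketch of the translation step in C, assuming 4 KB pages and a toy single-level page table; the structure
and names are illustrative, not those of any particular MMU:

    #include <stdint.h>

    #define PAGE_SHIFT 12                    /* 4 KB pages             */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NUM_PAGES  1024u                 /* toy single-level table */

    typedef struct {
        uint32_t frame;   /* physical frame number                */
        int      present; /* 1 if the page is in physical memory  */
    } pte_t;

    static pte_t page_table[NUM_PAGES];

    /* Returns the physical address, or -1 to signal a page fault. */
    int64_t translate(uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within page  */

        if (vpn >= NUM_PAGES || !page_table[vpn].present)
            return -1;  /* page fault: the OS must load the page */

        return ((int64_t)page_table[vpn].frame << PAGE_SHIFT) | offset;
    }

A TLB would simply cache recent (vpn, frame) pairs so that most translations skip the table walk.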

Question 2
Explain the advantage of segmentation in microprocessor architecture.

Answer

Segmentation divides the memory into variable-sized segments.

In the segmentation method, an MMU uses the segment selector to obtain a descriptor from a table in memory
containing several descriptors. A descriptor contains the physical base address of a segment, the segment's privilege
level, and some control bits. When the MMU obtains a logical address from the microprocessor, it first determines
whether the segment is already in physical memory. If it is, the MMU adds the offset component of the logical address
to the segment base obtained from the segment descriptor table to form the physical address. The MMU then
generates the physical address on the address bus for selecting the memory. On the other hand, if the MMU does not find
the segment in physical memory, it interrupts the microprocessor. The microprocessor executes a service routine
to bring the desired program from a secondary memory such as disk into physical memory. The MMU then determines the
physical address using the segment offset and descriptor as described earlier and generates the physical address on
the address bus for memory.
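The descriptor lookup and checks described above can be sketched in C as follows; the field names and the privilege
convention (numerically larger value = less privileged, as on the x86) are assumptions made for illustration:

    #include <stdint.h>

    typedef struct {
        uint32_t base;     /* physical base address of the segment */
        uint32_t limit;    /* segment size in bytes                */
        uint8_t  dpl;      /* descriptor privilege level           */
        int      present;  /* 1 if the segment is in memory        */
    } descriptor_t;

    /* Translate (selector, offset); returns -1 on a fault that the OS
       must service (segment not present, limit or privilege violation). */
    int64_t seg_translate(const descriptor_t *table, unsigned selector,
                          uint32_t offset, uint8_t requestor_pl)
    {
        const descriptor_t *d = &table[selector];

        if (!d->present)           return -1;  /* segment not in memory */
        if (offset >= d->limit)    return -1;  /* outside the segment   */
        if (requestor_pl > d->dpl) return -1;  /* privilege violation   */

        return (int64_t)d->base + offset;      /* physical address      */
    }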

Memory is unusable for segmentation when it is sandwiched between already allocated segments and is not large
enough to hold the next segment that needs to be loaded. This is called external fragmentation and is handled by MMUs
using special techniques.

Advantages

The advantages of segmented memory management are that few descriptors are required for large programs or data
spaces and that internal fragmentation is minimized.

Address translation using descriptor tables offers a protection feature. A segment or a page can be protected from
access by a program section of a lower privilege level. For example, the selector component of each logical address
includes 1 or 2 bits indicating the privilege level of the program requesting access to a segment. Each segment descriptor
also includes 1 or 2 bits providing the privilege level of that segment. When an executing program tries to access a
segment, the MMU can compare the selector privilege level with the descriptor privilege level. If the segment selector has
the same or a higher privilege level, the MMU permits access. If the privilege level of the selector is lower than that of
the descriptor, the MMU can interrupt the microprocessor, informing it of a privilege-level violation.

Question 3
Explain the architecture of the Pentium processor.

Answer
The Pentium family of processors originated from the 80486 microprocessor. The term "Pentium processor" refers to
a family of microprocessors that share a common architecture and instruction set. The first Pentium processors were
introduced in 1993. They ran at a clock frequency of either 60 or 66 MHz and contained 3.1 million transistors. Some of
the features of the Pentium architecture are ...

1. CISC architecture with RISC performance.
2. 64-bit bus.
3. Superscalar architecture, which can issue multiple instructions per cycle.
4. Five-stage pipeline.
5. Branch prediction.
6. Two separate 8-kilobyte (KB) caches on chip, one for instructions and one for data. These allow the Pentium
processor to fetch data and instructions from the cache simultaneously.

The Pentium processor has two primary operating modes.

• Protected Mode: In this mode all instructions and architectural features are available, providing the highest
performance and capability. This is the recommended mode that all new applications and operating systems should
target.
• Real-Address Mode: This mode provides the programming environment of the Intel 8086 processor, with a few
extensions. Reset initialization places the processor in real mode where, with a single instruction, it can switch to
protected mode.

Superscalar Architecture

The Pentium processor's superscalar architecture enables the processor to achieve new levels of performance by
executing more than one instruction per clock cycle. The term "superscalar" refers to a microprocessor architecture
that contains more than one execution unit.

The Pentium processor also uses hardwired instructions to replace many of the microcoded instructions used in
previous microprocessor generations. Hardwired instructions are simple and commonly used, and can be executed by the
processor's hardware without requiring microcode. This improves performance without affecting compatibility. In the
case of more complex instructions, the Pentium processor's enhanced microcode further boosts performance by employing
both integer pipelines to execute an instruction.

Pipeline Stages

The Pentium’s basic integer pipeline is five stages long, with the stages broken down as follows:
1. Pre-fetch/Fetch: Instructions are fetched from the instruction cache and aligned in pre-fetch buffers for decoding.
2. Decode1: Instructions are decoded into the Pentium’s internal instruction format. Branch prediction also takes
place at this stage.

3. Decode2: Decoding continues; the microcode ROM kicks in here, if necessary. Also, address computations take place
at this stage.
4. Execute: The integer hardware executes the instruction.
5. Write-back: The results of the computation are written back to the register file.

Branch Prediction

To efficiently predict branches, the Pentium processor uses two prefetch buffers. One buffer prefetches code in a
linear fashion (for the next execution step) while the other prefetches instructions based on addresses in the Branch
Target Buffer (to jump to the beginning of a loop). As a result, the needed code is always prefetched before it is
required for execution. The Pentium processor's prediction algorithm can not only forecast simple branch choices but
also support more complex branch prediction, for example within nested loops. This is accomplished by storing multiple
branch addresses in the Branch Target Buffer. The BTB's design allows 256 addresses to be recorded, and thus the
prediction algorithm can track up to 256 branches.

Floating point unit

There are 8 general-purpose 80-bit floating-point registers. Through instruction scheduling and overlapped (pipelined)
execution, the floating-point unit is capable of executing two floating-point instructions in a single clock. The unit
incorporates a sophisticated eight-stage pipeline. The first four stages are the same as those of the integer
pipelines, while the final four stages consist of a two-stage Floating Point Execute, rounding and writing of the
result to the register file, and Error Reporting. In addition, common floating-point functions such as add, multiply,
and divide are hardwired for faster execution.

Cache

The Intel Pentium processor incorporates separate on-chip code and data caches. This increases performance because
bus conflicts are reduced. The Pentium processor's code and data caches each hold 8 Kbytes, and both are organized as
two-way set-associative caches with 32-byte lines - meaning that a lookup searches only the two lines of one set rather
than the entire cache. The Pentium processor's data cache uses two other important techniques: "writeback" caching and
an algorithm called the MESI (Modified, Exclusive, Shared, Invalid) protocol. The writeback method transfers data to
the cache without going out to main memory (data is written to main memory only when it is removed from the cache).
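A toy lookup for this geometry - 8 KB, two-way set-associative, 32-byte lines, hence 128 sets - written as a C sketch
with illustrative structure names:

    #include <stdint.h>
    #include <stdbool.h>

    #define LINE_SIZE 32u
    #define NUM_WAYS  2u
    #define NUM_SETS  (8192u / (LINE_SIZE * NUM_WAYS))   /* 128 sets */

    typedef struct {
        uint32_t tag;
        bool     valid;
    } line_t;

    static line_t cache[NUM_SETS][NUM_WAYS];

    bool cache_hit(uint32_t addr)
    {
        uint32_t set = (addr / LINE_SIZE) % NUM_SETS;  /* which set      */
        uint32_t tag = addr / (LINE_SIZE * NUM_SETS);  /* remaining bits */

        /* Only the two lines of this set are checked, never the whole cache. */
        for (unsigned way = 0; way < NUM_WAYS; way++)
            if (cache[set][way].valid && cache[set][way].tag == tag)
                return true;
        return false;
    }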

Question 4
Explain the floating-point unit available in the general Pentium processor.

Answer

The Pentium floating-point architecture supports single-precision (32-bit), double-precision (64-bit), and extended-
precision (80-bit) floating-point operations. The floating-point unit is heavily pipelined, permitting several
instructions to execute simultaneously. Most floating-point instructions are issued singly to the U pipeline and
cannot be paired with integer instructions. However, some floating-point instructions may be paired. The floating-point
pipeline consists of eight stages. The first four are shared with the integer pipeline.

The pipeline stages of FPU are


1. Prefetch : Identical to the Integer Prefetch

2. Instruction Decode 1 (D1) : Identical to the Integer Instruction Decode 1 - instructions are decoded into the
Pentium's internal instruction format. Branch prediction also takes place at this stage.
3. Instruction Decode 2 (D2) : Identical to the Integer Instruction Decode 2 - the microcode ROM kicks in here if
necessary. Also, address computations take place at this stage.

4. Execution : Register read, memory read, memory write as required


5. FP Execution 1 : Information from the register or memory is written to a FP register. Data is converted to FP
format before loading into the FPU
6. FP Execution 2 : Floating-point operation is performed within the FPU

7. Write FP Result : The FPU results are rounded and written to the target floating-point register
8. Error Reporting : If an error is detected, an error reporting stage is entered where the error is reported and the
FPU status word is updated
The eight-stage pipeline in the FPU allows a single-cycle throughput for most of the "basic" floating-point
instructions, such as floating-point add, subtract, multiply, and compare. This means that a sequence of basic
floating-point instructions free from data dependencies would execute at a rate of one instruction per cycle, assuming
instruction cache and data cache hits.

Data dependencies exist between floating-point instructions when a subsequent instruction uses the result of a
preceding instruction. Since the actual computation of floating-point results takes place during the X1, X2, and WF
stages, special paths in the hardware allow other stages to be bypassed and present the result to the subsequent
instruction upon generation. Consequently, the latency of the basic floating-point instructions is three cycles.

Floating-point instructions execute in the U pipe and generally cannot be paired with any other integer or floating-
point instructions. The design was tuned for instructions that use one 64-bit operand in memory with the other operand
residing in the floating-point register file. Thus, these operations may execute at the maximum throughput rate, since
a full stage (the E stage) in the pipeline is dedicated to operand fetching. Although floating-point instructions use
the U pipe during the E stage, the two ports to the data cache (which are used by the U pipe and the V pipe for integer
operations) are used to bring 64-bit data to the FPU. Consequently, during intensive floating-point computation
programs, the data cache access ports of the U pipe and V pipe operate concurrently with the floating-point
computation. This behavior is similar to superscalar load-store RISC designs, where load instructions execute in
parallel with floating-point operations and therefore deliver an equivalent throughput of floating-point operations
per cycle.

Microarchitecture of FPU

The floating-point unit of the Pentium microprocessor consists of six functional sections.

The floating-point interface, register file, and control (FIRC) section is the only interface between the FPU and the
rest of the CPU. The FIRC section also contains most of the common floating-point resources: the register file,
centralized control logic, and safe instruction recognition logic (described later). FIRC can complete execution of
instructions that do not need arithmetic computation; it dispatches the instructions requiring arithmetic computation
to the arithmetic sections.

The floating-point exponent section (FEXP) calculates the exponent and the sign results for all the floating-point
arithmetic operations. It interfaces with all the other arithmetic sections for all the necessary adjustments between
the mantissa and the sign-and-exponent fields in the computation of floating-point results.

The floating-point multiplier section (FMUL) includes a full multiplier array to support single-precision (24-bit
mantissa), double-precision (53-bit mantissa), and extended-precision (64-bit mantissa) multiplication and rounding
within three cycles. FMUL executes all the floating-point multiplication operations. It is also used for integer
multiplication, which is implemented through microcode control.

The floating-point adder section (FADD) executes all the add-class floating-point instructions, such as floating-point
add, subtract, and compare. FADD also executes a large set of micro-operations that are used by microcode sequences in
the calculation of complex instructions, such as binary-coded decimal (BCD) operations, format conversions, and
transcendental functions. The FADD section operates during the X1 and X2 stages of the floating-point pipeline and
employs several wide adders and shifters to support high-speed arithmetic algorithms while maintaining maximum
performance for all data precisions. The CPU achieves a latency of three cycles with a throughput of one cycle for all
the operations directly executed by the FADD section for single-precision, double-precision, and extended-precision
data.

The floating-point divider (FDIV) section executes the floating-point divide, remainder, and square-root instructions.
It operates during the X1 and X2 pipeline stages and calculates two bits of the divide quotient every cycle. The
overall instruction latency depends on the precision of the operation. FDIV uses its own sequencer for iterative
computation during the X1 stage. The results are fully accurate in accordance with IEEE standard 754 and are ready for
rounding at the end of the X2 stage.

The floating-point rounder (FRND) section rounds the results delivered from the FADD and FDIV sections. It operates
during the WF stage of the floating-point pipeline and delivers a rounded result according to the precision control and the
rounding control, which are specified in the floating-point control word.

Question 5
Write a short note on multitasking.

Answer

Multitasking is the ability of a computer to run more than one program, or task, at the same time. Multitasking
contrasts with single-tasking, where one process must entirely finish before another can begin. One early motivation
for multitasking was the design of real-time computing systems, where a number of possibly unrelated external
activities need to be controlled by a single processor system. In such systems a hierarchical interrupt system is
coupled with process prioritization to ensure that key activities are given a greater share of available processor
time.

On a single-processor multitasking system, multiple processes don't actually run at the same time since there's only
one processor. Instead, the processor switches among the processes that are active at any given time. Because computers
are so fast compared with people, however, it appears to the user as though the computer is executing all of the tasks
at once. Multitasking also allows the computer to make good use of the time it would otherwise spend waiting for I/O
devices and user input - that time can be used for some other task that doesn't need I/O at the moment.

Multitasking on a multiple-processor system still involves the processors switching between tasks because there are
almost always more tasks to run than there are processors. Note, however, that there can be as many tasks running
simultaneously as there are processors in the system. For the moment, we'll discuss multitasking on a single-processor
system.

Preemptive and Non-Preemptive Multitasking


Within the category of multitasking, there are two major sub-categories: preemptive and non-preemptive.

In non-preemptive multitasking, use of the processor is never taken from a task; rather, a task must voluntarily yield
control of the processor before any other task can run. Programs running under a non-preemptive operating system must
be specially written to cooperate in multitasking by yielding control of the processor at frequent intervals. A program
that does not yield sufficiently often causes the non-preemptive system to stay "locked" in that program until it does
yield. The worst case of a program not yielding is when a program crashes.

Preemptive multitasking differs from non-preemptive multitasking in that the operating system can take control of
the processor without the task’s cooperation. (A task can also give it up voluntarily, as in non-preemptive multitasking).
The process of a task having control taken from it is called preemption. A preemptive operating system takes control of
the processor from a task in two ways:

1. When a task’s time quantum (or time slice) runs out. Any given task is only given control for a set amount of time
before the operating system interrupts it and schedules another task to run.
2. When a task that has higher priority becomes ready to run. The currently running task loses control of the processor
when a task with higher priority is ready to run regardless of whether it has time left in its quantum or not.

A task can be in several possible states:

1. Terminated : The task is completed and may be removed.


2. Ready : The task is ready to run at any time the processor is free.
3. Executing : The task is being executed on the processor.

4. Suspended : The task is waiting for some event to happen.


Switching Among Tasks
At any given time, a processor (CPU) is executing in a specific context. This context is made up of the contents of its
registers and the memory (including stack, data, and code) that it is addressing. When the processor needs to switch to a
different task, it must save its current context (so it can later restore the context and continue execution where it left off)
and switch to the context of the new task. This process is called context switching.
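The saved context can be pictured as a structure of register values. The layout below is purely illustrative (real
kernels save and restore the context in assembly), and the register names follow no particular architecture:

    #include <stdint.h>

    typedef struct {
        uint32_t pc;       /* program counter               */
        uint32_t sp;       /* stack pointer                 */
        uint32_t gpr[8];   /* general-purpose registers     */
        uint32_t flags;    /* condition codes / status word */
    } context_t;

    /* Save the outgoing task's context and restore the incoming one.
       This only shows the bookkeeping, not the low-level mechanics. */
    void context_switch(context_t *from, context_t *to,
                        context_t *cpu /* current CPU state */)
    {
        *from = *cpu;   /* save where the old task left off  */
        *cpu  = *to;    /* resume the new task's saved state */
    }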

8
Synchronization
When using preemptive multitasking or SMP, new problems suddenly arrive - you can never be sure when your task
is executed, which leads to problems if two tasks are depending on each other. The solution for this is using some sort of
interface between the two tasks to synchronize their execution so they may tell another task what point in execution they
have reached. Another problem is in sharing data between the two tasks - if one task is half-through writing something
to memory and the other task starts to read that data - what happens? This is a so-called race condition. The solution for
both of these tasks are a mechanism known as a semaphore.
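As a concrete sketch, the POSIX semaphore API can guard the shared data in the scenario above. This assumes a POSIX
system (compile with -lpthread) and is only one of many possible synchronization primitives:

    #include <semaphore.h>
    #include <pthread.h>

    static sem_t lock;
    static int shared_value;

    static void *writer(void *arg)
    {
        (void)arg;
        sem_wait(&lock);    /* enter the critical section             */
        shared_value = 42;  /* no reader can see a half-written value */
        sem_post(&lock);    /* leave the critical section             */
        return 0;
    }

    int main(void)
    {
        sem_init(&lock, 0, 1);  /* binary semaphore, initially available */
        pthread_t t;
        pthread_create(&t, 0, writer, 0);

        sem_wait(&lock);        /* a reader takes the same semaphore */
        int v = shared_value;   /* guaranteed not half-written       */
        sem_post(&lock);
        (void)v;

        pthread_join(t, 0);
        sem_destroy(&lock);
        return 0;
    }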

Memory protection

When multiple programs are present in memory, an ill-behaved program may (inadvertently or deliberately) overwrite
memory belonging to another program, or even to the operating system itself.

The operating system therefore restricts the memory accessible to the running program. A program trying to access
memory outside its allowed range is immediately stopped before it can change memory belonging to another process.

Another key innovation was the idea of privilege levels. Low privilege tasks are not allowed some kinds of memory
access and are not allowed to perform certain instructions. When a task tries to perform a privileged operation a trap
occurs and a supervisory program running at a higher level is allowed to decide how to respond.

Scheduler

The scheduler is the component of the OS that is responsible for the assignment of CPU time to tasks. It usually
implements a priority engine that lets it assign more CPU time to high-priority tasks.

The figures of merit for a scheduler are throughput, latency, and waiting time.

In practice, these goals often conflict (e.g. throughput versus latency), thus a scheduler will implement a suitable
compromise. Preference is given to any one of the above mentioned concerns depending upon the user’s needs and
objectives.
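A toy priority scheduler illustrating the idea; the task structure and the pick-next policy are illustrative, not
taken from any real kernel:

    typedef enum { TERMINATED, READY, EXECUTING, SUSPENDED } state_t;

    typedef struct {
        int     priority;   /* larger value = higher priority */
        state_t state;
    } task_t;

    /* Return the index of the highest-priority READY task, or -1 if
       no task is ready to run. */
    int pick_next(const task_t *tasks, int n)
    {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (tasks[i].state == READY &&
                (best < 0 || tasks[i].priority > tasks[best].priority))
                best = i;
        return best;
    }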

Question 6
Explain about branch prediction.

Answer

Modern microprocessors are pipelined in order to get more instructions completed faster. This means that instructions
do not wait for the previous ones to complete before their execution begins. A problem with this approach arises,
however, due to conditional branches. If the microprocessor encounters a conditional branch and the result for the
condition has not yet been calculated, how does it know whether to take the branch or not? This is where branch
prediction comes in.

Branch prediction is what the processor uses to decide whether to take a conditional branch or not. Getting this
decision right as often as possible is important, as an incorrect prediction (a mispredict) will cause the
microprocessor to throw out all the instructions that did not need to be executed and start over with the correct set
of instructions. This process is particularly expensive in deeply pipelined processors.

There are two kinds of branch prediction: static and dynamic. Static branch prediction is used by the microprocessor
the first time a conditional branch is encountered, and dynamic branch prediction is used for succeeding executions of the
conditional branch code.

Static branch prediction


Static branch prediction is used when there is no data collected by the microprocessor when it encounters a branch,
which is typically the first time a branch is encountered. Static prediction is the simplest branch prediction technique
because it does not rely on information about the dynamic history of code executing. Instead it predicts the outcome of a
branch based solely on the branch instruction. The rules are simple:

• A forward branch defaults to not taken


• A backward branch defaults to taken

Dynamic branch prediction

Dynamic branch prediction is done in the microprocessor by using a history log of previously encountered branches
containing data for each branch, noting whether or not it was taken. This branch-history log is known as the Branch Target
Buffer (BTB). Every time a branch is encountered and the microprocessor knows which direction the branch has taken,
the BTB is updated to reflect that information.

The BTB is a buffer that holds a branch's address and a brief history of the direction that the branch has taken. The
address field is used somewhat like an index into the BTB, where the processor looks up whether a branch should be
taken or not taken. In the Pentium 4 processor there are 16 history bits to signify whether a branch should be taken or
not. The bits work like a circular buffer, with one bit consulted on each lookup into the BTB.

The following is an example of the BTB entry for a backward branch (i.e., a do-while loop in C++) doing four
iterations, with all its entries already filled in:

T T T N T T T N T T T N T T T N

In this example, the do-while loop has been executed multiple times, with each execution of the loop containing a fixed
amount of four iterations. Now that the history for this loop is in the BTB, whenever this code is executed again, it will
not cause any branch mispredicts and the accompanying penalty. One thing to remember is that the BTB is finite. Once
all the entries in the BTB have been consumed, an older entry will need to be used for a new branch that is encountered.

It is also best to remove branches from within a loop, if possible. By doing so, the branch is only taken once, rather
than for each iteration of the loop. This is only possible when the conditional does not change during the entire duration
of the loop.
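For example, a loop-invariant conditional can be hoisted out of the loop. The names fast_step, slow_step, and mode
below are hypothetical, used only for illustration:

    enum work_mode { FAST, SLOW };

    void fast_step(int i);   /* hypothetical per-iteration work */
    void slow_step(int i);

    void run(enum work_mode mode, int n)
    {
        /* Before: the branch is evaluated on every iteration. */
        for (int i = 0; i < n; i++) {
            if (mode == FAST) fast_step(i);
            else              slow_step(i);
        }

        /* After: the branch is resolved once, outside the loops. */
        if (mode == FAST)
            for (int i = 0; i < n; i++) fast_step(i);
        else
            for (int i = 0; i < n; i++) slow_step(i);
    }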

Saturation Counter
A saturating counter or bimodal predictor is a state machine with four states:

1. Strongly not taken


2. Weakly not taken
3. Weakly taken

4. Strongly taken

When a branch is evaluated, the corresponding state machine is updated. Branches evaluated as not taken decrement
the state towards strongly not taken, and branches evaluated as taken increment the state towards strongly taken. The
advantage of the two-bit counter over a one-bit scheme is that a conditional jump has to deviate twice from what it has
done most in the past before the prediction changes. For example, a loop-closing conditional jump is mispredicted once
rather than twice.
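A minimal C sketch of one such two-bit counter follows. In a real predictor there is one counter per BTB entry; the
encoding of the four states as the values 0..3 is an assumption made for illustration:

    #include <stdbool.h>

    enum {
        STRONGLY_NOT_TAKEN = 0,
        WEAKLY_NOT_TAKEN   = 1,
        WEAKLY_TAKEN       = 2,
        STRONGLY_TAKEN     = 3
    };

    static int counter = WEAKLY_TAKEN;   /* one counter per BTB entry */

    bool predict(void)
    {
        return counter >= WEAKLY_TAKEN;  /* predict taken in the two upper states */
    }

    void update(bool taken)              /* called once the branch resolves */
    {
        if (taken  && counter < STRONGLY_TAKEN)     counter++;
        if (!taken && counter > STRONGLY_NOT_TAKEN) counter--;
    }

Starting from STRONGLY_TAKEN, a loop-closing branch that finally falls through moves only to WEAKLY_TAKEN, so the next
execution of the loop is still predicted taken - hence the single misprediction per loop exit noted above.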

Loop predictor
A conditional jump that controls a loop is best predicted with a special loop predictor. A conditional jump in the
bottom of a loop that repeats N times will be taken N-1 times and then not taken once. If the conditional jump is placed at
the top of the loop, it will be not taken N-1 times and then taken once. A conditional jump that goes many times one way
and then the other way once is detected as having loop behavior. Such a conditional jump can be predicted easily with a
simple counter.

Question 7
Explain the ADC conversion available in the Motorola microprocessor.

Answer

The 68HC11's A/D converter utilizes a charge pump, which is employed as the system for switching the capacitors
to redistribute their stored charge. The maximum voltage the 68HC11's charge pump can develop is 7 or 8 volts.
Consequently, the A/D converter is capable of converting analog signals as high as 6 volts. It is important to
remember, then, that when programming the Motorola 68HC11 and utilizing the A/D converter, the charge pump must be
enabled within the application program.

The Motorola 68HC11 A/D system is an 8-bit, 8-channel, multiplexed-input converter. Because of the charge
redistribution technique described earlier, no external sample-and-hold circuitry is required. The converter can be
synchronized to the system's E-Clock provided it is greater than 750 kHz. This limit is imposed to prevent charge
leakage from the capacitors in the charge redistribution converter. Alternatively, the converter's timing can be
synchronized to an internal RC oscillator.

The A/D converter contains four main functional blocks: multiplexer, analog converter, digital control, and result
storage.

The eight Port E pins of the 68HC11 are fixed-direction inputs into the multiplexer, which selects one of 16 inputs
for the conversion process. The selected input is determined by the value of the channel-select bits in the ADCTL
register (CD:CA). The multiple-channel control bit (MULT) determines whether the system operates in single-channel or
multi-channel mode. In either mode, the A/D system performs conversions of 32 cycles each. After completing a
conversion, the data is stored in a result register (ADR1-ADR4) and the 68HC11 sets the conversion complete flag (CCF).

The Analog Converter block contains an 8-bit DAC, a comparator, and the Successive Approximation Register (SAR).
Since the DAC is 8 bits wide, the conversion process is a cycle of eight comparisons, beginning with the most
significant bit (MSB), and each comparison determines the value of one bit in the SAR. Upon completion of an entire
conversion sequence, the result in the SAR is moved to the corresponding result register.

The ADCTL register controls all of the operations of the A/D converter. This register not only selects the analog
input to be converted, but also determines whether a single conversion is to be made or a continuous flow of
conversions is to be made.

The four 8-bit result registers (ADR1-ADR4) are where the results of the conversions are stored. When valid data is
present in the result registers, the CPU sets the CCF flag.

A/D Conversion Registers
There are essentially two main registers that are utilized during the A/D conversion process of the Motorola 68HC11:
the System Configuration Option Register (OPTION) and the A/D Control Register (ADCTL).

The A/D conversion for the 68HC11 begins by writing to the System Configuration Option Register (OPTION), whose
two most significant bits are employed for A/D conversion.

ADPU CSEL (the two most significant bits of OPTION)

The A/D power-up (ADPU) bit is used to enable the charge pump for the charge redistribution circuitry. To begin
conversion, the CPU first writes a binary one to ADPU. By performing this step first, the CPU allows time for the
seven-volt output to build up at the charge pump prior to the first conversion. The clock select (CSEL) bit gives the
user the choice of using either the system's E-Clock or the RC oscillator for synchronization.

Whether operating in single-channel mode or multi-channel mode, the 68HC11's A/D converter makes four conversions
at a time of either one or four inputs. The converter then stores the results in the four 8-bit, read-only result
registers (ADR1-ADR4). The user must determine whether conversion will take place in single-channel or multi-channel
mode and which of the analog inputs will be used by the converter. These specifications are made by writing a control
word to the control register (ADCTL).

CCF (bit 7) SCAN (bit 5) MULT (bit 4) CD CC CB CA (bits 3-0)

Bit 7 of this register is the conversion complete flag (CCF), which is set after the fourth conversion, when all
of the ADR registers contain data. It is important to note here that the CCF is a read-only bit and that writing to
the ADCTL register automatically clears it. Also, if the converter is set to perform continuous conversions, these
conversions will continue even while the CCF flag is set (1).

This choice of either four conversions or continuous conversions is determined by bit 5 of ADCTL, the SCAN bit.
Writing a one (1) in this position sets the converter for continuous conversions, and writing a zero (0) in this
position sets the converter to make four conversions and then stop, placing the results in the ADR registers.

The 68HC11's CPU uses the MULT bit of the ADCTL register to determine whether a single channel (MULT = 0) or four
channels (MULT = 1) will be converted, and the four least significant bits (CD:CA) to select the channel or group of
channels.
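Putting the above together, a hypothetical C routine for a single conversion might look as follows. The register
addresses assume the 68HC11's default register block at $1000 (OPTION at $1039, ADCTL at $1030, ADR1 at $1031), and
the compiler/startup details are not shown:

    #include <stdint.h>

    #define OPTION (*(volatile uint8_t *)0x1039)
    #define ADCTL  (*(volatile uint8_t *)0x1030)
    #define ADR1   (*(volatile uint8_t *)0x1031)

    #define ADPU 0x80   /* OPTION bit 7: power up the charge pump */
    #define CSEL 0x40   /* OPTION bit 6: select the RC oscillator */
    #define CCF  0x80   /* ADCTL bit 7: conversion complete flag  */
    #define SCAN 0x20   /* ADCTL bit 5: continuous conversions    */
    #define MULT 0x10   /* ADCTL bit 4: multi-channel mode        */

    uint8_t read_an0(void)
    {
        OPTION |= ADPU;          /* enable the charge pump first         */
        /* ...allow the charge pump to stabilize before the first
           conversion (short delay loop omitted here)...                 */
        ADCTL = 0x00;            /* single channel AN0, one set of four
                                    conversions (SCAN = 0, MULT = 0)     */
        while (!(ADCTL & CCF))   /* wait for the conversion complete flag */
            ;
        return ADR1;             /* first of the four result registers    */
    }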

Single Channel Selection


CD CC CB CA AN Channel
0 0 0 0 AN0
0 0 0 1 AN1
0 0 1 0 AN2
0 0 1 1 AN3
0 1 0 0 AN4
0 1 0 1 AN5
0 1 1 0 AN6
0 1 1 1 AN7

Multi Channel Selection


CD CC Channels
0 0 AN0 - AN3
0 1 AN4 - AN7
