
Assignment Embedded System

1. Answer the following

a. Define embedded system. List common characteristics of embedded systems. What distinguishes it from general-purpose computing systems?

Answer: An embedded system is some combination of computer hardware and software, either fixed in capability or programmable, that is specifically designed for a particular function. Industrial machines, automobiles, medical equipment, cameras, household appliances, airplanes, vending machines and toys (as well as the more obvious cellular phone and PDA) are among the myriad possible hosts of an embedded system.

Definition of embedded system: An embedded system is a computer system designed to perform one or a few dedicated functions in real time and control a complete device. It is a system dedicated to an application (or applications), a specific part of an application or product, or a part of a larger system. Typically an embedded system consists of a microcomputer with software in ROM/flash memory, which starts running a dedicated application as soon as power is turned on and does not stop until power is turned off. The program run by the processor is generally not reprogrammable by the end user. A general-purpose definition of embedded systems is that they are devices used to control, monitor or assist the operation of equipment, machinery or plant. "Embedded" reflects the fact that they are an integral part of a system that also includes hardware and mechanical parts. This dedication to a single function, in contrast to the open-ended programmability of a general-purpose computer, is what distinguishes an embedded system from other computing systems.

Characteristics of embedded systems: An embedded system is characterized by the following:
Dedicated functions, tasks or applications
Real-time response
Generally not reprogrammable by the end user
An integral part of a system that includes hardware and mechanical parts

b. How to measure performance of a system? List the important parameters required to measure performance of an embedded system.

Answer: Performance measurement is another important area of software performance engineering (SPE). This includes planning measurement experiments to ensure that results are both representative and reproducible. Software also needs to be instrumented to facilitate SPE data collection. Finally, once the performance-critical components of the software are identified, they are measured early and often to validate the models that have been built and also to verify earlier predictions.
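As an illustration of such instrumentation, here is a minimal sketch of cycle-count measurement around a critical routine. The function read_cycle_counter() is a hypothetical hook standing in for a platform-specific counter (a timestamp or cycle register), and process_packet() is a placeholder for the routine under study:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical platform hook standing in for a cycle counter or
       timestamp register; not part of any particular vendor API. */
    extern uint64_t read_cycle_counter(void);

    /* Placeholder for the performance-critical routine under study. */
    extern void process_packet(void *pkt);

    void measure_one_packet(void *pkt) {
        uint64_t start = read_cycle_counter();
        process_packet(pkt);
        uint64_t elapsed = read_cycle_counter() - start;
        printf("process_packet: %llu cycles\n", (unsigned long long)elapsed);
    }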

Figure: Key parameters influencing performance scenarios based on cycle counts

Figure: Output from a Performance Calculator used to identify and track key performance scenarios

Step 1: Determine where you need to be
Reject nonspecific requirements or demands such as "the system should be as fast as possible." Instead, use quantitative terms such as "packet throughput must be 600K packets per second for IP forwarding." Understand potential future use cases of the system and design in the necessary scalability to handle them. Figure 9 shows an example of how to define these performance goals. To do this properly, the first step is to identify the system dimension; this is the context and establishes the "what". Then the key attributes are identified; these establish how good the system shall be. Finally, the metrics are identified that determine how well the system must perform; these metrics should include a "should" value and a "must" value. In the example, IP forwarding is the system dimension. For a networking application, IP forwarding is a key measurement focus for this application area. The key attribute is "fast": the system is going to be measured based on how many packets can be forwarded through the system. The key metric is thousands of packets per second (Kpps). The system should be able to achieve 600 Kpps and must reach at least 550 Kpps to meet the minimum system requirements.

Figure: Defining quantitative performance goals



Step 2: Determine where you are now
Understand which system use cases are causing performance problems. Quantify these problems using available tools and measurements. Figure 10 shows a debug architecture for a multicore SoC that can provide the visibility hooks into the device for performance analysis and tuning. The figure shows a strategy for using embedded profiling and analysis tools to provide visibility into a SoC in order to collect the necessary information to quantify performance problems in an embedded system. Perform the appropriate assessment of the system to determine whether the software architecture can support the performance objectives. Can the performance issues be solved with standard software tuning and optimization methods? This is important because it is not desirable to spend many months tuning the application only to determine later that the goals cannot be met using these tuning approaches and that more fundamental changes are required. Ultimately, this phase needs to determine whether performance improvement requires re-design or whether tuning is sufficient.

Figure: A debug architecture for a Multicore SoC that can provide the visibility hooks into the device for performance analysis and tuning

Figure: A tools strategy for using embedded profiling and analysis tools to provide visibility into a SoC in order to collect the necessary information to quantify performance problems in an embedded system.

Step 3: Decide if you can achieve the objectives
There are several categories of performance optimization, ranging from the simple to the more complex:
Low-cost/low-ROI techniques: usually these involve automatic optimization options. A common approach in embedded systems is the use of compiler options to enable more aggressive optimizations for the embedded software.

High-cost/high-ROI techniques: re-designing or re-factoring the embedded software architecture.
Intermediate-cost/intermediate-ROI techniques: this category includes optimizing algorithms and data structures (for example, using an FFT instead of a DFT) as well as approaches like modifying software to use more efficient constructs.

Step 4: Develop a plan for achieving the objectives
The first step is to Pareto-rank the proposed solutions based on return on investment. There are various ways to estimate resource requirements, including modeling and benchmarking. Once the performance targets have been determined, the tuning phase becomes iterative until the targets have been met. The figure below shows an example of a process used in optimizing DSP embedded software. As this figure shows, there is a defined process for optimizing the application based on an iterative set of steps:
Understand key performance scenarios for the application
Set goals for key optimizations for performance, memory, and power
Select a processor architecture to match the DSP application and performance requirements
Analyze key algorithms in the system and perform algorithmic transformation if necessary
Analyze compiler performance and output for key benchmarks
Write out-of-box code in a high-level language (e.g. C)
Debug, achieve correctness, and develop a regression test
Profile the application and Pareto-rank hot spots
Turn on low-level optimizations with the compiler
Run the test regression, profile the application, and re-rank
Tune C/C++ code to map to the hardware architecture
Run the test regression, profile the application, and re-rank
Instrument code to get data as close as possible to the CPU using DMA and other techniques
Run the test regression, profile the application, and re-rank
Instrument code to provide hints to the compiler with intrinsics, pragmas, and keywords
Run the test regression, profile the application, and re-rank
Turn on higher levels of optimization using compiler directives
Run the test regression, profile the application, and re-rank
Re-write key inner loops in assembly language
Run the test regression, profile the application, and re-rank
If the goals are not met, re-partition the application in hardware and software and start over
At each phase, if the goals are met, document and save the code build settings and compiler switch settings

Figure: A Process for Managing the Performance of an embedded DSP application

The first step is to gather data that can be used to support the analysis. This data includes, but is not limited to, the time and cost to complete the performance analysis, software changes required, hardware costs if necessary, and software build and distribution costs. The next step is to gather data on the effect of the improvements, which includes things like hardware upgrades that can be deferred, staff cost savings, etc.



Performance engineering can be applied to each phase of the embedded software development process. For example, the Rational Unified Process (RUP) has four key phases: Inception, Elaboration, Construction, and Transition (Figure 13). RUP is an iterative software development process framework created by the Rational Software Corporation (now IBM). RUP is an adaptable process framework rather than a single concrete prescriptive process; it is intended to be tailored by software development teams, which select the elements of the process that suit their needs.

c. Explain an embedded system design life cycle model with a suitable example.

Answer: Embedded Systems Design
Approaching embedded systems architecture from a systems engineering standpoint, several representations (embedded systems life cycle models) can be applied to describe the life cycle of embedded systems design. Most of these representations are based on one of, or some combination of, the following development models:

Big Bang Model: There is essentially no planning and no process in place, either before or during the development life cycle of the system. The name alludes to the cosmological Big Bang model, in which the universe expands from a dense initial state at some finite time in the past: development simply starts, and the product emerges.

Code and Fix Model: The requirements are defined, but no strict process is in place before development begins. It is an especially simple model, consisting mainly of two steps. Step 1: write the source code (development). Step 2: find and fix the bugs in that source code (bug fixing). The code-and-fix model is used in the first phase of software development and can be used for small systems that do not require maintenance.

Waterfall Model: There is a process for developing a system in steps, where the outcome of one step feeds into the subsequent step. The waterfall development life cycle model has its origins in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively expensive, if not impossible. As no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.

Spiral Model: There is a process for developing a system in steps, and throughout the various steps, feedback is obtained and fed back into the process. The spiral model (also known as the spiral life cycle model or spiral development) is a software development process combining elements of both design and prototyping in stages, in an attempt to unite the advantages of top-down and bottom-up concepts. The spiral model combines characteristics of the waterfall model and the prototyping model.

Figure: Embedded Systems Development Lifecycle Model


d. Draw and explain the block diagram of a two-level bus architecture in a microprocessor-based embedded system.

Answer: The arbitration methods described are typically used to arbitrate among peripherals in an embedded system. However, many embedded systems contain multiple microprocessors communicating via a shared bus; such a bus is sometimes called a network. Arbitration in such cases is typically built right into the bus protocol, since the bus serves as the only connection among the microprocessors. A key feature of such a connection is that a processor about to write to the bus has no way of knowing whether another processor is about to write to the bus simultaneously. Because of the relatively long wires and high capacitances of such buses, a processor may write many bits of data before those bits appear at another processor. For example, Ethernet and I2C use a method in which multiple processors may write to the bus simultaneously, resulting in a collision and causing any data on the bus to be corrupted. The processors detect this collision, stop transmitting their data, wait for some time, and then try transmitting again. The protocols must ensure that the contending processors don't start sending again at the same time, or must at least use statistical methods that make the chances of their sending again at the same time small. As another example, the CAN bus uses a clever address encoding scheme such that if two addresses are written simultaneously by different processors using the bus, the higher-priority address will override the lower-priority one. Each processor that is writing to the bus also checks the bus, and if the address it is writing does not appear, then that processor realizes that a higher-priority transfer is taking place, and so that processor stops writing to the bus.
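To illustrate this CAN-style bitwise arbitration, here is a toy simulation (purely illustrative; not a real CAN controller or driver). On a wired-AND bus a dominant 0 overrides a recessive 1, and a node that reads back a value it did not drive drops out:

    #include <stdio.h>

    #define NODES   3
    #define ID_BITS 11

    int main(void) {
        /* Arbitrary 11-bit identifiers; a lower ID means higher priority. */
        unsigned ids[NODES] = {0x65A, 0x123, 0x3F0};
        int active[NODES]   = {1, 1, 1};

        for (int bit = ID_BITS - 1; bit >= 0; bit--) {
            unsigned bus = 1;                     /* recessive by default */
            for (int n = 0; n < NODES; n++)
                if (active[n])
                    bus &= (ids[n] >> bit) & 1;   /* wired-AND of all drivers */
            for (int n = 0; n < NODES; n++)
                if (active[n] && ((ids[n] >> bit) & 1) != bus) {
                    active[n] = 0;                /* lost arbitration: back off */
                    printf("node %d backs off at bit %d\n", n, bit);
                }
        }
        for (int n = 0; n < NODES; n++)
            if (active[n])
                printf("node %d (id 0x%03X) wins the bus\n", n, ids[n]);
        return 0;
    }

Because the surviving node never observes a bit different from the one it drove, its message goes through undamaged; arbitration costs no bus time, unlike the collide-and-retry approach of Ethernet.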


e. Describe how wireless communication will be useful in embedded systems. Give a brief description of any two wireless protocols.

Answer: Wireless communications is revolutionizing the world around us. Using wireless communications to send and receive messages, browse the Internet, and access corporate databases from any location in the world has already become commonplace. Bluetooth, Ultra Wide Band, satellite, cellular, wireless LAN, fixed broadband, mobile computing, and WWAN communications offer the promise of ubiquitous applications with always-on capability anywhere, anytime. Wireless networks are essential for the unified, efficient and cost-effective exchange of electronic information within embedded component systems. By freeing the user from the cord, personal communications networks, wireless LANs, mobile radio networks and cellular systems harbor the promise of fully distributed mobile computing and communications, anytime, anywhere. "Embedded in the system of life" could be a new definition for embedded systems in the near future. Indeed, embedded system applications are extending their scope and reach to every aspect of life, including consumer electronics, medicine, communication, aviation, battlefield, transport, finance, education and environment monitoring. Embedded systems with networking and wireless communication capability are now generating a new set of requirements and challenges in the field of embedded system design.

An Embedded Wireless Application - An embedded wireless application usually runs on a small portable device that has a microprocessor with limited speed, little memory and little or no hard disk. The most common application is a cellular mobile phone that holds contact information in memory. Being contained within a compact device requires autonomy: the application cannot rely on access to a large enterprise network, so applications and resources are loaded locally and the system is practically built-in. Both embedded and wireless systems require real-time performance. Some examples of wireless embedded applications are personal digital assistants, pagers, wireless mice, wireless keyboards, wireless laser printers and cordless bar code scanners. Bluetooth technology addresses the requirements of several of these devices.

Target Microprocessor - Both wireless and embedded applications must target their software towards specific boards or microprocessors such as Intel, PowerPC, ARM, HP and MIPS. Firmware is low-level code that runs on the raw processor; this firmware is CPU specific. Software runs on top of the firmware and is relatively independent of the underlying hardware.

Operating Systems and Software - Examples of embedded operating systems are Wind River's VxWorks, Microsoft Windows Embedded XP and Microsoft Windows CE. Examples of wireless-system operating systems are PalmOS for PDAs, Nokia's Symbian OS, Microsoft Windows Mobile and Microsoft Windows CE. Note how Windows CE is both embedded and compact, which makes it a potential choice for a light, portable, embedded and wireless real-time system. VxWorks and other embedded real-time operating systems have wireless security and Web service features in their middleware layers.

Characteristics - To sum up the combined characteristics of embedded real-time wireless systems: they require a CPU with a reduced speed running an OS whose kernel takes up little memory when loaded. The OS implements wireless protocols at the data-link, network, transport, session and application layers, and supports an application development environment built for a limited device configuration. Such a system is autonomous and communicates with a variety of devices at each layer of communication.

Two wireless protocols in brief: Bluetooth is a short-range radio protocol operating in the 2.4 GHz ISM band, designed for low-power personal area networks such as headsets, keyboards and phone-to-phone data exchange. Wi-Fi (IEEE 802.11) is a wireless LAN protocol providing higher bandwidth over longer ranges, commonly used to connect embedded devices to local networks and the Internet.



f. The design and configuration of caches can have a large impact on the performance and power consumption of a system. Justify.

Answer: Any embedded system contains both on-chip and off-chip memory modules with different access times. During system integration, the decision to map critical data onto faster memories is crucial. In order to obtain good performance while targeting a small amount of memory, the data buffers of the application need to be placed carefully in the different types of memory. There have been huge research efforts aimed at improving the performance of the memory hierarchy, and recent advances in semiconductor technology have made power consumption a limiting factor for embedded system design as well. Because SRAM is faster than DRAM, a cache memory composed of SRAM is configured between the CPU and the main memory; the CPU can access the main memory (DRAM) only via the cache. Cache memories are employed alongside the processor in virtually all computing applications. The size of cache that can be included on a chip is limited by the large physical size and large power consumption of the SRAM cells used in cache memory. Hence, configuring the cache effectively for small size and low power consumption is crucial in embedded system design. One published approach presents an optimal cache configuration technique for effective size reduction and high performance; the methodology was tested in real-time hardware using an FPGA and validated with a matrix multiplication algorithm over various workload sizes, using Xilinx ISE 9.2i for simulation and synthesis, with the design implemented in VHDL. In today's embedded systems, memory represents a major bottleneck in terms of cost, performance, and power. To overcome this, effective customization of memory is mandatory. Memory estimation and optimization are crucial in identifying the effect of an optimization methodology on the performance and energy requirements of the system, in turn yielding a cost-effective embedded system [1]. Figure 1 shows the basic processor architecture. It consists of a main memory module (DRAM) whose performance is far behind that of the connected processor. One solution to reduce this bottleneck is to employ a cache memory (SRAM) between the main memory and the processor, as shown in Figure 2, since SRAM cells have faster access times than DRAM; this also helps improve overall system performance.

g. List the advantages of a Real Time OS in an embedded system. Give an example of a process synchronization procedure in an RTOS for an embedded system.

Answer: A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time application requests. A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is jitter. A hard real-time operating system has less jitter than a soft real-time operating system. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline deterministically it is a hard real-time OS [2]. An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications.
Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.

The advent of microprocessors has opened up several product opportunities that simply did not exist earlier. These intelligent processors have invaded and embedded themselves into all fields of our lives, be it the kitchen (food processors, microwave ovens), the living room (televisions, air conditioners) or the workplace (fax machines, pagers, laser printers, credit card readers). As the complexity of embedded applications increases, the use of an operating system brings in a lot of advantages. Most embedded systems also have real-time requirements demanding the use of a Real-Time Operating System (RTOS) capable of meeting the embedded system requirements. A real-time operating system allows real-time applications to be designed and expanded easily. The use of an RTOS simplifies the design process by splitting the application code into separate tasks. An RTOS allows one to make better use of the system resources by providing valuable services such as semaphores, mailboxes, queues, time delays and timeouts. This report looks at the basic concepts of embedded systems, operating systems and specifically at real-time operating systems in order to identify the features one has to look for in an RTOS before it is used in a real-time embedded application. Some of the popular RTOSes have been discussed in brief, giving their salient features, which make them suitable for different applications.
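As a concrete example of process synchronization in an RTOS, the sketch below uses a FreeRTOS-style binary semaphore to let an interrupt service routine signal a waiting task (a common deferred-interrupt pattern; the task priority, stack size and the UART ISR wiring are illustrative assumptions):

    #include "FreeRTOS.h"
    #include "task.h"
    #include "semphr.h"

    static SemaphoreHandle_t rx_sem;   /* signals "data has arrived" */

    /* Assumed to be installed as the UART receive interrupt handler. */
    void uart_rx_isr(void) {
        BaseType_t woken = pdFALSE;
        xSemaphoreGiveFromISR(rx_sem, &woken);  /* wake the worker task */
        portYIELD_FROM_ISR(woken);
    }

    static void rx_task(void *arg) {
        (void)arg;
        for (;;) {
            /* Block here until the ISR gives the semaphore. */
            if (xSemaphoreTake(rx_sem, portMAX_DELAY) == pdTRUE) {
                /* process the received data */
            }
        }
    }

    int main(void) {
        rx_sem = xSemaphoreCreateBinary();
        xTaskCreate(rx_task, "rx", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
        vTaskStartScheduler();   /* never returns on success */
        return 0;
    }

2. a. Explain with the help of an example how delayed market entry of an embedded product will lead to losses.

Answer: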



While constraining the hardware-software architecture is detrimental to software development cost, the corresponding effect on development time can be even more devastating. Time-to-market costs often outweigh design, prototyping, and production costs of commercial products. A recent survey showed that being six months late to market resulted in an average 33% profit loss, assuming a five-year product lifetime. Early market entry increases product yield, market share, and brand name recognition. Figure below shows a model of demand and potential sales revenues for a new product (based on market research performed by Logic Automation, now owned by Synopsys). The un-shaded region of the triangle signifies revenue loss due to late market entry. If the product life cycle is short, being late to market can spell disaster.
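One simplified way to quantify this loss (an assumed model, not one given in this text: a symmetric triangular market window of lifetime 2W with peak sales at time W) says that entering the market D time units late forfeits a fraction D(3W - D) / (2W^2) of the total revenue. A minimal sketch:

    #include <stdio.h>

    /* Percentage of total revenue lost to a delayed market entry,
       assuming the symmetric triangular market-window model. */
    double revenue_loss_pct(double W, double D) {
        return 100.0 * D * (3.0 * W - D) / (2.0 * W * W);
    }

    int main(void) {
        /* 5-year (60-month) lifetime => W = 30 months; 6 months late. */
        printf("loss = %.1f%%\n", revenue_loss_pct(30.0, 6.0));
        return 0;
    }

For the survey's scenario (a five-year lifetime, six months late) this idealized model gives roughly 28%, in the same range as the reported 33% average profit loss.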

Cost-driven system-level design
To effectively address these system-level design challenges, product developers need a unified approach that considers the costs of both software and hardware options. This approach, which we call cost-driven system-level design, converges hardware and software design efforts into a methodology that improves cost, cycle time, and quality, and enhances design space exploration. We have developed such a methodology at Georgia Tech's Centre for Signal and Image Processing under the auspices of the US Defence Advanced Research Projects Agency's RASSP (Rapid Prototyping of Application-Specific Digital Signal Processors) program. Aimed at COTS-based embedded systems, the methodology uses parametric cost and development time estimation models to drive the design process. It seamlessly integrates a cost-driven architecture design engine (CADE) with a library-based co-simulation and co-verification environment for rapid prototyping. We use virtual prototypes [3] to perform hierarchical design verification, with VHDL (VHSIC Hardware Description Language) software models of the hardware executing a representation of the application code. Figure 4 diagrams the overall process flow. Our research focuses on demonstrating how to implement the shaded process steps (system definition and architecture definition) using virtual prototyping in an automated environment. We believe that emphasizing cost-related issues benefits the cost-effectiveness of embedded micro-systems more in the early design stages than in the later stages. Figure 5 [4], which depicts costs committed versus costs incurred over the product life cycle, illustrates the rationale for our belief. Although the front-end design process typically involves less than 10% of the total prototyping time and cost, it accounts for more than 80% of a system's life-cycle cost. For this reason, our research focuses on the front-end design process. Our approach uses cost estimation models as well as performance estimation models to facilitate system-level design exploration early in the design cycle. We model the architecture selection process using mathematical programming formulations. We implement the models with commercial optimization packages, which efficiently solve complex problems, enabling the user to concentrate on problem-specific issues rather than data structures and implementation details. As output, CADE produces candidate architectures that we verify using VHDL performance-modeling technology.


b. Explain with an example the principle of priority inversion in interrupts in an embedded system.

Answer: In computer science, priority inversion is a problematic scenario in scheduling in which a higher-priority task is indirectly preempted by a lower-priority task, effectively "inverting" the relative priorities of the two tasks. This violates the priority model that high-priority tasks can only be prevented from running by higher-priority tasks, and only briefly by low-priority tasks that will quickly complete their use of a resource shared by the high- and low-priority tasks.

Example of a priority inversion
Consider a task L, with low priority, that requires a resource R. Now consider another task H, with high priority, that also requires resource R. If H starts after L has acquired resource R, then H has to wait until L relinquishes R. Everything works as expected up to this point, but problems arise when a new task M (which does not use R) starts with medium priority during this time. Since R is still in use (by L), H cannot run. Since M is the highest-priority unblocked task, it will be scheduled before L. Since L has been preempted by M, L cannot relinquish R. So M will run until it is finished; then L will run, at least up to the point where it can relinquish R; and only then will H run. Thus, in the scenario above, a task with medium priority ran before a task with high priority, effectively giving us a priority inversion.

In some cases, priority inversion can occur without causing immediate harm: the delayed execution of the high-priority task goes unnoticed, and eventually the low-priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high-priority task is left starved of the resource, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system. The trouble experienced by the Mars lander "Mars Pathfinder" is a classic example of problems caused by priority inversion in real-time systems. Priority inversion can also reduce the perceived performance of the system. Low-priority tasks usually have a low priority because it is not important for them to finish promptly (for example, they might be a batch job or another non-interactive activity). Similarly, a high-priority task has a high priority because it is more likely to be subject to strict time constraints: it may be providing data to an interactive user, or acting subject to real-time response guarantees. Because priority inversion results in the execution of the low-priority task blocking the high-priority task, it can lead to reduced system responsiveness, or even the violation of response-time guarantees. A similar problem, called deadline interchange, can occur within earliest-deadline-first (EDF) scheduling.

Solutions
The existence of this problem has been known since the 1970s, but there is no fool-proof method to predict the situation. There are, however, many existing solutions, of which the most common ones are:

Disabling all interrupts to protect critical sections: When disabled interrupts are used to prevent priority inversion, there are only two priorities: preemptible, and interrupts disabled. With no third priority, inversion is impossible. Since there's only one piece of lock data (the interrupt-enable bit), misordered locking is impossible, and so deadlocks cannot occur. Since the critical regions always run to completion, hangs do not occur. Note that this only works if all interrupts are disabled; if only a particular hardware device's interrupt is disabled, priority inversion is reintroduced by the hardware's prioritization of interrupts. A simple variation, "single shared-flag locking", is used on some systems with multiple CPUs. This scheme provides a single flag in shared memory that is used by all CPUs to lock all inter-processor critical sections with a busy-wait. Inter-processor communications are expensive and slow on most multiple-CPU systems, so most such systems are designed to minimize shared resources. As a result, this scheme actually works well on many practical systems. These methods are widely used in simple embedded systems, where they are prized for their reliability, simplicity and low resource use. They do require clever programming to keep the critical sections very brief, and many software engineers consider them impractical in general-purpose computers.

A priority ceiling: With priority ceilings, the shared mutex process (that runs the operating system code) has a characteristic (high) priority of its own, which is assigned to the task locking the mutex. This works well, provided the other high-priority task(s) that try to access the mutex do not have a priority higher than the ceiling priority.

Priority inheritance: Under the policy of priority inheritance, whenever a high-priority task has to wait for some resource shared with an executing low-priority task, the low-priority task is temporarily assigned the priority of the highest waiting task for the duration of its own use of the shared resource, thus keeping medium-priority tasks from preempting the (originally) low-priority task, and thereby affecting the waiting high-priority task as well. Once the resource is released, the low-priority task continues at its original priority level.

Random boosting: Ready tasks holding locks are randomly boosted in priority until they exit the critical section. This solution is used in Microsoft Windows.
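As a concrete illustration of priority inheritance, POSIX threads expose it through a mutex protocol attribute. A minimal sketch (assuming a platform with POSIX real-time scheduling support):

    #include <pthread.h>

    /* Create a mutex that applies the priority-inheritance protocol:
       a low-priority holder is temporarily boosted to the priority of
       the highest-priority thread blocked on the mutex. */
    int make_pi_mutex(pthread_mutex_t *m) {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        return pthread_mutex_init(m, &attr);
    }

(PTHREAD_PRIO_PROTECT similarly implements the priority-ceiling policy.)

3. a. What is an optimization? Explain the different optimization opportunities available to customize single-purpose processors.

Answer: Optimizing Custom Single-Purpose Processors
Optimization of an SPP is necessary to meet the design challenges. This involves removing some unnecessary states from the FSMD to simplify the design; removal of redundant functional units can be another approach.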
Thus, optimization is the task of making the design metric values the best possible.

Optimizing the GCD Program
This optimization can be carried out by optimizing the initial program. This can be done by developing a more efficient algorithm (in terms of time and space complexity) and then converting it to an FSMD. For example, a more efficient algorithm for the GCD program is given below:

    int x, y, r;
    while (1) {
        while (!go_i);          /* wait for the go signal */
        if (x_i >= y_i) {       /* x must be the larger number */
            x = x_i; y = y_i;
        } else {
            x = y_i; y = x_i;
        }
        while (y != 0) {        /* Euclid's algorithm using modulo */
            r = x % y;
            x = y;
            y = r;
        }
        d_o = x;                /* output the result */
    }

Assignment Embedded System


The above algorithm makes use of the modulo operation (%), uses fewer steps, and is far more efficient in terms of time. The choice of algorithm can have the biggest impact on the efficiency of the designed processor.

Optimizing the FSMD
The template-based procedure to convert a program into an FSMD may result in an inefficient FSMD, as this procedure creates many unnecessary states. Scheduling is the task of assigning operations from the original program to states in an FSMD. The scheduling obtained using the template-based method (shown in the figure below) can be improved: some states can be merged into one when there are no loop operations between them, and unwanted states whose outgoing transitions have constant values can be removed. The optimized (reduced) FSMD has only 6 states (down from 13), as shown in the figure. In deciding the number of states in an FSMD, the consequent hardware constraints must also be considered. For example, suppose a program statement contains the operation a = b*c*d*e. Generating a single state for this operation requires three multipliers in the datapath, and multipliers are expensive. To avoid this, the operation can be broken down into smaller operations, t1 = b*c, t2 = d*e, and a = t1*t2, with each smaller operation having its own state. Then only one multiplier is needed in the datapath, since the multiplication operations can share it. While optimizing, time constraints must also be considered.

Optimizing the datapath
In this optimization process, redundant functional units can be eliminated by sharing them. With a number of RT components available for the datapath, allocation is the task of choosing which RT components to use in the datapath, and binding is the task of mapping operations from the FSMD to the allocated components. Scheduling, allocation, and binding are highly interdependent; sometimes these tasks may have to be considered simultaneously.

Optimizing the FSM
This is done through state encoding and state minimization. State encoding is the task of assigning a unique bit pattern to each state in an FSM. CAD tools can be of great aid in searching for the best encoding, which decides the size of the state register and the size of the combinational logic. State minimization is the task of merging equivalent states into a single state. Two states are equivalent if, for all possible input combinations, they generate the same outputs and transition to the same next state.



State merging, on the other hand, is different from state minimization: state merging, which is used in optimizing an FSMD, changes the outputs.

b. Describe Pipelining, Superscalar and VLIW Architectures.

Answer: A superscalar architecture is one in which several instructions can be initiated simultaneously and executed independently. Pipelining allows several instructions to be executed at the same time, but they have to be in different pipeline stages at a given moment. Superscalar architectures include all the features of pipelining but, in addition, there can be several instructions executing simultaneously in the same pipeline stage.

Superscalar architectures allow several instructions to be issued and completed per clock cycle. A superscalar architecture consists of a number of pipelines that work in parallel. Depending on the number and kind of parallel units available, a certain number of instructions can be executed in parallel. In the following example, a floating-point and two integer operations can be issued and executed simultaneously; each unit is pipelined and can execute several operations in different pipeline stages.


Limitations on Parallel Execution
The situations which prevent instructions from being executed in parallel by a superscalar architecture are very similar to those which prevent efficient execution on any pipelined architecture. The consequences of these situations are more severe for superscalar architectures than for simple pipelines, because the potential for parallelism in superscalars is greater and, thus, a greater opportunity is lost. Three categories of limitations have to be considered:
o Resource conflicts: These occur if two or more instructions compete for the same resource (register, memory, functional unit) at the same time; they are similar to the structural hazards discussed for pipelines. By introducing several parallel pipelined units, superscalar architectures try to reduce some of the possible resource conflicts.

o Control (procedural) dependency: The presence of branches creates major problems in assuring optimal parallelism. If instructions are of variable length, they cannot be fetched and issued in parallel, since an instruction has to be partially decoded before the next one can be located; superscalar techniques are therefore most readily applicable to RISCs, with their fixed instruction length and format.
o Data conflicts: Data conflicts are produced by data dependencies between instructions in the program. Because superscalar architectures provide great liberty in the order in which instructions can be issued and completed, data dependencies have to be considered with much attention.
In contrast, VLIW (Very Long Instruction Word) architectures move this dependency analysis to the compiler, which statically packs several independent operations into one wide instruction word; the hardware then issues the whole word at once without dynamic scheduling.
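To make the data-conflict limitation concrete, consider this small illustrative C fragment (hypothetical code, purely for exposition):

    /* No mutual dependencies: a superscalar core can issue both adds
       in the same clock cycle on two integer units. */
    int independent(int b, int c, int e, int f) {
        int a = b + c;
        int d = e + f;
        return a + d;
    }

    /* A read-after-write dependency chain: each add needs the previous
       result, so the adds must issue serially despite the parallel units. */
    int dependent(int b, int c, int e, int f) {
        int x = b + c;
        int y = x + e;   /* waits on x */
        int z = y + f;   /* waits on y */
        return z;
    }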

4. a. Compare the write ability and storage performance of popular memories.



Answer: Write ability refers to how easily and quickly a memory can be written; storage permanence refers to how long it retains data once written. Popular memories compare as follows:

Memory type | Write ability | Storage permanence
Mask-programmed ROM | written only at fabrication time | essentially permanent
OTP ROM | programmed once by the user with a programmer | essentially permanent
EPROM | reprogrammable, but erased out of system with UV light | many years
EEPROM | writable in system, slow writes, limited write cycles | many years
Flash | writable in system in large blocks, limited write cycles | many years
NVRAM (battery-backed SRAM) | fast in-system writes | as long as the battery lasts
SRAM/DRAM | fast in-system writes | contents lost when power is removed

In general, write ability and storage permanence trade off against each other: the memories that are easiest to write in system (RAM) lose their contents without power, while the most permanent memories (mask-programmed ROM) cannot be written at all after manufacture.



b. Implement an RS-232 interface with a microcontroller and explain the signals and commands in it. Answer:

It is one of the simplest forms of microcontroller networking, commonly known as serial or RS-232 communications. As you can see in Figure 1.2, RS-232 was designed to tie DTE (Data Terminal Equipment) and DCE (Data Communications Equipment) devices together electronically to effect bidirectional data communications between the devices. An example of a DTE device is the serial port on your personal computer. Under normal conditions, the DTE interface on your personal computer asserts DTR (Data Terminal Ready) and RTS (Request To Send). DTR and RTS are called modem control signals. A typical DCE device interface responds to the assertion of DTR by activating a signal called DSR (Data Set Ready). The DTE RTS signal is answered by CTS (Clear To Send) from the DCE device. A standard external modem that you would connect to your personal computer serial port is a perfect example of a DCE device. Let's look at the signals from a commented standards point of view.

1. Pin 1 (Protective Ground Circuit, AA). This conductor is bonded to the equipment frame and can be connected to external grounds if other regulations or applications require it. Comment: Normally, this is either left open or connected to the signal ground. This signal is not found in the DTE 9-pin serial connector.

2. Pin 2 (Transmitted Data Circuit BA, TD). This is the data signal generated by the DTE. The serial bit stream from this pin is the data that's ultimately processed by a DCE device. Comment: This is pin 3 on the DTE 9-pin serial connector. This is one of the three minimum signals required to effect an RS-232 asynchronous communications session.

3. Pin 3 (Received Data Circuit BB, RD). Signals on this circuit are generated by the DCE. The serial bit stream originates at a remote DTE device and is a product of the receive circuitry of the local DCE device. This is usually digital data that's produced by an intelligent DCE or modem demodulator circuitry. Comment: This is pin 2 on the DTE 9-pin serial connector. This is another of the three minimum signals required to effect an RS-232 asynchronous communications session.

4. Pin 4 (Request To Send Circuit CA, RTS). This signal prepares the DCE device for a transmit operation. The RTS ON condition puts the DCE in transmit mode, while the OFF condition places the DCE in receive mode. The DCE should respond to an RTS ON by turning ON Clear To Send (CTS). Once RTS is turned OFF, it shouldn't be turned ON again until CTS has been turned OFF. This signal is used in conjunction with DTR, DSR and DCD. RTS is used extensively in flow control. Comment: This is pin 7 on the DTE 9-pin serial connector. In simple 3-wire implementations this signal is left disconnected. Sometimes you will see this signal tied to the CTS signal to satisfy a need for RTS and CTS to be active signals in the communications session. You will also see RTS feed CTS in a null modem arrangement.

5. Pin 5 (Clear To Send Circuit CB, CTS). This signal acknowledges the DTE when RTS has been sensed by the DCE device and usually signals the DTE that the DCE is ready to accept data to be transmitted. Data is transmitted across the communications medium only when this signal is active. This signal is used in conjunction with DTR, DSR and DCD. CTS is used in conjunction with RTS for flow control. Comment: This is pin 8 on the DTE 9-pin serial connector. In simple 3-wire implementations this signal is left disconnected. Otherwise, you'll see it tied to RTS in null modem arrangements or where CTS has to be an active participant in the communications session.

6. Pin 6 (Data Set Ready Circuit CC, DSR). DSR indicates to the DTE device that the DCE equipment is connected to a valid communication medium and, in some cases, indicates that the line is in the OFF HOOK condition. OFF HOOK is an indication that the DCE is either in dialing mode or in session with another remote DCE. When this signal is OFF, the DTE should be instructed to ignore all other DCE signals. If this signal is turned off before DTR, the DTE is to assume an aborted communication session. Comment: This is pin 6 on the DTE 9-pin serial connector. DSR is sometimes used in a flow control arrangement with DTR. Some modems assert DSR when power to the modem is applied, regardless of the condition of the communications medium.

7. Pin 7 (Signal Common Circuit, AB). This conductor establishes the common-ground reference for all interchange circuits, except Circuit AA, protective ground. The RS-232-B specification permits this circuit to be optionally connected to protective ground within the DCE device as necessary. Comment: This is pin 5 on the DTE 9-pin serial connector and is the only ground connection. This is the third wire of the minimal 3-wire configuration. Thus, an RS-232 asynchronous communications session can be effected with only three signals: TX (Transmit Data), RX (Receive Data) and signal ground.

8. Pin 8 (Data Carrier Detect Circuit CF, DCD). This pin is also known as Received Line Signal Detect (RLSD) or Carrier Detect (CD). This signal is active when a suitable carrier is established between the local and remote DCE devices. When this signal is OFF, RD should be clamped to the mark state (binary 1). Comment: This is pin 1 on the DTE 9-pin serial connector. Normally in use only if a modem is in the communications signal path. You will also see this signal tied active in a null modem arrangement.

9. Pin 20 (Data Terminal Ready Circuit CD, DTR). DTR signals are used to control switching of the DCE to the communication medium. DTR ON indicates to the DCE that connections in progress shall remain in progress, and if no sessions are in progress, new connections can be made. DTR is normally turned off to initiate ON HOOK (hang-up) conditions. The normal DCE response to activating DTR is to activate DSR. Comment: This is pin 4 on the DTE 9-pin serial connector. Unless you specify differently or run a program that controls DTR, it is usually present on the personal computer serial port as long as the personal computer is powered on. Occasionally you will see this signal used in flow control.

10. Pin 22 (Ring Indicator Circuit CE, RI). The ON condition of this signal indicates that a ring signal is being received from the communication medium (telephone line). It's normally up to the control program to act on the presence of this signal. Comment: This is pin 9 on the DTE 9-pin serial connector. This signal follows the incoming ring to an extent. Normally, this signal is used by DCE auto-answer algorithms.

That is all that's needed, RS-232 signal-wise, to establish a session between a DTE and a DCE device.
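As a concrete sketch of the microcontroller side, here is a minimal polled UART driver for an 8051-class microcontroller (standard 8051 SFR names; the baud-rate reload value assumes an 11.0592 MHz crystal, and an external MAX232-style level shifter is assumed to provide the RS-232 voltage translation):

    #include <reg51.h>   /* standard 8051 SFR declarations (SCON, SBUF, ...) */

    void uart_init(void) {
        SCON = 0x50;     /* mode 1: 8-bit UART, receiver enabled */
        TMOD |= 0x20;    /* timer 1, mode 2 (8-bit auto-reload) as baud generator */
        TH1  = 0xFD;     /* 9600 baud with an 11.0592 MHz crystal (assumed) */
        TR1  = 1;        /* start timer 1 */
    }

    void uart_putc(char c) {
        SBUF = c;        /* load the byte; hardware shifts it out on TxD */
        while (!TI);     /* wait for the transmit-complete flag */
        TI = 0;
    }

    char uart_getc(void) {
        while (!RI);     /* wait for the receive-complete flag */
        RI = 0;
        return SBUF;     /* reading SBUF returns the receive buffer */
    }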
Now that you have a feeling for what each RS-232 signal does, let's review how they react to each other with respect to the transfer of data between a DTE and a DCE device.
a. The local DTE (personal computer, microcontroller, etc.) is powered up and DTR is asserted.
b. The local DCE (modem, data set, microcontroller, etc.) is powered up and senses the DTR from the local DTE.
c. The local DCE asserts DSR. If the DCE device is a modem, it goes off-hook (picks up the line). If a dial-up session is to be established, the DTE sends a dial instruction and phone number to the modem.
d. If the line is good and the other end (remote DCE) is ready or answers the dial-up from the local DCE, a carrier is generated/detected and the local and remote DCE devices assert DCD. The session is established.
e. The transmitting DTE raises RTS.
f. The transmitting DCE responds with CTS.

g. The control program transmits or receives data.

To perform RS-232 asynchronous communications with microcontrollers, we must employ a voltage translation scheme of our own, since microcontroller UARTs operate at logic levels rather than RS-232 line levels.

5. Explain cache direct mapping, Fully associative and Set-associative mapping techniques.

Answer:



Cache memory (also called buffer memory) is local memory that reduces waiting times for information stored in the RAM (Random Access Memory). In effect, the computer's main memory is slower than the processor. There are, however, types of memory that are much faster but have a greatly increased cost. The solution is therefore to include this type of fast local memory close to the processor and to temporarily store the primary data to be processed in it. The speed of the CPU is extremely high compared to the access time of main memory; therefore the performance of the CPU decreases due to the slow speed of main memory. To decrease this mismatch in operating speed, a small memory chip whose access time is very close to the processing speed of the CPU is attached between the CPU and main memory. It is called cache memory. Cache memories are accessed much faster than conventional RAM. They are used to store programs or data currently being executed, or temporary data frequently used by the CPU. The cache thus makes main memory appear faster and larger than it really is. Because large caches are very expensive, the cache size is normally kept small.

Mapping Memory Lines to Cache Lines - Three Strategies
As a working example, suppose the cache has 2^7 = 128 lines, each with 2^4 = 16 words. Suppose the memory has a 16-bit address, so that 2^16 = 64K words are in the memory's address space.

Direct Mapping
Under this mapping scheme, each memory line j maps to cache line j mod 128, so the memory address looks like this:

Here, the "Word" field selects one from among the 16 addressable words in a line. The "Line" field defines the cache line where this memory line should reside. The "Tag" field of the address is then compared with that cache line's 5-bit tag to determine whether there is a hit or a miss. If there's a miss, we need to swap out the memory line that occupies that position in the cache and replace it with the desired memory line. E.g., Suppose we want to read or write a word at the address 357A, whose 16 bits are 0011010101111010. This translates to Tag = 6, line = 87, and Word = 10 (all in decimal). If line 87 in the cache has the same tag (6), then memory address 357A is in the cache. Otherwise, a miss has occurred and the contents of cache line 87 must be replaced by the memory line 001101010111 = 855 before the read or write is executed. Direct mapping is the most efficient cache mapping scheme, but it is also the least effective in its utilization of the cache - that is, it may leave some cache lines unused.



Associative Mapping
This mapping scheme attempts to improve cache utilization, but at the expense of speed. Here, the cache line tags are 12 bits, rather than 5, and any memory line can be stored in any cache line. The memory address looks like this:

Here, the "Tag" field identifies one of the 2 = 4096 memory lines; all the cache tags are searched to find out whether or not the Tag field matches one of the cache tags. If so, we have a hit, and if not there's a miss and we need to replace one of the cache lines by this line before reading or writing into the cache. (The "Word" field again selects one from among 16 addressable words (bytes) within the line.) For example, suppose again that we want to read or write a word at the address 357A, whose 16 bits are 0011010101111010. Under associative mapping, this translates to Tag = 855 and Word = 10 (in decimal). So we search all of the 128 cache tags to see if any one of them will match with 855. If not, there's a miss and we need to replace one of the cache lines with line 855 from memory before completing the read or write. The search of all 128 tags in the cache is time-consuming. However, the cache is fully utilized since none of its lines will be unused prior to a miss (recall that direct mapping may detect a miss even though the cache is not completely full of active lines). Set-associative Mapping This scheme is a compromise between the direct and associative schemes described above. Here, the cache is divided into sets of tags, and the set number is directly mapped from the memory address (e.g., memory line j is mapped to cache set j mod 64), as suggested by the diagram below:

12

The memory address is now partitioned like this:

Here, the "Tag" field identifies one of the 2 = 64 different memory lines in each of the 2 = 64 different "Set" values. Since each cache set has room for only two lines at a time, the search for a match is limited to those two lines (rather than the entire cache). If there's a match, we have a hit and the read or write can proceed immediately. Otherwise, there's a miss and we need to replace one of the two cache lines by this line before reading or writing into the cache. (The "Word" field again select one from among 16 addressable words inside the line.) In set-associative mapping, when the number of lines per set is n, the mapping is called n-way associative. For instance, the above example is 2-way associative. E.g., Again suppose we want to read or write a word at the memory address 357A, whose 16 bits are 0011010101111010. Under set-associative mapping, this translates to Tag = 13, Set = 23, and Word = 10 (all in decimal). So we search only the two tags in cache set 23 to see if either one matches tag 13. If so, we have a hit. Otherwise, one of these two must be replaced by the memory line being addressed (good old line 855) before the read or write can be executed. 6. a. Explain the flow of actions in a peripheral to memory transfer with DMA in an embedded system. Give its advantages over the transfer taking place with vectored interrupts.



Answer: DMA (Direct Memory Access) provides an efficient way of transferring data between a peripheral and memory, or between two memory regions. The DMA engine is a processing engine that can perform data transfer operations to or from memory. In the absence of a DMA engine, the CPU needs to handle these data operations itself, and overall system performance is heavily reduced. DMA is especially useful in systems that involve huge data transfers (without DMA, the CPU would be busy doing these transfers most of the time and would not be available for other processing). Compared with vectored-interrupt-driven transfers, where the CPU takes an interrupt and copies each item of data itself, DMA frees the CPU for the duration of the transfer and typically raises a single interrupt only when the whole block is complete.

DMA Parameters: DMA transfers involve a source and a destination; the DMA engine transfers the data from source to destination. The DMA engine requires source and destination addresses along with a transfer count in order to perform the data transfers. The (source or destination) address can be physical (in the case of memory) or logical (in the case of a peripheral). The transfer count specifies the number of words to be transferred. As mentioned before, a data transfer can be from a peripheral to memory (generally called receive DMA), from memory to a peripheral (generally called transmit DMA), or from one memory to another (generally called memory DMA). Some DMA engines support additional parameters such as word size and address increment in addition to the start address and transfer count. Word size specifies the size of each transfer. Address increment specifies the offset from the current address (in memory) that the next transfer should use; this provides a way of transferring data to or from non-contiguous memory locations.

DMA Channels: A DMA engine can support multiple DMA channels. This means that at a given time multiple DMA transfers can be in progress (though physically only one transfer may be possible at a time, logically the DMA engine can handle many channels in parallel). This feature makes the software programmer's life much easier, as the programmer does not have to wait for the current DMA operation to finish before programming the next one. Each DMA channel has control registers where the DMA parameters can be specified. DMA channels also have an interrupt associated with them (on most processors), which optionally triggers after completion of the DMA transfer. Inside the ISR, the programmer can take a specific action (e.g., process the data which has just been received through DMA, or program a new DMA transfer).

Chained DMA: Certain DMA controllers support an option for specifying DMA parameters in a buffer (or array) in memory rather than writing them directly to the DMA control registers (this mostly applies to the second and subsequent DMA operations; parameters for the first DMA operation are still specified in the control registers). This buffer is called a DMA Transfer Control Block (TCB). The DMA controller takes the address of the DMA TCB as one of its parameters (in addition to the control parameters for the first DMA transfer) and automatically loads the DMA parameters for the second DMA operation from memory after the first DMA operation is over. The TCB also contains an entry for the next TCB address, which provides an easy way of chaining multiple DMA operations automatically (rather than having to program each one after the completion of the previous). DMA chaining can be stopped by specifying a ZERO address in the next-TCB-address field. Multi-dimensional DMA, combined with address increment, gives many further options.
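As an illustration of such a transfer control block, here is a minimal C sketch (the field names and widths are hypothetical, not those of any specific controller):

    #include <stdint.h>

    /* Hypothetical TCB layout for a chained DMA engine. The controller
       loads the next descriptor from memory when the current one finishes. */
    typedef struct dma_tcb {
        uint32_t src_addr;       /* source address (physical or logical)  */
        uint32_t dst_addr;       /* destination address                   */
        uint32_t xfer_count;     /* number of words to transfer           */
        uint32_t word_size;      /* bytes per word                        */
        int32_t  addr_incr;      /* offset applied after each word        */
        uint32_t next_tcb_addr;  /* address of next TCB; ZERO ends chain  */
    } dma_tcb_t;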
Why use DMA?
The obvious benefit of moving data using DMA transfers is that the processor can be doing something else while the transfer is in progress. However, using DMA sometimes has other advantages depending on the hardware involved. These include:
Data transformations: application-specific processors, such as those targeted at video or digital signal processing, may be able to perform data transformations as part of the DMA transfer. These include byte-order changes and 2D block transfers (see below).
Lower power: if the processor load is reduced and there are fewer interrupts (for example, one on completion of the whole transfer rather than one per item of data transferred), it may be possible to run the processor at a lower clock rate or even to enter a low-power mode while DMA transfers are in flight.
Higher data throughput: a given processor may be able to handle more external interfaces at higher data rates, or a low-end processor might be able to handle more complicated interfaces such as Ethernet or USB.
DMA transfers are also commonly used for inter-processor communication between cores in a multi-core processor or processors in a multi-processor system.

Types of DMA transfer
To assess the benefits and consequences of using DMA it is necessary to know what is happening at the hardware level. DMA transfers can take different forms depending on the hardware design and the peripheral devices involved. The simplest is known as a single-cycle DMA transfer and is typically used to transfer data between devices such as UARTs or audio codecs that produce or consume data a word at a time. In this situation the peripheral device uses a control line to signal that it has data to transfer or requires new data. The DMA controller obtains access to the system bus, transfers the data, and then releases the bus. Access to the bus is granted when the processor, or another bus master, is not using the bus.



Single-cycle DMA transfers are therefore interleaved with other bus transactions and do not much affect the operation of the processor. Another type of transfer is a burst transfer. This is used to transfer a block of data in a series of back-to-back accesses to the system bus. The transfer starts with a bus request; when this is granted, the data is transferred in bursts, for example 128 bytes at a time. The burst size depends on the processor architecture and the peripheral, and may be programmable depending on the details of the hardware. While a burst transaction is occurring the processor will not be able to access the system bus. However, preventing the processor from accessing the system bus (for example, to fetch new instructions or data from external memory) may cause it to stall, which can reduce system performance. To minimise the effects of this problem, the DMA controller may release the bus after a fixed number of burst transactions or when a predetermined bandwidth limit has been reached. The system bus arbitration logic then determines which bus master will next have access to the bus and when the DMA transfer will continue with the next block. The number of bus masters and their relative priority is a wider system design issue that will not be addressed here. However, if the system needs to perform large DMA block transfers, the system designer needs to carefully work out the bus bandwidth requirements to ensure there are no performance bottlenecks in the hardware or system design.

b. Compare the Processes and Threads.

Answer: Processes and Threads
In concurrent programming, there are two basic units of execution: processes and threads. In the Java programming language, concurrent programming is mostly concerned with threads; however, processes are also important. A computer system normally has many active processes and threads. This is true even in systems that only have a single execution core, and thus only have one thread actually executing at any given moment. Processing time for a single core is shared among processes and threads through an OS feature called time slicing. It is becoming more and more common for computer systems to have multiple processors or processors with multiple execution cores. This greatly enhances a system's capacity for concurrent execution of processes and threads, but concurrency is possible even on simple systems, without multiple processors or execution cores.

Processes
A process has a self-contained execution environment. A process generally has a complete, private set of basic run-time resources; in particular, each process has its own memory space. Processes are often seen as synonymous with programs or applications. However, what the user sees as a single application may in fact be a set of cooperating processes. To facilitate communication between processes, most operating systems support Inter-Process Communication (IPC) resources, such as pipes and sockets. IPC is used not just for communication between processes on the same system, but also between processes on different systems. Most implementations of the Java virtual machine run as a single process. A Java application can create additional processes using a ProcessBuilder object. Multiprocess applications are beyond the scope of this lesson.

Threads
Threads are sometimes called lightweight processes.
7. a. How is an embedded system applied in telecommunication devices and systems? Illustrate with the help of a case study.

Answer: Embedded systems have witnessed tremendous growth in the last decade. Almost all fast-developing sectors, such as automobile, aeronautics, space, rail, mobile communications, and electronic payment solutions, have seen increased use of embedded technologies. The greater value placed on mobility is one of the prominent reasons for the rise and development of embedded technologies. Initially, embedded systems were used for large, safety-critical and business-critical applications that included:
- Rocket and satellite control
- Energy production control
- Telephone switches
- Air traffic control

Embedded systems research and development is now concerned with a very large proportion of the advanced products designed in the world. In one sense, embedded technologies run the global transport industry, which includes avionics, space, automotive, and trains. But it is the electrical and electronic appliances, such as cameras, toys, televisions, home appliances, audio systems, and cellular phones, that really are the visible face of embedded systems for the common consumer. Advanced embedded technologies are deployed in developing:

- Process control (energy production and distribution, factory automation and optimization)
- Telecommunications (satellites, mobile phones and telecom networks)
- Energy management (production, distribution, and optimized use)
- Security (e-commerce, smart cards)
- Health (hospital equipment and mobile monitoring)

In the last few years the emphasis of embedded technologies was on achieving feasibility; now the trend is towards achieving optimality. Optimality, or optimal design, of embedded systems means:

- Targeting a given market segment at the lowest cost and delivery time possible
- Seamless integration with the physical and electronic environment
- Understanding real-world constraints such as hard deadlines, reliability, availability, robustness, power consumption, and cost

VECTOR Institute gives students broad exposure to embedded technologies by having them work on real-time, multi-domain embedded projects.

Automobile sector

The automobile sector has been at the forefront of acquiring and utilizing embedded technology to produce highly efficient electric motors. These include brushless DC motors, induction motors and DC motors that use electric/electronic motor controllers. The European automotive industry enjoys a prominent place in utilizing embedded technology to achieve better engine control, and has been adopting recent embedded innovations such as brake-by-wire and drive-by-wire. Embedded technology is of immediate importance in electric and hybrid vehicles, where embedded applications bring about greater efficiency and ensure reduced pollution. Embedded technology has also helped in developing automotive safety systems such as:

- Anti-lock braking system (ABS)
- Electronic Stability Control (ESC/ESP)
- Traction control (TCS)
- Automatic four-wheel drive

VECTOR Institute has endeared itself to the automotive industry by providing quality embedded personnel.

Aerospace & Avionics

Aerospace and avionics demand a complex mixture of hardware, electronics, and embedded software, which must interact with many other entities and systems to work efficiently. Embedded engineers confront major challenges, among them:

- Creating embedded systems on time
- Taking budgetary constraints into consideration
- Ensuring that the complex software and hardware interactions are right
- Assembling components that meet specifications and perform effectively together
- Understanding the larger context of the embedded software
- Adopting the latest in embedded technology, such as fly-by-wire

VECTOR Institute prepares embedded students for the challenges associated with the aerospace and avionics industry.

Telecommunications

If ever there is an industry that has reaped the benefits of embedded technology, it is telecommunications. The telecom industry utilizes numerous embedded systems, from telephone switches in the network to mobile phones at the end-user. The telecom network also uses dedicated routers and network bridges to route data. Embedded engineers help ensure high-speed networking, the most critical aspect of these embedded applications: Ethernet switches and network interfaces are designed to provide the necessary bandwidth, allowing Ethernet connections to be rapidly incorporated into advanced embedded applications. VECTOR Institute exposes embedded students to a broad range of application types, from high-availability telecom and networking applications to rugged industrial and military environments, and prepares them for the challenges associated with the telecom industry.

Consumer Electronics

Consumer electronics has also benefited greatly from embedded technologies. Consumer electronics includes:

- Personal Digital Assistants (PDAs)
- MP3 players
- Mobile phones
- Videogame consoles
- Digital cameras
- DVD players
- GPS receivers
- Printers

Even household appliances, including microwave ovens, washing machines and dishwashers, now incorporate embedded systems to provide flexibility, efficiency and features. The latest embedded applications include advanced HVAC systems that use networked thermostats to control temperature more accurately and efficiently. Home automation solutions, too, are increasingly built on embedded technologies: wired and wireless networking is used to control lights, climate, security, audio/visual equipment, surveillance and so on, all of which rely on embedded devices for sensing and control. VECTOR Institute prepares embedded students for the challenges associated with the consumer electronics industry.

Railroad

Railroad signalling in Europe relies heavily on embedded systems that allow for faster, safer and heavier traffic. Embedded technology has brought a sea change in the way railroad signals are managed and large volumes of rail traffic are streamlined. Embedded railroad safety equipment is increasingly being adopted by railway networks across the globe, promising far fewer rail disasters. VECTOR Institute prepares embedded students for the challenges associated with the railroad industry.

Electronic payment solutions sector

There is currently stiff competition among embedded solutions providers to deliver innovative, high-performance electronic payment solutions that are easy to use and highly secure. Embedded engineers knowledgeable in trusted proprietary technology develop the secure, encrypted transactions between payment systems and major financial institutions.

The market for mobile payment systems is growing rapidly, driven by retailers, restaurants, and other businesses that want to serve customers anywhere, anytime. With mobile devices, mostly mobile phones, becoming very popular, mobile-compatible embedded technologies are being developed to promote payment systems. VECTOR Institute prepares embedded students for the challenges associated with the electronic payment solutions sector.

Smart cards industry

Smart cards, though they began prominently as debit or credit cards, are now being introduced in personal identification and entitlement schemes at regional, national, and international levels. Smart cards now appear as citizen cards, driving licences, and patient cards. There are also contactless smart cards, part of ICAO biometric passports, which aim to enhance security for international travel. Europe leads in the use of smart cards: its e-services (e-banking, e-health, e-training) are based on leading-edge smart-card technologies. VECTOR Institute has endeared itself to the smart cards industry by providing quality embedded personnel.

b. Write short notes on:

i. Network-oriented arbitration

Answer: Multiple peripherals might request service from a single resource. For example, multiple peripherals might share a single microprocessor that services their interrupt requests, or a single DMA controller that services their DMA requests. In such situations, two or more peripherals may request service simultaneously, so there must be some method to arbitrate among the contending requests, i.e., to decide which of the contending peripherals gets service and which must wait. Several methods exist:

1. Priority arbiter
2. Daisy-chain arbitration
3. Network-oriented arbitration

Network-oriented arbitration methods

The arbitration methods described above are typically used to arbitrate among peripherals in an embedded system. However, many embedded systems contain multiple microprocessors communicating via a shared bus; such a bus is sometimes called a network. Arbitration in such cases is typically built directly into the bus protocol, since the bus serves as the only connection among the microprocessors. A key feature of such a connection is that a processor about to write to the bus has no way of knowing whether another processor is about to write to the bus at the same time. Because of the relatively long wires and high capacitances of such buses, a processor may write many bits of data before those bits appear at another processor. Ethernet and I2C, for example, use a method in which multiple processors may write to the bus simultaneously, resulting in a collision that corrupts any data on the bus. The processors detect this collision, stop transmitting, wait for some time, and then try transmitting again. The protocols must ensure that the contending processors don't start sending again at the same time, or must at least use statistical methods that make the chance of them sending again at the same time small. As another example, the CAN bus uses a clever address-encoding scheme: if two addresses are written simultaneously by different processors, the higher-priority address overrides the lower-priority one. Each processor writing to the bus also monitors the bus, and if the address it is writing does not appear there, the processor realizes that a higher-priority transfer is taking place and stops writing to the bus.
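The CAN behaviour just described can be made concrete with a small software simulation (this models the protocol, not real bus-driver code): the bus acts as a wired-AND, so a dominant 0 overrides a recessive 1; each node transmits its identifier MSB-first and drops out as soon as it reads back a level it did not write. A numerically lower identifier therefore wins arbitration.

    #include <stdio.h>

    #define ID_BITS   11   /* standard CAN identifiers are 11 bits */
    #define MAX_NODES 8    /* assumed limit for this sketch */

    /* Returns the index of the node whose identifier wins arbitration. */
    int arbitrate(const unsigned ids[], int n_nodes)
    {
        int still_in[MAX_NODES];
        for (int i = 0; i < n_nodes; i++)
            still_in[i] = 1;

        for (int bit = ID_BITS - 1; bit >= 0; bit--) {
            unsigned bus = 1;                      /* recessive unless someone drives 0 */
            for (int i = 0; i < n_nodes; i++)      /* wired-AND of all active writers */
                if (still_in[i])
                    bus &= (ids[i] >> bit) & 1u;
            for (int i = 0; i < n_nodes; i++)      /* each writer checks the bus... */
                if (still_in[i] && ((ids[i] >> bit) & 1u) != bus)
                    still_in[i] = 0;               /* ...and stops if it lost */
        }
        for (int i = 0; i < n_nodes; i++)
            if (still_in[i])
                return i;                          /* highest-priority (lowest-ID) node */
        return -1;
    }

    int main(void)
    {
        unsigned ids[] = { 0x65A, 0x123, 0x654 };
        printf("winner: node %d\n", arbitrate(ids, 3));  /* node 1: 0x123 is lowest */
        return 0;
    }

Note that arbitration is non-destructive: unlike an Ethernet collision, the winning frame is transmitted intact, and the losers simply retry after it completes.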

ii. Error detection and correction

Answer: Error detection and correction (or error control) are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, so errors may be introduced during transmission from the source to the receiver. Error detection techniques allow such errors to be detected, while error correction enables reconstruction of the original data. Regardless of the design of the transmission system, there will be errors, resulting in the change of one or more bits in a transmitted frame: when a code word is transmitted, one or more of the transmitted bits may be reversed due to transmission impairments.
It is possible to detect these errors if the received code word is not one of the valid code words; for the receiver to detect errors, the valid code words must be separated by a distance of more than 1. Including extra information in the transmission for error detection is a sound idea, but instead of repeating the entire data stream, a shorter group of bits may be appended to the end of each unit. This technique is called redundancy because the extra bits are redundant to the information; they are discarded as soon as the accuracy of the transmission has been determined. Error correction is the mechanism by which the received erroneous data is changed to make it error-free. The two most common error correction mechanisms are:

1. Error correction by retransmission
2. Forward Error Correction (FEC)
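As a minimal illustration of the redundancy idea, the sketch below appends a single even-parity bit to each byte; the receiver recomputes the parity and, on a mismatch, detects the error and would request retransmission. Real links use stronger codes such as CRCs or Hamming codes, but the principle is the same.

    #include <stdint.h>
    #include <stdio.h>

    /* Returns the even-parity bit for one byte: 1 if the byte
     * contains an odd number of 1 bits, 0 otherwise. */
    static uint8_t parity(uint8_t b)
    {
        b ^= b >> 4;
        b ^= b >> 2;
        b ^= b >> 1;
        return b & 1u;
    }

    int main(void)
    {
        uint8_t data = 0x5B;             /* 0101 1011: five 1 bits */
        uint8_t p = parity(data);        /* redundant bit appended by the sender */

        uint8_t received = data ^ 0x08;  /* simulate a single-bit error in transit */
        if (parity(received) != p)
            printf("error detected -> request retransmission\n");
        else
            printf("frame accepted\n");
        return 0;
    }

A single parity bit detects any odd number of bit errors but cannot say which bit flipped. That is the detection/correction distinction above: retransmission schemes only need detection, whereas Forward Error Correction adds enough redundancy for the receiver to locate and repair the error itself.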
