1. Answer the following
a. Define embedded system. List common characteristics of embedded systems. What distinguishes an embedded system from a general-purpose computing system?
Answer: An embedded system is some combination of computer hardware and software, either fixed in capability or programmable, that is specifically designed for a particular function. Industrial machines, automobiles, medical equipment, cameras, household appliances, airplanes, vending machines and toys (as well as the more obvious cellular phones and PDAs) are among the myriad possible hosts of an embedded system.
Definition of embedded system: An embedded system is a computer system designed to perform one or a few dedicated functions in real time and to control a complete device. It is dedicated to an application (or applications), or is a specific part of an application or product, or is part of a larger system. Typically an embedded system consists of a microcomputer with software in ROM/FLASH memory, which starts running a dedicated application as soon as power is turned on and does not stop until power is turned off. The program run by the processor is generally not reprogrammable by the end user. A general-purpose definition of embedded systems is that they are devices used to control, monitor or assist the operation of equipment, machinery or plant. "Embedded" reflects the fact that they are an integral part of a system that includes hardware and mechanical parts. This dedication to a specific purpose, rather than running arbitrary user-installed programs, is what distinguishes an embedded system from a general-purpose computing system.
Characteristics of embedded systems: An embedded system is characterized by the following:
Dedicated functions, tasks or applications
Real-time response
Generally not reprogrammable by the end user
An integral part of a larger system that includes hardware and mechanical parts
b. How do we measure the performance of a system? List the important parameters required to measure the performance of an embedded system.
Answer: Performance measurement is another important area of SPE (Software Performance Engineering). This includes planning measurement experiments to ensure that results are both representative and reproducible.
Software also needs to be instrumented to facilitate SPE data collection. Finally, once the performance-critical components of the software are identified, they are measured early and often to validate the models that have been built and to verify earlier predictions.
Figure: Output from a Performance Calculator used to identify and track key performance scenarios
Step 1: Determine where you need to be
Reject nonspecific requirements or demands such as "the system should be as fast as possible". Instead, use quantitative terms such as "packet throughput must be 600K packets per second for IP forwarding". Understand potential future use cases of the system and design in the necessary scalability to handle them. Figure 9 shows an example of how to define these performance goals. To do this properly, the first step is to identify the system dimension. This is the context and establishes the "what". Then the key attributes are identified; these establish how good the system "shall be". The metrics are then identified that determine how we will know. These metrics should include a "should" value and a "must" value. In the example, IP forwarding is the system dimension; for a networking application, IP forwarding is a key measurement focus. The key attribute is "fast": the system is going to be measured on how many packets can be forwarded through it. The key metric is thousands of packets per second (Kpps). The system should be able to achieve 600 Kpps and must reach at least 550 Kpps to meet the minimum system requirements.
Figure: Defining quantitative performance goals
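The should/must structure of such a goal can be captured in a few lines of C. This is a toy sketch using the example's 600/550 Kpps thresholds; the function name is invented for illustration:

```c
/* Classify a measured IP-forwarding rate (in Kpps) against the
 * quantitative goals above: 600 Kpps "should" and 550 Kpps "must".
 * Illustrative only; real measurements would come from profiling tools. */
const char *classify_kpps(int kpps)
{
    if (kpps >= 600) return "meets should-value";
    if (kpps >= 550) return "meets must-value only";
    return "below minimum requirement";
}
```

A rate of 560 Kpps, for instance, satisfies the minimum requirement but not the design target, flagging the system for further tuning.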
Figure: A debug architecture for a Multicore SoC that can provide the visibility hooks into the device for performance analysis and tuning
Figure: A tools strategy for using embedded profiling and analysis tools to provide visibility into a SoC in order to collect the necessary information to quantify performance problems in an embedded system
Step 3: Decide if you can achieve the objectives
There are several categories of performance optimization, ranging from the simple to the more complex:
Low-cost/low-ROI techniques: Usually these involve automatic optimization options. A common approach in embedded systems is the use of compiler options to enable more aggressive optimizations for the embedded software.
High-cost/high-ROI techniques: Re-designing or re-factoring the embedded software architecture.
Intermediate-cost/intermediate-ROI techniques: This category includes optimizing algorithms and data structures (for example, using an FFT instead of a DFT) as well as approaches like modifying software to use more efficient constructs.
Step 4: Develop a plan for achieving the objectives
The first step is to Pareto-rank the proposed solutions based on return on investment. There are various ways to estimate resource requirements, including modeling and benchmarking. Once the performance targets have been determined, the tuning phase becomes iterative until the targets have been met. The figure below shows an example of a process used in optimizing DSP embedded software. As this figure shows, there is a defined process for optimizing the application based on an iterative set of steps, beginning with understanding the key performance scenarios for the application.
Figure: A process for managing the performance of an embedded DSP application
The first step is to gather data that can be used to support the analysis. This data includes, but is not limited to, the time and cost to complete the performance analysis, the software changes required, hardware costs if necessary, and software build and distribution costs. The next step is to gather data on the effect of the improvements, which includes things like hardware upgrades that can be deferred, staff cost savings, etc.
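The Pareto-ranking step described above amounts to sorting candidate improvements by estimated ROI. A minimal C sketch (option names and ROI figures are invented for illustration):

```c
#include <stdlib.h>

/* A candidate optimization and its estimated return on investment. */
struct option { const char *name; double roi; };

/* Comparator for descending ROI order. */
static int by_roi_desc(const void *a, const void *b)
{
    double ra = ((const struct option *)a)->roi;
    double rb = ((const struct option *)b)->roi;
    return (ra < rb) - (ra > rb);
}

/* Pareto-rank the proposed solutions: highest estimated ROI first. */
void pareto_rank(struct option *opts, size_t n)
{
    qsort(opts, n, sizeof *opts, by_roi_desc);
}
```

After sorting, the team works down the list, applying the highest-ROI changes first and re-measuring after each iteration.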
c. Explain an embedded system design life cycle model with a suitable example.
Answer: Embedded Systems Design
Approaching embedded systems architecture from a systems engineering standpoint, several models (embedded systems life cycle models) can be applied to describe the life cycle of an embedded systems design. Most of these representations are based upon one or a combination of the following development models:
Big Bang Model: There is essentially no planning and no processes in place before or during the development life cycle of the system. The name is borrowed from the cosmological Big Bang model, in which the universe expands from an extremely hot and dense initial state at some finite point in the past; in software terms, it suggests that a working system simply emerges from one unstructured burst of development effort.
Code and Fix Model: The requirements are defined, but no strict processes are in place before development begins. It is an especially simple model, consisting mainly of two steps:
Step 1: Write the source code (development).
Step 2: Find and fix the bugs in that source code (bug fixing).
The Code and Fix Model is used in the first phase of software development. It can be used for small systems which do not require maintenance.
Waterfall Model: There is a process for developing a system in steps, where the outcome of one step flows into the subsequent step. The waterfall development life cycle model has its origins in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively expensive, if not impossible.
As no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development. Spiral Model: There is a process for developing a system in steps, and throughout the various steps feedback is obtained and fed back into the process. The spiral model (also known as the spiral life cycle model or spiral development) is a software development process combining elements of both design and prototyping in stages, in an attempt to unite the advantages of top-down and bottom-up concepts. It is a systems development method (SDM) used in information technology (IT). The spiral model combines the features of the waterfall model and the prototyping model.
Embedded Systems Development Lifecycle Model
d. Draw and explain the block diagram of a two-level bus architecture in a microprocessor-based embedded system.
Answer: The arbitration methods described are typically used to arbitrate among peripherals in an embedded system. However, many embedded systems contain multiple microprocessors communicating via a shared bus; such a bus is sometimes called a network. Arbitration in such cases is typically built right into the bus protocol, since the bus serves as the only connection among the microprocessors. A key feature of such a connection is that a processor about to write to the bus has no way of knowing whether another processor is about to simultaneously write to the bus. Because of the relatively long wires and high capacitances of such buses, a processor may write many bits of data before those bits appear at another processor. For example, Ethernet and I2C use a method in which multiple processors may write to the bus simultaneously, resulting in a collision and causing any data on the bus to be corrupted. The processors detect this collision, stop transmitting their data, wait for some time, and then try transmitting again. The protocols must ensure that the contending processors don't start sending again at the same time, or must at least use statistical methods that make the chances of them sending again at the same time small. As another example, the CAN bus uses a clever address encoding scheme such that if two addresses are written simultaneously by different processors using the bus, the higher-priority address will override the lower-priority one. Each processor that is writing to the bus also monitors the bus, and if the address it is writing does not appear, then that processor realizes that a higher-priority transfer is taking place, and so that processor stops writing to the bus.
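The CAN arbitration scheme described above can be sketched in C. This is an illustrative model, not driver code: it assumes 11-bit identifiers, treats 0 as the dominant bit level, and models the bus as a wired-AND of the two transmitted bits:

```c
/* Model of CAN-style bitwise arbitration between two nodes transmitting
 * at the same instant. The bus carries the AND of the transmitted bits
 * (0 is dominant); a node that reads back a dominant bit it did not
 * send loses arbitration and backs off. Returns the winning ID. */
unsigned can_arbitrate(unsigned id_a, unsigned id_b)
{
    for (int bit = 10; bit >= 0; bit--) {   /* MSB first, 11-bit IDs */
        unsigned a = (id_a >> bit) & 1;
        unsigned b = (id_b >> bit) & 1;
        unsigned bus = a & b;               /* wired-AND bus level */
        if (a != bus) return id_b;          /* A saw a dominant bit it
                                               did not send: A backs off */
        if (b != bus) return id_a;
    }
    return id_a;  /* identical IDs; disallowed on a real CAN bus */
}
```

Note that the node with the numerically lower identifier always wins, which is exactly why CAN assigns lower IDs to higher-priority messages.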
e. Describe how wireless communication is useful in embedded systems. Give a brief description of any two wireless protocols.
Answer: Wireless communications is revolutionizing the world around us. Using wireless communications to send and receive messages, browse the Internet, and access corporate databases from any location in the world has already become commonplace. Bluetooth, Ultra Wide Band, satellite, cellular, wireless LAN, fixed broadband, mobile computing, and WWAN communications offer the promise of ubiquitous applications with always-on capability anywhere, anytime. Wireless networks are essential for the unified, efficient and cost-effective exchange of electronic information within embedded component systems. By freeing the user from the cord, personal communications networks, wireless LANs, mobile radio networks and cellular systems harbor the promise of fully distributed mobile computing and communications, anytime, anywhere. "Embedded in the system of life" - a new definition for embedded systems in the near future! Indeed, embedded system applications are extending their scope and reach to every aspect of life, including consumer electronics, medicine, communication, aviation, battlefield, transport, finance, education, environment monitoring, etc. Embedded systems with networking and wireless communication capability are now generating a new set of requirements and challenges in the field of embedded system design.
An Embedded Wireless Application - An embedded wireless application usually runs on a small portable device that has a microprocessor with limited speed, little memory and little or no hard disk. The most common example is a cellular mobile phone that holds contact information in memory. Being compact within a device requires autonomy: rather than accessing a large enterprise network, the system loads applications and resources locally. The system is practically built-in. Both embedded and wireless systems require real-time performance.
Some examples of wireless embedded applications are personal digital assistants, pagers, wireless mice, wireless keyboards, wireless laser printers and cordless bar code scanners. Bluetooth technology addresses the requirements of a few of these devices. Target Microprocessor - Both wireless and embedded applications must target their software towards specific boards or microprocessors such as Intel, PowerPC, ARM, HP and MIPS. Firmware is low-level code that runs on the raw processor. This firmware is CPU specific. Software runs on the firmware and is relatively independent of the underlying hardware.
Operating Systems and Software - Examples of embedded operating systems are Wind River's VxWorks, Microsoft Windows Embedded XP and Microsoft Windows CE. Examples of wireless operating systems are PalmOS for PDAs, Nokia's Symbian OS, Microsoft Windows Mobile and Microsoft Windows CE. Note how Windows CE is both embedded and compact, which makes it a potential choice for a light, portable, embedded and wireless real-time system. VxWorks and other embedded real-time operating systems have wireless security and Web service features in their middleware layers. Characteristics - To sum up the combined characteristics of embedded real-time wireless systems: they require a CPU with a reduced speed running an OS whose kernel takes up little memory when loaded. The OS implements wireless protocols at the data-link, network, transport, session and application layers, and supports an application development environment built for a limited device configuration. Such a system is autonomous and communicates with a variety of devices at each layer of communication.
Cost-driven system-level design
To effectively address these system-level design challenges, product developers need a unified approach that considers the costs of both software and hardware options. This approach, which we call cost-driven system-level design, converges hardware and software design efforts into a methodology that improves cost, cycle time, and quality, and enhances design space exploration. We have developed such a methodology at Georgia Tech's Center for Signal and Image Processing under the auspices of the US Defense Advanced Research Projects Agency's RASSP (Rapid Prototyping of Application-Specific Digital Signal Processors) program. Aimed at COTS-based embedded systems, the methodology uses parametric cost and development time estimation models to drive the design process. It seamlessly integrates a cost-driven architecture design engine (CADE) with a library-based co-simulation and co-verification environment for rapid prototyping. We use virtual prototypes to perform hierarchical design verification, with VHDL (VHSIC Hardware Description Language) software models of the hardware executing a representation of the application code. Figure 4 diagrams the overall process flow. Our research focuses on demonstrating how to implement the shaded process steps (system definition and architecture definition) using virtual prototyping in an automated environment. We believe that emphasizing cost-related issues benefits the cost-effectiveness of embedded micro-systems more in the early design stages than in the later stages. Figure 5, which depicts costs committed versus costs incurred over the product life cycle, illustrates the rationale for our belief. Although the front-end design process typically involves less than 10% of the total prototyping time and cost, it accounts for more than 80% of a system's life-cycle cost. For this reason, our research focuses on the front-end design process.
Our approach uses cost estimation models as well as performance estimation models to facilitate system-level design exploration early in the design cycle. We model the architecture selection process using mathematical programming formulations. We implement the models with commercial optimization packages, which efficiently solve complex problems, enabling the user to concentrate on problem-specific issues rather than data structures and implementation details. As output, CADE produces candidate architectures that we verify using VHDL performance-modeling technology.
b. Explain with an example the principle of priority inversion in interrupts in an embedded system.
Answer: In computer science, priority inversion is a problematic scenario in scheduling in which a higher priority task is indirectly preempted by a lower priority task, effectively "inverting" the relative priorities of the two tasks. This violates the priority model that high priority tasks can only be prevented from running by higher priority tasks, and briefly by low priority tasks which will quickly complete their use of a resource shared by the high and low priority tasks.
Example of a priority inversion: Consider a task L, with low priority, that requires a resource R. Now consider another task H, with high priority, that also requires resource R. If H starts after L has acquired R, then H has to wait until L relinquishes R. Everything works as expected up to this point, but problems arise when a new task M (which does not use R) starts with medium priority during this time. Since R is still in use (by L), H cannot run. Since M is the highest priority unblocked task, it will be scheduled before L. Since L has been preempted by M, L cannot relinquish R. So M will run till it is finished, then L will run - at least up to a point where it can relinquish R - and only then will H run. Thus, in the scenario above, a task with medium priority ran before a task with high priority, effectively giving us a priority inversion.
In some cases, priority inversion can occur without causing immediate harm: the delayed execution of the high priority task goes unnoticed, and eventually the low priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high priority task is left starved of the resource, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system.
The trouble experienced by the Mars lander Mars Pathfinder is a classic example of problems caused by priority inversion in real-time systems. Priority inversion can also reduce the perceived performance of the system. Low priority tasks usually have a low priority because it is not important for them to finish promptly (for example, they might be a batch job or another non-interactive activity). Similarly, a high priority task has a high priority because it is more likely to be subject to strict timing constraints.
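The L/M/H scenario above can be reproduced with a small discrete-time simulation in C. The task timings and work units are invented for this sketch; the scheduler simply runs the highest-priority runnable task each tick, and H blocks while L holds resource R:

```c
enum { T_L, T_M, T_H, NTASKS };

/* Fills 'trace' with one task letter per tick; returns the tick count. */
int simulate(char *trace)
{
    int remaining[NTASKS] = {4, 4, 2};   /* work units for L, M, H */
    int arrival[NTASKS]   = {0, 2, 1};   /* tick at which each task arrives */
    int priority[NTASKS]  = {1, 2, 3};   /* bigger = more urgent */
    const char name[NTASKS] = {'L', 'M', 'H'};
    int holder = -1;                     /* task holding R, or -1 if free */
    int n = 0;

    for (int t = 0; t < 16; t++) {
        int pick = -1;
        for (int i = 0; i < NTASKS; i++) {
            if (t < arrival[i] || remaining[i] == 0) continue;
            if (i == T_H && holder != -1 && holder != T_H)
                continue;                /* H needs R, which L still holds */
            if (pick < 0 || priority[i] > priority[pick]) pick = i;
        }
        if (pick < 0) break;             /* all tasks finished */
        if ((pick == T_L || pick == T_H) && holder == -1)
            holder = pick;               /* acquire R on first run */
        remaining[pick]--;
        trace[n++] = name[pick];
        if (remaining[pick] == 0 && holder == pick)
            holder = -1;                 /* release R at completion */
    }
    trace[n] = '\0';
    return n;
}
```

The resulting trace is "LLMMMMLLHH": medium-priority M completes before high-priority H, which is exactly the inversion described in the text.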
Optimizing the FSMD
The template-based procedure to convert a program into an FSMD may result in an inefficient FSMD, as it creates many unnecessary states. Scheduling is the task of assigning operations from the original program to states in an FSMD. The scheduling obtained using the template-based method (shown in the figure below) can be improved. Some states can be merged into one when there are no loop operations between them. Unwanted states whose outgoing transitions have constant values can be removed. The optimized (reduced) FSMD with only 6 states (down from 13 states) is shown in the figure. In deciding the number of states in an FSMD, the consequent hardware constraints must also be considered. For example, suppose a program statement has the operation a = b*c*d*e. Generating a single state for this operation would require three multipliers in the datapath, and multipliers are expensive. To avoid this, the operation can be broken down into smaller operations, t1 = b*c, t2 = d*e, and a = t1*t2, with each smaller operation having its own state. Then only one multiplier is needed in the datapath, since the multiplication operations can share it. While optimizing, time constraints must also be considered.
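The a = b*c*d*e decomposition above can be modeled in C as a small state machine with one multiplication per state, mirroring how a single multiplier would be reused on three successive cycles. This is an illustrative sketch, not synthesizable hardware:

```c
/* Three-state FSMD sketch computing a = b*c*d*e with one multiplication
 * per state, so the datapath needs a single shared multiplier. */
int fsmd_multiply(int b, int c, int d, int e)
{
    enum { S1, S2, S3, DONE } state = S1;
    int t1 = 0, t2 = 0, a = 0;

    while (state != DONE) {
        switch (state) {
        case S1: t1 = b * c;   state = S2;   break; /* multiplier, cycle 1 */
        case S2: t2 = d * e;   state = S3;   break; /* same unit, cycle 2 */
        case S3: a  = t1 * t2; state = DONE; break; /* same unit, cycle 3 */
        default: break;
        }
    }
    return a;
}
```

The trade-off is visible here: one state would finish in a single cycle but cost three multipliers, while three states finish in three cycles with one multiplier, which is why time constraints must be weighed against hardware cost.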
Optimizing the datapath
In this optimization process, redundancy of functional units can be reduced by sharing them. Given a number of RT components available for the datapath, allocation is the task of choosing which RT components to use in the datapath. Binding is the task of mapping operations from the FSMD to the allocated components. Scheduling, allocation, and binding are highly interdependent; sometimes these tasks may have to be considered simultaneously.
Optimizing the FSM
This is done through state encoding and state minimization. State encoding is the task of assigning a unique bit pattern to each state in an FSM. CAD tools can be of great aid in searching for the best encoding, which decides the size of the state register and the size of the combinational logic. State minimization is the task of merging equivalent states into a single state. Two states are equivalent if, for all possible input combinations, they generate the same outputs and transition to the same next state.
Superscalar architectures allow several instructions to be issued and completed per clock cycle. A superscalar architecture consists of a number of pipelines that are working in parallel. Depending on the number and kind of parallel units available, a certain number of instructions can be executed in parallel. In the following example a floating point and two integer operations can be issued and executed simultaneously; each unit is pipelined and can execute several operations in different pipeline stages.
Limitations on Parallel Execution
The situations which prevent instructions from being executed in parallel by a superscalar architecture are very similar to those which prevent efficient execution on any pipelined architecture. The consequences of these situations on superscalar architectures are more severe than those on simple pipelines, because the potential for parallelism in superscalars is greater and, thus, a greater opportunity is lost. Three categories of limitations have to be considered:
o Resource conflicts: They occur if two or more instructions compete for the same resource (register, memory, functional unit) at the same time; they are similar to the structural hazards discussed with pipelines. By introducing several parallel pipelined units, superscalar architectures try to reduce some of the possible resource conflicts.
o Control (procedural) dependency: The presence of branches creates major problems in assuring optimal parallelism. If instructions are of variable length, they cannot be fetched and issued in parallel, since an instruction has to be decoded before the next one can be located; parallel fetch and issue is therefore most readily applicable to RISCs, with their fixed instruction length and format.
o Data conflicts: Data conflicts are produced by data dependencies between instructions in the program. Because superscalar architectures provide great freedom in the order in which instructions can be issued and completed, data dependencies have to be considered with much attention.
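The data-conflict check a superscalar issue stage must perform can be sketched in C: two instructions may issue together only if there is no RAW, WAR, or WAW dependence between them. The register numbering and struct layout below are invented for illustration:

```c
#include <stdbool.h>

/* A simplified three-operand instruction: dest = src1 op src2.
 * Registers are coded as small integers; -1 means "not used". */
struct instr { int dest; int src1; int src2; };

/* True if the two instructions have no data dependence and could
 * therefore be issued in the same cycle. */
bool can_issue_parallel(struct instr a, struct instr b)
{
    /* RAW / WAR: one instruction's destination is the other's source */
    if (a.dest != -1 && (a.dest == b.src1 || a.dest == b.src2)) return false;
    if (b.dest != -1 && (b.dest == a.src1 || b.dest == a.src2)) return false;
    /* WAW: both write the same register */
    if (a.dest != -1 && a.dest == b.dest) return false;
    return true;
}
```

Real issue logic also renames registers to remove the WAR and WAW (false) dependencies, leaving only true RAW dependencies to serialize execution.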
RS-232 is one of the simplest forms of microcontroller networking, commonly known simply as serial communications. As you can see in Figure 1.2, RS-232 was designed to tie DTE (Data Terminal Equipment) and DCE (Data Communications Equipment) devices together electronically to effect bidirectional data communications between the devices. An example of a DTE device is the serial port on your personal computer. Under normal conditions, the DTE interface on your personal computer asserts DTR (Data Terminal Ready) and RTS (Request To Send). DTR and RTS are called modem control signals. A typical DCE device interface responds to the assertion of DTR by activating a signal called DSR (Data Set Ready). The DTE RTS signal is answered by CTS (Clear To Send) from the DCE device. A standard external modem that you would connect to your personal computer serial port is a perfect example of a DCE device. Let's look at the signals from a commented standards point of view.
1. Pin 1 (Protective Ground Circuit, AA). This conductor is bonded to the equipment frame and can be connected to external grounds if other regulations or applications require it. Comment: Normally, this is either left open or connected to the signal ground. This signal is not found in the DTE 9-pin serial connector.
2. Pin 2 (Transmitted Data Circuit BA, TD). This is the data signal generated by the DTE. The serial bit stream from this pin is the data that's ultimately processed by a DCE device. Comment: This is pin 3 on the DTE 9-pin serial connector. This is one of the three minimum signals required to effect an RS-232 asynchronous communications session.
3. Pin 3 (Received Data Circuit BB, RD). Signals on this circuit are generated by the DCE. The serial bit stream originates at a remote DTE device and is a product of the receive circuitry of the local DCE device. This is usually digital data that's produced by an intelligent DCE or modem demodulator circuitry.
Comment: This is pin 2 on the DTE 9-pin serial connector. This is another of the three minimum signals required to effect an RS-232 asynchronous communications session.
4. Pin 4 (Request To Send Circuit CA, RTS). This signal prepares the DCE device for a transmit operation. The RTS ON condition puts the DCE in transmit mode, while the OFF condition places the DCE in receive mode. The DCE should respond to an RTS ON by turning ON Clear To Send (CTS). Once RTS is turned OFF, it shouldn't be turned ON again until CTS has been turned OFF. This signal is used in conjunction with DTR, DSR and DCD. RTS is used extensively in flow control. Comment: This is pin 7 on the DTE 9-pin serial connector. In simple 3-wire implementations this signal is left disconnected. Sometimes you will see this signal tied to the CTS signal to satisfy a need for RTS and CTS to be active signals in the communications session. You will also see RTS feed CTS in a null modem arrangement.
5. Pin 5 (Clear To Send Circuit CB, CTS). This signal acknowledges the DTE when RTS has been sensed by the DCE device and usually signals the DTE that the DCE is ready to accept data to be transmitted. Data is then transmitted by the DTE while CTS remains ON.
g. The control program transmits or receives data. To perform RS-232 asynchronous communications with microcontrollers, we must employ a voltage translation scheme of our own.
5. Explain cache direct mapping, fully associative and set-associative mapping techniques.
Answer:
Mapping Memory Lines to Cache Lines - Three Strategies
As a working example, suppose the cache has 2^7 = 128 lines, each with 2^4 = 16 words. Suppose the memory has a 16-bit address, so that 2^16 = 64K words are in the memory's address space.
Direct Mapping Under this mapping scheme, each memory line j maps to cache line j mod 128 so the memory address looks like this:
Here, the "Word" field selects one from among the 16 addressable words in a line. The "Line" field defines the cache line where this memory line should reside. The "Tag" field of the address is then compared with that cache line's 5-bit tag to determine whether there is a hit or a miss. If there's a miss, we need to swap out the memory line that occupies that position in the cache and replace it with the desired memory line. E.g., Suppose we want to read or write a word at the address 357A, whose 16 bits are 0011010101111010. This translates to Tag = 6, line = 87, and Word = 10 (all in decimal). If line 87 in the cache has the same tag (6), then memory address 357A is in the cache. Otherwise, a miss has occurred and the contents of cache line 87 must be replaced by the memory line 001101010111 = 855 before the read or write is executed. Direct mapping is the most efficient cache mapping scheme, but it is also the least effective in its utilization of the cache - that is, it may leave some cache lines unused.
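The address splits used in this answer can be verified with a few shifts and masks. A minimal C sketch covering both the direct-mapped split above and the 2-way set-associative split discussed later in this answer (the function names are invented for illustration):

```c
/* Direct-mapped split for the working example: a 16-bit address divided
 * into a 5-bit tag, 7-bit line (128 lines), and 4-bit word field. */
void decode_direct(unsigned addr, unsigned *tag, unsigned *line, unsigned *word)
{
    *word = addr & 0xF;          /* low 4 bits: word within the line   */
    *line = (addr >> 4) & 0x7F;  /* next 7 bits: cache line (j mod 128)*/
    *tag  = (addr >> 11) & 0x1F; /* top 5 bits: tag                    */
}

/* 2-way set-associative split: a 6-bit tag, 6-bit set (64 sets of two
 * lines each), and 4-bit word field. */
void decode_setassoc(unsigned addr, unsigned *tag, unsigned *set, unsigned *word)
{
    *word = addr & 0xF;
    *set  = (addr >> 4) & 0x3F;  /* next 6 bits: cache set (j mod 64)  */
    *tag  = (addr >> 10) & 0x3F; /* top 6 bits: tag                    */
}
```

For address 357A this reproduces the worked figures: Tag = 6, Line = 87, Word = 10 under direct mapping, and Tag = 13, Set = 23, Word = 10 under 2-way set-associative mapping.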
Fully Associative Mapping
Here, the "Tag" field identifies one of the 2^12 = 4096 memory lines; all the cache tags are searched to find out whether or not the Tag field matches one of the cache tags. If so, we have a hit; if not, there's a miss and we need to replace one of the cache lines with this line before reading or writing into the cache. (The "Word" field again selects one from among the 16 addressable words within the line.) For example, suppose again that we want to read or write a word at the address 357A, whose 16 bits are 0011010101111010. Under associative mapping, this translates to Tag = 855 and Word = 10 (in decimal). So we search all of the 128 cache tags to see if any one of them matches 855. If not, there's a miss and we need to replace one of the cache lines with line 855 from memory before completing the read or write. The search of all 128 tags in the cache is time-consuming. However, the cache is fully utilized, since none of its lines will be unused prior to a miss (recall that direct mapping may detect a miss even though the cache is not completely full of active lines).
Set-associative Mapping
This scheme is a compromise between the direct and associative schemes described above. Here, the cache is divided into sets of tags, and the set number is directly mapped from the memory address (e.g., memory line j is mapped to cache set j mod 64), as suggested by the diagram below. The memory address is now partitioned like this:
Here, the "Tag" field identifies one of the 2^6 = 64 different memory lines that map to each of the 2^6 = 64 different "Set" values. Since each cache set has room for only two lines at a time, the search for a match is limited to those two lines (rather than the entire cache). If there's a match, we have a hit and the read or write can proceed immediately. Otherwise, there's a miss and we need to replace one of the two cache lines with this line before reading or writing into the cache. (The "Word" field again selects one from among the 16 addressable words inside the line.) In set-associative mapping, when the number of lines per set is n, the mapping is called n-way associative. For instance, the above example is 2-way associative. E.g., again suppose we want to read or write a word at the memory address 357A, whose 16 bits are 0011010101111010. Under set-associative mapping, this translates to Tag = 13, Set = 23, and Word = 10 (all in decimal). So we search only the two tags in cache set 23 to see if either one matches tag 13. If so, we have a hit. Otherwise, one of these two must be replaced by the memory line being addressed (good old line 855) before the read or write can be executed.
6. a. Explain the flow of actions in a peripheral-to-memory transfer with DMA in an embedded system. Give its advantages over the transfer taking place with vectored interrupts.
Embedded Systems research and development is now concerned with a very large proportion of the advanced products designed in the world. In one sense, Embedded technologies run the global transport industry, which includes avionics, space, automotive, and trains. But it is the electrical and electronic appliances like cameras, toys, televisions, home appliances, audio systems, and cellular phones that really are the visible face of Embedded Systems for the common consumer. Advanced Embedded Technologies are deployed in developing
Process Controls (energy production and distribution, factory automation and optimization) Telecommunications (satellites, mobile phones and telecom networks), Energy management (production, distribution, and optimized use) Security (e-commerce, smart cards) Health (hospital equipment, and mobile monitoring)
In the last few years the emphasis of Embedded technologies was on achieving feasibility, but now the trend is towards achieving optimality. Optimality or optimal design of embedded systems means
Targeting a given market segment at the lowest cost and delivery time possible Seamless integration with the physical and electronic environment Understanding the real-world constraints such as hard deadlines, reliability, availability, robustness, power consumption, and cost
VECTOR Institute gives students broad exposure to Embedded technologies by having them work on real-time and multi-domain Embedded projects. Automobile sector The automobile sector has been at the forefront of acquiring and utilizing Embedded technology to produce highly efficient electric motors. These include brushless DC motors, induction motors and DC motors that use electric/electronic motor controllers. The European automotive industry holds a prominent place in utilizing Embedded technology to achieve better engine control, and has adopted recent Embedded innovations such as brake-by-wire and drive-by-wire. Embedded technology is of immediate importance in electric and hybrid vehicles, where Embedded applications bring greater efficiency and reduced pollution. Embedded technology has also helped in developing automotive safety systems such as:
Anti-lock braking system (ABS)
Electronic Stability Control (ESC/ESP)
Traction control (TCS)
Automatic four-wheel drive
VECTOR Institute has endeared itself to the Automotive industry by providing quality Embedded personnel. Aerospace & Avionics Aerospace and Avionics demand a complex mixture of hardware, electronics, and embedded software. To work efficiently, the hardware, electronics and embedded software must interact with many other entities and systems. Embedded engineers confront major challenges:
Creating Embedded systems on time
Taking budgetary constraints into consideration
Ensuring that the complex software and hardware interactions are right
Assembling components that meet specifications and perform effectively together
VECTOR Institute prepares embedded students for the challenges associated with the Aerospace and Avionics industry. Telecommunications If ever there is an industry that has reaped the benefits of Embedded technology, it is Telecommunications. The Telecom industry utilizes numerous embedded systems, from telephone switches in the network to mobile phones at the end-user. The Telecom computer network also uses dedicated routers and network bridges to route data. Embedded engineers help ensure high-speed networking, the most critical part of these embedded applications. Ethernet switches and network interfaces are designed to provide the necessary bandwidth, allowing Ethernet connections to be rapidly incorporated into advanced Embedded applications. VECTOR Institute gives Embedded students exposure to a broad range of application types, from high-availability telecom and networking applications to rugged industrial and military environments. We prepare Embedded students for the challenges associated with the Telecom industry. Consumer Electronics Consumer electronics has also benefited greatly from Embedded technologies. Consumer electronics includes
Personal Digital Assistants (PDAs)
MP3 players
Mobile phones
Videogame consoles
Digital cameras
DVD players
GPS receivers
Printers
Even household appliances, including microwave ovens, washing machines and dishwashers, now include embedded systems to provide flexibility, efficiency and features. The latest Embedded applications include advanced HVAC systems that use networked thermostats to control temperature more accurately and efficiently. Home automation solutions are increasingly being built on embedded technologies; home automation includes wired and wireless networking to control lights, climate, security, audio/visual equipment, surveillance, etc., all of which use embedded devices for sensing and control. VECTOR Institute prepares embedded students for the challenges associated with the Consumer Electronics industry. Railroad Railroad signalling in Europe relies heavily on embedded systems that allow faster, safer and heavier traffic. Embedded technology has brought a sea change in the way railroad signals are managed and large volumes of rail traffic are streamlined. Embedded-technology-enabled railroad safety equipment is increasingly being adopted by railway networks across the globe, with far fewer rail disasters to report. VECTOR Institute prepares embedded students for the challenges associated with the Railroad industry. Electronic payment solutions sector There is currently stiff competition among embedded solutions providers to deliver innovative, high-performance electronic payment solutions that are easy to use and highly secure. Embedded engineers versed in trusted proprietary technology develop the secure, encrypted transactions between payment systems and major financial institutions.
Answer: Multiple peripherals might request service from a single resource. For example, multiple peripherals might share a single microprocessor that services their interrupt requests. As another example, multiple peripherals might share a single DMA controller that services their DMA requests. In such situations, two or more peripherals may request service simultaneously. We therefore must have some method to arbitrate among these contending requests, i.e., to decide which one of the contending peripherals gets service, and thus which peripherals need to wait. Several methods exist:
1. Priority arbiter
2. Daisy-chain arbitration
3. Network-oriented arbitration methods
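A fixed-priority arbiter (the first method listed) can be sketched in a few lines of C. This is an illustrative model, not from the text: each bit of a request register stands for one peripheral's request line, and the lowest-numbered set bit wins.

```c
#include <stdint.h>

/* Minimal sketch of a fixed-priority arbiter (an illustrative assumption):
   bit i of `requests` is peripheral i's request line, and the
   lowest-numbered requesting peripheral has the highest priority.
   Returns the winning peripheral's index, or -1 if nobody is requesting. */
static int arbitrate(uint8_t requests) {
    for (int i = 0; i < 8; i++)
        if (requests & (1u << i))
            return i;   /* peripheral i wins the arbitration */
    return -1;          /* no pending requests */
}
```

In hardware this same behavior is implemented combinationally as a priority encoder; a daisy chain instead propagates a grant signal through the peripherals in priority order.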
Network-oriented arbitration methods The arbitration methods described above are typically used to arbitrate among peripherals in an embedded system. However, many embedded systems contain multiple microprocessors communicating via a shared bus; such a bus is sometimes called a network. Arbitration in such cases is typically built right into the bus protocol, since the bus serves as the only connection among the microprocessors. A key feature of such a connection is that a processor about to write to the bus has no way of knowing whether another processor is about to simultaneously write to the bus. Because of the relatively long wires and high capacitances of such buses, a processor may write many bits of data before those bits appear at another processor. For example, Ethernet and I2C use a method in which multiple processors may write to the bus simultaneously, resulting in a collision and causing any data on the bus to be corrupted. The processors detect this collision, stop transmitting their data, wait for some time, and then try transmitting again. The protocols must ensure that the contending processors don't start sending again at the same time, or must at least use statistical methods that make the chances of them sending again at the same time small. As another example, the CAN bus uses a clever address encoding scheme such that if two addresses are written simultaneously by different processors using the bus, the higher-priority address will override the lower-priority one. Each processor that is writing to the bus also checks the bus, and if the address it is writing does not appear, then that processor realizes that a higher-priority transfer is taking place, and so that processor stops writing to the bus. ii. Error detection and correction
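The CAN scheme above can be modeled in C as a wired-AND of the contending identifiers, sent MSB-first. This is a simplified sketch (names and the 11-bit identifier width are assumptions): a dominant 0 overrides a recessive 1 on the bus, so the node with the numerically lowest identifier survives arbitration without any data being corrupted.

```c
#include <stdint.h>

/* Simplified model of CAN bitwise arbitration (illustrative, up to 8 nodes).
   The bus is a wired-AND: a dominant 0 overrides a recessive 1.  Each node
   sends its 11-bit identifier MSB-first and backs off as soon as it reads a
   dominant bit while it is sending a recessive one.  Returns the index of
   the winning node (the one with the lowest identifier), or -1 if n == 0. */
static int can_arbitrate(const uint16_t *ids, int n) {
    int alive[8];                               /* nodes still contending */
    for (int i = 0; i < n; i++) alive[i] = 1;

    for (int bit = 10; bit >= 0; bit--) {       /* 11-bit ID, MSB first   */
        int bus = 1;                            /* recessive by default   */
        for (int i = 0; i < n; i++)             /* wired-AND of senders   */
            if (alive[i] && !((ids[i] >> bit) & 1))
                bus = 0;                        /* someone drove dominant */
        for (int i = 0; i < n; i++)             /* recessive senders that */
            if (alive[i] && ((ids[i] >> bit) & 1) && bus == 0)
                alive[i] = 0;                   /* see dominant back off  */
    }
    for (int i = 0; i < n; i++)
        if (alive[i]) return i;                 /* index of the winner    */
    return -1;
}
```

Note how arbitration is non-destructive: the winner's identifier appears on the bus bit-for-bit, unlike an Ethernet collision, which corrupts both frames and forces a retry.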
Answer: Error detection and correction (or error control) are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow such errors to be detected, while error correction enables reconstruction of the original data. Regardless of the design of the transmission system, there will be errors, resulting in the change of one or more bits in a transmitted frame. When a code word is transmitted, one or more of the transmitted bits may be reversed due to transmission impairments, thus introducing errors. It is possible to detect these errors if the
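A single parity bit is the simplest error-detection code: the sender appends one bit so that the total number of 1s in the code word is even, and the receiver recomputes the parity to check for a flipped bit. A minimal sketch in C (the function name is illustrative):

```c
#include <stdint.h>

/* Even-parity computation over one data byte (illustrative sketch).
   The sender appends this bit so the total count of 1s is even; the
   receiver recomputes it over the received byte and compares.  A single
   flipped bit changes the parity and is detected; an even number of
   flipped bits is not, which is why stronger codes (CRC, Hamming) are
   used in practice. */
static unsigned parity(uint8_t byte) {
    unsigned p = 0;
    while (byte) {
        p ^= byte & 1;  /* XOR-accumulate each bit */
        byte >>= 1;
    }
    return p;           /* 1 if the byte has an odd number of 1s */
}
```

Error-correcting codes such as Hamming codes extend this idea, using several parity bits over overlapping subsets of the data bits so the receiver can locate, and therefore correct, a single flipped bit.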