Dinuka Wijesinghe
B00538290
In other words, there are no surprises to a user of average sophistication. Deterministic systems have instructions whose behavior does not vary; in other words, the execution speed of an instruction does not depend on the execution history of the program. For real-time control with hard deadlines, a designer must be able to predict with absolute certainty the running time of the piece of code responding to an external event. If a system has variable-performance elements, such as cache memory, the designer must be extremely pessimistic about the performance of these features and plan on the worst case.

In some systems, it is important to determine exactly the time for a sequence of instructions to execute, with no possibility for variation allowed. Processors with extremely consistent execution speeds can use software to reduce system complexity by replacing hardware such as UART chips, disk controller chips, or video memory sequencers with programmed transfers using carefully timed code.

Since program memory space may be extremely limited, programs may be highly compacted by using subroutine calls to reuse common code sequences. In fact, many embedded applications use threaded languages such as Forth because they produce exceedingly compact code. This suggests that an embedded processor should have efficient support for subroutine calls as a means of conserving program memory.

Inappropriate Architectural Features

There are many reasons that an architectural feature is included in a processor. Today, a feature is usually added in order to support "general purpose" computing, which usually means engineering workstation application programs. The problem is that this definition of "general purpose" is often at odds with the requirements of the systems we have just described. The list of architectural features that are inappropriate for hard real-time embedded control systems reads like a catalogue of modern processor design techniques. They are listed here along with a brief explanation of the problems they can cause. The focus here is on determinism and predictability, although the reader should not forget that most of these features contribute substantially to system complexity as well.

Cache memory: Data and instruction cache memories are the biggest sources of unpredictability and indeterminacy. Modern compiler technology is just beginning to address the issue of scheduling instruction cache misses. Furthermore, it is impossible to characterize data cache misses for programs of interesting complexity. Because the time required to process a cache miss is approximately an order of magnitude longer than the time required for a cache hit, significant execution speed degradation takes place with even a small percentage of cache misses. Figure 1 shows relative execution time variations that may take place with varying numbers of cache misses. In this five-instruction example, assuming that a cache miss costs five clock cycles, execution time may vary between 5 and 25 clock cycles for the same code. In hard real-time systems, the worst case of all cache misses must often be assumed for time-critical sections, which can result in designing hardware with only fast static memory chips that render the cache management hardware superfluous. On-chip caches also add to system debugging concerns because they are often not visible outside the chip. Fast instruction fetching in turn requires either the use of cache memory or a significant expense in making all program memory out of cache-grade memory chips for guaranteed hard real-time system performance.

Variable-length execution times of instructions: Many instructions can take a variable number of clock cycles to execute, often depending on the data input to the instruction. An example of such an instruction is a multiply routine that has an "early-out" feature.
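The "early-out" multiply just mentioned can be sketched to show why instruction timing becomes data-dependent. This is a toy software model (not any particular processor's multiplier), assuming one cycle per multiplier bit examined:

```python
# Data-dependent timing sketch: a shift-and-add multiply with an
# "early-out" -- the loop stops as soon as no multiplier bits remain,
# so the cycle count depends on the operand value.
def early_out_multiply(a, b):
    """Return (product, cycles); cycles grows with the bit-length of b."""
    product, cycles = 0, 0
    while b:                      # early out: done once b is exhausted
        if b & 1:
            product += a          # add the shifted multiplicand
        a <<= 1
        b >>= 1
        cycles += 1
    return product, cycles

print(early_out_multiply(7, 1))    # (7, 1)  -- fast for a small multiplier
print(early_out_multiply(7, 255))  # (1785, 8) -- slower for a large one
```

The same machine instruction thus takes 1 cycle or 8 cycles depending purely on its input data, which is exactly what a hard real-time designer must budget around.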
Write buffers: A write buffer queues memory writes so that the processor can continue executing, with the queued writes retired during spare bus cycles. If no spare bus cycles are forthcoming, the processor must be stalled when the write buffer overflows. Additional stalls can be caused if a memory read could possibly correspond to a memory location that has yet to be updated by the write buffer. Interaction between the write buffer and cache misses for instruction and data fetches can cause indeterminacy in program execution.

Branch target buffers: This is a special case of an instruction cache, in which past program execution history is used to cache instructions at branch targets for quick access. This type of instruction cache operates in a manner dependent on input data, and so is beyond the ability of compilers to manage effectively. Branch target buffers are sometimes used in conjunction with branch prediction strategies, in which the compiler encodes a guess as to whether a particular branch is likely to be taken into the instruction itself, causing either the branch target or the next consecutive instruction to be fetched before the outcome of the branch is resolved. The actual time to perform a branch then depends on whether the compiler guessed the branch outcome correctly, and whether the branch target buffer value corresponds to the guess made by the compiler.

Pre-fetch queues: Pre-fetch queues greatly affect the predictability of execution because the execution time for an instruction depends on whether or not the preceding instructions were slow enough to allow the pre-fetch queue to accumulate new instructions. Thus, determining whether a particular instruction will execute quickly from the pre-fetch queue or slowly from program memory requires cycle counting of several preceding instructions to see if spare memory cycles would have been available for the pre-fetch queue to fill. This cycle counting is subtly affected by data-dependent execution paths through the program, as well as the number of wait states or cache misses in program memory. For example, if there is latency for filling an empty pre-fetch queue, adding a one-cycle wait state that causes the pre-fetch queue to be emptied may add more than one clock cycle to program execution time.

Pipelines: A deep instruction pipeline increases interrupt response latency. Even if the first instruction of an interrupt service routine can be inserted into the pipeline within one clock of the interrupt being asserted, it still takes several clock cycles for the instruction to pass through the pipeline and actually do its job. A pipeline also requires some sort of handling of data dependencies and delays for memory access, which results in compiler-generated no-ops, instruction rearrangement, or hardware-generated pipeline stalls. All of these dependency-resolution techniques decrease predictability, determinacy, or both.

Pure load/store architectures: Load/store RISC architectures by their very nature make no provision for atomic read/modify/write instructions. This generally requires the addition of external hardware for implementing semaphores, locks, and other inter-process communication devices, increasing system complexity.

Scoreboards: Scoreboards attempt to speed up program execution by allowing several instructions to be in execution concurrently. Scoreboarding usually implies out-of-order execution, which may make it impossible to determine the correct system state in the event of an exception or interrupt (the term usually used here is that the system has imprecise interrupts). Furthermore, the execution time of an instruction depends on the resources being used by the previous several instructions, and can vary considerably.

Superscalar instruction issue: Newer-generation microprocessors are beginning to implement "superscalar" instruction execution, in which multiple instructions may be issued in a single clock cycle. Unfortunately, the number of instructions that may be issued depends on the types of the instructions, the available hardware resources, and the execution history of the program. All these factors make it very difficult to determine any single instruction's execution time.

Dependence on sophisticated compiler technology: Most RISC designs depend implicitly on complex and ambitious optimizing compilers for fast operating speeds. This tradeoff between hardware and software complexity is part of the RISC design philosophy. The problem is that optimizing compilers make it difficult to establish a correspondence between source code and the code that actually executes, because of the large number of program transformations performed. This complicates program debugging, and decouples the programmer's source code changes from their effects on final program performance, decreasing predictability. A surprising side effect of the complexity of these compilers is that they add a level of indeterminacy in the mapping of source to object code. The use of heuristics for block-level and global optimization techniques can actually cause the same exact sequence of source code statements to generate significantly different object code in two places of the same program module (Razouk et al. 1986).

Large register file: The use of a large number of registers allows programs to execute quickly, but greatly increases the amount of information (the machine state) that must be saved on context switches and interrupts. This adversely affects responsiveness to external events.

Virtual memory: The use of virtual memory implies the use of a cache memory to perform address translation (a Translation Lookaside Buffer), which has the same problems as other caches. If a disk-paging system is used, the problem of speed degradation can become much worse than for simple cache misses.

Use of DRAM access technology: Some processor implementations exploit special access techniques to DRAM chips in order to achieve high average throughput at a low system cost. The use of static-column, paged, and video DRAMs creates real problems in predictability because access to these DRAMs becomes much slower whenever page boundaries must be crossed. The use of DRAM memory chips also requires performing memory refresh, which can interact with other accesses to cause performance degradations that are worse than expected.

Autonomous I/O processors: The use of autonomous I/O processors and DMA hardware that steal bus cycles can cause non-determinism as the processor is stalled for bus accesses. Consequently, it is sometimes desirable to perform I/O directly from the CPU.

The Challenge to Processor Designers

The above list of architectural characteristics contains elements of design found in virtually every RISC or CISC processor made today. These characteristics are truly valuable for speeding up average system performance, especially in workstation environments. The problem is that hard real-time embedded control applications do not have the same characteristics and requirements as workstation-based engineering design applications, and so these characteristics actually hurt system performance. Therefore, what is needed are new CPU designs that make tradeoffs in favor of the requirements of real-time embedded control, even at the expense of performance on workstation-type applications.
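The execution-time spread that cache misses introduce, described earlier for the Figure 1 example, can be reproduced with a back-of-the-envelope calculation. This sketch assumes the costs used in the text: one clock cycle per cache hit and five per miss:

```python
# Best-case vs. worst-case timing for a short instruction sequence,
# mirroring the five-instruction example in the text: a cache hit
# costs 1 clock cycle, a cache miss costs 5.
HIT_CYCLES = 1
MISS_CYCLES = 5

def execution_cycles(misses, instructions=5):
    """Total cycles when `misses` of the `instructions` fetches miss the cache."""
    hits = instructions - misses
    return hits * HIT_CYCLES + misses * MISS_CYCLES

best = execution_cycles(0)    # all hits
worst = execution_cycles(5)   # all misses -- the figure a hard real-time
                              # designer must budget for
print(best, worst)            # 5 25
```

The same five instructions can take anywhere from 5 to 25 cycles, which is why a time-critical section must be budgeted at the all-miss worst case.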
The most common operating systems for personal computers include Windows from Microsoft, OS X from Apple, and the wide variety of Linux variants that can be obtained from their respective developers. What most people do not know are real-time operating systems, generally referred to by the acronym RTOS. These are operating systems used for more specialized applications that demand a response as close to real time as possible. The most significant difference between the two is in how they approach each task. Standard operating systems focus on doing as much computation as possible in the shortest span of time, while RTOSs emphasize having a predictable response time.

Standard operating systems are widely used nowadays, partly due to the rapid spread of personal computers, and devices other than computers and laptops that use them are also beginning to appear. RTOSs are used in more specialized fields where the response time is much more important than the ability to process a huge number of instructions within a given time span. Examples are computers that scan levels and states in a facility; it is important that these monitors see changes at the instant they occur.

Most operating systems use a time-sharing architecture in which each task is assigned a small slice of time to execute its instructions before the processor switches to another task. The switching is so fast that it often appears as real time to users. Some RTOSs also use this design, but with a much lower density of tasks to ensure that the processor never gets too loaded, which could increase the response time. Another design used for an RTOS is an event-driven architecture; in this design, the system only switches tasks once an event or interrupt occurs.

Coding practices for an RTOS are much stricter than for a standard OS, as the code needs to perform consistently all the time. Standard OSs are not as concerned, since response time is not of great importance in their applications.
An RTOS is not required for an embedded system, but it can offer powerful advantages to the system developer. Without an RTOS, the developer must write his own code to handle all of these functions. A well-architected RTOS will handle these functions much more efficiently than a programmer could by writing the code himself; RTOS developers are experts in handling operations with a minimum of processor cycles.
An RTOS:
• Enables real-time, deterministic scheduling and task prioritization
• Abstracts away the complexities of the processor
• Provides a solid infrastructure constructed of rules and policies
• Simplifies development and improves developer productivity
• Integrates and manages resources needed by communications stacks and middleware
• Optimizes use of system resources
• Improves product reliability, maintainability and quality
• Promotes product evolution and scaling

There are many questions to ask when weighing a GPOS (General Purpose Operating System) against an RTOS:
• What does an RTOS have that a GPOS does not?
• How useful are the real-time extensions now available for some GPOSs?
• Can such extensions provide a reasonable facsimile of RTOS performance?

Task scheduling

Let's begin with task scheduling. In a GPOS, the scheduler typically uses a fairness policy to dispatch threads and processes onto the CPU. Such a policy enables the high overall throughput required by desktop and server applications, but offers no guarantees that high-priority, time-critical threads will execute in preference to lower-priority threads. For instance, a GPOS may decay the priority assigned to a high-priority thread, or otherwise dynamically adjust the priority in the interest of fairness to other threads in the system. A high-priority thread can, as a consequence, be pre-empted by threads of lower priority. In addition, most GPOSs have unbounded dispatch latencies: the more threads in the system, the longer it takes for the GPOS to schedule a thread for execution. Any one of these factors can cause a high-priority thread to miss its deadlines – even on a fast CPU.

In an RTOS, on the other hand, threads execute in order of their priority. If a high-priority thread becomes ready to run, it will, within a small and bounded time interval, take over the CPU from any lower-priority thread that may be executing. Moreover, the high-priority thread can run uninterrupted until it has finished what it needs to do – unless, of course, it is pre-empted by an even higher-priority thread. This approach, known as priority-based pre-emptive scheduling, allows high-priority threads to meet their deadlines consistently, no matter how many other threads are competing for CPU time.

Pre-emptible kernels

For most GPOSs, the OS kernel is not pre-emptible. Consequently, a high-priority user thread can never pre-empt a kernel call, but must instead wait for the entire call to complete – even if the call was invoked by the lowest-priority process in the system. Moreover, all priority information is usually lost when a driver or other system service, usually performed in a kernel call, executes on behalf of a client thread. Such behaviour causes unpredictable delays and prevents critical activities from completing on time. In an RTOS, on the other hand, kernel operations are pre-emptible. There are still windows of time in which pre-emption may not occur, but in a well-designed RTOS those intervals are extremely brief, often on the order of hundreds of nanoseconds. Moreover, the RTOS will impose an upper bound on how long pre-emption is held off and interrupts disabled; this allows developers to ascertain worst-case latencies.

To achieve this goal, the RTOS kernel must be simple and as elegant as possible. Only services with a short execution path should be included in the kernel itself. Any operations that require significant work (for instance, process loading) must be assigned to external processes or threads. Such an approach helps ensure that there is an upper bound on the longest non-pre-emptible code path through the kernel. In a few GPOSs, such as Linux 2.6, some degree of pre-emptibility has been added to the kernel. However, the intervals during which pre-emption may not occur are still much longer than those in a typical RTOS; the length of any such pre-emption interval will depend on the longest critical section of any modules incorporated into the kernel (for example, networking and file systems). Moreover, a pre-emptible kernel does not address other conditions that can impose unbounded latencies, such as the loss of priority information that occurs when a client invokes a driver or other system service.

Avoiding priority inversion

Even in an RTOS, a lower-priority thread can inadvertently prevent a higher-priority thread from accessing the CPU – a condition known as priority inversion. Generally speaking, priority inversion occurs when two tasks of differing priority share a resource, and the higher-priority task cannot obtain the resource from the lower-priority task. To prevent this condition from exceeding a fixed and bounded interval of time, an RTOS may provide a choice of mechanisms, including priority inheritance and priority ceiling emulation. We could not possibly do justice to both mechanisms here, so let us focus on a simple example of priority inheritance. To begin, we first must look at the blocking that occurs from synchronization in systems, and how priority inversion can occur as a result. Let us say two jobs are running, and Job 1 has the higher priority. If Job 1 is ready to execute, but must wait for Job 2 to complete an activity, we have blocking. This blocking may occur as a result of synchronization – waiting for a shared resource controlled by a lock or a semaphore – or as a result of requesting a kernel service. The blocking allows Job 2 to run until the condition that Job 1 is waiting for occurs (for instance, Job 2 unlocks the resource that both jobs share). At that point, Job 1 gets to execute. The total time that Job 1 must wait may vary, with a minimum, average, and maximum time. This interval is known as the blocking factor. If Job 1 is to meet any of its timeliness constraints, this factor cannot vary according to any parameter, such as the number of threads or an input into the system. In other words, the blocking factor must be bounded. Now let us introduce a third job that has a higher priority than Job 2 but a lower priority than Job 1 (Figure 1). If Job 3 becomes ready to run while Job 2 is executing, it will pre-empt Job 2, and Job 2 will not be able to run again until Job 3 blocks or completes. This will, of course, increase the blocking factor of Job 1; that is, it will further delay Job 1 from executing. The total delay introduced by the pre-emption is a priority inversion. In fact, multiple jobs can pre-empt Job 2 in this way, resulting in an effect known as chain blocking. Under these circumstances, Job 2 might be pre-empted for an indefinite period of time, yielding an unbounded priority inversion and causing Job 1 to fail to meet any of its timeliness constraints. This is where priority inheritance comes in. If we return to our scenario and make Job 2 run at the priority of Job 1 during the synchronization period, then Job 3 will not be able to pre-empt Job 2, and the resulting priority inversion is avoided (Figure 2).

Duelling kernels

GPOSs – Linux, Windows, and various flavors of UNIX – typically lack the mechanisms we have just discussed. Nonetheless, vendors have developed a number of real-time extensions and patches in an attempt to fill the gap. There is, for example, the dual-kernel approach, in which the GPOS runs as a task on top of a dedicated real-time kernel (Figure 3). Any tasks that require deterministic scheduling run in this kernel, but at a higher priority than the GPOS kernel. These tasks can thus pre-empt the GPOS whenever they need to execute and will yield the CPU to the GPOS only when their work is done.

Unfortunately, tasks running in the real-time kernel can make only limited use of existing system services in the GPOS – file systems, networking, and so on. In fact, if a real-time task calls out to the GPOS for any service, it will be subject to the same pre-emption problems that prohibit GPOS processes from behaving deterministically. As a result, new drivers and system services must be created specifically for the real-time kernel – even when equivalent services already exist for the GPOS.

Also, tasks running in the real-time kernel do not benefit from the robust Memory Management Unit (MMU) protected environment that most GPOSs provide for regular, non-real-time processes. Instead, they run unprotected in kernel space. Consequently, a real-time task that contains a common coding error, such as a corrupt C pointer, can easily cause a fatal kernel fault. To complicate matters, different implementations of the dual-kernel approach use different APIs. In most cases, services written for the GPOS cannot be easily ported to the real-time kernel, and tasks written for one vendor's real-time extensions may not run on another's.

Modified GPOSs

Rather than use a second kernel, other approaches modify the GPOS itself, such as by adding high-resolution timers or a modified process scheduler. Such approaches have merit, since they allow developers to use a standard kernel (albeit with proprietary patches) and programming model. Moreover, they help address the requirements of reactive, event-driven systems.

Unfortunately, such low-latency patches do not address the complexity of most real-time environments, where real-time tasks span larger time intervals and have more dependencies on system services and other processes than do tasks in a simple event-driven system. For instance, in systems where real-time tasks depend on services such as device drivers or file systems, the problem of priority inversion would have to be addressed. In Linux, for example, the driver and Virtual File System (VFS) frameworks would effectively have to be rewritten, along with any device drivers and file systems employing them. Without such modifications, real-time tasks could experience unpredictable delays when blocked on a service. As a further problem, most existing Linux drivers are not pre-emptible. To ensure predictability, programmers would also have to insert pre-emption points into every driver in the system. All this points to the real difficulty and immense scope of modifying a GPOS so that it is capable of supporting real-time behaviour.

However, this is not a matter of RTOS good, GPOS bad. GPOSs such as Linux, Windows XP, and UNIX all serve their intended purposes extremely well. They only fall short when they are forced into deterministic environments they were not designed for, such as those found in automotive telematics systems, medical instruments, and continuous media applications.

What about an RTOS?

Still, there are undoubted benefits to using a GPOS, such as support for widely used APIs and, in the case of Linux, the open source model. With open source, a developer can customize OS components for application-specific demands and save considerable time troubleshooting. The RTOS vendor cannot afford to ignore these benefits. Extensive support for POSIX APIs – the same APIs used by Linux and UNIX – is an important first step. So is providing well-documented source and customization kits that address the specific needs and design challenges of embedded developers.

The architecture of the RTOS also comes into play. An RTOS based on a microkernel design, for instance, can make the job of OS customization fundamentally easier to achieve. In a microkernel RTOS, only a small core of fundamental OS services (such as signals, timers, and scheduling) resides in the kernel itself. All other components (such as drivers, file systems, protocol stacks, and applications) run outside the kernel as separate, memory-protected processes (Figure 4). As a result, developing custom drivers and other application-specific OS extensions does not require specialized kernel debuggers or kernel gurus. In fact, as user-space programs, such extensions become as easy to develop as regular applications, since they can be debugged with standard source-level tools and techniques. For instance, if a device driver attempts to access memory outside its process container, the OS can identify the process responsible, indicate the location of the fault, and create a process dump file viewable with source-level debugging tools. The dump file can include all the information the debugger needs to identify the source line that caused the problem, along with diagnostic information such as the contents of data items and a history of function calls. This architecture also provides superior fault isolation. If a driver, protocol stack, or other system service fails, it can do so without corrupting other services or the OS kernel. In fact, software watchdogs can continuously monitor for such events and restart the offending service dynamically, without resetting the entire system or involving the user in any way. Likewise, drivers and other services can be stopped, started, or upgraded dynamically, again without a system shutdown.

An RTOS can help make complex applications both predictable and reliable. In fact, the predictability made possible by an RTOS adds a form of reliability that cannot be achieved with a GPOS (if a system based on a GPOS does not behave correctly due to incorrect timing behaviour, then we can justifiably say that the system is unreliable). Still, choosing the right RTOS can itself be a complex task. The underlying architecture of an RTOS is an important criterion, but so are other factors. Consider Internet support. Does the RTOS support an up-to-date suite of protocol stacks such as IPv4, IPv6, IPSec, SCTP, and IP filtering with NAT? What about scalability? Does the RTOS support a limited number of processes, or does it allow hundreds or even thousands of processes to run concurrently? And does it provide support for distributed or symmetric multiprocessing?

GUI considerations

On the graphics side, does the RTOS support only primitive graphics libraries, or does it provide an embeddable windowing system that supports 3D rendering, multi-layer interfaces, and other advanced graphics? Can you customize the GUI's look-and-feel? Can the GUI display and input multiple languages simultaneously? And does the GUI support an embeddable web browser? The browser should have a scalable footprint and be capable of rendering web pages on very small screens. It should also support current standards such as HTML 4.01, XHTML 1.1, SSL 3.0, and WML 1.3.

On the tools side, does the RTOS vendor offer diagnostic tools for tasks such as trace analysis, memory analysis, application profiling, and code coverage? What of the development environment? Is it based on an open platform like Eclipse, which lets you readily plug in third-party tools for modelling, version control, and so on? Or is it based on proprietary technology? On one point, there is no question: the RTOS can play a key role in determining how reliable your system will be, how well it will perform, and how easily it will support new or enhanced functionality. And it can support many of the rich services traditionally associated with GPOSs, but implemented in a way that addresses the severe processing and memory constraints of embedded systems.
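The three-job scenario from the priority-inversion discussion above can be condensed into a small simulation. The job names and priority values are invented for illustration; this sketches only the inheritance rule, not a real kernel:

```python
# Sketch of priority inheritance: while Job 2 holds a resource that the
# higher-priority Job 1 is blocked on, Job 2 runs at Job 1's priority,
# so the medium-priority Job 3 cannot pre-empt it and stretch the blocking.
def effective_priority(job, jobs, inheritance):
    """Priority used for dispatch, with or without inheritance."""
    prio = job["priority"]
    if inheritance:
        for waiter in jobs:
            if waiter.get("blocked_on") == job["name"]:
                prio = max(prio, waiter["priority"])  # inherit from waiter
    return prio

jobs = [
    {"name": "job1", "priority": 30, "blocked_on": "job2"},  # high, blocked
    {"name": "job2", "priority": 10},                        # low, holds lock
    {"name": "job3", "priority": 20},                        # medium, ready
]

def runs_next(jobs, inheritance):
    """Dispatch the ready job with the highest effective priority."""
    ready = [j for j in jobs if "blocked_on" not in j]
    return max(ready, key=lambda j: effective_priority(j, jobs, inheritance))["name"]

print(runs_next(jobs, inheritance=False))  # job3 -- priority inversion
print(runs_next(jobs, inheritance=True))   # job2 -- inversion avoided
```

Without inheritance, Job 3 pre-empts Job 2 and indirectly delays Job 1; with inheritance, Job 2 temporarily outranks Job 3, finishes its critical section, and releases Job 1 within a bounded interval.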
Windows XP Embedded also includes embedded-enabling features, such as Flexible Boot and Storage Options and the Enhanced Write Filter. Rich networking capabilities and management features give Windows® XP Embedded devices seamless integration with PCs, servers, Web services, and other devices. Comprehensive networking protocol support includes Infrared Data Association (IrDA) support, 802.11 and 802.1X, and Universal Plug and Play.

Windows® XP Embedded includes a completely redesigned tool set that enables devices based on Windows XP Embedded to be brought to market faster. The new Windows Embedded Studio streamlines the end-to-end development process, enabling developers to rapidly configure, build, and deploy smart designs with rich applications.

802.1X – 802.1X provides secure access to the network to support wireless LANs and Ethernet. It enables interoperable user identification, centralized authentication, and dynamic key management, and can secure both wired and wireless LAN access.

Infrared Data Association (IrDA) Support – Windows XP Embedded fully supports standards for this low-cost, low-power cable-replacement technology, which enables any device to communicate point-to-point when two devices are in line of sight of each other.

Embedded-Specific Features – Windows XP Embedded with Service Pack 1 incorporates the latest embedded-enabling capabilities, focusing on remote deployment and maintenance scenarios.
Windows CE is the best choice if the device requires semi-real-time capabilities, instant on, and the ability to operate in a disconnected state. Windows NT Embedded would be the better choice if the device requires the full Win32 API, built-in networking, Windows NT security, remote administration and management, POSIX support, and extensive device driver support. Real-time capabilities for NT and NT Embedded can be added using third-party solutions. There is no USB support for Windows NT Embedded; all drivers for USB support must be custom written.

Extensive and Extensible Device Support – Windows CE directly supports many kinds of hardware peripherals and devices, such as keyboards, mouse devices, touch panels, serial ports, Ethernet, modems, USB devices, audio devices, parallel ports, printer devices, and storage devices (ATA or flash media). At the same time, as Windows CE extends to new industries and device categories, there is tremendous potential for embedded developers to easily add new device types and peripherals. This is made possible through the simple and well-defined Windows CE Device Driver model, which provides a well-documented set of device driver interfaces (DDIs) and sample code that demonstrates implementation. This model enables embedded developers (both OEMs and IHVs) to easily implement their own driver software for a variety of devices that run on the Microsoft Windows CE platform.
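The idea behind such a driver model, a driver as a named set of entry points the OS looks up and dispatches to, can be illustrated abstractly. The registry class and device below are invented for illustration; real Windows CE stream drivers export C entry points (such as XXX_Init and XXX_Open) rather than anything resembling this Python sketch:

```python
# Conceptual sketch of a stream-style driver model: the OS maps a device
# name to a table of driver entry points and dispatches operations to it.
class DriverRegistry:
    def __init__(self):
        self._drivers = {}

    def register(self, name, entry_points):
        """Install a driver: associate a device name with its entry points."""
        self._drivers[name] = entry_points

    def call(self, name, op, *args):
        """Dispatch an operation (e.g. 'read') to the named driver."""
        return self._drivers[name][op](*args)

registry = DriverRegistry()
registry.register("COM1:", {
    "init": lambda: True,          # stub: report successful initialization
    "read": lambda n: b"x" * n,    # stub: pretend n bytes arrived
})

print(registry.call("COM1:", "init"))     # True
print(registry.call("COM1:", "read", 4))  # b'xxxx'
```

Because the interface is a fixed, documented set of entry points, third parties can add new device types without touching the OS itself, which is the point the paragraph above makes about OEMs and IHVs.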
Shell
§ Includes a minimum shell that supports application launching and switching
§ The shell can be included to serve as the basis of an embedded application
§ Key UI components included to allow rapid development of custom embedded shells

Object Store

WINDOWS CE-BASED HARDWARE REQUIREMENTS

At a minimum, a Windows CE-based device must have a supported processor, memory, and an internal timer for scheduling. No other hardware is specifically referenced by the operating system, but most devices will have a number of peripherals.
Internationalization/Localization
Tied for second place are the Microsoft twins, Windows XP for Embedded and Windows CE (they were separated by one vote). In a predictable reversal, WinXP and WinCE were most popular where VxWorks was weakest: industrial and computer-related applications. XPe was especially strong in manufacturing industries, where VxWorks also made a good showing.

In the number four spot we have Texas Instruments' DSP/BIOS, a product with obvious hardware ties. That's followed by Red Hat's Linux, the first of the many Linux variants to make the list. After that, the numbers all drop to below 8%. QNX, RTX, µC/OS, and Mentor Graphics' Nucleus RTOS head the list of several popular embedded OSes with single-digit market shares. ThreadX is deliberately listed twice: once for the Green Hills distribution and once for Express Logic's product. Oddly, Green Hills appears to sell more copies of ThreadX than the company that developed it.

Looking ahead, we asked those same survey takers what OS they'd consider using in their next project. Would they stay with their current OS or switch to another? We also asked non-OS users the same question to see if they were considering a commercial OS for the first time. The results are summarized in Figure 7.
The chart shows the delta between the "would consider"
responses and the "using now" responses from the
previous graph. In brief, the winners lose and the losers
win. How is that possible and what does it all mean?
First, the responses don't necessarily mean that VxWorks
users are unhappy with their current RTOS and are ready
to jump to (for example) LynxOS or Wind
River's Platform NE Linux. We can't correlate the
responses from the first question to the second; the person
who said he's using VxWorks now may not be the same
person who said he'd consider EPOC in the future. What
we can say with certainty is that about 8% fewer people
said they'd consider VxWorks compared with the number
using it now. Whether or not those are the same people is
impossible to tell.
Microsoft's two embedded OSes didn't fare well, either. In
fact, the eight top choices all lost ground in the game of
"guess tomorrow's operating system."
Reflecting a kind of Zen balance, almost all of the middle and low scorers gained share of mind in roughly equal proportion to the share lost by the big players. Whether accidentally or by design, the numbers reflect a zero-sum game. This gives us a hint as to the true nature of the data. Recall that more than 28% of embedded projects use no OS at all, and that this proportion has been shrinking steadily for years. Those developers have to go somewhere; what OS will they choose as their first? Logically they're going to choose a smaller, less expensive OS over a large and fully featured one. If you're upgrading your system from no OS at all, it makes sense to start small. Thus, we see that the smaller RTOS vendors gain share of mind among potential next-generation projects.

Will the current commercial vendors lose some customers? Sure, just as every supplier occasionally loses customers to a competitor. But they'll probably gain even more new customers. Nothing sinister lurks in the data here; nothing to trouble the sleep of the friendly neighbourhood RTOS marketing staff. OS loyalty is fairly low anyway. As the chart in Figure 8 shows, more than one-third (36%) of developers are using a different OS than they did in their previous project. Usually that's just because of a change in hardware (44%) or, more ominously, because the new OS was not the developers' choice and was forced upon them.

About two-thirds of the OS defections are what we might call voluntary changes. Twenty-seven percent said they'd switched operating systems because the new one "had better features," with 21% opting for "a better roadmap for growth" and 20% jumping ship for "better development tools." Another 15% said they bailed because the new OS was cheaper. Very few (8%) said their old OS was no longer available, and even fewer (6%) switched because they were unhappy with their previous OS supplier. Fewer than 5% switched because the old system was too slow.

The overall picture is therefore good for commercial OS suppliers. As the number and type of embedded systems increase, so will the total available market. As in-house and proprietary OSes slowly ride into the sunset, development teams will fill the gap mostly with commercial alternatives. And although open-source operating systems are well and truly established, with one-fifth of developers using one now, their growth seems to have flattened. Only about 17% of embedded systems developers who aren't already using Linux would consider it. What's certain is that the variety of embedded systems, and the operating systems that serve them, will continue to flourish.
Conclusion:

Embedded systems will play a key role in driving technological evolution in the coming decades. In this respect they stand on the same level as nanotechnologies, bioelectronics, and photonics. The central role of embedded systems in the economy grows ever stronger. The starting point is the convergence of storage, security, video, audio, mobility and connectivity. Systems are converging, and ICs are increasingly converging with systems. This poses a number of challenges for designers and technologists. A key issue is the definition of the right methodologies to translate system knowledge and competences into complex embedded systems, taking into account many system requirements and constraints. The key factor in winning this challenge is building the right culture. This means being able to build the right environment to exploit existing design, architectural and technological solutions, and to favour the transfer of knowledge from one application field to another.