Anish Arora
CIS788.11J Introduction to Wireless Sensor Networks
Outline
Discussion covers both the OS and the programming methodology
Some environments focus more on one than the other
Platforms
TinyOS
EmStar SOS Contiki
(applies to XSMs)
(applies to XSSs)
References
NesC: Programming Manual; "The Emergence of Networking Abstractions and Techniques in TinyOS"; TinyOS webpage
EmStar: "EmStar: An Environment for Developing Wireless Embedded Systems Software" (SenSys '04 paper); EmStar webpage
SOS: MobiSys paper; SOS webpage
Contiki: EmNets paper; SenSys '06 paper; Contiki webpage
Mate: ASPLOS paper; Mate webpage
(SUNSPOT)
Traditional Systems
Application Application
Well-established layers of abstraction
Strict boundaries
Ample resources
Independent applications at endpoints communicate point-to-point through routers
User
System
Network Stack Threads Address Space Transport Network Data Link Physical Layer Drivers
Files
Well attended
Routers
Need a framework for:
resource-constrained concurrency
defining boundaries
application-specific processing
that allows abstractions to emerge
network traffic
Robust
inaccessible, critical operation
Traditional approaches
command processing loop (wait for request, act, respond)
monolithic event processing
full thread/socket POSIX regime
Alternative
provide framework for concurrency and modularity
never poll, never block
interleaving flows, events
TinyOS
Microthreaded OS (lightweight thread support) and efficient network interfaces
events
Small, tightly integrated design that allows crossover of software components into hardware
Tiny OS Concepts
init
Component:
no heap
TX_packet_done (success)
RX_packet_done (buffer)
Commands
Events
application
Route map
Router
Sensor Appln
Active Messages
Radio Packet
Serial Packet
Temp
Photo
SW
packet
byte
Radio byte
UART
ADC
HW
bit
RFM
clock
application comp
data processing
byte
Radio byte
post tasks
bit
RFM
clock event handler initiates data collection
sensor signals data ready event
data event handler calls output command
events
commands
Interrupts
Hardware
Events generated by interrupts preempt tasks
Tasks do not preempt tasks
Both essentially process state transitions
Tasks
provide concurrency internal to a component
longer running operations
task void processData() {
  int16_t i, sum = 0;
  for (i = 0; i < maxdata; i++)
    ...
}
128 Hz sampling rate
simple FIR filter
dynamic software tuning for centering the magnetometer signal (1208 bytes)
digital control of analog, not DSP
ADC (196 bytes)
Task Scheduling
Typically simple FIFO scheduler
Bound on number of pending tasks
When idle, shuts down node except clock
Retain event-driven structure throughout application
Tasks extend processing outside event window
All operations are non-blocking
SenseToRfm
generic comm
AMStandard
IntToRfm
RadioCRCPacket
UARTnoCRCPacket
packet
CRCfilter
byte
SecDedEncode
ChannelMon
SW HW
SPIByteFIFO
RandomLFSR
UART
ClockC
ADC
bit
SlavePin
Programming Syntax
Compositional support
separation of definition and linkage
robustness through narrow interfaces and reuse
interpositioning
Component Interface
Clock.nc
interface Clock {
  command result_t setRate(char interval, char scale);
  event result_t fire();
}
Component Types
Configuration
links together components to compose a new component
configurations can be nested
a complete main application is always a configuration
Module
provides code that implements one or more interfaces and internal behavior
configuration SenseToRfm {
  // this module does not provide any interface
}
implementation {
  components Main, SenseToInt, IntToRfm, ClockC, Photo as Sensor;

  Main.StdControl -> SenseToInt;
  Main.StdControl -> IntToRfm;
  SenseToInt.Clock -> ClockC;
  SenseToInt.ADC -> Sensor;
  SenseToInt.ADCControl -> Sensor;
  SenseToInt.IntOutput -> IntToRfm;
}
Main
StdControl
SenseToInt
Clock ADC ADCControl IntOutput
ClockC
Photo
IntToRfm
Nested Configuration
includes IntMsg;
configuration IntToRfm {
  provides {
    interface IntOutput;
    interface StdControl;
  }
}
implementation {
  components IntToRfmM, GenericComm as Comm;

  IntOutput = IntToRfmM;
  StdControl = IntToRfmM;
  IntToRfmM.Send -> Comm.SendMsg[AM_INTMSG];
  IntToRfmM.SubControl -> Comm;
}
IntToRfm Module
includes IntMsg;
module IntToRfmM {
  uses {
    interface StdControl as SubControl;
    interface SendMsg as Send;
  }
  provides {
    interface IntOutput;
    interface StdControl;
  }
}
implementation {
  bool pending;
  struct TOS_Msg data;

  command result_t StdControl.init() {
    pending = FALSE;
    return call SubControl.init();
  }

  command result_t IntOutput.output(uint16_t value) {
    ...
    if (call Send.send(TOS_BCAST_ADDR, sizeof(IntMsg), &data))
      return SUCCESS;
    ...
  }
  ...
}
Supporting HW Evolution
top-level applications
shared application components
hardware-independent system components
hardware-dependent system components

may abstract a particular channel of the ADC on the microcontroller
may be a SW I2C protocol to a sensor board with a digital sensor or ADC
Pipelines transmission: transmits a byte while encoding the next byte
Trades 1 byte of buffering for an easier deadline
[Timing diagram: the encode task prepares Byte 2, 3, 4 while the bit-level layer transmits Byte 1, 2, 3 as RFM bits, beginning with a start symbol.]
Sending a Message
bool pending;
struct TOS_Msg data;

command result_t IntOutput.output(uint16_t value) {
  IntMsg *message = (IntMsg *)data.data;
  if (!pending) {
    pending = TRUE;
    message->val = value;
    message->src = TOS_LOCAL_ADDRESS;
    if (call Send.send(TOS_BCAST_ADDR,   // destination
                       sizeof(IntMsg),   // length
                       &data))
      return SUCCESS;
    pending = FALSE;
  }
  return FAIL;
}
Send-done event fans out to all potential senders; originator determined by match
Free buffer on success; retry or fail on failure
Receive Event
Sending
declare buffer storage in a frame
request transmission
name a handler
handle completion signal
Receiving
declare a handler
firing a handler: automatic
Buffer management
strict ownership exchange
tx: send done event
reuse
transmit packet
send command schedules task to calculate CRC
task initiates byte-level data pump
events keep the pump flowing
receive packet
receive event schedules task to check CRC
task signals packet ready if OK
byte-level tx/rx
task scheduled to encode/decode each complete byte
must take less time than the byte data transfer
TinyOS tools
ListenRaw, SerialForwarder: Java tools to receive raw packets on a PC from the base node
Oscilloscope: Java tool to visualize (sensor) data in real time
Memory usage: breaks down memory usage per component (in contrib)
Peacekeeper: detects RAM corruption due to stack overflows (in lib)
Stopwatch: measures execution time of a code block by timestamping at entry and exit (in the OSU CVS server)
Makedoc and graphviz: generate and visualize the component hierarchy
Simulation Scaling
Network types enable link-level cross-platform interoperability
Generic (instantiable) components, attributes, etc.
Structure OS to leverage code reuse
Decompose h/w devices into 3 layers: presentation, abstraction, device-independent
Structure common chips for reuse across platforms, so platforms are a collection of chips: msp430 + CC2420 +
Power management architecture for devices controlled by resource reservation
Self-initialisation
App-level notion of instantiable services
TinyOS Limitations
Static allocation allows for compile-time analysis, but can make programming harder
No support for heterogeneity
Support for other platforms (e.g. stargate) Support for high data rate apps (e.g. acoustic beamforming) Interoperability with other software frameworks and languages
Limited visibility
Em*
Software environment for sensor networks built from Linux-class devices
Claimed features:
Simulation and emulation tools
Similar design choices in the programming framework:
Component-based design
Wiring together modules into an application
Differences
operating system and language choices
EmStar: easy-to-use C language, tightly coupled to Linux (devfs, RedHat, ...)
TinyOS: an extended C compiler (nesC), not wedded to any OS
Pure Simulation
Deployment
Data Replay Ceiling Array Portable Array Reality
Scale
Em* Modularity
Dependency DAG
Each module (service):
manages a resource & resolves contention
has a well-defined interface
has a well-scoped task
encapsulates mechanism
exposes control of policy
Neighbor Discovery Leader Election Topology Discovery
State Sync
3d MultiLateration
Acoustic Ranging
Reliable Unicast
Time Sync
library
Hardware
Radio
Audio
Sensors
Em* Robustness
Fault isolation via multiple processes
Active process management (EmRun)
Auto-reconnect built into libraries
Crashproofing prevents cascading failure
scheduling
path_plan
Em* Reactivity
path_plan
notify filter
motor_y
e.g.
neighbor list: membership hysteresis
time synchronization: linear fit and outlier rejection
EmStar Components
Tools
EmRun
EmProxy/EmView EmTOS
Standard IPC
FUSD Device patterns
Common Services
NeighborDiscovery TimeSync
Routing
Designed to start, stop, and monitor services
EmRun config file specifies service dependencies
Starting and stopping the system:
Starts up services in correct order
Can detect and restart unresponsive services
Respawns services that die
Notifies services before shutdown, enabling graceful shutdown
Error/Debug Logging
Per-process logging to in-memory ring buffers
EmSim/EmCee
Em* supports a variety of types of simulation and emulation, from simulated radio channel and sensors to emulated radio and sensor channels (ceiling array)
In all cases, the code is identical
Multiple emulated nodes run in their own spaces, on the same physical machine
EmView/EmProxy: Visualization
Emulator
emproxy neighbor linkstat motenic
emview
Mote
Mote
Mote
User
/dev/servicename /dev/fusd
Kernel
kfusd.o
Device Patterns
Objects, with method calls and callback functions
Priority: ease of use
Status Device
Server
Config Handler State Request Handler
Status Device
Client-configurable: a client can write a command string, which the server parses to enable per-client behavior
Client1
Client2
Client3
Packet Device
Server
Packet Device
Client-configurable:
input and output queue lengths
input filters
optional loopback of outputs to other clients
Client1
Client2
Client3
Require locking semantics to prevent race conditions between readers and writers
Support status semantics but not queuing
No support for notification; polling only
Device files:
IPC channels used internally can be viewed concurrently for debugging
Live state can be viewed in the shell (echocat -w) or using emview
upgrade features
remove bugs
re-task system
Architecture Overview
Dynamically Loadable Modules
Tree Routing
Application
Light Sensor
Dynamic Memory
Timer Clock Serial Framer UART
Scheduler
Comm. Stack ADC
SPI
Static Kernel:
Provides hardware abstraction & common services
Maintains data structures to enable module loading
Costly to modify after deployment

Dynamic Modules:
Drivers, protocols, and applications
Inexpensive to modify after deployment
Position independent
SOS Kernel
Kernel services
Guard bytes for run-time memory overflow checks
Ownership tracking
Garbage collection on completion
SOS implements non-preemptive priority scheduling via priority queues
An event is served when there is no higher-priority event
High-priority queue for time-critical events, e.g., h/w interrupts and time-sensitive timers
Modules
Each module is uniquely identified by its ID or pid
Orthogonal to the module distribution protocol
Kernel stores the new module in a free block located in program memory, and critical information about the module in the module table
char tmp_string[] = {'C', 'v', 'v', 0};
ker_register_fn(TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string,
                (fn_ptr_t)tr_get_header_size);
char tmp_string[] = {'C', 'v', 'v', 0};
s->get_hdr_size = (func_u8_t *)ker_get_handle(TREE_ROUTING_PID,
                                              MOD_GET_HDR_SIZE, tmp_string);
Module-to-Kernel Communication
Module A
System Call
SOS Kernel
Hardware
Interrupts & messages from kernel dispatched by a high priority message buffer
Inter-Module Communication
Module A Module B Module A Module B
Synchronous communication
Kernel stores pointers to functions registered by modules
Asynchronous communication
Messages dispatched by a two-level priority scheduler
Suited for services with long latency
Synchronous Communication
Module can register a function for low-latency blocking calls (1)
Modules that need such a function can subscribe to it by getting a function pointer pointer (i.e., **func) (2)
When the service is needed, the module dereferences the function pointer pointer (3)
Module A
Module B
Asynchronous Communication
3
2 Network
Module A
Msg Queue
Module B
Send Queue
Module is active when it is handling the message (2)(4) Message handling runs to completion and can only be interrupted by hardware interrupts
Module can send message to another module (3) or send message to the network (5)
Message can come from both network (1) and local host (3)
Module Safety
Problem: Modules can be remotely added, removed, & modified on deployed nodes
Accessing a module
If the module doesn't exist, the kernel catches messages sent to it and handles its dynamically allocated memory
If the module exists but can't handle the message, the module's default handler gets the message and the kernel handles its dynamically allocated memory
Publishing a function includes a type description that is stored in a function control block (FCB) table
Module Library
Some applications can be created by combining already written and tested modules
SOS kernel facilitates loosely coupled modules
Memory Debug
Module Design
#include <module.h>
Uses standard C
Programs created by wiring modules together
Sensor Manager
Module A
Periodic Access Signal Data Ready
Module B
Polled Access
Enables sharing of sensor data between multiple modules
Presents a uniform data access API to diverse sensors
Sensor Manager
getData
dataReady
Underlying device specific drivers register with the sensor manager Device specific sensor drivers control
MagSensor
ADC
I2C
ROM
Memory footprint for base operating system with the ability to distribute and update node programs.
System    Active Time (in 1 min)   Active Time (%)   Overhead relative to TOS (%)
TinyOS    3.31 sec                 5.22%             NA
SOS       3.50 sec                 5.84%             5.70%
Mate VM   3.68 sec                 6.13%             11.00%
Reconfiguration Performance
System    Code Size (Bytes)   Write Cost (mJ/page)   Write Energy (mJ)
SOS       1316                0.31                   1.86
TinyOS    30988               1.34                   164.02
Mate VM   NA                  NA                     NA
Module size and energy profile for installing surge under SOS
Platform Support
Atmel Atmega128
Chipcon CC1000
BMAC
Oki ARM
Chipcon CC2420
Simulation Support
Pthread simulates hardware concurrency
UDP simulates perfect radio channel
Supports user-defined topology & heterogeneous software configuration
Useful for verifying functional correctness

Instruction cycle accurate simulation
Simple perfect radio channel
Useful for verifying timing information
See http://compilers.cs.ucla.edu/avrora/
Contiki
Key ideas
Dynamic loading of programs
Selective reprogramming
Static pre-linking (early work: EmNets paper)
Dynamic linking (recent work: SenSys paper)
Loadable programs
One-way dependencies
Core
Core
Offers an API to the linker to search and update the registry
Created when the Contiki core binary image is compiled (a multiple-pass process)
The dynamic linker goes through the following steps:

1. Parse the payload into code, data, a symbol table, and a list of relocation entries. Each relocation entry corresponds to an instruction or address in the code or data that needs to be updated with a new address, and consists of:
   a pointer to a symbol (such as a variable name or a function name), or a pointer to a place in the code or data
   the address of the symbol
   a relocation type, which specifies how the data or code should be updated
2. Allocate memory for code and data in flash ROM and RAM.
3. Link and relocate the code and data segments: for each relocation entry, search the core symbol table and the module symbol table; if the relocation is relative, calculate the absolute address.
Module                   Code MSP430   Code AVR   RAM
Kernel                   810           1044       10 + e + p
Program loader           658           -          8
Multithreading library   582           678        8 + s
Timer library            60            90         0
Memory manager           170           226        0
Event log replicator     1656          1934       200
uIP TCP/IP stack         4146          5218       18 + b
[Figure: multithreading implemented as a library on top of the event-driven kernel; each thread has its own stack.]
Event-driven vs multi-threaded
Event-driven
- No wait() statements
- No preemption
- State machines
+ Compact code
+ Locking less of a problem
+ Memory efficient

Multi-threaded
+ wait() statements
+ Preemption possible
+ Sequential code flow
- Larger code overhead
- Locking problematic
- Larger memory requirements
Responsiveness
Computation in a thread
Thread
Event
Event handler
yield()
Memory management
Memory allocated when module is loaded
Allows blocked waiting
No per-thread stack required
Each protothread runs inside a single C function
2 bytes of per-protothread state
Mate features
Small Concise
Mate in a Nutshell
Stack architecture
Three concurrent execution contexts
Execution triggered by predefined events
Tiny code capsules; self-propagate into network
Built-in communication and sensing instructions
For a small number of executions
GDI example: the bytecode version is preferable for a program running less than 5 days
In energy-constrained domains
Mate Architecture
gets/sets
Subroutines Events
Three events:
Receive
Clock
Send
Message send
gets/sets
Operand Stack Return Stack
Hides asynchrony
PC
Code
Instruction Set
Instruction polymorphism
Code Example (1)

gets     # Push heap variable on stack
pushc 1  # Push 1 on stack
add      # Pop twice, add, push result
copy     # Copy top of stack
sets     # Pop, set heap
pushc 7  # Push 0x0007 onto stack
and      # Take bottom 3 bits of value
putled   # Pop, set LEDs to bit pattern
halt     #
Code Capsules
One capsule = 24 instructions
Fits into a single TOS packet
Atomic reception
Each code capsule includes type and version information
Viral Code
Capsule transmission: forw
Forwarding other installed capsule: forwo (use within clock capsule)
Component Breakdown
Mate runs on mica with 7286 bytes code, 603 bytes RAM
Mate IPS: ~10,000
Overhead: every instruction executed as a separate TOS task
Installation Costs
Bytecodes have computational overhead But this can be compensated by using small packets on upload (to some extent)
Customizing Mate
Mate is a general architecture; the user can build a customized VM
User can select bytecodes and execution events
Issues:
Flexibility vs. Efficiency
Customizing increases efficiency, at the cost of handling changing requirements
Java's solution:
General computational VM + class libraries
Mate's approach:
More customizable solution -> let user decide
How to
Select a language
Constructing a Mate VM
This generates a set of files, which are used to build the TOS application and to configure the script program
Bombilla Architecture
Once context: performs operations that need only a single execution
16-word heap shared among the contexts; setvar, getvar
Buffer holds up to ten values; bhead, byank, bsorta
basic: arithmetic, halt, sensing
m-class: access message header
v-class: 16-word heap access
j-class: two jump instructions
x-class: pushc
Capsule Injector: programming environment
Synchronization: 16-word shared heap; locking scheme
Provides a synchronization model: handler, invocations,
Resource management: prevent deadlock
Random and selective capsule forwarding
Error state
Discussion
Comparing to traditional VM concept, is Mate platform independent? Can we have it run on heterogeneous hardware? Security issues: How can we trust the received capsule? Is there a way to prevent version number race with adversary?
In viral programming, is there a way to forward messages other than flooding? After a certain number of nodes are infected by new version capsule, can we forward based on need? Bombilla has some sophisticated OS features. What is the size of the program? Does sensor node need all those features?
.NET MF is a bootable runtime environment tailored for embedded development MF services include:
Boot code
Code execution
Thread management
Memory management
Hardware I/O
Interfaces include:
Clock management
Core CPU
Communications
External Bus Interface Unit (EBIU)
Memory management
Power
Runtime (CLR)
In turn calls HAL drivers to access hardware
Threading Model
Threads are scheduled by the CLR
Time-sliced context switching with a (configurable) 20 ms quantum
Threads may have priorities
Timer Module
MF provides support for accessing timers from C#
Enables execution of a user-specified method on a timer thread constructed by the system
Timer Interface
Callback: user-specified method to be executed
State: information used by the callback method (may be null)
Extended the MF HAL to support radio APIs
Supported API functions include:
On: powers on the radio, configures registers and the SPI bus, initializes clocks
Off: powers off the radio, resets registers, clocks, and the SPI bus
Configure: sets radio options for the 802.15.4 radio
BuildFrame: constructs a data frame with specified parameters
Receiver-centric MAC protocol
Highly efficient for low duty-cycle applications
Implemented as a PAL component natively on top of HAL