
Channel

A Name Space Based C++ Template Framework For


Asynchronous Distributed Message Passing and Event
Dispatching

Overview | Boost Channel | ACE Channel | Download at Sourceforge | Supported Platforms | Build | Contact / Support

Overview
Channel is a C++ template library that provides name spaces for asynchronous, distributed message
passing and event dispatching. Message senders and receivers bind to names in a name space; binding
and matching rules decide which senders bind to which receivers; message passing and event
dispatching then happen among the bound senders and receivers.

Channel's signature:
template <
typename idtype,
typename platform_type = boost_platform,
typename synchpolicy = mt_synch<platform_type>,
typename executor_type = abstract_executor,
typename name_space = linear_name_space<idtype,executor_type,synchpolicy>,
typename dispatcher = broadcast_dispatcher<name_space,platform_type>
>
class channel;

Various name spaces (linear/hierarchical/associative) can be used for different applications. For
example, we can use integer ids as names to send messages in a linear name space, string path name
ids (such as "/sports/basketball") in a hierarchical name space, and regex patterns or Linda tuple-space
style tuples in an associative name space. Users can configure the name space simply by setting a
channel template parameter.

Channel's other major components are dispatchers, which dispatch messages/events from senders to
bound receivers. The dispatcher is also a channel template parameter. Sample dispatchers include:
a synchronous broadcast dispatcher, buffered asynchronous dispatchers, ...

Name spaces and dispatchers are orthogonal; they can be mixed and matched freely, just as STL
algorithms can be used with any STL container.

By combining different name space and dispatching policies, we can achieve various models:
synchronous event dispatching
associative name space based on matching/look-up rules similar to Linda tuple space
asynchronous messaging model similar to Microsoft CCR (Concurrency Coordination Runtime)
Similar to distributed file systems, distributed channels can be connected or "mounted" to allow
transparent distributed message passing. Filters and translators are used to control name space changes.

For tightly coupled single-address-space applications/modules, Channel's "unnamed" in/out objects
(ports and signals/slots) support a fine-grained, local message passing model without the hassle of
setting up a name space and assigning names.

Boost Channel
Boost Channel is the latest implementation of the Channel framework, built for Boost. Boost provides
free peer-reviewed portable C++ source libraries, with an emphasis on libraries that work well with the
C++ Standard Library. Boost libraries are intended to be widely useful and usable across a broad
spectrum of applications. Boost Channel is based solely on standard Boost facilities:
Boost::shared_ptr for message/event data life-time management
Boost.Bind, Boost.Function for callback
Boost.Thread for synchronization
Boost.Serialization for message marshaling/demarshaling
Boost.Regex and Boost.Tuple for associative name-matching
Boost.Asio and Boost.Shmem are used to build transports among remote channels.
Detailed Info:
Design Document
Browse CVS Source Code

ACE Channel
ACE Channel is the first implementation of the Channel framework, built on top of ACE (the Adaptive
Communication Environment). ACE is a powerful and portable OO/C++ framework for systems
programming. It provides not only wrapper facade classes that abstract the underlying OS facilities, but
also frameworks and design patterns for developing multithreaded and distributed applications. ACE
Channel uses several key ACE facilities, including Reactor, Service Configurator, Task and Acceptor-
Connector.
Design Docs and more info...
Browse CVS Source Code

Download
All released files are hosted at sf.net: http://sourceforge.net/projects/channel
Supported Platforms
Theoretically Boost Channel can work on any platform where Boost is supported, since it depends
solely on Boost facilities. Currently Boost Channel is being actively developed and tested on the
following platforms:
Linux (Fedora, Ubuntu) with gcc
Windows XP with Visual C++ 2005 Express
It will be tested on other platforms (Solaris, NetBSD, ...) as time and resources become available.

Build
checkout boost cvs source code
download latest boost_channel_x_x.tar.gz
tar xvzf boost_channel_x_x.tar.gz
cd <top directory of channel>
copy subdirectory boost/channel/ to <boost root directory>/boost/
copy subdirectory libs/channel/ to <boost root directory>/libs/
cd to <boost root directory>/libs/channel/example/<specific samples such as ping_pong>
bjam

Contact / Support
yigongliu@gmail.com


Channel - A Name Space Based C++ Framework For


Asynchronous Distributed Message Passing and Event
Dispatching

Yigong Liu (9/24/2006)

1. Introduction
2. Build
3. Tutorials
3.1 gui event handling
3.2 gui event handling with 2 local channels
3.3 distributed gui events
3.4 chat with direct connection
3.5 buffered channel with blocking active receiver (synchronous choice, join
synchronization patterns)
3.6 buffered channel with async receivers (asynchronous choice, join synchronization
patterns)
3.7 distributed chat thru a central server
3.8 channel connection thru shared memory
3.9 channel using regex name matching
3.10 channel using Linda-style associative lookup
3.11 channel name space management and security with filters and translators
3.12 port and signal: unnamed point of tightly-coupled local interactions
4. Design
4.0 Overall Design Idea
4.1 Name space
4.1.1 What's in a name?
4.1.2 Types of name space
4.1.3 Name binding set and Name matching algorithm, binding rules
4.1.4 Name spaces merge and connections
4.2 Dispatching
4.2.1 How message data move: push/pull, buffering
4.2.2 How operations are performed: synchronous/asynchronous
4.2.3 Message passing coordination patterns
4.2.4 Messages handling
4.3 Connection related
4.3.1 Connections
4.3.2 Peer
4.4 "Unnamed" binding of output/input or points of tightly-coupled local interactions
4.5 Application architecture and integration
5. Classes
5.1 name space related
5.1.1 name spaces
5.1.2 id_type and id_trait
5.1.3 name and name binding callback
5.1.4 named_out and named_in; publisher and subscriber
5.1.5 unnamed in/out: port and signal/slot
5.1.6 binder, filter and translator
5.2 dispatching related
5.2.1 dispatchers
5.2.2 messages
5.2.3 queues
5.2.4 executors
5.3 connection related
5.3.1 global functions for connecting channels
5.3.2 connection
5.3.3 peer and interface
5.3.4 streams
5.3.5 marshaling registry
5.4 platform abstraction policy and synchronization policy
5.4.1 platform abstraction
5.4.2 synchronization policy
6. Class Concepts and How to extend Channel framework
6.1 id_type and id_trait
6.2 name space
6.3 dispatcher
6.4 executor
6.5 queue
6.6 streams/connectors (or integrate into new architecture)
7. Compare Channel to others (plan9, STL)
7.1 Compare Unix/Plan9/Inferno file-system name space and Channel's name space
7.2 compare STL and Channel
8. Reference Links

1. Introduction
In Unix and most OSes, file systems allow applications to identify, bind to and operate on system
resources and entities (devices, files, ...) using a "name" (path name) in a hierarchical name space
(directory system), which is different from variables and pointers in a flat address space. Many
interprocess communication (IPC) facilities also depend on some kind of "name" to identify them,
such as the pathname of a FIFO or named pipe, the pathname of a unix domain socket, the ip-address
and port of a tcp/udp socket, and the keys of System V shared memory, message queues and
semaphores. "The set of possible names for a given type of IPC is called its name space. The name
space is important because for all forms of IPC other than plain pipes, the name is how the client and
server "connect" to exchange messages." (quote from W. Richard Stevens, "Unix Network Programming").

Channel is a C++ template library that provides name spaces for asynchronous, distributed message
passing and event dispatching. Message senders and receivers bind to names in a name space; binding
and matching rules decide which senders bind to which receivers; message passing and event
dispatching then happen among the bound senders and receivers.
Channel's signature:
template <
typename idtype,
typename platform_type = boost_platform,
typename synchpolicy = mt_synch<platform_type>,
typename executor_type = abstract_executor,
typename name_space = linear_name_space<idtype,executor_type,synchpolicy>,
typename dispatcher = broadcast_dispatcher<name_space,platform_type>
>
class channel;
Various name spaces (linear/hierarchical/associative) can be used for different applications. For
example, we can use integer ids as names to send messages in a linear name space, string path name
ids (such as "/sports/basketball") in a hierarchical name space, and regex patterns or Linda tuple-space
style tuples in an associative name space. Users can configure the name space simply by setting a
channel template parameter.
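For illustration only (this snippet is not from the original docs), here is a minimal sketch of how a
name space could be selected through the template parameters shown above; the header path, the
namespace and the exact template-argument order of hierarchical_name_space are assumptions based
on the signature:

#include <string>
#include <boost/channel/channel.hpp>          // assumed header path
using namespace boost::channel;               // assumed namespace

// linear name space keyed by integer ids (all other parameters take their defaults)
typedef channel<int> int_channel;

// hierarchical name space keyed by string path names such as "/sports/basketball"
typedef channel<std::string,
                boost_platform,
                mt_synch<boost_platform>,
                abstract_executor,
                hierarchical_name_space<std::string,
                                        abstract_executor,
                                        mt_synch<boost_platform> >
               > path_channel;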
Channel's other major components are dispatchers, which dispatch messages/events from senders to
bound receivers. The dispatcher is also a channel template parameter. The design of dispatchers can
vary along several dimensions:
how msgs move: push or pull;
how callbacks are executed: synchronous or asynchronous.
Sample dispatchers include: a synchronous broadcast dispatcher, buffered asynchronous dispatchers, ...

Name spaces and dispatchers are orthogonal; they can be mixed and matched freely: just as STL
algorithms can be used with any STL container by means of the iterator range concept, name spaces
and dispatchers can be used together because of the name binding set concept.

By combining different name space and dispatching policies, we can achieve various models:
synchronous event dispatching
associative name space based on matching/look-up rules similar to Linda tuple space
asynchronous messaging model similar to Microsoft CCR (Concurrency Coordination Runtime)
Similar to distributed file systems, distributed channels can be connected or "mounted" to allow
transparent distributed message passing. Filters and translators are used to control name space changes.

For tightly coupled single-address-space applications/modules, Channel's "unnamed" in/out objects
(ports and signals/slots) support a fine-grained, local message passing model without the hassle of
setting up a name space and assigning names.

Channel is built on top of Boost facilities:


Boost::shared_ptr for message/event data life-time management
Boost.Bind, Boost.Function for callback
Boost.Thread for synchronization
Boost.Serialization for message marshaling/demarshaling
Boost.Regex and Boost.Tuple for associative name-matching
Boost.Asio and Boost.Shmem are used to build transports among remote channels.

2. Build
Channel is continuously being developed and tested on Linux (Ubuntu 8.04/g++ 4.2.4 - Ubuntu
9.04/g++ 4.3.3) and Windows (Visual C++ 2005 - Visual C++ 2008). The implementation is based
solely on standard Boost facilities plus Boost.Asio and Boost.Interprocess.
Download: http://channel.sourceforge.net
Build: Channel is a header-only library; there is no need to build the library itself in order to use it.
Please follow these steps:
download or checkout boost distribution
download latest boost_channel_x_x.tar.gz
tar xvzf boost_channel_x_x.tar.gz
add Boost's directory and Channel's directory to the compiler's include path
cd to <channel_top_directory>/libs/channel/example/<specific samples such as ping_pong>
bjam
3. Tutorials
The following are a few samples showing how different name spaces and dispatchers can be used in
various situations:

3.1 gui event handling


A simple sample showing a gui window sending (broadcasting) simple events to callbacks (either free
functions or object members). details...

3.2 gui event handling with 2 local channels


This sample shows how 2 channels can be connected to allow gui events to propagate from one channel
to another. It also uses a POD struct as the message id/name. details...

3.3 distributed gui events


A sample showing how events can be sent (broadcast) to callbacks in a remote process by connecting a
local channel to remote channels thru Boost.Asio. details...

3.4 chat with direct connection


This sample shows the usage of a hierarchical name space by defining chat subjects as string path names.
For the demo, chat peers connect directly to each other, subscribing to the subjects they are interested in
and sending messages to each other. Since it is a hierarchical name space, peers can subscribe to wildcard
ids such as "all sports related subjects". details...

3.5 buffered channel with blocking active receiver (synchronous choice, join synchronization
patterns)
A sample showing the usage of buffered channels implemented thru a synchronous pull dispatcher. In this
channel configuration, messages are buffered inside the channel at the sender side. The receiver is active:
a thread blocks waiting for the arrival of messages at synchronous join/choice arbiters and then
processes the messages. details...

3.6 buffered channel with async receivers (asynchronous choice, join synchronization patterns)
This sample shows a buffered channel supporting asynchronous receivers using the asynchronous
coordination patterns choice and join. The callback actions are dispatched thru a thread pool executor.
details...

3.7 distributed chat thru a central server


This sample shows a simple chat client and server design. Clients connect to the server to chat with each
other in separate chat groups identified by subject. The chat subject (a string) serves as the id in the name
space. Clients can join/leave chat groups identified by subject ids and send messages to chat groups. If a
chat group (subject) doesn't exist yet, the first member's "join" creates it. details...
3.8 channel connection thru shared memory
This sample shows that remote channels in 2 processes (chat1, chat2) can be connected thru shared
memory message queues based on Boost.Interprocess. details...

3.9 channel using regex name matching


This sample demos channels using regex pattern matching for name-matching and message dispatching.
Peers can use regex patterns to bind/subscribe to names/ids. Boost.Regex is used for the implementation.
details...

3.10 channel using Linda-style associative lookup


This sample demos channels using a Linda-style associative name space. Tuples are used as names/ids
and associative lookup is used for name-matching. Boost.Tuple is used for the implementation. details...

3.11 channel name space management and security with filter and translator
This sample demos how we can use filters and translators to achieve name space management and
security. details...

3.12 port and signal: unnamed point of tightly-coupled local interactions


This tutorial explains 3 samples based on port and signal. details...

4. Design

4.0 Overall Design Idea


"Names" play important role in distributed computing:
Name space plays a central role in plan9/inferno distributed OS as described in [1][2].
quote:
"... a new kind of system, organized around communication and naming ..."
"A single paradigm (writing to named places) unifies all kinds of control and interprocess
signaling."
It can be briefly summerized as following:
Every resource, either local or remote, is represented as a hierarchical file system (name
space).
Each process can assemble a private view of the system by constructing a local/private
name space that connects these resources thru mount, bind and umount on demand.
The interface to named resources is file-oriented; they are accessed thru
open/close/read/write calls.
Robin Milner [3][4] gives a detailed discussion of what we can do with a name in
asynchronous message passing, i.e. the interactions thru "names":
The following operations on names are identified:
use name: call, co-calling (response)
call - vocative use of a name by one agent
co-call/response - reaction by the other
A synchronized action is the coming together (binding) of calling and co-calling (thru a name).
The reason to distinguish between "calling" and "response" is that, in describing any
agent (process/thread/...), we define its potential behaviour (or capabilities) by what
calls and responses it can make - this is the basic idea underlying the active-objects or
communicating-processes based design which will be detailed in a later section.
mention name: quote/co-quote, match
quote, co-quote:
quote/co-quote refers to the way we can pass names as/inside message content. We
can simulate a function call (call-and-return) by packing a "return" name/id inside the
message and waiting on this name for the result (from the remote side).
match:
test a name for equality with another name. Name matching algorithms depend mostly
on the name space structure. In the following sections, we will expand "matching" to
include wildcard and regex matching.

The design of Channel is based on the following integration of Plan9/Inferno's name space idea
and Robin Milner's interactions thru names:
Channel provides a process-local, private name_space which is customizable thru
connections to other channels.
The semantics of names are changed: names do not refer to named resources (which are
relatively static entities) but to named "points of interaction" (which are mostly
dynamic); thus the file-oriented api and semantics are dropped and Robin Milner's
operations/interactions on names are adopted as the api: calling/co-calling/matching.

4.1 Name space

4.1.1 What's in a name?


In Channel, to facilitate name-matching/binding operations, a name has the following attributes:
id_type
Id is the main content of a name. Various types of ids can be used for different applications:
integers, strings, POD structs etc. can be used for a linear name space; string path names can be used
for hierarchical name spaces; and regex patterns and Linda style tuples can be used for an associative
name space.
id_trait and id-matching algorithms
Id_trait defines the attributes of an id type. A major feature of id_trait is the id-matching
algorithm, which partially decides name-matching and thus which senders will bind to which
receivers and be able to send messages to them. For example, exact matching can be used for
linear name space ids; prefix matching can be used for path name ids; while in associative name
spaces, regex pattern matching and Linda style associative lookup can be used for id-matching.
membership
A channel is a process local name space which can be connected to other local or remote
channels. So we have 2 types of communicating peers:
MEMBER_LOCAL (local peers): communication peers inside the same channel
MEMBER_REMOTE (remote peers): communication peers from different channels
sending and receiving scope
When sending/receiving messages, we can specify the scope of operations:
SCOPE_LOCAL:
publish/send specified messages to local peers;
subscribe/receive specified messages from local peers
SCOPE_REMOTE:
publish/send specified messages to remote peers;
subscribe/receive specified messages from remote peers
SCOPE_GLOBAL:
publish/send specified messages to both local and remote peers;
subscribe/receive specified messages from both local and remote peers

4.1.2 Types of name space


There are 3 types of name spaces, based on their id-matching algorithms and naming structures:
linear:
There is an ordering relationship among ids, so they can be arranged in a linear range. Exact
matching is used for id-matching.
hierarchical:
There is a containment relationship among ids, so they can be arranged in tree/trie structures.
Prefix matching is used for id-matching.
associative:
Id-matching is based on associative lookup similar to Linda's tuple space, or on regular expression
matching algorithms.

4.1.3 Name binding set and Name matching algorithm, binding rules
No pure name exist; Names are ONLY created into name space when bound for sending/receiving
msgs:
Named_Out: output/send interface bound with name
Named_In: input/receiv interface bound with name
Name binding sets:
for Named_Out, its binding_set is the set of bound Named_Ins to which to send messages
for Named_In, its binding_set is the set of bound Named_Outs from which to receive messages
There are 2 aspects to the name matching algorithms and binding rules that decide binding_sets:
id matching: the id of the Named_Out must "match" the id of the Named_In based on the matching
operation defined in id_trait
scope & membership matching: the membership and scope of both the Named_Out and Named_In
must match. This doesn't mean they must be the same. For example, a local sender
(MEMBER_LOCAL) with SCOPE_LOCAL can bind to receivers with <MEMBER_LOCAL,
SCOPE_LOCAL> or <MEMBER_LOCAL, SCOPE_GLOBAL>. There is an internal table
recording all such valid combinations.
Named_Out and Named_In don't bind to each other directly (as in most event dispatching systems).
Instead, they bind to names in the name space. Based on the binding and matching rules, their binding
sets are resolved and contain direct pointers to their counterparts. Actual message passing and
dispatching happen on the binding set and never need to go thru the name space again. So the actual
message passing and dispatching behaviour and performance are the same as if we had registered the
Named_In directly with the Named_Out (as we would have done in a normal event dispatching system).
Based on name-matching, there are possibly the following 4 kinds of binding sets:
1 - 1: one named_out binds with exactly one named_in
1 - N: one named_out binds with a group of named_ins (e.g. when many subscribers subscribe
to the same name)
N - 1: one named_in binds with a group of named_outs (e.g. when a subscriber subscribes using
a wildcard name or regex pattern, it could receive from multiple sources)
N - M: both named_out and named_in bind with a group of counterparts.

4.1.4 Name spaces merge and connections


When 2 channels (A & B) are connected/mounted, their name spaces are merged as follows:
names flowing from B->A: the intersection of A's set of Named_In with global/remote scope
(global subscriptions) and B's set of Named_Out with global/remote scope (global
publications)
names flowing from A->B: the intersection of B's set of Named_In with global/remote scope
(global subscriptions) and A's set of Named_Out with global/remote scope (global
publications)
newly created names/ids are automatically propagated to connected channels in the following
manner, so that peers in channel A can communicate with peers in channel B transparently, the
same way as with local peers:
if a new local (MEMBER_LOCAL) Named_Out with id "N" is added (name "N" is
published) with global/remote scope in channel A, channel A will send
publication_info_msg containing "N" to all connected channels. If channel B receives
this message, it will check its name space. If there is local (MEMBER_LOCAL)
Named_In with id matching "N" (using the above discussed id matching algorithms
defined with id_trait) and global/remote scope, the following will happen at channel A
and B:
at channel B:
a remote (MEMBER_REMOTE) Named_Out with id "N" and
SCOPE_LOCAL will be added at channel B which will forward messages
from channel A to local peers
a subscription_info_msg with id "N" will be sent to channel A.
at channel A:
after receiving the subscription_info_msg with "N" from channel B, a remote
(MEMBER_REMOTE) Named_In with id "N" and SCOPE_LOCAL will
be added at channel A, which will forward messages from local
Named_Outs with id "N" to channel B
if a new local (MEMBER_LOCAL) Named_In with id "N" is added (name "N" is
subscribed) with global/remote scope in channel A, channel A will send
subscription_info_msg containing "N" to all connected channels. If channel B receives
this message, it will check its name space. If there is local (MEMBER_LOCAL)
Named_Out with id matching "N" (using the above discussed id matching algorithms
defined with id_trait) and global/remote scope, the following will happen at channel A
and B:
at channel B:
a remote (MEMBER_REMOTE) Named_In with id "N" and
SCOPE_LOCAL will be added at channel B which will forward messages
from local peers to channel A
a publication_info_msg with id "N" will be sent to channel A.
at channel A:
after receiving publication_info_msg with "N" from channel B, a remote
(MEMBER_REMOTE) Named_Out with id "N" and SCOPE_LOCAL
will be added at channel A, which will forward messages from channel B
to local Named_Ins with id "N"
Please note that channel A will not automatically propagate the names/ids it receives from
channel B to channel C (suppose that channel A connects to channel B and to channel C, and
there is no connection between channels B and C). If peers in channel C need to talk to peers in
channel B, there are two choices:
a peer (thread/process) in channel A subscribes to all of channel B's names/ids with
global/remote scope and re-publishes them so that channel C can get them; the peer
has code to receive the messages coming from Named_Outs with these names and immediately
resend them on Named_Ins with the same names (forwarding from channel B to
channel C), similar to what the chat server does in sample 3.7, distributed chat thru a
central server
or channel B connects to channel C directly.
Filters and translators can be specified at connections among channels to control the name space merge:
filter: decides which ids are allowed to be exported/sent to (visible at) remote channels and
which remote ids are allowed to be imported into the local name space
translator: allows translation of ids imported into the local name space and ids exported to the remote
name space; so we can relocate the imported remote name space ids to a specific subspace in
the local name space, similar to the way that, in distributed file systems, a remote file system can
be mounted at a specific point in the local file system.
Based on an application's name space management requirements, we may need to "relocate"/"mount" the
names imported (from a connection to a remote name space) to a specific sub-region of the name space.
For example, if we have a name space on a desktop computer and connect to a PDA and a laptop, we can
set translators at the connections so that names imported from the PDA appear under "/pda/" and names
from the laptop appear under "/laptop/". Or, if our application uses integers as ids/names, we may want
to relocate ids from the 1st connection to [1000-1999], ids from the next connection to [2000-2999], and
so on. This is similar to the way we mount remote file systems into a local file system.
Based on security requirements, we may need to use filters to restrict the valid range of names allowed
to pass in/out of specific channel connections. For example, a server's name space connects to 2 clients and
we want the clients' name spaces and messaging to be totally separate, so that one client is unaware
of anything happening inside the other client's name space, such as new name publications and message
passing. This is similar to the way we protect networks with firewalls and NATs.

4.2 Dispatching
Dispatchers or dispatching policies are operations or algorithms defined over name binding sets. They
define the semantics of "interactions thru names". Based on Robin Milner's separation of calling and co-
calling, each dispatching algorithm has 2 parts:
sending (or sender) algorithm: corresponding to calling
defined over the set of bound Named_In (receiver) objects
may contain a message buffering mechanism inside the channel (at the Named_Outs)
receiving (or receiver) algorithm: corresponding to co-calling
defined over the set of bound Named_Out (sender) objects
may support high level messaging coordination patterns (such as Choice and Join)
The following are the major design considerations for dispatchers.

4.2.1 How message data move: push/pull, buffering


There are 2 basic models of passing messages/events from senders to receivers:
push model: message data are pushed by senders to receivers
pull model: message data are pulled by receivers from senders
Since Channel is for asynchronous messaging, mostly the following 2 dispatching categories are used:
push:
Dispatching variations can be: broadcast, round-robin, ...
Execution variations can be:
synchronous: the sending threads push messages all the way to receivers and invoke the
receiving callbacks directly
asynchronous: the sending threads push messages to receivers and dispatch the
receiving callbacks to executors for later execution
buffering+pull:
Messages are buffered inside the channel at the Named_Out side; receivers pull message data in two
ways:
sync/blocking receiver: in this model, receivers are active receiving threads that block waiting at the
Named_In, unblock when message data are available, and pull the data from the Named_Out
async receiver: async callback operations are registered to logical conditions of message arrival
(choice and join) and are dispatched to executors depending on message arrivals.
Message coordination patterns (choice and join) are applied in both the pull synchronous and
asynchronous models to decide when a synchronous receiving thread can be unblocked or an
asynchronous callback can be fired based on the available messages.
For message buffering inside the channel, we have various design choices:
synchronized queues with flow control: low water mark and high water mark
timeouts after which messages expire
4.2.2 How operations are performed: synchronous/asynchronous
When messages arrive, we have 2 choices for how dispatching operations and callbacks are performed:
synchronous: the sending thread carries out the dispatching and callbacks
asynchronous: dispatching or callbacks are scheduled in an executor and later executed by
the executor in a different calling context or thread
There are various designs of executors; [7] provides a detailed discussion of Java's executor design.
Different executors can run their threads at different scheduling priorities, and we can assign callbacks to
run in the proper executors according to the application's requirements.

4.2.3 Message passing coordination patterns


Join-calculus, Comega and CCR [5][6] define a few messaging coordination patterns regarding when
and how messages are consumed and callbacks are fired:
Choice: a combined registration of a group of <name, callback> pairs; whenever any "name" has a
message delivered, its associated callback is fired
Join: a callback is registered with a set of names; when messages become available on all the
names, the messages are taken from all the names "atomically" and the registered callback is
invoked.
In Channel, both choice and join are provided in synchronous and asynchronous forms.

4.2.4 Messages handling


life-time management of messages:
avoid data copying, pass pointers to messages instead of duplicating message data
wrap message data pointer inside boost::shared_ptr so that message data's life time management
is automatic
marshaling/demarshaling of messages:
use Boost.Serialization for marshaling/demarshaling
a channel could have multiple remote connections, each of which could use a different transport
(tcp/ip, soap, shared-memory) and on-wire message format (text/binary/xml). In Channel based
applications, a marshaler_registry can be created for each specific transport type and format. By
registering message data types with a marshaler_registry using ids as keys, internal
marshaling/demarshaling functions are created for the registered message data types and
invoked automatically when messages are sent to and received from remote channels. When a
channel is connected to a remote channel thru "streams", we must specify which
marshaler_registry to use for marshaling.
for each id/msg-type, we can have different marshaling settings:
explicitly register a message data type: internally a marshaler object will be created and
registered with the ids
use a globally registered default marshaler (if all ids use the same data structure)

4.3 Connection related


The following are design considerations related to channel connections.
4.3.1 Connections
There are 2 kinds of connections:
local connection: inside the same process, we can have multiple channels for different
purposes; we can connect these local channels to facilitate the communication among their
peers.
remote connection: we can connect a channel to a remote channel inside a different process or
on a different machine; the remote interface is represented as a "stream": a socket stream, a pipe, or a
message queue inside shared memory.
The connection object is a simple object, just containing the two peers/ends of the connection.
Ways to break a connection:
delete the connection object; the peers are deleted and the channels disconnect
if one peer channel is destroyed, the connection is destroyed automatically and the other
channel disconnects

4.3.2 Peer
the common interface for connection peers: interfaces and streams
interface:
proxy of peer channel
core of channel connection logic:
. how remote binding/unbinding events affect the local name space
. how messages are propagated from and to remote channels
stream:
Stream is used to wrap a remote transport connection (socket, pipe or message queue inside
shared memory).
In the earlier implementation of Channel on ACE [8], a Connector class was included as one of the core
classes to connect local and remote channels. The disadvantage of this design is that
Channel is tied to a specific architecture (such as thread-per-connection), making it difficult to
integrate Channel with other existing servers.
In plan9/inferno, when we mount a remote name space locally, the real operation is to mount a
descriptor (file, pipe, or socket connection) to a specific point in the name space.
Following this style, a remote channel connection connects/mounts a "stream" to a local
channel/name_space; the stream wraps a socket, pipe, or shared-memory message queue
connecting to a remote channel in another process or machine. This avoids interfering with servers'
internal designs, such as threading, so that Channel works well with both single-threaded async
and multi-threaded sync server designs.

4.4 "Unnamed" binding of output/input or points of tightly-coupled local interactions


As discussed in the "Overall Design Idea" section, message passing happens on the binding of
calling(the sender of dispatcher) and co-calling(the receiver of dispatcher).
All the above discussions focus on setting up this binding thru name-matching in name spaces.
"Binding thru names" provides a loosely coupled computing model. A agent or thread can perform or
provide its functionality thru the "names" which it publishes and subscribs in application channel. It
can be moved to another process or another machine and continue functioning as before as long as in
its new enviroment there is a channel connected to the original channel and the moved agent attaches to
the new channel with the same set of "names". However sometimes it may be too much
burden/overhead than benefit to set up a name space and assign proper names, if all we want is
performing "localized" computation based on message passing model.
In many message-passing based systems, threads (or processes in the CSP sense) communicate thru
"ports" or "channels" which are normal local objects, possibly with internal message queues. Choice/Join
arbiters work directly with these local objects. Pointers to these objects can be passed inside messages
to enable various message-passing based idioms. These provide a tightly coupled, localized model
inside the same address space.
From Channel's design perspective, these localized communication primitives can be encoded as
special kinds of binding sets of senders (named_out) and receivers (named_in). They are "unnamed",
not set up thru name matching in a name space. For example, in C++CSP there are One2OneChannel,
Any2OneChannel and One2AnyChannel. One2OneChannel can be encoded as the binding of one
"unnamed_out" and one "unnamed_in"; Any2OneChannel can be encoded as the binding of a group of
"unnamed_outs" and a single "unnamed_in"; One2AnyChannel can be encoded as the binding of a
single "unnamed_out" and a group of "unnamed_ins" (please note that CSP requires synchronous
rendezvous of sender and receiver, which can be implemented thru a special dispatcher). There is a
similar case with normal event dispatching systems, where application code directly attaches event
receivers (slots) to event sources (signals), not thru name-matching in a name space.
Channel provides generic functions to set up and break binding among any pair of named_out and
named_in:
template <typename name> void bind(name *named_out, name *named_in);
template <typename name> void unbind(name *named_out, name *named_in);
By means of these functions, we can set up any imaginable bindings (1-N, N-1, N-M) among
named_outs and named_ins, as sketched below.
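As an illustration only (not taken from the library's examples), the helper below uses the two generic
functions above to build and tear down an N-1, Any2One style binding; it assumes out1, out2 and in
point to already-created "unnamed" out/in objects of the same name type:

// sketch: wire two unnamed senders to one unnamed receiver (Any2One style)
template <typename name>
void make_any2one(name *out1, name *out2, name *in)
{
    bind(out1, in);    // first sender bound to the receiver
    bind(out2, in);    // second sender bound to the same receiver
}

// sketch: break the same bindings again
template <typename name>
void break_any2one(name *out1, name *out2, name *in)
{
    unbind(out1, in);
    unbind(out2, in);
}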
Various message passing systems use different idioms or patterns of binding sets. Channel provides
the following sample tightly coupled idioms thru "unnamed" in/out objects (unnamed_in_out.hpp):
Ports are objects with internal message buffering. Applications can send messages thru ports.
Choice/Join arbiters coordinate how messages are consumed from ports and how handlers are
called. (similar to CCR's ports)
Signal and Slot objects provide direct support for "localized" event dispatching. Slot objects
identify the bindings between the event source (signal) and callbacks; deletion of either the signal or the
slots removes these bindings automatically.
Ports and Signals are simple template specializations of Named_Out and Named_In with null_id
(unnamed!) and proper dispatchers (a pull dispatcher for Port and a push dispatcher for Signal/Slot). They
can be customized by template parameters just like normal channel entities, e.g. Port can be customized
with different queue types and Signal/Slot can be customized with different dispatching algorithms
(broadcast, round-robin, ...). Ports and Signals are well integrated with "named" entities:
Ports can participate in the same Choice/Join arbiters as "ids/names", so arbiters can coordinate
both local and remote messages. One important use of this is to implement the timeout or
exception exit of a Choice/Join: a timer generates events into a timeout port which is included
in a Choice; the Choice can then receive the timeout message and exit when it happens.
Ports and Signals can later be attached to names in name space; so they become normal
"named" entities and can enjoy binding thru name-matching and remote message passing as
usual.
4.5 Application architecture and integration
Channel is intentionally designed to be independent of threading models and connection strategies, so
Channel can be used to implement various applications with different threading and connection
designs:
event dispatching/callbacks based applications: such as GUI frameworks where there is no
explicit mentioning of thread (single threaded)
asynchronous I/O based server applications: such as servers based on Boost.Asio, eventlib or
libasync; in this kind of server, there may be a single or a few "main" threads driving the
reactive or proactive main event loop; all threads play an equal role.
active objects or communicating processes based design: this is the original target application
of Channel; such designs are popular in large scale distributed embedded systems. In these
designs, the whole system is partitioned into many interacting processes/threads, each of which
performs a specific functionality (defined mostly by its publishing/sending name-set and
subscribing/receiving name-set), maintains/owns a part of the system state (finite state machines)
and interacts with the others ONLY thru message passing. No one can directly change the
system state owned by another process/thread; change requests can only be sent to the owner
process/thread as messages, and the owner can reasonably reject them.
thread pool based server design: in these systems, the server's job is partitioned into scheduling
units - tasks - which are dispatched to a dedicated pool of threads for execution. There are two
kinds of threads in the system:
master threads: listen for incoming client requests and dispatch tasks to the pool
worker threads: members of the pool, waiting for tasks and executing them
Channel's independence of threading and connection also makes it easy to integrate Channel with
existing server applications of various designs. Basically, we write wrapper classes to glue Channel to
the existing server mechanisms:
"executor" wrappers to integrate Channel's asynchronous operation execution into the server's
threading model
"stream" wrappers to implement remote channel connections thru the server's existing connection
strategies

5. Classes

5.1 name space related

5.1.1 name spaces


The major purpose of a name space is to set up the bindings among named_outs and named_ins based
on id-matching and scoping rules. There are 3 kinds of name spaces:
linear name_space (with exact matching)
hierarchical name_space (with prefix matching)
associative name_space (implemented as a linear name space with associative matching)
The name space API is fixed and must support the following methods:
void bind_named_out(name *n);
void unbind_named_out(name *n);
void bind_named_in(name *n);
void unbind_named_in(name *n);

5.1.2 id_type and id_trait


As described above, various id_types (integers, strings, PODs, pathnames, tuples etc.) can be used for
different applications and name spaces. To support name binding operations, an id_type should support the
following operations:
id_types used for a linear name space support operator<() for linear "ordering" among ids
id_types used for a hierarchical name space support a "containment" operation to decide, between 2
ids, which contains the other
all id_types should define a "match" operation in their id_trait classes:
For ids in a linear name space, we define exact match in linear_id_trait
For ids in a hierarchical name space, we define prefix match in hierarchical_id_trait (i.e.
"/sports/*" will match both "/sports/basketball" and "/sports/baseball")
For ids in an associative name space, ids will be either regexes or tuples containing multiple
fields; for regex ids, regex pattern matching is used; for tuple ids, 2 ids match if all
their fields match or some fields are wildcards. Regex ids and tuple ids are defined in
assoc_id_trait.
To be able to use primitive data types as name space ids, the containment and matching operations are
defined inside the id_trait classes.
For channels to be connected with remote name spaces, a non-primitive id_type should define serialize()
methods to allow ids to be marshaled and demarshaled using Boost.Serialization.
Id_trait classes also contain definitions of the following 8 system ids:
static id_type channel_conn_msg;
static id_type channel_disconn_msg;
static id_type init_subscription_info_msg;
static id_type connection_ready_msg;
static id_type subscription_info_msg;
static id_type unsubscription_info_msg;
static id_type publication_info_msg;
static id_type unpublication_info_msg;
These ids are used internally for channel name space management. Applications can also subscribe to
these system ids to receive notifications about name space changes and add application logic to react
properly; for example:
start communication when remote channels connect or remote peers join.
perform special handling when a channel disconnects.

5.1.3 name and name binding callback


Class name is an internal class; however, application code will not use names directly. Applications
instantiate named_out and named_in to set up the message passing logic.
Class name contains the most important information in the name space: id, scope, membership and
binding_set.
When a Named_In or Named_Out is instantiated, a name binding callback can be specified to allow
the application to be notified when peers bind to the name. Its signature:
void binding_callback(name *n, typename name::binding_event e);
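For illustration only (chan_type is a hypothetical concrete channel instantiation, not a name from the
library), a free function matching this signature could look like:

// called by the framework when a peer binds to / unbinds from our name
void my_binding_callback(chan_type::name *n, chan_type::name::binding_event e)
{
    // e.g. start publishing data only after at least one peer has bound to the name
}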

5.1.4 named_out and named_in; publisher and subscriber


Classes named_out and named_in are where name space and dispatcher meet: in fact, they inherit from
both class name and the dispatcher.
Classes named_out_bundle and named_in_bundle are helper classes for conveniently using a group of
name bindings.
On top of named_out_bundle and named_in_bundle, classes publisher and subscriber provide direct
support for the publish/subscribe model.

5.1.5 unnamed in/out: port and signal/slot


Class port provides direct support for a localized, tightly coupled message passing model. Port inherits
from pull_dispatcher's sender, which inherits from the queue class. So a port can be used directly as a
message queue: applications can put messages into it and get messages from it. However, ports are mostly
used with choice/join arbiters.
Classes signal/slot support a localized "unnamed" event dispatching model.

5.1.6 binder, filter and translator


Filters and translators are defined to control name space changes during name space connection and
binding. Binders contain both filters and translators and are specified in channel connection function
calls. The APIs and dummy (default) operations of binders, filters and translators are defined here.

5.2 dispatching related

5.2.1 dispatchers
As we discussed above, dispatchers have 2 parts: the sending and receiving algorithms. A dispatcher's
API is not fixed; it depends on whether the dispatcher uses a push or pull model and whether it is
synchronous or asynchronous. The following sample dispatchers are provided:
push dispatchers:
broadcast_dispatcher:
senders/named_outs broadcast messages/events to all bound receivers/named_ins. This is the
most common event dispatching semantics.
round_robin_dispatcher:
senders/named_outs send messages/events to bound receivers/named_ins in a round-robin
manner. Simple server load balancing can be achieved this way.
always_latest_dispatcher:
senders/named_outs always send messages/events to the latest bound receiver/named_in.
This dispatcher simulates plan9's union directory (though most of that semantics is achieved
thru name space binding/connection). Suppose we use an id (such as "/dev/printer") to
represent a printer resource. To print something, we send a message to that id. On another
machine there is another printer bound to the same id in its local channel. To be able to
use the 2nd printer, we can connect or mount the remote channel to the local channel. Then,
if always_latest_dispatcher is used, all following printouts (sent to /dev/printer) will come
from the remote printer. The local printer will get print messages again after the channels
disconnect.
pull dispatcher:
In the pull dispatcher, messages/events are buffered inside the channel at the Named_Outs; high level
messaging coordination patterns - "arbiters" - are defined at the Named_Ins to decide when and how
messages are pulled from the Named_Outs and consumed by receiving threads or callbacks.
synchronous arbiters (choice_sync, join_sync):
Both senders and receivers are active threads. Messages are buffered inside the channel at the
sender/named_out side and the sending thread returns right away. Receiving threads block
waiting for messages at the synchronous arbiters. They unblock and process messages when
messages are available at the named_outs and their associated arbiters fire.
asynchronous arbiters (choice_async, join_async):
Callbacks are registered with asynchronous arbiters. Messages are buffered inside the channel
at the sender/named_out side and the sending thread notifies receivers before returning.
Depending on the arriving messages, the asynchronous arbiters decide which callbacks will
fire, and schedule them to execute in an executor. Join arbiters guarantee that related
messages are consumed atomically.

5.2.2 messages
Application message/event data can be of any data type: primitives, structs and classes. For remote
message passing, proper serialization functions must be defined using Boost.Serialization:
free serialization functions for non-intrusive serialization
message structs and classes can define serialize() methods; they should also define a default
constructor for serialization, otherwise save_construct_data/load_construct_data need to be
overridden.
Please refer to the tutorials for sample message definitions.
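As a minimal sketch (field names are illustrative, not from the tutorials), a message type with an
intrusive serialize() method and the default constructor required above could look like:

#include <string>
#include <boost/serialization/string.hpp>

struct chat_msg {
    std::string sender;
    std::string text;

    chat_msg() {}                 // default constructor required by Boost.Serialization
    chat_msg(const std::string &s, const std::string &t) : sender(s), text(t) {}

    template <typename Archive>
    void serialize(Archive &ar, const unsigned int /*version*/)
    {
        ar & sender;              // each field is marshaled/demarshaled thru the archive
        ar & text;
    }
};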

5.2.3 queues
Queues are used for message buffering inside the channel. One of the pull dispatcher's template
parameters is the queue type. Various applications can specify and use different queue types based on the
application's requirements and the queues' capabilities. Queues support the following common interface:
void put(elem_type & e);
void get(elem_type & e);

The following sample queue implementations are or will be provided:


unbounded_queue: a simple synchronized queue with unlimited buffer size, so senders are
never blocked.
bounded_queue: a synchronized queue bounded with a maximum number of buffered messages;
when queue buffer is full, the senders will be blocked till some messages are removed from the
queue.
dropping_queue: a synchronized queue bounded with a maximum number of buffered
messages; when queue buffer is full, newly added messages will force oldest messages to be
dropped and senders will never be blocked.
flow_controlled_queue (coming): a flow controlled queue supporting priority based message
enqueue and dequeue; modelled after ACE's message queue
timed_queue (coming): modelled after JavaSpace's "Lease on entries" mechanism; for each
message inserted into queue, a time-out value will be specified; the message will be dropped if
it is not consumed by receivers when its time-out expires.
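As a concrete illustration of the queue concept (a sketch only, not the library's implementation), a
bounded, synchronized queue supporting the put()/get() interface above could be written roughly as
follows, using Boost.Thread for synchronization:

#include <cstddef>
#include <deque>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>

// bounded_queue-like sketch: senders block in put() when the buffer is full,
// receivers block in get() when it is empty
template <typename elem_type>
class simple_bounded_queue {
public:
    explicit simple_bounded_queue(std::size_t max_size) : max_size_(max_size) {}

    void put(elem_type &e)
    {
        boost::mutex::scoped_lock lock(mutex_);
        while (queue_.size() >= max_size_)
            not_full_.wait(lock);          // block the sender until space is available
        queue_.push_back(e);
        not_empty_.notify_one();
    }

    void get(elem_type &e)
    {
        boost::mutex::scoped_lock lock(mutex_);
        while (queue_.empty())
            not_empty_.wait(lock);         // block the receiver until a message arrives
        e = queue_.front();
        queue_.pop_front();
        not_full_.notify_one();
    }

private:
    std::size_t max_size_;
    std::deque<elem_type> queue_;
    boost::mutex mutex_;
    boost::condition not_empty_, not_full_;
};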

5.2.4 executors
Executors allow us to avoid explicitly spawning threads for asynchronous operations, thus avoiding
thread life cycle overhead and resource consumption. Executors support the following common
interface, which allows applications to register asynchronous operations for later execution and to cancel
such registrations:
template <typename task_type>
async_task_base * execute(task_type task);
bool cancel(async_task_base *task);
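As a sketch only (async_task_base is the framework's task handle type; the forward declaration and the
behaviour here are assumptions for illustration), an executor-like class satisfying this interface that simply
runs tasks inline, in the spirit of in_place_executor, could look like:

class async_task_base;   // framework task handle type, forward-declared only for this sketch

class my_inline_executor {
public:
    // run the asynchronous operation immediately in the caller's thread/context
    template <typename task_type>
    async_task_base * execute(task_type task)
    {
        task();          // assumes the task is a nullary callable (e.g. a boost::function)
        return 0;        // nothing is left pending, so no task handle is returned
    }

    // nothing to cancel: tasks complete before execute() returns
    bool cancel(async_task_base * /*task*/)
    {
        return false;
    }
};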

The following sample executors are provided:


in_place_executor: just runs the asynchronous task in the current thread and current calling context.
delayed_executor: the asynchronous task is queued and the calling thread returns. The queued tasks
can be executed later by calling run().
asio_executor: a wrapper over asio's io_service.post() method; asynchronous tasks are
dispatched to asio's main thread to execute.
thread_pool_executor: a dedicated pool of threads executing submitted tasks.
threadpool_executor: a wrapper over Philipp Henkel's threadpool library
There are two places to plug executors into the framework:
channel-wide:
specify an executor when the channel is created. By default, all asynchronous operations
(event/message callbacks, name binding callbacks, ...) will be scheduled and executed in this
executor.
where callbacks are bound:
for example, some applications may want to give different priorities to handling different event
ids (or message types). We can create several executors with their threads running at different
scheduling priorities, and specify the proper executor when named_in and named_out are created.

5.3 connection related

5.3.1 global functions for connecting channels


There are 3 overloaded global functions for connecting channels:
one for connecting 2 local channels of the same type:
template <typename channel>
typename channel::connection* connect(channel &peer1, channel &peer2,
typename channel::binder_type *binder1 = NULL,
typename channel::binder_type *binder2 = NULL)
connects 2 local channels so that peers at both channels can communicate with each other
transparently. binder1 (containing a filter and a translator) defines how channel peer1's name space
will be changed.
one for connecting 2 local channels of different types:
template <typename channel1, typename channel2>
typename channel1::connection* connect(channel1 &peer1, channel2 &peer2,
typename channel1::binder_type *binder1 = NULL,
typename channel2::binder_type *binder2 = NULL);
one for connecting local channel to remote channels (streams):
template <typename channel, typename stream_t>
connection* connect(channel &peer,
stream_t * stream,
bool active,
typename channel::binder_type *binder = NULL)

Normally a connection to a remote channel is represented as a "stream" object (a tcp/ip socket
connection or a shared memory connection). This connect() function is used to connect a local
channel to a remote channel represented by the stream.
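A usage sketch (not from the source) of the first overload above, connecting two local channels of the
same type:

// sketch only: chan_type stands for some concrete channel instantiation
template <typename chan_type>
void link_channels(chan_type &chan1, chan_type &chan2)
{
    // connect the two local name spaces; with no binders, all names with
    // global/remote scope are merged between the two channels
    typename chan_type::connection *conn = connect(chan1, chan2);

    // ... peers in chan1 and chan2 can now exchange messages transparently ...

    delete conn;   // deleting the connection object breaks the connection again
}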

5.3.2 connection
Class connection represents the connection between 2 channels. Deleting a connection object will break
the connection between the 2 channels, and deleting either of the member channels will result in the
connection object being deleted.

5.3.3 peer and interface


Class peer defines the common base class of connection proxies such as interfaces and streams.
Normally application code will not need class peer, unless creating a new channel connection
mechanism such as SOAP based streams.
Class interface is the proxy between its owner channel and a peer channel. It contains all the logic for
how a remote name space is "mounted" at the local name space, how local name space changes
propagate to the remote name space, and vice versa. It is here that filters filter message ids and translators
translate incoming and outgoing messages.

5.3.4 streams
Streams are proxies for remote channels and wrap transport mechanisms. The following streams are
or will be provided:
asio_stream: a stream class using Boost.Asio socket to connect to peer channels.
asio_connector: a helper class providing 2 functions:
publishing local channels at specific ports (so that remote peer channel can
connect)
template <typename sock_conn_handler>
void async_accept(int port, sock_conn_handler hndl) ;
connecting to remote channels at their publication addresses (host, port):
template <typename sock_conn_handler>
void sync_connect(std::string host, std::string port, sock_conn_handler
hndl) ;
template <typename sock_conn_handler>
void async_connect(std::string host, std::string port, sock_conn_handler
hndl) ;
shmem_stream: a stream class using Boost.Interprocess shared memory message queues to
connect to channels in a separate process on the same node.
soap_stream (coming): uses the SOAP protocol to connect to remote channels

5.3.5 marshaling registry

5.4 platform abstraction policy and synchronization policy

5.4.1 platform abstraction


Platform independence is one key factor in Channel's portability. Channel's internal implementation
depends on some system facilities, such as mutexes, condition variables, timers and logging. Various
platforms have different levels of support and different APIs for these system facilities. Some Boost
libraries already provide nice wrappers over system facilities, such as Boost.Thread and Boost.Date_Time.
However, for some system functions, such as logging, Boost doesn't have an approved library yet. Class
boost_platform is a platform policy class defined to support platform independence. All the system
facilities Channel uses in its internal implementation are defined either as nested classes wrapped inside it
or as its static methods. To port Channel to a different software/hardware platform, one major task is to
reimplement the platform policy class using native functions (another is coping with compiler
differences). Take logging for example: if in the future there is a portable Boost library for it, we could
redefine the boost_platform class to interface to it. Otherwise, for a Windows specific application we can
implement the platform class logging API using the Windows event log facility; for a linux based
application, we can use syslog.

5.4.2 synchronization policy


Modeled after ACE's synchronization wrapper facades (ACE_Thread_Mutex, ACE_Null_Mutex,
ACE_Null_Condition, ...) and the Null Object pattern, two "no-op" classes, null_mutex and null_condition,
are defined. They follow the same interface as their counterparts in Boost.Thread and implement the
methods as "no-op" inline functions, which can be optimized away by compilers. Also modeled after
ACE's Synch_Strategy classes (MT_SYNCH, NULL_SYNCH) and the Strategized Locking pattern, two
synchronization policy classes are defined: mt_synch and null_synch. mt_synch is for multithreaded
applications and contains Boost.Thread's mutex/condition classes as nested types. null_synch is for
single-threaded applications; its nested types are the "null" types mentioned above. The synchronization
policy class is one of the channel template parameters: we use mt_synch for a channel in a
multithreaded application, or null_synch for a single-threaded application (such as pure event
dispatching) without incurring locking overhead. This usage differs from the platform independence
described above; it addresses application requirements and efficiency.
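A minimal sketch of the Null Object idea described above (illustrative only; the library's actual classes
and the exact null_synch template parameters are assumptions):

// no-op lock whose methods mirror the Boost.Thread mutex interface and compile away
class my_null_mutex {
public:
    void lock() {}
    bool try_lock() { return true; }
    void unlock() {}
};

// choosing a policy when instantiating a channel:
//   typedef channel<int> mt_chan;                                               // defaults to mt_synch
//   typedef channel<int, boost_platform, null_synch<boost_platform> > st_chan;  // no locking overhead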
6. Class Concepts and How to Extend the Channel Framework
One essential task of generic programming is to find the set of requirements each class/type must meet
so that the template framework can compile and operate properly. These requirements are called
"concepts" and include the following:
valid expressions
associated types
invariants
complexity guarantees
To extend the Channel framework, new classes/types must satisfy the requirements of the corresponding
"concept" so that the code can compile and run.
In the following discussion, we distinguish two kinds of requirements:
Primary requirements: must be satisfied to uphold the main framework design
Secondary requirements: imposed by the current concrete implementations

6.1 id_type and id_trait


1. Primary requirements
For each id_type, a partially specialized template class id_trait should be defined with the
following definitions:
nested/associated types: id_type
system internal message ids/names:
static id_type channel_conn_msg;
static id_type channel_disconn_msg;
static id_type init_subscription_info_msg;
static id_type connection_ready_msg;
static id_type subscription_info_msg;
static id_type unsubscription_info_msg;
static id_type publication_info_msg;
static id_type unpublication_info_msg;
match() method: define the id-matching algorithm
serialize() method: marshaling/demarshaling method for passing ids to a remote channel
(only needed when a user-defined class/struct is used as id_type)
For the code to work on Windows/VC++, the class definition should contain
BOOST_CHANNEL_DECL.
2. Secondary requirements
The current implementations impose the following secondary requirements:
linear name space
Since the current implementation uses std::map to implement the linear name space, a user-defined
id_type must define the following methods to satisfy the requirements of std::map:
bool operator< (const struct_id &id) const
bool operator== (const struct_id &id) const
bool operator!= (const struct_id &id) const
hierarchical name space
The hierarchical name space is implemented using a trie data structure; to support trie-related
operations, id_trait should add the following definitions:
static token_type root_token; //just a name for root trie node, not in name_space
static token_type wildcard_token;
static bool id1contains2(id_type id1, id_type id2)
Here is a detailed description of how to add an id_type and id_trait for an associative name_space
based on Linda-style associative lookup.
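To make the requirements above concrete, here is a hedged sketch of a user-defined struct used as
id_type, together with the shape of an id_trait specialization. The field names, the placeholder
message-id members, and the placement of serialize() on the id struct (Boost.Serialization style) are
assumptions for illustration; the actual Channel headers are authoritative.

// Sketch only: struct layout, trait member values, and where serialize() lives
// are assumptions; only the members listed above are required by the concept.
template <typename T> class id_trait;    // in real code this primary template comes from the Channel headers

struct struct_id {
  int family;                                       // hypothetical fields
  int type;
  bool operator<  (const struct_id &id) const {     // required by std::map (linear name space)
    return family < id.family || (family == id.family && type < id.type);
  }
  bool operator== (const struct_id &id) const { return family == id.family && type == id.type; }
  bool operator!= (const struct_id &id) const { return !(*this == id); }
  template <typename Archive>                       // Boost.Serialization hook for remote channels
  void serialize(Archive &ar, const unsigned int /*version*/) { ar & family; ar & type; }
};

template <>
class id_trait<struct_id> {                         // trait specialization for struct_id
public:
  typedef struct_id id_type;
  // system internal message ids/names (values are placeholders)
  static id_type channel_conn_msg;
  static id_type channel_disconn_msg;
  // ... plus the remaining *_info_msg ids listed above ...
  static bool match(id_type a, id_type b) { return a == b; }   // exact-match rule
};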

6.2 name space


1. Primary requirements
nested/associated types:
id_type;
id_trait;
synch_policy;
executor;
platform;
name;
name space management methods:
void bind_named_out(name *n)
void unbind_named_out(name *n)
void bind_named_in(name *n)
void unbind_named_in(name *n)
2. Secondary requirements
name space query related:
template <typename Predicate>
void bound_ids_for_in(Predicate p, std::vector<id_type> &ids)
template <typename Predicate>
void bound_ids_for_out(Predicate p, std::vector<id_type> &ids)
executor_type * get_exec(void)
Please refer to linear_name_space.hpp and hierarchical_name_space.hpp for detailed code.
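For orientation, here is a hedged skeleton of a custom name space satisfying the requirements above.
The template parameter list, the placeholder framework types, and the empty method bodies are
assumptions for illustration only; linear_name_space.hpp and hierarchical_name_space.hpp remain the
authoritative reference.

// Skeleton only: shows the concept's required nested types and methods; placeholder
// types stand in for framework classes, and internal storage is omitted.
#include <vector>

class framework_name;                        // stands for the framework's "name" class
template <typename T> class my_id_trait;     // stands for the id_trait from section 6.1

template <typename idtype, typename exec_type, typename synchpolicy>
class my_name_space {
public:
  // required nested/associated types
  typedef idtype              id_type;
  typedef my_id_trait<idtype> id_trait;
  typedef synchpolicy         synch_policy;
  typedef exec_type           executor;
  typedef framework_name      name;          // the framework's per-id "name" object
  // typedef ... platform;                   // platform policy, omitted in this sketch

  // required name space management methods
  void bind_named_out(name *n)   { /* insert n into the out table, match against bound ins */ }
  void unbind_named_out(name *n) { /* remove n and drop its bindings */ }
  void bind_named_in(name *n)    { /* insert n into the in table, match against bound outs */ }
  void unbind_named_in(name *n)  { /* remove n and drop its bindings */ }

  // secondary (query) requirements
  template <typename Predicate>
  void bound_ids_for_in(Predicate p, std::vector<id_type> &ids)  { /* collect ids whose in-bindings satisfy p */ }
  template <typename Predicate>
  void bound_ids_for_out(Predicate p, std::vector<id_type> &ids) { /* collect ids whose out-bindings satisfy p */ }
  executor * get_exec(void) { return exec_; }

private:
  executor *exec_;
};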

6.3 dispatcher
Dispatchers are used as policy classes for the channel template. As discussed above, each dispatcher
contains two algorithms: sending and receiving.
Dispatchers' APIs are not fixed; they depend on whether the dispatcher uses a push or pull model and
whether it is synchronous or asynchronous. The APIs of the provided dispatchers follow the general
convention of offering various send() and recv() methods.
1. Primary requirements
Each dispatcher class should define two nested types:
sender
recver
These nested types are the parent classes of named_in and named_out.
Inside the dispatcher's nested types (the sender and receiver classes), the dispatching algorithms
retrieve the name binding set from the associated "name" object.
2. Secondary requirements
For dispatchers used in channel types with possible remote connections, the nested receiver
classes expect the callback function's signature to be:
void callback(id_type id, boost::shared_ptr<void> msg)
This requirement comes from the implementation of the "interface" class.
Here is a detailed description of a sample pull dispatcher.
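Below is a hedged skeleton of a dispatcher policy with the two required nested types. How the binding
set is obtained and how messages are delivered are left as comments, since those details vary per
dispatcher; the provided broadcast and pull dispatchers in the sources are the authoritative examples.

// Skeleton only: nested sender/recver types as required by the dispatcher concept;
// the binding-set access and delivery details are assumptions for this sketch.
#include <boost/shared_ptr.hpp>

template <typename name_space, typename platform>
class my_dispatcher {
public:
  typedef typename name_space::id_type id_type;

  class sender {                     // parent class of named_out
  public:
    void send(boost::shared_ptr<void> msg) {
      // retrieve the binding set from the associated "name" object and
      // deliver msg to every bound receiver (broadcast-style sketch)
    }
  };

  class recver {                     // parent class of named_in
  public:
    // delivery entry point; for remotely connected channels the registered
    // callback is expected to have the signature
    //   void callback(id_type id, boost::shared_ptr<void> msg)
    void push(id_type id, boost::shared_ptr<void> msg) {
      // invoke the registered callback, or queue msg for a later pull
    }
  };
};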

6.4 executor

6.5 queue

6.6 streams/connectors (or integrate into new architecture)

7. Compare Channel to others (Plan9, STL)

7.1 Compare Unix/Plan9/Inferno file-system name space and Channel's name space
In Unix and other OSes, the file system provides the machine-wide hierarchical name space for most
system resources. Applications use resources mostly through the standard file system calls:
open/close/read/write. By mounting remote file systems, remote name spaces (and resources) can be
imported and accessed transparently by local applications.
Plan9/Inferno push this idea further with three ideas: 1. all resources are represented as files; 2. each
process has its own private name space, which can be customized according to the application's
requirements; 3. a uniform protocol, 9P, is used for all remote message passing. [1][2]

Channel provides a process-local name space for asynchronous message passing and event dispatching.
Compared to the Unix/Plan9 name space:
Channel's name space is a light-weight user-space data structure; a process can create and use
multiple name spaces for different purposes. Unix (Plan9/Inferno)'s name space is a more
fundamental kernel feature, well integrated with all system facilities (shell, window system, ...),
and each process has only one.
File-system name spaces are based on function-call (local or remote procedure call) or request-
response semantics. Channel's name space is for asynchronous message passing or one-way
requests.
File-system name spaces follow the normal client-server model: file servers purely serve, i.e.
clients import names from servers, but servers never import names from clients. Channel uses a
peer-to-peer model; connected channels import names from each other for communication.
In file-system name spaces, names refer to presumably stable/permanent entities (either disk
files or long-running servers); file name spaces are relatively static, i.e. a specific name mostly
refers to the same resource, either local or from a specific server, and operations on names with
stale/empty bindings result in serious errors. Channel name spaces are purely dynamic. It is
perfectly valid to have a Named_Out object in a name space without a bound Named_In object
(since message subscribers may join later). The binding of a name (its bound senders or
receivers) can differ between one invocation and the next. Just as Robin Milner has clarified [3]:
"... is built upon the idea that the respondent to (or referent of) a name
exists no more persistently than a caller of the name. In other words, the notions of
calling and responding are more basic than the notions of caller and respondent; every
activity contains calls and responses, but to have a persistent respondent to x one that
responds similarly to every call on x is a design choice that may be sensible but is
not forced."
File systems identify entities by string path names in hierarchical directories. Channel uses
different naming schemes for different applications: linear name space (such as integer ids),
hierarchical name space (such as string path names), and associative name space (Linda style).
In Plan9, request dispatching is unicast: only one server gets a request and serves it. Channel can
support various dispatching policies: broadcast, unicast, buffered, ...
The file-system API is stream oriented: byte streams are read from or written to files. Channel's
API is discrete-message oriented.

7.2 Compare STL and Channel


Some mappings between STL and Channel concepts:
containers (sequence, associative) <=> name spaces (linear/hierarchical/associative)
elements in a container <=> names (the units/elements of a name space)
iterator range (target of algorithms) <=> name binding set (sender->receiver(s), receiver->sender(s)), the target of dispatchers
algorithms <=> dispatchers
Dispatchers are defined over the name bindings of senders and receivers, which are provided by the
name space; similarly, STL algorithms are defined over an iterator range [begin_iterator, end_iterator),
which is provided by a container.

8. Reference Links
[1] Preface to the Second (1995) Edition (Doug McIlroy)
[2] The Use of Name Spaces in Plan 9 (Rob Pike et al.)
[3] What's in a name? (Robin Milner)
[4] Turing, Computing and Communication (Robin Milner)
[5] Comega
[6] CCR
[7] Java's executor
[8] http://channel.sourceforge.net
