iDirect, Inc.
International Headquarters
13865 Sunrise Valley Drive
Herndon, VA 20171
www.iDirect.net
(888) 362-5475
(703) 648-8080
Figures
Figure 1. Sample iDirect Network ...................................................... 3
Figure 2. iDirect IP Architecture – Multiple VLANs per Remote .................... 5
Figure 3. iDirect IP Architecture – VLAN Spanning Remotes ........................ 6
Figure 4. iDirect IP Architecture – Classic IP Configuration ......................... 7
Figure 5. iDirect IP Architecture - TDMA and iSCPC Topologies .................... 8
Figure 6. Double-Hop Star Network Topology ........................................10
Figure 7. Single-Hop Mesh Overlay Network Topology ..............................11
Figure 8. Integrated Mesh and Star Network .........................................13
Figure 9. Segregated Mesh and Star Networks .......................................14
Figure 10. Mesh Private Hub ...........................................................15
Figure 11. Mesh VSAT Sizing ............................................................22
Figure 12. Uplink Power Control .......................................................26
Figure 13. Remote and QoS Profile Relationship ....................................37
Figure 14. iDirect Packet Scheduling Algorithm .....................................40
Figure 15. Sticky CIR Custom Keys .....................................................46
Figure 16. C/N Nominal Range .........................................................53
Figure 17. TX Initial Power Too High ..................................................54
Figure 18. TX Initial Power Too Low ..................................................55
Figure 19. Global NMS Database Relationships ......................................58
Figure 20. Sample Global NMS Network Diagram ...................................59
Figure 21. Protocol Processor Architecture ..........................................64
Figure 22. Sample Distributed NMS Configuration ..................................66
Figure 23. dbBackup and dbRestore with a Distributed NMS .....................68
Figure 24. Downstream Data Path .....................................................73
Figure 25. SCPC TRANSEC Frame .......................................................74
Figure 26. Upstream Data Path ........................................................75
Figure 27. TDMA TRANSEC Slot .........................................................76
Figure 28. Key Distribution Protocol ..................................................78
Figure 29. Key Rolling and Key Ring ...................................................79
Figure 30. Host Keying Protocol ........................................................80
Figure 31. Overlay of Carrier Spectrums ............................................ 100
Figure 32. Adding an Upstream Carrier By Reducing Carrier Spacing .......... 103
PURPOSE
The purpose of this Technical Reference Guide is to give you detailed technical information
about iDS Release 7 (major) features. It progressively explains new and enhanced features that
are available in the different 7.x (minor) releases.
INTENDED AUDIENCE
The intended audience for this guide is network operators using the iDirect iDS system, network
architects, and anyone upgrading to iDS Release 7.x.
Note: It is expected that the user of this material has attended the iDirect IOM training
course and is familiar with the iDirect Star network solution and associated
equipment.
DOCUMENT CONVENTIONS
This section illustrates and describes the conventions used throughout the manual. Take a look
now, before you begin using this manual, so that you can correctly interpret the information
presented.
There are important notes throughout this guide that are presented as follows:
Note: Text written with bold italic indicates note information. This information is of
interest to you, and pertains to the part of the procedure in which it is listed. Make
sure you review this information before you proceed.
GETTING HELP
The iDirect Technical Assistance Center (TAC) is available to help you 24x7x365. Software
releases, upgrades and patches, documentation that supports our products, and an FAQ page are
available on the TAC web page. Please access our TAC web page at: http://tac.idirect.net.
If you are unable to find the answers or information you need, you can contact the TAC at (703)
648-8151.
SYSTEM OVERVIEW
An iDirect network is a satellite-based TCP/IP network with a Star topology in which a Time
Division Multiplexed (TDM) broadcast downstream channel from a central hub location is shared
by a number of remote nodes. An example iDirect network is shown in Figure 1.
The iDirect Hub equipment consists of an iDirect Hub Chassis with Hub Line Cards, a Protocol
Processor (PP), a Network Management System (NMS) and the appropriate RF equipment. Each
remote node consists of an iDirect broadband router and the appropriate external VSAT
equipment. The remotes transmit to the hub on one or more shared upstream carriers using
Deterministic Time Division Multiple Access (D-TDMA), based on dynamic timeplan slot
assignment generated at the Protocol Processor.
Beginning with iDirect Release 7, a Mesh overlay can be added to the basic Star network
topology, allowing traffic to pass directly between remote sites without traversing the hub. This
allows real-time traffic to reach its destination in a single satellite hop, significantly reducing
delay. It also saves the bandwidth required to retransmit Mesh traffic from the hub to the
destination remote. For a description of the iDirect Mesh overlay architecture, see “Mesh
Technical Description” on page 11.
The choice of upstream carriers is determined either at network acquisition time or dynamically
at run-time, based on a network configuration setting. iDS software has features and controls
that allow the system to be configured to provide QoS and other traffic engineered solutions to
remote users. All network configuration, control, and monitoring functions are provided via the
integrated NMS. The iDS software provides packet-based and network-based QoS, TCP
acceleration, 3-DES or AES link encryption, local DNS cache on the remote, end-to-end VLAN
tagging, dynamic routing protocol support via RIPv2 over the satellite link, multicast support via
IGMPv2, and VoIP support via voice optimized features such as cRTP.
An iDirect network interfaces to the external world through IP over Ethernet via 10/100 Base-T
ports on the remote unit and the Protocol Processor at the hub.
IP ARCHITECTURE
The following figures illustrate the basic iDirect IP Architecture with the different levels of
configuration available to you:
• Figure 2, “iDirect IP Architecture – Multiple VLANs per Remote”
• Figure 3, “iDirect IP Architecture – VLAN Spanning Remotes”
• Figure 4, “iDirect IP Architecture – Classic IP Configuration”
iDS Release 7.0 allows you to mix traditional IP-routing-based networks with VLAN-based
configurations. This capability directly supports customers that have conflicting IP address
ranges, and supports multiple independent customers at a single remote site by allowing
multiple VLANs to be configured directly on the remote.
In addition to end-to-end VLAN connectivity, the system supports RIPv2 in an end-to-end manner,
including over the satellite link; RIPv2 can be configured on a per-network-interface basis.
In addition to the network architectures discussed so far, the iDirect iSCPC solution allows you to
configure, control and monitor point-to-point Single Carrier per Channel (SCPC) links. These
links, sometimes referred to as “trunks” or “bent pipes,” may terminate at your teleport, or
may be located elsewhere. Each end-point in an iSCPC link sends and receives data across a
dedicated SCPC carrier. As with all SCPC channels, the bandwidth is constant and available to
both sides at all times, regardless of the amount of data presented for transmission. SCPC links
are less efficient in their use of space segment than are iDS TDMA networks. However, they are
very useful for certain applications. Figure 5 shows an iDirect system containing an iSCPC link
and a TDMA network, all under the control of the NMS.
The iDirect Broadband VSAT network is a complete turn-key solution for broadband IP
connectivity over satellite. The iDirect technology combines the industry’s fastest data rates
with the leading satellite access methods to provide the most reliable and bandwidth efficient
solutions to meet voice, video, and data transmission requirements. An iDirect networking
solution can be implemented for Point-to-Point (SCPC), Star, or Mesh topology application
requirements.
The iDirect Mesh offering functions as a full-Mesh solution. However, it is implemented as a Mesh
overlay network superimposed on your iDirect Star network. The Mesh overlay provides direct
connectivity between remote terminals with a single trip over the satellite, cutting the
latency in half and reducing satellite bandwidth requirements.
For example, consider a Voice over IP (VoIP) call from remote User A to remote User B in a Star
network (Figure 6).
In the network shown in Figure 6, the one-way transmission delay from user A to user B over a
geosynchronous satellite averages 500 ms. The extended length of the delay is due to the
double-hop nature of the transmission: remote A to the satellite, the satellite to the hub, the
hub back to the satellite, and the satellite to remote B. This transmission delay, added to the
voice processing and routing delays in each terminal, results in an unacceptable quality of
service for voice. In addition, the remote-to-remote transmission requires twice as much
satellite bandwidth as is required for a single-hop call.
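The double-hop delay figure above can be sanity-checked with simple propagation arithmetic. The sketch below assumes a slant range of roughly 38,500 km per ground-satellite leg; this is an illustrative value, as the actual range depends on each station's look angle:

```python
# Back-of-envelope propagation delay for single- vs double-hop satellite paths.
# SLANT_RANGE_KM is an assumed, typical ground-to-satellite distance.

C_KM_PER_S = 299_792.458      # speed of light
SLANT_RANGE_KM = 38_500       # assumed per-leg distance to a geosynchronous satellite

def one_way_delay_ms(hops: int) -> float:
    """One-way propagation delay in ms; each satellite hop is two legs (up and down)."""
    legs = 2 * hops
    return legs * SLANT_RANGE_KM / C_KM_PER_S * 1000

single = one_way_delay_ms(1)   # remote A -> satellite -> remote B
double = one_way_delay_ms(2)   # remote A -> sat -> hub -> sat -> remote B

print(f"single hop: {single:.0f} ms, double hop: {double:.0f} ms")
```

The double-hop result lands near the ~500 ms average quoted above, before any voice processing and routing delays are added.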
A more cost effective use of satellite bandwidth and improved quality of service for real-time
traffic can be achieved by providing remote-to-remote connections over a single satellite hop,
as provided in Mesh networks (Figure 7).
In a full-Mesh network, all remotes can communicate directly with one another. Such a network
typically consists of a master terminal, which provides network management and network
synchronization, and remote user terminals.
One advantage of the iDirect Mesh topology is that all remote terminals are part of the Star
network. This allows the monitor and control functions and the timing reference for the Mesh
network to be provided by the existing hub equipment over the SCPC downstream carrier.
In an iDirect Mesh network, the hub broadcasts to all remotes on the Star TDMA outbound
channel. This broadcast sends user traffic and the control and timing information for the entire
network of inbound Mesh and Star channels. The Mesh remotes transmit user data on Mesh TDMA
inbound channels, which other Mesh remotes are configured to receive.
The Mesh remotes receive and listen to a single Mesh inbound using the second demodulator on
the Indoor Unit (IDU), allowing them to communicate directly.
Since the hub also receives and listens to the Mesh inbound channel, communication between
Mesh remotes and the hub is the same for Mesh remotes as for Star remotes.
Note: iDirect Mesh technology is logically a full-Mesh network topology. All Mesh remotes
can communicate directly with each other (and the hub) in a single-hop. This is
achieved with Mesh channel(s) laid over a single Star outbound channel and is
referred to as a Star/Mesh configuration. When referring to the iDirect product
portfolio, “Star/Mesh” and “Mesh” are synonymous.
Physical Topology
You can design and implement a Mesh network topology as either Integrated Mesh and Star, or
Segregated Mesh and Star.
Integrated Mesh and Star
On an existing hub outbound and infrastructure, the Network Operator uses the current outbound
channel for the Star network, but adds one or more Mesh inbound channels. The existing
outbound is used both for current remotes in the Star network and for newly added remotes in
the Mesh configuration. The resulting hybrid network, which includes Star and Mesh
sub-networks, is shown in Figure 8.
[Figure 8: Hub with a combined Star/Mesh outbound, a Star inbound serving the Star remote group, and a Mesh inbound serving the Mesh remote group.]
Multiple Mesh and Star inroute groups may co-exist in a single network. Each Mesh inroute group
uses its own inbound channel for remote-to-remote traffic within the respective group and for
Star return traffic. There are no limitations on the number or combination of inroute groups in a
network, other than the bandwidth required and slot availability in a hub chassis for each
inroute.
Segregated Mesh and Star
In Phase 1, the Network Operator creates a new outbound channel and one or more inbound
channels, resulting in a totally segregated Mesh network. This can be achieved on two product
platforms:
• Hub Mesh: Separate outbound carriers and separate inbound carrier(s) on the iDirect 15000
series™ Satellite Hub (see Figure 9).
[Figure 9: Hub with separate Star outbound and inbound carriers for the Star remote group, and separate Mesh outbound and inbound carriers for the Mesh remote group.]
• Mesh Private and Mini Hub: A standalone segregated Mesh option with a single outbound
carrier and a single inbound carrier only on the iDirect 10000 series™ Private and Mini
Satellite Hub (see Figure 10).
[Figure 10: Private Hub serving a Mesh remote group over a single Mesh outbound and inbound carrier pair.]
Network Topology
iDirect Mesh technology is a full-Mesh network combining a mixture of a Star outbound channel
with Mesh inbound channel(s). It provides options for you to design your network topologies to
meet the traffic requirements.
In Phase 1 of the Mesh product, a single inbound channel is used for both the Star return traffic
and the remote-to-remote traffic. (Future releases will support multiple inbound carriers per
Mesh inroute group.) It is important to realize that in Phase 1, all unreliable (un-accelerated)
traffic takes a single hop, whereas all accelerated traffic takes a double-hop between remote peers.
Note: This applies only if TCP acceleration is enabled. This must be taken into
consideration when sizing the outbound channel bandwidth.
A network may consist of multiple Star and Mesh inroute groups. All un-accelerated traffic
within a Mesh inroute group takes a single hop. However, all traffic between inroute groups
takes a double-hop via the hub. For example, in Figure 9, “Segregated Mesh and Star Networks,”
on page 16, a remote in Mesh inroute group A can communicate with a remote in Mesh inroute
group B via the hub.
In certain networks, the additional outbound traffic required for double-hop traffic may not be
acceptable. For example, in a network where almost all the traffic is remote-to-remote, there is
no requirement for a large outbound channel, other than for the accelerated traffic. In these
cases, a mode of operation is supported that disables TCP acceleration for the entire Mesh
inroute group. When this option is selected, TCP acceleration is disabled both from the hub to
the remotes and between the remotes. In this mode, all remote-to-remote traffic takes a single
hop.
Note: Because acceleration (sometimes called “spoofing”) is disabled in this mode, each
TCP session is limited to a maximum of 128 kbps due to latency introduced by the
satellite path.
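The 128 kbps figure is consistent with the classic TCP throughput bound of window size divided by round-trip time. The sketch below assumes an 8 KB receive window and a ~500 ms round trip over the single-hop satellite path; both values are illustrative assumptions, not iDirect specifications:

```python
# Un-accelerated TCP throughput is bounded by window_size / RTT.
# Assumed values: an 8 KB TCP receive window and a ~0.5 s satellite RTT.

def max_tcp_throughput_kbps(window_bytes: int, rtt_s: float) -> float:
    """Window-limited TCP throughput in kbps (1000 bits per kbit)."""
    return window_bytes * 8 / rtt_s / 1000

print(max_tcp_throughput_kbps(8 * 1024, 0.5))   # ~131 kbps, i.e. roughly 128 kbps
```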
HARDWARE REQUIREMENTS
This section describes the hardware requirements for iDirect networks. Where possible, the iDS
iBuilder support software enforces the hardware requirements during network configuration.
The Outbound TDM loopback channel and the Inbound D-TDMA channel must take the same RF
path at the Hub. The Uplink Control Protocol (UCP) assumes that the frequency offsets that are
introduced in the Hub down-conversion equipment and the signal strength degradations in the
downlink path are common to both the Outbound TDM loopback channel and the Inbound D-
TDMA channel. UCP does not work correctly, and Mesh peer remotes cannot communicate
directly with each other, if the hub RFT uses different equipment for each channel.
• The inbound carrier must be demodulated by either an M1D1 or M0D1 iNFINITI hub line card.
• Only iNFINITI Private Hubs support Mesh for both the outbound and inbound carriers.
Minihub-15, Minihub-30 and Netmodem II Plus Private Hubs do not support Mesh.
• If an LNB is used at the hub (Hub Chassis or Private Hub), it must be an externally referenced
PLL downconverter LNB.
• Only iNFINITI series 53xx, 73xx, and iCONNEX-R remote models support Mesh.
• In addition to the correct sizing of the ODU equipment (remote antenna and remote BUC), a
PLL LNB must be used for the downconverter at the remote.
Note: Compared to Star VSAT networks, where the small dish size and low power BUC are
acceptable for many applications, a Mesh network typically requires both larger
dishes and BUC. (See “Network Considerations” on page 23.)
Transponder Usage
The Outbound and Inbound channels must use the same transponder.
Note: The outbound echo is demodulated on the same line card (M1D1 only) that modulates
the outbound channel. This line card is capable of demodulating a Star or Mesh
inbound channel using its primary demodulator.
• The outbound channel supporting a Mesh network carries both user data and the network
monitoring and control information used to control the Mesh inbound channels, including
timing, slot allocation, and others.
• The hub is the only node in a Mesh network that transmits on the Mesh Outbound channel.
Data and VoIP packets that need to be sent from the hub to remotes are sent on this shared
broadcast channel. The outbound channel is also used to route network control information
from the centralized Network Management System (NMS) and to send dynamic bandwidth
allocation changes to the remotes.
Each Mesh inroute group supports one inbound D-TDMA channel. This shared access channel
provides data and VoIP connectivity between Mesh remotes and from the remotes to the hub.
Although the hub receives and demodulates the Mesh inbound, it does not transmit on this
channel. The remote terminals are assigned transmit time slots on the inbound channels based
on the dynamic bandwidth allocation algorithms provided by the hub.
Dynamic Allocation: Dynamic bandwidth is assigned only to remote terminals that need to
transmit data; this bandwidth is taken away from terminals that are idle. These allocation
decisions are made several times a second by the hub, which constantly monitors the
bandwidth demands of the remote terminals. The outbound channel is then used to transmit the
dynamic bandwidth allocation of the Mesh inbound carriers.
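The demand-driven behavior described above can be sketched as a simple proportional allocator run once per allocation interval. This is an illustration only, not iDirect's actual scheduling algorithm; the names and the leftover-slot policy are assumptions:

```python
# Minimal sketch of demand-based TDMA slot allocation at the hub.
# Idle remotes receive no traffic slots; active remotes share the frame
# roughly in proportion to their queued demand.

def allocate_slots(demands: dict[str, int], slots_per_frame: int) -> dict[str, int]:
    active = {r: d for r, d in demands.items() if d > 0}
    if not active:
        return {r: 0 for r in demands}
    total = sum(active.values())
    alloc = {r: (d * slots_per_frame) // total for r, d in active.items()}
    # hand slots left over from integer division to the highest-demand remotes
    leftover = slots_per_frame - sum(alloc.values())
    for r in sorted(active, key=active.get, reverse=True)[:leftover]:
        alloc[r] += 1
    return {r: alloc.get(r, 0) for r in demands}

print(allocate_slots({"rmt1": 300, "rmt2": 100, "rmt3": 0}, 40))
# -> {'rmt1': 30, 'rmt2': 10, 'rmt3': 0}
```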
Single-Hop: Data is able to traverse the network directly from a remote terminal to another
remote terminal with a single trip over the satellite. This is critical for delay-sensitive
applications, such as voice and video connections.
Within the Mesh topology, all such iDirect features remain valid and available. The iDirect Mesh
technology respects application and system QoS rules, such as minimum information rate,
committed information rate, and maximum information rate.
Note: Allowing only non-TCP traffic to be transmitted directly from one remote to another
adds to the QoS functionality within the iDirect platform. By default, allowing only
the traffic that benefits from a single hop between remotes results in fewer
configuration issues for the Network Operator. Mesh inbound channels can be scaled
appropriately for time-sensitive traffic such as voice and video.
Routing
Prior to the Mesh feature, all upstream data on a remote was routed over the satellite to the
Protocol Processor. With the introduction of Mesh, additional routing information is provided to
each remote in the form of a routing table. This table contains routing information for all
remotes in the Mesh inroute group and the subnets behind those remotes. The table is
periodically updated based on the following:
• Additions and deletions of new remotes to the Mesh inroute group
• Additions and deletions of static routes in the NMS if RIP is enabled
• Failure conditions of the remote or Hub Line Card
The Mesh routing table is periodically multicast to all remotes in the Mesh inroute group.
In the event of a failed outbound loopback signal at the hub, the Mesh routing table is updated
to reflect that all traffic to and from all remotes is routed through the hub. This allows the
remote to stay in the network but the remote operates as if in a Star network. It reverts to the
double-hop method of remote-to-remote connectivity. Once the hub detects the outbound
loopback signal again, the Mesh routing table is updated and the remote rejoins the Mesh
network.
This is particularly true in situations in which a central call manager is used at the hub location
to coordinate call setup.
NETWORK CONSIDERATIONS
This section discusses the following topics with respect to iDirect Mesh networks: “Link Budget
Analysis”, “Uplink Control Protocol (UCP)”, and “Bandwidth Considerations”.
In a Star network, the inbound channel is configured to operate at a point lower than the EPEBW
point. A Mesh inbound channel operates at or near EPEBW. The link budget analysis provides a
per-carrier percentage of transponder power or Power Equivalent Bandwidth (PEB) where the
availability of the remote-remote pair is met. For a given data rate, this PEB is determined by
the worst-case remote-to-remote (or possibly remote-to-hub) link. The determination of BUC
size, antenna size, FEC rate and data rate is an iterative process designed to find the optimal
solution.
Once determined, the PEB is used as the target or reference point for sizing subsequent Mesh
remotes. It can be inferred that a signal reaching the satellite from any other remote at the
operating or reference point is detected by the remote in the worst-case EIRP contour (assuming
fade is not greater than the calculated fade margin). Remote sites in more favorable EIRP
contours may operate with a smaller antenna/BUC combination.
Note: iDirect recommends that an LBA be performed for each site to determine optimal
network performance and cost.
This section outlines the general tasks for determining a Mesh link budget. Refer to Figure 11.
1. Reference Mesh VSAT: Using the EIRP and G/T footprints of the satellite of interest and the
region to be covered, determine the current or future worst-case site (Step 1). The first link
calculation is this worst-case site back to itself (Step 2). Evaluate various combinations of
antenna size, HPA size, and FEC to find the combination that provides the most efficient
transponder usage and practical VSAT sizing for the desired carrier rate (Steps 3 and 4).
The percentage of transponder power, or Power Equivalent Bandwidth (PEB), required is the
reference point for subsequent link budgets.
2. Forward/Downstream Carrier: Using the reference site and its associated antenna size
determined in Task 1, determine the combination of modulation and FEC that provides the
most efficient transponder usage.
3. Successive Mesh VSATs: The sizing of additional sites is a two-step process, with the first link
budget sizing the antenna and the second sizing the HPA.
• Antenna Size: Calculate a link budget using the Reference VSAT as the transmit site and
the new site as the receive site. Using the same carrier parameters as those for the
Reference site, the antenna size is correctly determined when the PEB is less than or
equal to the reference PEB.
• HPA Size: Using the same carrier parameters as those used for the Reference site, this
analysis determines the required HPA size.
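The antenna-sizing acceptance test in Task 3 reduces to comparing a candidate site's required PEB against the reference PEB. In the sketch below, the antenna/PEB pairs are placeholder values that would come from real link budget calculations; the function simply picks the smallest antenna that satisfies the criterion:

```python
# Antenna sizing criterion: a candidate site is correctly sized when the PEB
# its link requires is less than or equal to the reference PEB established for
# the worst-case Reference VSAT. All numbers here are illustrative.

def smallest_acceptable_antenna(options: list[tuple[float, float]],
                                ref_peb_pct: float):
    """options: (antenna_m, required_peb_pct) pairs, where a larger antenna
    needs less transponder power. Returns the smallest acceptable antenna,
    or None if no option closes the link within the reference PEB."""
    for antenna_m, peb_pct in sorted(options):
        if peb_pct <= ref_peb_pct:
            return antenna_m
    return None

# hypothetical LBA results for one site, against a 2.4% reference PEB
print(smallest_acceptable_antenna([(1.2, 3.1), (1.8, 2.3), (2.4, 1.9)], 2.4))  # 1.8
```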
Frequency
In a Star configuration, frequency offsets introduced to the upstream signal (by frequency
down-conversion at the remote's LNB, up-conversion at the remote's BUC, satellite frequency
translation, and down-conversion at the hub) are all nulled out via Uplink Control Protocol
messages from the hub to each remote every 20 seconds. Short-term frequency drift by each remote can be
accommodated by the hub because it uses a highly stable reference to demodulate each burst. A
remote does not have such a highly stable local reference source. The remote uses the outbound
channel as a reference source for the inbound channel. A change in temperature of a DRO LNB
can cause a significant frequency drift to the reference. In a Mesh network, this can have
adverse effects on both the SCPC outbound and TDMA inbound carriers, resulting in a remote
demodulator that is unable to reliably recover data from the Mesh channel. A PLL LNB offers
superior performance, since it is not subject to the same short term frequency drift.
Power
A typical iDirect Star network consists of a hub with a large antenna, and multiple remotes with
small antennas and small BUCs. In a Star network, UPC adjusts each remote transmit power on
the inbound channel until a nominal carrier-to-noise signal strength of approximately 9 dB is
achieved at the hub. Because of the large hub antenna, the operating point of a remote is
typically below the contracted power (EPEBW) at the satellite. For a Mesh network, where
remotes typically have smaller antennas than the hub, a remote does not reliably receive data
from another remote using the same power. It is therefore important to maximize the use of
all available power.
UPC for a Mesh network adjusts the remote Tx power so that it always operates at the EIRP at
beam center on the satellite to close the link, even under rain fade conditions. This can be
equal to or less than the contracted power/EPEBW. Larger antennas and BUCs are required to
meet this requirement. The EIRP at beam center and the size of the equipment are calculated
based on a link budget analysis.
The UPC algorithm uses a combination of the following parameters to adjust each remote
transmit power to achieve the EIRP@BC at the satellite:
• Clear-sky C/N for both the TDMA inbound and SCPC outbound loopback channels
(obtained during hub commissioning)
• The hub UPC margin (how much external hub-side equipment can accommodate hub
UPC1)
• The outbound loopback C/N at the hub
• Each remote inbound C/N at the hub
The inbound UPC algorithm determines hub-side fade, remote-side fade, and correlated fades
by comparing the current outbound and inbound signal strengths against those obtained during
clear sky calibration. For example, if the outbound loopback C/N falls below the clear sky
condition, it can be assumed that a hub-side fade (compensated by hub side UPC) occurred.
Assuming no remote side fade, an equivalent downlink fade of the inbound channel would be
expected. No power correction is made to the remote. If hub-side UPC margin is exceeded, then
outbound loopback C/N is affected by both uplink and downlink fade and a significant difference
compared to clear sky would be observed.
Similarly if the inbound C/N drops for a particular remote and the outbound loopback C/N does
not change compared to the clear sky value, UPC increases the remote transmit power until the
inbound channel clear sky C/N is attained. Similar C/N comparisons are made to accommodate
correlated fades.
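The C/N comparisons described above can be sketched as a simple fade classifier. The 0.5 dB dead-band and the decision structure below are illustrative assumptions, not iDirect's actual UPC algorithm:

```python
# Sketch of inbound UPC fade classification. Deltas are measured against the
# clear-sky C/N values recorded at commissioning; positive delta means degraded.

THRESH_DB = 0.5   # assumed dead-band before a delta counts as a fade

def classify_fade(outbound_loopback_delta_db: float, inbound_delta_db: float) -> str:
    """Deltas are (clear-sky C/N) - (current C/N) at the hub."""
    hub_fade = outbound_loopback_delta_db > THRESH_DB
    inbound_fade = inbound_delta_db > THRESH_DB
    if hub_fade and inbound_fade and \
            inbound_delta_db > outbound_loopback_delta_db + THRESH_DB:
        return "correlated fade: raise remote Tx power for the excess"
    if hub_fade:
        return "hub-side fade: no remote power correction"
    if inbound_fade:
        return "remote-side fade: raise remote Tx power"
    return "clear sky: no correction"

print(classify_fade(0.0, 2.0))   # remote-side fade
print(classify_fade(2.0, 2.0))   # hub-side fade
```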
1. iDirect equipment does not support hub-side UPC. Typical RFT equipment at a teleport installation uses
a beacon receiver to measure downlink fade. An algorithm running in the beacon receiver calculates the
equivalent uplink fade and adjusts an attenuator to ensure a constant power (EPEBW) at the satellite for
the outbound carrier. The beacon receiver and attenuator are outside of iDirect's control. For a hub
without UPC, the margin is set to zero.
Note: In a Mesh network, for each remote the inbound C/N at the hub is likely to be greater
than that typically observed in a Star network. Also, when a remote is in the Mesh
network, the nominal C/N signal strength value for a Star network is not used as the
reference.
In the event of an outbound loopback failure, the UPC algorithm reverts to Star mode. This
redundancy allows remotes in a Mesh inroute group to continue to operate in Star only mode.
Figure 12 illustrates Uplink Power Control.
Timing
An inbound channel consists of a TDMA frame with an integer number of traffic slots. In a Star
network, the arrival time at the hub of the start of the TDMA frame on the inbound channel is
determined during the acquisition process. The acquisition algorithm adjusts the start of
transmission of the frame for each remote such that every frame arrives at the satellite at
exactly the same time. The burst scheduler in the Protocol Processor ensures that two remotes
do not burst at the same time. With this process, the hub line card knows when to expect each
burst relative to the outbound channel transmit reference. As the satellite moves within its
station-keeping box, the Uplink Control Protocol adjusts the frame start timing for each
remote, so that the inbound channel frame always arrives at the hub at the same time.
A similar mechanism that informs a remote when to expect the start of frame for the inbound
channel is required. This is achieved by determining the round trip time for hub-to-satellite-to-
hub from the outbound channel loopback. This information is relayed to each remote. An
algorithm determines when to expect the Start of the inbound channel, and determines burst
boundaries.
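One way to reason about this arithmetic is sketched below: the hub-to-satellite delay is half the measured loopback round trip, and the satellite-to-remote delay falls out of the remote's own outbound reception time. Both the decomposition and the names are illustrative assumptions, not iDirect's actual algorithm:

```python
# Sketch of the start-of-frame estimate a remote needs to receive the Mesh
# inbound, given that all inbound frames are timed to be at the satellite at a
# scheduled instant and the hub relays its loopback round-trip measurement.

def expected_inbound_start(t_sat_frame_s: float,
                           hub_tx_time_s: float,
                           hub_rtt_s: float,
                           t_outbound_rx_s: float) -> float:
    """When does a frame scheduled at the satellite at t_sat_frame_s reach
    this remote's demodulator?"""
    hub_to_sat = hub_rtt_s / 2                              # from the loopback RTT
    sat_to_remote = t_outbound_rx_s - (hub_tx_time_s + hub_to_sat)
    return t_sat_frame_s + sat_to_remote

# e.g. hub transmits at t=0, measures a 0.514 s loopback round trip, and this
# remote hears the outbound at t=0.514 s
print(expected_inbound_start(1.0, 0.0, 0.514, 0.514))
```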
Note: In Phase 1, a Mesh remote listens to all inbound channel bursts, including bursts it
originates. Only those bursts transmitted from other remotes and destined for that
remote, and bursts transmitted by the remote itself, are processed by software. All
other traffic is dropped.
Bandwidth Considerations
When determining bandwidth requirements for a Mesh network, it is important to understand
that there are a number of settings that must be applied to all remotes in an inroute group. In a
Star network, SAR and VLAN can be configured on a hub-remote pair basis. For a Mesh network,
all remotes in the inroute group must have a common SAR configuration. SAR is always enabled
and enforced in the NMS (two bytes required). The same argument applies to VLAN IDs (two
bytes are required). Additional header information (two bytes are required) indicating the
destination applies to Mesh traffic only. Star traffic is unaffected; however, SAR and VLAN are
also always enabled back to the hub.
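The per-burst overhead quoted in this section (a 10-byte TDMA header of Demand=2 + LL=6 + PAD=2 from the footnote below, plus two bytes each for SAR, VLAN ID, and the Mesh destination header) makes the usable payload easy to compute. The 251-byte slot size used here is purely illustrative:

```python
# Per-slot overhead arithmetic for a Mesh inroute group, using the byte counts
# quoted in this document. The example slot size is a hypothetical value.

TDMA_HEADER = 2 + 6 + 2        # Demand + LL + PAD = 10 bytes
MESH_EXTRA = 2 + 2 + 2         # SAR + VLAN ID + Mesh destination = 6 bytes

def mesh_payload_bytes(slot_bytes: int) -> int:
    """Usable user payload per traffic slot in a Mesh inroute group."""
    return slot_bytes - TDMA_HEADER - MESH_EXTRA

print(mesh_payload_bytes(251))   # a hypothetical 251-byte slot leaves 235 user bytes
```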
In a Star network, remote status is periodically sent to the hub and reported in iMonitor. With
the same periodicity, additional status information is reported on the health of the Mesh link.
MESH COMMISSIONING
The commissioning of a Mesh network is straightforward and requires only a few additional steps
compared to the commissioning of a Star network.
Note: In a Mesh network, where relatively small antennas (compared to the hub antenna)
are used at remote sites, additional attention to Link Budget Analysis (LBA) is
required. Each remote requires an LBA to determine antenna and BUC size for the
intended availability and data rate.
Due to the requirement that the Mesh inbound channel operates at the contracted power point
on the satellite, calibration of both the outbound loopback and the Mesh inbound channels at
the hub during clear sky conditions is required during commissioning. Signal strength
measurements (C/N) of the respective channels observed in iMonitor are recorded in iBuilder.
The clear sky C/N values obtained during commissioning are used for uplink power control of
each remote.
Note: In order for a Mesh network to operate optimally and to prevent over driving of the
satellite, commissioning must be performed in clear sky conditions. See the iBuilder
User Guide for more information.
Pre-Migration Tasks
Prior to migrating an existing Star network to a Mesh network, iDirect recommends that you
perform the following:
• A link budget analysis comparison for the Mesh network versus the Star network.
• Verification of the satellite transponder configuration for the hub and each remote. All hubs
and remotes must be in the same geographic footprint. They must be able to receive
their own transmit signals. This precludes the use of the majority of spot-beam and
hemi-beam transponders for Mesh networks.
• Verification that ODU hardware requirements are met: externally referenced PLL LNBs for
Private Hubs, a PLL LNB for all remotes, and BUC and antenna sizing for a given data rate.
• Each outbound and inbound channel must be calibrated to determine clear sky C/N
values.
• Re-commissioning of each remote. This applies to initial transmit power only, and can be
achieved remotely.
Migration Tasks
The remote C/N at the hub is higher in a Mesh network. UPC adjusts the transmit power of all
remotes so that they operate within a common C/N range at the hub. A remote with a C/N
significantly higher or lower than this range cannot acquire into the network. For a Mesh
network, a remote has a higher initial Tx power setting than it uses in a Star network. Note that
the same rationale applies when changing a remote from a Mesh network to a Star network; for
example, the initial Tx power must be adjusted to meet the Star requirements.
iSCPC Links

                                        Modulation Mode        iSCPC Block   Payload
FEC    Hardware Support              BPSK    QPSK    8PSK      Size          Bytes §
.431   M1D1-iSCPC, 5xxx, 73xx        Yes     Yes     X         1K            53
.533   M1D1-iSCPC, 5xxx, 73xx        Yes     Yes     X                       66
.495   II+                           X       Yes     X         4K            251
       M1D1-iSCPC, 5xxx, 73xx        Yes     Yes     Yes
.793   II+                           X       Yes     X         4K            404
.879   M1D1-iSCPC, 5xxx, 73xx        Yes     Yes     Yes       16K           1800
§ SCPC channel framing uses a modified HDLC header, which requires bit-stuffing to prevent
false end-of-frame detection. The actual payload is variable, and always slightly less than the
numbers indicated in the table.
§§ The TDMA Payload Bytes value removes the TDMA header overhead of 10 bytes:
Demand=2 + LL=6 + PAD=2. SAR, Encryption, and VLAN features add additional overhead.
The table shows the combinations of upstream and downstream modulation types, along with
an indication of how likely each is to be implemented in an operational network. The
combinations shown in italic font have not been tested for iDS release 7. If you need to use one
of these combinations, please contact the iDirect Technical Assistance Center (TAC) for more
information (refer to “Getting Help” on page 2).
Note: For specific Eb/No values for each FEC rate and Modulation combination, contact the
iDirect Technical Assistance Center (TAC). Refer to “Getting Help” on page 2 for
contact information.
QOS MEASURES
When discussing QoS, at least four interrelated measures are considered. These are Throughput,
Latency, Jitter, and Packet Loss. This section describes these parameters in general terms,
without specific regard to an iDirect network.
Throughput. Throughput is a measure of capacity and indicates the amount of user data that is
received by the end user application. For example, a G729 voice call without additional
compression (such as cRTP), or voice suppression, requires a constant 24 Kbps of application
level RTP data to achieve acceptable voice quality for the duration of the call. Therefore this
application requires 24 Kbps of throughput. When adequate throughput cannot be achieved on a
continuous basis to support a particular application, QoS can be adversely affected.
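The 24 Kbps figure can be reproduced from the usual G.729 packetization parameters (a sketch: the 20-byte voice payload per 20 ms packet and the 40-byte IP/UDP/RTP header are standard G.729/RTP assumptions, not values taken from this document):

```python
# Application-level bandwidth of an uncompressed G.729 RTP stream.
PAYLOAD_BYTES = 20      # G.729 voice payload carried in each RTP packet
HEADER_BYTES = 40       # IP (20) + UDP (8) + RTP (12) headers
INTERVAL_S = 0.020      # one packet every 20 ms

packets_per_second = 1 / INTERVAL_S                    # 50 packets per second
throughput_bps = packets_per_second * (PAYLOAD_BYTES + HEADER_BYTES) * 8

print(throughput_bps)   # 24000.0, i.e. the 24 Kbps of throughput cited above
```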
Latency. Latency is a measure of the amount of time between events. Unqualified latency is the
amount of time between the transmission of a packet from its source and the receipt of that
packet at the destination. If explicitly qualified, it may also mean the amount of time between
a request for a network resource and the time when that resource is received. In general,
latency accounts for the total delay between events and it includes transit time, queuing, and
processing delays. Keeping latency to a minimum is very important for VoIP applications for
human factor reasons.
Jitter. Jitter is a measure of the variation in delay between successive packets. Ideally, a VoIP
stream delivers one packet to the destination every 20 ms; this is a jitter value of zero. When dealing with a packet-switched
network, zero jitter is particularly difficult to guarantee. To compensate for this, all VoIP
equipment contains a jitter buffer that collects voice packets and sends them at the appropriate
interval (20 ms in this example).
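The jitter-buffer behavior just described can be modeled in a few lines (an illustrative sketch; the function name, the 40 ms buffer depth, and the arrival times are invented and do not reflect any specific vendor implementation):

```python
# Minimal jitter-buffer sketch: packets arrive with variable delay but are
# released to the decoder on a fixed 20 ms schedule.
def playout_times(arrival_times_ms, interval_ms=20, initial_delay_ms=40):
    """Release each packet at the later of its arrival or its scheduled slot."""
    playout = []
    for i, arrival in enumerate(arrival_times_ms):
        slot = initial_delay_ms + i * interval_ms  # ideal play-out instant
        playout.append(max(arrival, slot))         # a late packet slips the slot
    return playout

# Arrivals jittered around a 20 ms cadence come out perfectly paced:
print(playout_times([0, 18, 43, 61, 85]))  # [40, 60, 80, 100, 120]
```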
Packet Loss. Packet Loss is a measure of the number of packets that are transmitted by a
source, but not received by the destination. The most common cause of packet loss on a
network is network congestion. Congestion occurs whenever the volume of traffic exceeds the
available bandwidth. In these cases, packets are filling queues internal to network devices at a
rate faster than those packets can be transmitted from the device. When this condition exists,
network devices drop packets to keep the network in a stable condition. Applications that are
built on a TCP transport interpret the absence of these packets (and the absence of their
related ACKs) as congestion and invoke standard TCP slow-start and congestion avoidance
techniques. With real time applications, such as VoIP or streaming video, it is often impossible
to gracefully recover these lost packets because there is not enough time to retransmit lost
packets. Packet loss may affect the application in adverse ways. For example, parts of words in
a voice call may be missing or there may be an echo; video images may break up or become
block-like (pixelation effects).
(Diagram: a QoS profile contains Rules 1..n; each Rule contains Clauses 1..n that match on
Source/Destination IP Address, Source/Destination Port Number, Protocol, and Type of Service
(TOS/DSCP).)
Service Levels
A Service Level may represent a single application (such as VoIP traffic from a single IP address)
or a broad class of applications (such as all TCP based applications). Each Service Level is
defined by one or more packet-matching rules. The set of rules for a Service Level allows logical
combinations of comparisons to be made between the following IP packet fields:
• Source IP address
• Destination IP address
• Source port
• Destination port
• Protocol (such as DiffServ DSCP)
• TOS priority
• TOS precedence
• VLAN ID
Packet Scheduling
Packet Scheduling is a method used to transmit traffic according to priority and classification.
In a network that has a remote that always has enough bandwidth for all of its applications,
packets are transmitted in the order that they are received without significant delay.
Application priority makes little difference since the remote never has to select which packet to
transmit next.
In a network where there are periods of time in which a remote does not have sufficient
bandwidth to transmit all queued packets, the remote's scheduling algorithm must determine
which packet, from the set of queued packets across a number of service levels, to transmit
next.
For each service level you define in iBuilder, you can select any one of three queue types to
determine how packets using that service level are to be selected for transmission. These are
Priority Queue, Class-Based Weighted Fair Queue (CBWFQ), and Best-Effort Queue.
The procedure for defining service levels is discussed in “Chapter 8, Creating and Managing QoS
Profiles, Adding a Service Level” of the iBuilder User Guide.
Priority Queues are emptied before CBWFQ queues are serviced and CBWFQ queues are in turn
emptied before Best Effort queues are serviced. Figure 14 presents an overview of the iDirect
packet scheduling algorithm.
The packet scheduling algorithm (Figure 14) first services packets from Priority Queues in order
of priority, P1 being the highest priority. It selects CBWFQ packets only after all Priority Queues
are empty. Similarly, packets are taken from Best Effort Queues only after all CBWFQ packets
are serviced.
You can define multiple service levels using any combination of the three queue types. For
example, you can use a combination of Priority and Best Effort Queues only.
Priority Queues
Each Priority Queue is assigned a numbered priority level: P1 (the highest priority), P2, P3, and
so on.
All queues of higher priority must be empty before any lower-priority queue is serviced. If two
or more queues are set to the same priority level, then all queues of equal priority are emptied
using a round-robin selection algorithm prior to selecting any packets from lower priority
queues.
Class-Based Weighted Fair Queues
Packets are selected from Class-Based Weighted Fair Queues for transmission based on the
service level (or “class”) of the packet. Each service level is assigned a “cost.” A packet's cost
is defined as the cost of its service level multiplied by the packet's length. Packets with the
lowest cost are transmitted first, regardless of service level.
The cost of a service level changes during operation. Each time a queue is passed over in favor
of other service levels, the cost of the skipped queue is credited, which lowers the cost of the
packets in that queue. Over time, all service levels occasionally get an opportunity to transmit,
even in the presence of higher-priority traffic. Assuming a continuously congested link with an
equal amount of traffic on each service level, the total available bandwidth is divided more
evenly by basing transmission priority on each service level's cost.
Best Effort Queues
Packets in Best Effort Queues do not have priority or cost. All packets in these queues are
treated equally by applying a round-robin selection algorithm to the queues. Best Effort Queues
are serviced only if there are no packets waiting in Priority Queues and no packets waiting in
CBWFQ Queues.
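The three-tier selection order described above can be sketched as follows (a simplified illustration only: the class name, the packet representation, and the unit-decrement credit step are invented stand-ins for iDirect's actual cost accounting, which is not documented here):

```python
from collections import deque

class Scheduler:
    """Toy model of the selection order: Priority, then CBWFQ, then Best Effort."""

    def __init__(self, priority_qs, cbwfq, best_effort_qs):
        self.priority_qs = priority_qs            # list of deques; index 0 is P1
        self.cbwfq = cbwfq                        # list of [cost, deque] pairs
        self.best_effort = deque(best_effort_qs)  # rotated for round robin

    def next_packet(self):
        # 1. Empty Priority Queues first, in priority order.
        for q in self.priority_qs:
            if q:
                return q.popleft()
        # 2. CBWFQ: the head packet with the lowest cost-times-length wins;
        #    every queue that is passed over is credited (its cost drops).
        live = [entry for entry in self.cbwfq if entry[1]]
        if live:
            winner = min(live, key=lambda e: e[0] * e[1][0]["len"])
            for entry in live:
                if entry is not winner:
                    entry[0] = max(entry[0] - 1, 1)
            return winner[1].popleft()
        # 3. Best Effort: simple round robin over the remaining queues.
        for _ in range(len(self.best_effort)):
            q = self.best_effort[0]
            self.best_effort.rotate(-1)
            if q:
                return q.popleft()
        return None
```

With one packet queued in each tier, successive calls to `next_packet` return the Priority packet, then the CBWFQ packet, then the Best Effort packet, then None.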
APPLICATION THROUGHPUT
Application throughput depends on traffic being properly classified and prioritized (QoS) and on
proper management of the available bandwidth. For example, if a VoIP application requires
16 Kbps and a remote is given only 10 Kbps, the application fails regardless of priority, since
there is not enough available bandwidth.
Bandwidth assignment is controlled by the Protocol Processor. As a result of the various network
topologies (for example, a shared TDM downstream with a deterministic TDMA upstream), the
Protocol Processor has different mechanisms for downstream control versus upstream control.
Downstream control of bandwidth is provided by continuously evaluating network traffic flow
and assigning bandwidth to remotes as needed. The Protocol Processor assigns bandwidth and
controls the transmission of packets for each remote according to the QoS parameters defined
for the remote's downstream.
Upstream bandwidth is requested continuously with each TDMA burst from each remote. A
centralized bandwidth manager integrates the information contained in each request and
produces a TDMA burst time plan which assigns individual bursts to specific remotes. The burst
time plan is produced once per TDMA frame (typically 125 ms or 8 times per second).
Note: There is a 250 ms delay from the time that the remote makes a request for
bandwidth and when the Protocol Processor transmits the burst time plan to it.
iDirect has developed a number of features to address the challenges of providing adequate
bandwidth for a given application. These features are discussed in the sections that follow.
Static CIR
You can configure a static Committed Information Rate (CIR) or an upstream minimum
information rate for any upstream (TDMA) channel. Static CIR is bandwidth that is guaranteed
even if the remote does not need the capacity. By default, a remote is configured with a single
slot per TDMA frame. Increasing this value is considered an inefficient configuration because
these slots are wasted if the remote is inactive. No other remote can be given these slots unless
the remote with the static CIR has not been acquired into the network. Static CIR is considered
the highest priority upstream bandwidth. Static CIR applies only in the upstream direction.
The downstream does not need or support the concept of a static CIR.
Dynamic CIR
You can configure Dynamic CIR values for remotes in both the downstream and upstream
directions. Dynamic CIR is not statically committed and is granted only when demand is actually
present. This allows you to support CIR based service level agreements and, based on statistical
analysis, oversubscribe networks with respect to CIR. If a remote has a CIR but demand is less
than the CIR, only the actual demanded bandwidth is granted. It is also possible to indicate that
only certain QoS service levels “trigger” a CIR request. In these cases, traffic must be present in
a triggering service level before the CIR is granted. Triggering is specified on a per-service level
basis.
Beginning with iDS Release 7.1.1, additional burst bandwidth is assigned evenly among all
remotes in the network by default. You can, however, use custom keys to configure your
network to operate as in legacy releases. Refer to “Release 7.1.x Enhancements” on page 109
for more information.
Further QoS configuration procedures can be found in “Chapter 8, Creating and Managing QoS
Profiles, Adding a Service Level” of the iBuilder User Guide.
Real-time Demand
The goal of Network QoS is to determine and respond to real-time bandwidth demand
requirements across an entire network, and to allocate bandwidth to those applications selected
by the network operator. Such a feature is needed for the success of VoIP and other critical
applications. This feature allows the centralized bandwidth manager within the Protocol
Processor to determine which remotes have immediate real-time traffic requirements and to
respond by allocating the necessary additional bandwidth, potentially at the expense of other
remotes running less critical applications (for example, a bulk file transfer). Each QoS service
level is designated with a real-time weight. The choices for real-time weight are Normal,
Variable Real-Time, and Constant Real-Time. Demand requests indicate the amount of real-time
traffic that is present. The centralized bandwidth manager exclusively services real-time
bandwidth needs before “best-effort” requests but after CIR-based requests. Configuration
procedures can be found in “Chapter 8, Creating and Managing QoS Profiles, Adding a Service
Level” of the iBuilder User Guide.
Free Slot Allocation
Free slot allocation is a round-robin distribution of unused TDMA slots by the centralized
bandwidth manager on a frame-by-frame basis. The bandwidth manager assigns TDMA slots to
particular remotes for each TDMA allocation interval based on current demand and configuration
constraints (such as minimum and maximum data rates, static CIR, dynamic CIR, and others). At
the end of this process it is possible that there are unused TDMA slots. In this case, if Free Slot
Allocation is enabled, the bandwidth manager gives these extra slots to remotes in a fair
manner, respecting any remote’s maximum configured data rate. By default, Free Slot
Allocation is enabled. Be aware that free slot allocation can potentially hide QoS problems when
there are only a few remotes in the network, so it is important to test all of your QoS during
periods of congestion. To enable Free Slot Allocation, see “Chapter 6, Defining Networks, Line
Cards, and Inroute Groups, Adding Inroute Groups” of the iDS Release 7.1 User's Guide.
Compressed Real-Time Protocol (cRTP)
You can enable Compressed Real-Time Protocol (cRTP) to significantly reduce the bandwidth
requirements of VoIP flows. cRTP is implemented via standard header compression techniques.
It allows for better use of real-time bandwidth especially for RTP-based applications, which
utilize large numbers of small packets since the 40-byte IP/UDP/RTP header often accounts for a
significant fraction of the total packet length. iDirect has implemented a standard header
compression scheme including heuristic-based RTP detection with negative cache support for
misidentified UDP streams. For example, G729 voice RTP results in less than 12 Kbps
(uncompressed is 24 Kbps). To enable cRTP, see “Chapter 7, Configuring Remotes, QoS Tab” of
the iBuilder User Guide.
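The savings can be checked with a little arithmetic (a sketch; the 20-byte G.729 payload every 20 ms, the 40-byte uncompressed IP/UDP/RTP header, and the 4-byte compressed header are typical values assumed for illustration, not taken from this document):

```python
# Bandwidth of a G.729 RTP stream with and without header compression.
def rtp_bandwidth_bps(payload_bytes=20, header_bytes=40, interval_s=0.020):
    return (payload_bytes + header_bytes) * 8 / interval_s

print(rtp_bandwidth_bps())                # 24000.0 -> the 24 Kbps uncompressed figure
print(rtp_bandwidth_bps(header_bytes=4))  # 9600.0  -> well under 12 Kbps with cRTP
```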
It is possible to configure a remote's upstream minimum statically committed CIR to less than
one burst in each TDMA frame. This feature allows many remotes to be “packed” into a single
upstream. Reducing a remote's minimum statically committed CIR increases ramp latency. Ramp
latency is the amount of time it takes a remote to acquire the necessary bandwidth. The lower
the upstream static CIR, the fewer TDMA time plans contain a burst dedicated to that remote,
and the greater the ramp latency. Some applications may be sensitive to this latency, resulting
in a poor user experience. iDirect recommends that this feature be used with care. The iBuilder
GUI enforces a minimum of one slot per remote every two seconds. For more information, see
“Chapter 7, Configuring Remotes, Upstream and Downstream Rate Shaping” of the iBuilder User
Guide.
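The relationship between static allocation and ramp latency reduces to simple arithmetic (the 125 ms frame length matches the typical value cited earlier; the function name and slot counts are invented for illustration):

```python
# Illustrative ramp-latency arithmetic: if a remote's static allocation is one
# dedicated TDMA slot every N frames, the gap between its guaranteed bursts is
# N frames long.
FRAME_MS = 125

def dedicated_burst_interval_ms(frames_per_slot):
    return frames_per_slot * FRAME_MS

print(dedicated_burst_interval_ms(1))   # 125  -> one slot in every frame
print(dedicated_burst_interval_ms(8))   # 1000 -> one guaranteed burst per second
print(dedicated_burst_interval_ms(16))  # 2000 -> the one-slot-per-two-seconds floor
```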
Sticky CIR
Sticky CIR is activated only when CIR is over-subscribed on the downstream or on the upstream.
When enabled, Sticky CIR favors remotes that have already received their CIR over remotes that
are currently asking for it. When disabled (the default setting), the Protocol Processor reduces
the bandwidth assigned to all remotes to accommodate a new remote in the network.
When you enable Sticky CIR and your network traffic exceeds the CIR, you may not receive
latency alarms on the NMS for some remotes, because NMS latency measurement requests
might still be received by the Protocol Processor. Use the Sticky CIR feature with caution to
avoid missing alarm notifications. The Sticky CIR feature is controlled using the custom keys
shown in Figure 15.
To turn on downstream Sticky CIR at the network level:
[NETWORK_DEFINITION]
sticky_cir_enable = 1

To control the downstream Sticky CIR timeout:
[NETWORK_DEFINITION]
max_cir_sticky_sec = <seconds> (default: 1 hour)

To turn on upstream Sticky CIR:
[INROUTE_GROUP_#]
sticky_cir_enable = 1
Application Jitter
While jitter plays a role in both downstream and upstream directions, a TDMA network tends to
introduce more jitter in the upstream direction. This is due to the discrete nature of the TDMA
time plan, in which a remote may burst only in an assigned slot. The inter-slot times assigned to
a particular remote do not match the desired play-out rate, which results in jitter.
Another source of jitter is other traffic that a node transmits between (or in front of) successive
packets in the real-time stream. In situations where a large packet needs to be transmitted in
front of a real-time packet, jitter is introduced because the node must wait longer than normal
before transmission.
The iDirect system offers features that limit the effect of such problems; these features are
described in the sections that follow.
The Protocol Processor bandwidth manager attempts to “feather,” or spread out, each
individual remote's TDMA slots across the upstream frame. This is a desirable attribute in that a
particular remote's bursts are spread out in time, often reducing TDMA-induced jitter. This
feature is not configurable by the user.
Segmentation and Reassembly (SAR)
SAR takes packets arriving from an external network, which may range in size from 20 bytes to
1500 bytes, breaks them into smaller, nearly uniform segments, and adds a small header to
allow reassembly after transmission. Packets that are smaller than the configured SAR length
are left as is, except for the addition of a SAR header. Each SAR packet segment receives the
same QoS classification as its parent packet.
Note: In some contexts this process might be called “fragmentation.” iDirect does not use
this term because it may cause confusion with IP Fragmentation, which is a much more
heavyweight process in that it adds a 20-byte header per fragment. The iDirect SAR
process adds only two bytes for each segment. To configure SAR, see “Chapter 7,
Configuring Remotes, Up/Downstream SAR” of the iDS Release 7.1 User's Guide.
Enabling SAR during periods of congestion is important for the successful deployment of real-
time applications. For remotes that do not intend to deploy these types of applications, SAR
adds little benefit. Therefore, if raw performance is more of a concern than support for VoIP,
SAR should be disabled. By default, SAR is enabled for every remote.
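The segmentation step can be sketched in a few lines (an illustrative model only; the 2-byte header layout shown here, a segment index plus a last-segment index, is invented, since the actual SAR header format is not described in this document):

```python
# SAR sketch: split a packet into nearly uniform segments, prepending a 2-byte
# header to each so the far side can reassemble.
def segment(packet, sar_len=128):
    total = (len(packet) + sar_len - 1) // sar_len
    out = []
    for i in range(total):
        header = bytes([i, total - 1])                 # 2 bytes of overhead
        out.append(header + packet[i * sar_len:(i + 1) * sar_len])
    return out

def reassemble(segments):
    ordered = sorted(segments, key=lambda s: s[0])     # order by segment index
    return b"".join(s[2:] for s in ordered)

pkt = bytes(300)                        # a 300-byte packet...
assert len(segment(pkt)) == 3           # ...becomes 3 segments (128 + 128 + 44)
assert reassemble(segment(pkt)) == pkt  # ...and survives the round trip
```

Note that a packet shorter than the configured SAR length comes out as a single segment carrying only the 2-byte header, matching the "left as is" behavior described above.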
Application Latency
Application latency is typically a concern for transaction-based applications such as credit card
verification systems. For applications like these, it is important that the priority traffic be
expedited through the system and sent, regardless of the less important background traffic. This
is especially important in bandwidth-limited conditions where a remote may only have a single
or a few TDMA slots. In this case, it is important to minimize latency as much as possible after
the distributor’s QoS decision. This allows a highly prioritized packet to make its way
immediately to the front of the transmit queue.
There are two options when configuring PAD: Maximum Channel Efficiency (default) and
Minimum Latency.
Maximum Channel Efficiency instructs the PAD layer to delay the release of a partially filled
TDMA burst to allow for the possibility that the next packet will fill the burst completely. In this
configuration, the system waits for up to four TDMA transmit attempts before releasing a
partial burst.
Minimum Latency is the second option, which instructs the PAD layer to never delay partially
filled TDMA bursts and to transmit them immediately.
In general, Maximum Channel Efficiency is the desired choice, except in certain situations when
it is vitally important to achieve minimum latency for a prioritized service level. For example, if
your network is typically congested and you are configuring the system to work with a
transaction-based application which is bursty in nature and requires a minimum round trip time,
then configuring PAD for Minimum Latency may be the best choice. The PAD setting is configured
in iBuilder from the QoS tab for each remote. To configure PAD, see “Chapter 7, Configuring
Remotes, PAD” of the iBuilder User Guide.
During site commissioning, the installer uses iSite to set TX Initial Power. This parameter is set at
a low value and it is manually increased until the Netmodem is acquired into the network. The
hub then automatically adjusts the Netmodem output power to a nominal setting. With the acq
on command enabled, UCP messages are displayed at the console and the installer can observe
the TX power adjustments being made by the hub. When the hub determines that the bursts are
arriving in the nominal C/N range, power adjustments are stopped (displayed at the console as
0.0 dB adjustment). The installer can type tx power to read the current power setting.
iDirect recommends that you set the TX Initial Power value to 3 dB above the tx power reading.
For example, if the tx power is -17 dBm, set TX Initial Power to -14 dBm.
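The rule of thumb reduces to one line of arithmetic (a sketch; the function name is invented for illustration):

```python
# Commissioning rule of thumb: TX Initial Power should sit 3 dB above the
# steady-state tx power reading observed at the console.
def initial_power_dbm(tx_power_dbm):
    return tx_power_dbm + 3.0

print(initial_power_dbm(-17.0))  # -14.0, matching the example above
```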
At any time after site commissioning, you can check the TX Initial Power setting by observing the
Remote Status and UCP tabs in iMonitor. If the Netmodem is in a “steady state” and no power
adjustments are being made, you can compare the current TX Power to the TX Initial Power
parameter to verify that TX Initial Power is 3 dB higher than the TX Power. For detailed
information on how to set TX Initial Power, refer to the “Remote Installation and Commissioning
Guide”.
Note: Best nominal Tx Power measurements are made during clear sky conditions at the
hub and remote sites.
If all the bursts arrive at similar C/N levels, the average is very near optimal for all of them.
However, if bursts arrive at widely varying C/N levels, the highest and lowest level bursts can
skew the average such that it is no longer optimal.
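The skew effect can be seen with a quick calculation (all C/N values below are illustrative, chosen only to show how one outlier moves the average):

```python
# Why outlying bursts matter: the average C/N sets the UCP operating point,
# and the nominal range is only 2 dB wide.
def mean(cn_values):
    return sum(cn_values) / len(cn_values)

similar = [9.8, 10.0, 10.2, 10.0]   # bursts clustered around 10 dB
varied  = [9.8, 10.0, 10.2, 14.0]   # one burst arriving 4 dB hot

print(round(mean(similar), 2))  # 10.0 -> near optimal for every remote
print(round(mean(varied), 2))   # 11.0 -> pulled a full dB away from most bursts
```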
The nominal range is 2 dB wide (the green range in the iBuilder Acquisition/Uplink Control tab).
The actual range at which bursts can be optimally detected is approximately 8 dB wide centered
at the nominal gain point (Figure 16).
Ideal Case: Optimal Detection Range
Under ideal circumstances, the average C/N of all remotes on the upstream channel is equal to
the center of the UCP adjustment range. Therefore, the optimal detection range extends below
the threshold C/N. (This example illustrates the TPC Rate 0.66 threshold.)
TX Initial Power Too High: Skewed Detection Range
When the TX Initial Power is set too high, remotes entering the network skew the average C/N
above the center of the UCP Adjustment Range. Therefore, during this period the optimal
detection range does not include the threshold C/N, and remotes experiencing rain fade may
experience a performance degradation.
TX Initial Power Too Low: Skewed Detection Range
When the TX Initial Power is set too low, remotes entering the network skew the average C/N
below the center of the UCP Adjustment Range. Therefore, during this period the optimal
detection range does not include the threshold C/N, and remotes experiencing rain fade may
experience a performance degradation.
A remote that is a member of multiple networks is called a “roaming remote.” For details on
defining and managing roaming remotes, refer to “Chapter 7, Configuring Remotes, Roaming
Remotes” of the iBuilder User Guide.
In this example, there are four different networks connected to three different Regional
Network Control Centers (RNCCs). A group of remote terminals has been configured to roam
among the four networks.
Note: This diagram shows only one example from the set of possible network
configurations. In practice, there may be any number of RNCCs and any number of
protocol processors at each RNCC.
On the left side of the diagram, a single NMS installed at the Global Network Control Center
(GNCC) manages all the RNCC components and the group of roaming remotes. Network
operators, both remote and local, can share the NMS server simultaneously with any number of
VNOs. (Only one VNO is shown in Figure 20.) All users can run iBuilder, iMonitor, or both on
their PCs.
The connection between the GNCC and each RNCC must be a dedicated high-speed link.
Connections between NOC stations and the NMS server are typically standard Ethernet. Remote
NMS connections are made either over the public Internet, protected by a VPN or port
forwarding, or over a dedicated leased line.
ROOT PASSWORDS
Root password access to the NMS and Protocol Processor servers should be reserved for only
those you want to have administrator-level access to your network. Restrict the distribution of
this password information.
Servers are shipped with default passwords. Change the default passwords after the installation
is complete and make sure these passwords are changed on a regular basis and when an
employee leaves your company.
When selecting your new passwords, iDirect recommends that you follow these practices for
constructing difficult-to-guess passwords:
• Use passwords that are at least 8 characters in length.
• Do not base passwords on dictionary words.
• Use passwords that contain a mixture of letters, numbers, and symbols.
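These practices can be sketched as a simple check (illustrative only; the word list is a tiny stand-in for a real dictionary, and the function is not part of any iDirect tooling):

```python
import string

# Sketch of the password guidance above: at least 8 characters, not a bare
# dictionary word, and a mix of letters, numbers, and symbols.
DICTIONARY = frozenset({"password", "letmein", "satellite"})

def follows_guidance(pw):
    return (len(pw) >= 8
            and pw.lower() not in DICTIONARY
            and any(c.isalpha() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in string.punctuation for c in pw))

print(follows_guidance("iDx!2024ok"))  # True
print(follows_guidance("password"))    # False: dictionary word, no digits/symbols
```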
REMOTE DISTRIBUTION
The actual distribution of remotes and processes across a blade set is determined dynamically
by the Protocol Processor controller in the following situations:
• At system startup, the Protocol Processor Controller determines the distribution of processes
based on the number of remotes in the network(s).
• When a new remote is added in iBuilder, the Protocol Processor Controller analyzes the
current system load and adds the new remote to the blade with the least load.
• When a blade fails, the Protocol Processor Controller re-distributes the load across the
remaining blades, ensuring that each remaining blade takes a portion of the load.
The Protocol Processor controller does not perform dynamic load-balancing on remotes. Once a
remote is assigned to a particular blade, it remains there unless it is moved due to one of the
situations described above.
(Diagram: Protocol Processor architecture. On each PP blade, the pp_controller process spawns
and controls the samnc, sarmt, sarouter, and sana processes; the samnc process provides
monitor and control connectivity to the NMS servers.)
You can distribute your NMS server processes across multiple IBM eServers. The primary benefits
of machine distribution are improved server performance and better utilization of disk space.
iDirect recommends a distributed NMS server configuration once the number of remotes being
controlled by a single NMS exceeds 500-600. iDirect has tested the new distributed platform
with over 3000 remotes with iDS 7.0.0. Future releases continue to push this number higher.
Server configuration is performed one time using a special script distributed with the NMS
servers installation package. Once configured, the distribution of server processes across the
servers remains unchanged unless you reconfigure it. This is true even when you upgrade your
system.
The most common distribution scheme for larger networks is shown in Figure 22.
The busiest NMS processes, nrdsvr and evtsvr, are placed on their own servers for maximum
processing efficiency. All other NMS server processes are grouped on NMS Server 1.
1:n redundancy means that one physical machine backs up all of your active servers. If you
choose this form of redundancy, you must modify the dbBackup.ini file on each NMS server to
ensure that the separate databases are copied to separate locations on the backup machine.
The following diagram shows three servers, each copying its database to a single backup NMS. If
NMS 1 fails, you do not need to run dbRestore prior to switch-over since the configuration data
has already been sent to the backup NMS. If NMS 2 or NMS 3 fails, you need to run dbRestore
prior to the switch-over if you want to preserve and add to the archive data in the failed server’s
database. See Figure 23.
Server processes that must be run on the configuration server machine are:
• Control Server
• Revision Server
• SNMP Proxy Agent Server
10 TRANSMISSION SECURITY (TRANSEC)
This section describes how TRANSEC and FIPS are implemented in an iDirect Network. It includes:
• “What is TRANSEC?,” which defines Transmission Security.
• “iDirect TRANSEC,” which describes the protocol implementation.
• “TRANSEC Downstream,” which describes the data path from the hub to the remote.
• “TRANSEC Upstream,” which describes the data path from the remote to the hub.
• “TRANSEC Key Management,” which describes public and private key usage.
• “TRANSEC Remote Admission Protocol,” which describes acquisition and authentication.
• “Reconfiguring the Network for TRANSEC,” which describes conversion requirements.
WHAT IS TRANSEC?
Transmission Security (TRANSEC) prevents an adversary from exploiting information available in
a communications channel without necessarily having defeated the encryption inherent in the
channel. Even if an encrypted wireless transmission is not compromised, information such as
timing and traffic volumes can be determined by using basic signal processing techniques. This
information could provide someone monitoring the network a variety of information on unit
activity. For example, even if an adversary cannot defeat the encryption placed on individual
packets, it might be able to determine answers to questions such as:
• What types of applications are active on the network currently?
• Who is talking to whom?
• Is the network or a particular remote site active now?
• Is it possible to distinguish between network activity and real-world activity, based on traffic
analysis and correlation?
There are a number of components to TRANSEC, one of them being activity detection. With
current VSAT systems an adversary can determine traffic volumes and communications activities
with a simple spectrum analyzer. With a TRANSEC-compliant VSAT system, an adversary is
presented with a constant wall of strongly encrypted data. Other components of TRANSEC
include remote and hub authentication. TRANSEC eliminates the ability of an adversary to bring
a non-authorized remote into a secured network.
IDIRECT TRANSEC
iDirect achieves full TRANSEC compliance by presenting to an adversary who may be
eavesdropping on the RF link a constant “wall” of fixed-size, strongly encrypted traffic segments
(using the Advanced Encryption Standard (AES) with a 256-bit key in Cipher Block Chaining (CBC)
mode), which do not vary in frequency in response to network utilization.
Other than network messages that control the admission of a remote terminal into the network,
all portions of all packets are encrypted, and their original size is hidden. The content and size
of all user traffic (Layer 3 and above), as well as network link layer (Layer 2) traffic is
completely indeterminate from an adversary’s perspective. Further, no higher layer information
is revealed by monitoring the physical layer (Layer 1) signal.
All hub line cards and remote model types associated with a protocol processor must be
TRANSEC compatible. You must also ensure that all protocol processor blades have Soekris 1201
or 1401 encryption cards installed. These cards are required for TRANSEC key management.
The only iDirect hardware that operates in TRANSEC mode is the M1D1-T Hub Line Card, the
iNFINITI 7350 remote, and the iConnex 700 remote; therefore, these are the only iDirect
products capable of operating in a FIPS 140-2 Level 1 compliant mode.
For more information, see “Chapter 16, Converting an Existing Network to TRANSEC” of the
iBuilder User Guide, Release 7.1.
TRANSEC DOWNSTREAM
A simplified block diagram for the iDirect TRANSEC downstream data path is shown in Figure 24.
Each function represented in the diagram is implemented in software and firmware on a
TRANSEC capable line card.
Consider the diagram from left to right with variable length packets arriving on the far left into
the block named Packet Ingest. In this diagram, the encrypted path is shown as solid black, and
the unencrypted (clear) path is shown in dashed red. The Packet Ingest function receives
variable length packets which can belong to four logical classes: User Data, Bypass Burst Time
Plan (BTP), Encrypted BTP, and Bypass Queue. All packets arriving at the transmit Hub Line Card
carry this class indication in a pre-pended header placed there by the protocol processor (not
shown). The Packet Ingest function determines the message type and places the packet in the
appropriate queue. If the packet is not valid, it is dropped rather than queued.
Packets extracted from the Data Queue are always encrypted. Packets extracted from the Clear
Queue are always sent unencrypted, and time-sensitive BTP messages from the BTP Queue can
be sent in either mode. A BTP sent in the clear contains minimal traffic analysis information for
an adversary and is only utilized to allow remotes attempting to exchange admission control
messages with the hub to do so. Traffic sent in the clear bypasses the Segmentation Engine and
the AES Encryption Engine and proceeds directly to the physical framing and FEC engines for
transmission.
Clear, unencrypted packets are transmitted without regard to segmentation; they are allowed to
exist on the RF link with variable sized framing.
Encrypted traffic next enters the Segmentation Engine. The Segmentation Engine segments
incoming packets based on a configured size and provides fill-packets when necessary. The
Segmentation Engine allows the iDirect TRANSEC downstream to transmit a configurable, fixed
size TDM packet segment on a continuous basis.
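As an illustration, the segmentation step can be sketched as follows; the 256-byte segment size, the 2-byte length prefix, and the zero-fill padding are all assumptions for the sketch, not iDirect’s actual framing:

```python
# Illustrative sketch (not iDirect's implementation) of a TRANSEC-style
# segmentation engine: variable-length packets are packed into fixed-size
# segments, with fill bytes emitted when needed, so the downstream always
# carries a constant stream of equal-size segments.

SEGMENT_SIZE = 256  # hypothetical configured segment size in bytes

def segment_stream(packets, segment_size=SEGMENT_SIZE):
    """Yield fixed-size segments built from variable-length packets.

    Each packet is prefixed with a 2-byte length so the receiver can
    reassemble it after decryption; the final segment is padded with fill.
    """
    buf = bytearray()
    for pkt in packets:
        buf += len(pkt).to_bytes(2, "big") + pkt
        while len(buf) >= segment_size:
            yield bytes(buf[:segment_size])
            del buf[:segment_size]
    if buf:
        buf += b"\x00" * (segment_size - len(buf))  # fill to fixed size
        yield bytes(buf)

segments = list(segment_stream([b"hello", b"x" * 300, b"world"]))
```

Every yielded segment is exactly 256 bytes, regardless of how the input packets are sized.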
After segmentation, fixed sized packets enter the Encryption Engine. The encryption algorithm
utilizes the AES algorithm with a 256 bit key and operates in CBC Mode. Packets exit the
Encryption Engine with a pre-pended header as shown in Figure 25.
The Encryption Header consists of five 32 bit words with four fields. The fields are:
• Code. This field indicates if the frame is encrypted or not, and if encrypted indicates the
entry within the key ring (described under the key management section later in this
document) to be utilized for this frame. The Code field is one byte in length.
• Seq. This field is a sequence number that increments with each segment. The Seq field is two
bytes in length (16 bits, unsigned).
• Rsvd. This field is 1 byte and is reserved for future use.
• Initialization Vector (IV). The IV is utilized by the encryption/decryption algorithm and
contains random data. The IV field is 16 bytes (128 bits) in length.
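For illustration, this header layout can be expressed with Python’s struct module; the big-endian byte order is an assumption, since the document does not specify on-the-wire ordering:

```python
import struct

# Sketch of the downstream Encryption Header described above: Code (1 byte),
# Seq (2 bytes, unsigned), Rsvd (1 byte), IV (16 bytes) -- 20 bytes total,
# i.e. five 32-bit words. Byte order here is assumed, not specified.
HEADER_FMT = ">BHB16s"  # Code, Seq, Rsvd, IV

def pack_header(code, seq, iv):
    """Build a 20-byte Encryption Header (Rsvd is always zero)."""
    return struct.pack(HEADER_FMT, code, seq, 0, iv)

def unpack_header(data):
    code, seq, _rsvd, iv = struct.unpack(HEADER_FMT, data)
    return {"code": code, "seq": seq, "iv": iv}

hdr = pack_header(code=0x01, seq=42, iv=b"\xaa" * 16)
assert len(hdr) == 20  # five 32-bit words, as the text states
```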
A new IV is generated for each segment. The first IV is generated from the cipher text of the
initial Known Answer Test (KAT) conducted at system boot time. Subsequent IVs are taken from
the last 128 bits of the cipher text of the previously encrypted segment. IVs are continuously
updated regardless of key rotations and they are independent of the key rotation process. They
are also continuously updated regardless of the presence of user traffic since the filler segments
are encrypted. While no logic is included to ensure that IVs do not repeat, the chance of
repetition is very small; estimates place the probability of an IV repeating at 1 in 2^102 at
the maximum iDirect downstream data rate.
The Segment is of fixed, configurable length and consists of a series of fixed-length Fragment
Headers (FH), each followed by a variable-length data Fragment (F). The entire Segment is
encrypted in a single operation by the Encryption Engine. The FH contains sufficient information
for the receiver to reconstruct the source packet stream after decryption. Each Fragment
contains a portion of a source packet.
The Encryption Header is transmitted unencrypted but contains only enough information for a
receiver to decrypt the segment if it is in possession of the symmetric key.
Once an encrypted packet exits the Encryption Engine it undergoes normal processing such as
framing and forward error correction coding. These functions are essentially independent of
TRANSEC but complete the downstream transmission chain and are thus depicted in Figure 24.
TRANSEC UPSTREAM
A simplified block diagram for the iDirect TRANSEC upstream data path is shown in Figure 26.
The functions represented in this diagram are implemented in software and firmware on a
TRANSEC capable remote.
The encrypted path is shown in solid black, and the unencrypted (clear) path is shown in dashed
red. The Packet Ingest function determines the message type and places the packet in the
appropriate queue or drops it if it is not valid.
Consider the diagram from left to right with variable length packets arriving on the far left into
the block named Packet Ingest. The upstream (remote to hub) path differs from the downstream
(hub to remote) in that the upstream is configured for TDMA. Variable length packets from a
remote LAN are segmented in software, which can be considered part of the Packet Ingest
function. Therefore there is no need for the firmware-level segmentation present in the
downstream. Additionally, since the remote is not responsible for the generation of BTPs, there
is no need for the additional queues present in the downstream.
Packets extracted from the Data Queue are always encrypted. Packets extracted from the Clear
Queue are always sent unencrypted. The overwhelming majority of traffic will be extracted
from the Data Queue. Traffic sent in the clear bypasses the Encryption Engine and proceeds
directly to the FEC engine for transmission.
The encryption algorithm utilizes the AES algorithm with a 256 bit key and operates in CBC Mode.
Packets exit the Encryption Engine with a pre-pended header as described in Figure 27.
The Encryption Header consists of a single 32 bit word with 3 fields. The fields are:
IV Seed. This field is a 29 bit field utilized to generate a 128-bit IV. The IV Seed starts at
zero and increments with each transmitted burst. The full 128-bit IV is generated by padding
the seed and encrypting it with the current AES key for the inroute. Two remotes can therefore
expand the same seed into the same full IV. However, this does not create any problems
because, due to addressing requirements, it is impossible for any two remotes within the same
upstream to generate the same plain text data. While no logic is included to ensure that IVs do
not repeat for a single terminal, repetition is effectively impossible because the key rotates
every two hours by default.
Since the seed increments with each transmission burst, the total number of bursts before the
seed wraps around is 2^29, or 536,870,912. Given the two-hour key rotation period, a single
terminal would need to send roughly 75,000 TDMA bursts per second to exhaust the range of the
seed. This far exceeds any possible iDirect upstream data rate.
Key ID. This field indicates the entry within the key ring (described under the key management
section later in this document) to be utilized for this frame.
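The seed-exhaustion arithmetic above can be checked directly:

```python
# Quick check of the seed-exhaustion claim in the IV Seed description.
# All values come from the text; only the rounding is ours.
SEED_BITS = 29
KEY_ROTATION_SECONDS = 2 * 60 * 60   # default two-hour key rotation

total_bursts = 2 ** SEED_BITS        # 536,870,912 bursts before the seed wraps
bursts_per_second = total_bursts / KEY_ROTATION_SECONDS

# A single terminal would need roughly 75,000 TDMA bursts per second,
# sustained for the full two hours, to exhaust the seed within one key period.
print(f"{bursts_per_second:,.0f} bursts/s needed to wrap the seed")
```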
The Segment is of fixed, configurable length and consists of the standard iDirect TDMA frame.
A detailed description of the standard frame is beyond the scope of this document, but in
general it consists of a Demand Header, which indicates the amount of bandwidth a remote is
requesting, the iDirect Link Layer (LL) Header, and ultimately the actual Payload. This Segment
is encrypted. The Encryption Header is transmitted
unencrypted but contains only enough information for a receiver to decrypt the segment if it is
in possession of the symmetric key.
Once an encrypted packet exits the Encryption Engine it undergoes normal processing such as
forward error correction coding. This function is essentially independent of TRANSEC but
completes the upstream transmission chain (as shown in Figure 26).
A remote will always burst in its assigned slots even when traffic is not present by generating
encrypted fill payloads as needed. The iDirect Hub dynamic allocation algorithm will always
operate in a mode whereby all available time slots within all time plans are filled.
You must ensure that all protocol processor blades are equipped with Soekris 1201 or 1401
encryption cards for TRANSEC key management. The 1401, which supports AES encryption, is
replacing the 1201 card. However, if you have 1201 cards currently installed, you do not need to
replace them with 1401 cards.
Key Distribution Protocol (Figure 28), Key Rolling (Figure 29), and Host Keying Protocol (Figure
30) are based on standard techniques utilized within an X.509 based PKI.
The Key Distribution Protocol assumes that, upon receipt of a certificate from a peer, the host
is able to validate it and establish a chain of trust based on the contents of the certificate.
iDirect TRANSEC utilizes standard X.509 certificates and methodologies to verify the peer’s
certificate.
After the completion of the sequence shown in Figure 28, a peer may provide a key update
message again in an unsolicited fashion as needed. The data structure utilized to complete key
update (also called a key roll) is shown in Figure 29.
This data structure conceptually consists of a set of pointers (Current, Next, Fallow), a two bit
identification field (utilized in the Encryption Headers described above), and the actual
symmetric keys themselves. A key update consists of generating a new key, placing it in the
last fallow slot just prior to the Current pointer, updating the Next and Current pointers (a
circular update, so 11 rolls to 00), and generating a Key Update message reflecting these
changes.
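A minimal sketch of the key roll, assuming a four-slot ring addressed by the 2-bit Key ID; only the Current pointer is modeled, and `os.urandom` is a stand-in for the actual key source:

```python
import os

# Conceptual sketch of the 4-slot key ring described above (the 2-bit Key ID
# addresses four entries). Pointer handling follows the text; key generation
# here is a placeholder, not iDirect's key source.

class KeyRing:
    SLOTS = 4  # 2-bit identification field -> four entries, IDs 0b00..0b11

    def __init__(self):
        self.keys = [os.urandom(32) for _ in range(self.SLOTS)]  # 256-bit keys
        self.current = 0  # index of the key in active use

    def roll(self):
        """Perform a key update: place a fresh key in the slot just prior to
        Current, then advance Current circularly (0b11 rolls over to 0b00)."""
        fallow = (self.current - 1) % self.SLOTS
        self.keys[fallow] = os.urandom(32)
        self.current = (self.current + 1) % self.SLOTS
        return self.current  # new 2-bit Key ID to place in Encryption Headers

ring = KeyRing()
ids = [ring.roll() for _ in range(5)]  # IDs wrap: 0b11 rolls over to 0b00
```

Because fresh keys always land in a fallow slot while Current advances, receivers holding slightly stale ring state can still decrypt, which is what makes seamless rolls possible.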
The key roll mechanism allows for multiple keys to be “in play” simultaneously so that seamless
key rolls can be achieved. By default the iDirect TRANSEC solution rolls any symmetric key every
two hours, but this is a user configurable parameter. The iDirect Host Keying Protocol is
shown in Figure 30.
This protocol describes how hosts are originally provided an X.509 certificate from a
Certificate Authority. iDirect provides a Certificate Authority Foundry module with its
TRANSEC hub. Host key generation is done on the host in all cases.
When a remote is given the opportunity to acquire into the network, the acquisition sequence
takes place as follows:
First, the protocol processor generates two time plans per inroute. One is the normal time plan
utilized to indicate to remotes which slots in which inroutes they may burst on. This time plan is
always encrypted. The second time plan is not encrypted, and it indicates the owner of the
acquisition slot and which remotes may burst in the clear (unencrypted) on selected slots. The
union of the two time plans covers all slots in all inroutes.
The time plans are then forwarded and broadcast to all remotes in the normal method. Remotes
that are not yet acquired receive the unencrypted time plan and wait for an invitation to join
the network via this unencrypted message.
The remote designated in the acquisition slot acquires in the normal fashion by sending an
unencrypted response in the acquisition slot of a specific inroute.
Once the physical layer acquisition occurs, the remote must follow the key distribution protocol
before it can trust, and be trusted by, the network it is joining. This step must be carried out
in the clear; therefore, remotes in this state will request bandwidth normally and
they will be granted unencrypted TDMA slots. The hub and remotes exchange key negotiation
messages in the cleartext channel. Three message types exist:
• Solicitations, which are used to synchronize, request, inform, and acknowledge a peer.
• Certificate Presentations, which contain X.509 certificates.
• Key Updates, which contain AES key information that is signed and RSA encrypted; the RSA
encryption is accomplished by using the remote’s public key and the signature is created by
using the hub’s private key.
After authentication, the key update message must also be completed in the clear. The actual
symmetric keys are encrypted using the remote’s public key information obtained in the
exchanged certificate. Once the symmetric key is exchanged, the remote enters the network as
a trusted entity, and begins normal operation in an encrypted mode.
11 FAST ACQUISITION
The Fast Acquisition feature reduces the average acquisition time for remotes, particularly in
large networks with hundreds or thousands of remotes. The acquisition messaging process used
in prior versions is included in this release. However, the Protocol Processor now makes better
use of the information available regarding hub receive frequency offsets common to all remotes
to reduce the overall network acquisition time. No additional license is required for this
feature.
FEATURE DESCRIPTION
Fast Acquisition is configured on a per-remote basis. When a remote is attempting to acquire
the network, the Protocol Processor determines the frequency offset at which a remote should
transmit and conveys it to the remote in a time plan message. From the time plan message, the
remote learns when to transmit and at what frequency offset. The remote transmit power level
is configured in the option file. Based on the time plan message, the remote calculates the
correct Frame Start Delay (FSD). The fundamental aspects of acquisition are how often a remote
gets an opportunity to come into the network, and how many frequency offsets need to be tried
for each remote before it acquires the network.
If a remote can acquire the network more quickly by trying fewer frequency offsets, the number
of remotes that are out of the network at any one time is reduced, which in turn increases how
often each remote gets a chance to acquire. This feature reduces the number of frequency
offsets that must be tried for each remote.
By using a common hub receive frequency offset, the fast acquisition algorithm can determine
an anticipated range smaller than the complete frequency sweep space configured for each
remote. As the common receive frequency offset is updated and refined, the sweep window is
reduced.
If an acquisition attempt fails within the reduced sweep window, the sweep window is widened
to include the entire sweep range. Beginning with Release 7.1, Fast Acquisition is enabled by
default. You can disable it by applying a custom key.
For a given ratio x:y, the hub instructs the remote to acquire using the smaller frequency
offset range calculated by the Fast Acquisition scheme. After x attempts in that range, the
remote sweeps the entire range y times before returning to the narrower acquisition range. The
default ratio is 100:1: try 100 frequency offsets within the reduced (common) range before
resorting to one full sweep of the remote’s frequency offsets.
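The x:y alternation can be sketched as a simple generator; the function and its names are illustrative only:

```python
from itertools import islice

# Sketch of the x:y sweep alternation described above: x attempts within the
# narrow (fast) offset range, then y full-range sweeps, repeating forever.
# The default 100:1 ratio corresponds to trying 100 narrow-range offsets
# before one full sweep.

def sweep_schedule(fast_attempts=100, full_sweeps=1):
    """Yield 'narrow' or 'full' forever according to the configured ratio."""
    while True:
        for _ in range(fast_attempts):
            yield "narrow"
        for _ in range(full_sweeps):
            yield "full"

schedule = list(islice(sweep_schedule(), 202))
# Attempts 1-100 use the narrow range, attempt 101 is a full sweep, and
# then the cycle repeats.
```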
To modify the ratio, use the following custom keys to override the defaults. You must apply
the custom keys on the hub side for each remote in the network.
[REMOTE_DEFINITION]
sweep_freq_fast = 100
sweep_freq_entire_range = 1
[SWEEP_METHOD]
sweep_method = 1 (Fast Acquisition enabled)
sweep_method = 0 (Fast Acquisition disabled)
A number of new console commands are available related to this feature. These are described in
Console Commands Reference Guide, iDS Release 7.1.
Fast Acquisition cannot be used on 3100 series remotes when the upstream symbol rate is less
than 260 Ksym/s. This is because the FLL on 3100 series remotes is disabled for upstream rates
less than 260 Ksym/s.
The NMS disables Fast Acquisition for any remote that is enabled for an iDirect Music Box and for
any remote that is not configured to utilize the 10 MHz reference clock. In IF-only networks,
such as a test environment, the 10 MHz reference clock is not used.
FEATURE DESCRIPTION
Remote Sleep mode is supported on all iNFINITI series remotes. In this mode, the BUC is
powered down, thus saving power consumption.
When Sleep Mode is enabled on the iBuilder GUI for a remote, the remote enters Remote Sleep
Mode after a configurable period elapses with no data to transmit. By default, the remote exits
Remote Sleep Mode whenever packets arrive on the local LAN for transmission on the inbound
carrier.
Note: You can use the powermgt mode set sleep console command to enable Remote Sleep Mode,
or powermgt mode set awake to disable it.
The stimulus for a remote to exit sleep mode is also configurable in iBuilder. You can select
which types of traffic automatically “trigger wakeup” on the remote by selecting or clearing a
check box for any of the QoS service levels used by the remote. If no service levels are
configured to trigger wakeup on the remote, you can manually force the remote to exit sleep
mode by disabling sleep mode on the remote configuration screen.
AWAKENING METHODS
There are two methods by which a remote is “awakened” from Sleep Mode: “Operator-Commanded
Awakening” and “Activity-Related Awakening”.
Operator-Commanded Awakening
With Operator-Commanded Awakening, you can manually force a remote into Remote Sleep Mode
and subsequently “awaken” it via the NMS. This can be done remotely from the Hub since the
remote continues to receive the downstream while in sleep mode.
When the remote sees no traffic that triggers the wake up condition for the configured sleep
time-out, it goes into Remote Sleep Mode. In this mode, all the IP traffic that does not trigger a
wake up condition is dropped. When a packet with the service level marking that triggers a
wakeup is detected, the remote resets the sleep timer and wakes up. In Remote Sleep Mode, the
remote processes the burst time plans but it does not apply them to the firmware. No indication
is sent to the remote’s router that the interface is down, and therefore the packets from the
local LAN are still passed to the remote’s distributor queues. Packets that would wake up the
interface will not be dropped by the router and are available to the layers that process this
information. The protocol layer that manages the sleep function drops the packets that do not
trigger the wakeup mode.
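The filtering behavior described above can be sketched as follows; the function and the service level names are hypothetical, not iDirect APIs:

```python
# Illustrative sketch of sleep-mode packet handling: while the remote sleeps,
# packets whose service level has Trigger Wakeup selected wake the modem and
# reset the sleep timer; all other LAN packets are dropped by the protocol
# layer that manages the sleep function.

def handle_packet(service_level, wakeup_levels, state):
    """Return the action taken for a packet arriving at the remote."""
    if state["asleep"]:
        if service_level in wakeup_levels:
            state["asleep"] = False      # wake: re-enable 10 MHz BUC reference
            state["sleep_timer"] = 0     # and reset the inactivity timer
            return "wake-and-transmit"
        return "drop"                    # non-triggering traffic is discarded
    return "transmit"

state = {"asleep": True, "sleep_timer": 0}
wakeup = {"voip", "nms"}                 # hypothetical trigger service levels
assert handle_packet("bulk", wakeup, state) == "drop"
assert handle_packet("voip", wakeup, state) == "wake-and-transmit"
assert handle_packet("bulk", wakeup, state) == "transmit"
```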
Power consumed by the remote under normal and low power (Partial Sleep Mode) conditions is
shown in Table 2 on page 91.
The iDirect Sleep Mode feature requires a custom key in Release 7.1.1. When you enable Sleep
Mode on the Remote QoS tab, the remote will conserve power by disabling the 10 MHz reference
for the BUC after the specified number of seconds have elapsed with no remote upstream data
transmissions. A remote should automatically wake from sleep mode when packets arrive for
transmission on the upstream carrier, provided that Trigger Wakeup is selected for the service
level associated with the packets.
However, in Release 7.1.1, without the appropriate custom key a remote will not wake from
Sleep Mode even if packets arrive for transmission that match a service level with Trigger
Wakeup selected. You must configure the following remote-side custom key in iBuilder on the
Remote Custom tab for all remotes with Sleep Mode enabled:
[SAT 0]
forced = 1
Note: When this custom key is set to 1, a remote with RIP enabled will always advertise the
satellite route as available on the local LAN, even if the satellite link is down.
Therefore, the Release 7.1.1 Sleep Mode feature is not compatible with
configurations that rely on the ability of the local router to detect loss of the
satellite link.
To enable Remote Sleep Mode, see “Chapter 7, Configuring Remotes, Information Tab” of the
iBuilder User Guide.
To configure service level based wake up, see “Chapter 8, Creating and Managing QoS Profiles,
Adding a Service Level” of the iBuilder User Guide.
Beginning with Release 7, iDirect remotes are no longer restricted to a single network. You can
define remotes that “roam” from network to network around the globe. These roaming remotes
are not constrained to a single location or limited to any geographic region. Instead, by using
the capabilities provided by the iDirect “Global NMS” feature, remote terminals have true global
IP access.
The decision of which network a particular remote joins is made by the remote. When joining a
new network, the remote must re-point its antenna to receive a new beam and tune to a new
outroute. Selection of the new beam can be performed manually (by using remote modem
console commands) or automatically. This chapter describes how automatic beam selection is
implemented in an iDirect network.
For detailed information on configuring and monitoring roaming remotes, see the iBuilder User
Guide and iMonitor User Guide.
THEORY OF OPERATION
Since the term "network" is used in many ways, the term "beam" is used rather than the term
"network" to refer to an outroute and its associated inroutes.
ABS is built on iDirect’s existing mobile remote functionality. When a modem is in a particular
beam, it operates as a traditional mobile remote in that beam.
As a vessel moves from the footprint of one beam into the footprint of another, the remote must
shift from the old beam to the new beam. Automatic Beam Selection enables the remote to
select a new beam, decide when to switch, and perform the switch-over without human
intervention. ABS logic in the modem reads the current location from the antenna and decides
which beam will provide optimal performance for that location. This decision is made by the
remote, rather than by the NMS, because the remote must be able to select a beam even if it is
not communicating with the network.
To determine the best beam for the current location, the remote relies on a beam map file that
is downloaded from the NMS to the remote and stored in memory. The beam map file is a large
data file containing beam quality information for each point on the Earth's surface as computed
by the satellite provider. Whenever a new beam is required by remotes using ABS, the satellite
provider must generate new map data in a pre-defined format referred to as a “conveyance
beam map file.” iDirect provides a utility that converts the conveyance beam map file from the
satellite provider into a beam map file that can be used by the iDirect system.
Note: In order to use the iDirect ABS feature, the satellite provider must enter into an
agreement with iDirect to provide the beam map data in a specified format.
The iDirect NMS software consists of multiple server applications. One such server application,
known as the map server, manages the iDirect beam maps for remotes in its networks. The map
server reads the beam maps and waits for map requests from remote modems.
A modem has a limited amount of non-volatile storage, so it cannot save an entire map of all
beams. Instead, the remote asks the map server to send a map of a smaller area (called a beam
“maplet”) that encompasses its current location. When the vessel nears the edge of its current
maplet, the remote asks for another beam maplet centered on its new location. The
geographical size of these beam maplets varies in order to keep the file size approximately
constant. A beam maplet typically covers a 1000 km square.
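As a rough illustration, the maplet refresh decision might look like the following sketch; the 50 km edge margin and the equirectangular distance approximation are our assumptions:

```python
import math

# Sketch of the maplet refresh logic described above: the remote tracks its
# position and, when it nears the edge of the current ~1000 km-square maplet,
# requests a new one centered on its present location.

EARTH_RADIUS_KM = 6371.0
MAPLET_HALF_WIDTH_KM = 500.0   # ~1000 km square maplet
EDGE_MARGIN_KM = 50.0          # request a new maplet this close to the edge

def km_offset(center, pos):
    """Approximate east/north offset in km between two (lat, lon) points."""
    lat0, lon0 = map(math.radians, center)
    lat1, lon1 = map(math.radians, pos)
    dx = (lon1 - lon0) * math.cos(lat0) * EARTH_RADIUS_KM
    dy = (lat1 - lat0) * EARTH_RADIUS_KM
    return dx, dy

def needs_new_maplet(center, pos):
    dx, dy = km_offset(center, pos)
    limit = MAPLET_HALF_WIDTH_KM - EDGE_MARGIN_KM
    return abs(dx) > limit or abs(dy) > limit

center = (40.0, -30.0)                        # maplet centered mid-Atlantic
assert not needs_new_maplet(center, (41.0, -30.0))  # ~111 km north: inside
assert needs_new_maplet(center, (40.0, -24.0))      # ~511 km east: near edge
```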
If the selected beam is unusable, the remote attempts to use another beam, provided one or
more usable beams are available. A beam can become unusable for many reasons, but each
reason ultimately results in the inability of the remote to communicate with the outside world
using the beam. Therefore the only usability check is based on the "layer 3 state" of the satellite
link; that is, whether or not the remote can exchange IP data with the upstream router.
Anything that causes the remote to inhibit its transmitter causes the receive line card to stop
receiving from the modem, which eventually causes Layer 3 to fail. The modem stops transmitting
if
it loses downstream lock. A mobile remote will also stop transmitting under the following
conditions:
• The remote has not acquired and no GPS information is available.
• The remote antenna declares loss-of-lock.
• The antenna declares a blockage.
When a remote cannot determine the best beam, it tries each visible, usable beam defined in
its options file in turn for five minutes until the remote is acquired. This can occur under
various conditions:
• When a remote is being commissioned.
• If the vessel travels with the modem turned off and must locate a beam when returned to
service.
• If the remote cannot remain in the network for an extended period due to blockage or
network outage.
• If the map server is unreachable.
In all cases, after the remote establishes communications with the map server, it immediately
asks for a new maplet. When a maplet becomes available, the remote uses the maplet to
compute the optimal beam, and switches to that beam if it is not the current beam.
A steerable, stabilized antenna must know its geographical location in order to point to the
satellite. The antenna includes a GPS receiver for this purpose. The remote must also know its
geographical location to select the correct beam and to compute its distance from the satellite.
The remote periodically commands the antenna controller to send the current location to the
modem.
IP Mobility
Communications to the customer intranet (or to the Internet) are automatically re-established
after a beam switch-over. The process of joining the network after a new beam is selected uses
the same internet routing protocols that are already established in the iDirect system. When a
remote joins a beam, the Protocol Processor for that beam begins advertising the remote's IP
addresses to the upstream router using the RIP protocol. When a remote leaves a beam, the
Protocol Processor for that beam withdraws the advertisement for the remote's IP addresses.
When the upstream routers see these advertisements and withdrawals, they communicate with
each other using the appropriate IP protocols to determine their routing tables. This permits
other devices on the Internet to send data to the remote over the new path with no manual
intervention.
OPERATIONAL SCENARIOS
This section presents a series of top-level operational scenarios that can be followed when
configuring and managing iDirect networks that contain roaming remotes using Automatic Beam
Selection. Steps for configuring network elements such as iDirect networks (beams) and roaming
remotes are documented in iBuilder User Guide. Steps specific to configuring ABS functionality,
such as adding an ABS-capable antenna or converting a conveyance beam map file, are
described in “Appendix C, Configuring Networks for Automatic Beam Selection” of the iBuilder
User Guide.
2. The satellite provider enters into an agreement with iDirect specifying the format of the
conveyance beam map file.
3. The satellite provider supplies the link budget for the hub and remotes.
4. iDirect delivers the map conversion program to the customer specific to the conveyance
beam map file specification.
5. The satellite provider delivers to the customer one conveyance beam map file for each
beam that the customer will use.
6. You order and install all required equipment and an NMS.
8. The NMS operator runs the conversion program to create the server beam map file from the
conveyance beam map file or files.
9. The NMS operator runs the map server as part of the NMS.
Adding a Vessel
This scenario outlines the steps required to add a roaming remote using ABS to all available
beams.
3. The NMS operator saves the modem's options file and delivers it to the installer.
5. The installer copies the options file to the modem using iSite.
10. The modem enters the network and requests a maplet from the NMS map server.
11. The modem checks the maplet. If the commissioning beam is not the best beam, the modem
switches to the best beam as indicated in the maplet. This beam is then assigned a high
preference rating by the modem to prevent the modem from switching between overlapping
beams of similar quality.
Note: Check the levels the first time the remote enters each new beam and adjust
the transmit power settings if necessary.
Normal Operations
This scenario describes the events that occur during normal operations when a modem is
receiving map information from the NMS.
2. The modem receives the current location from the antenna every five minutes.
4. As the ship approaches the edge of the current maplet, the modem requests a new maplet
from the map server.
5. When the ship reaches a location where the maplet shows a better beam, the remote
switches by doing the following:
a. Computes best beam.
b. Saves best beam to non-volatile storage.
c. Reboots.
d. Reads the new best beam from non-volatile storage.
e. Commands the antenna to move to the correct satellite and beam.
f. Joins the new beam.
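The switch-over steps above can be sketched as a small sequence; the function names and the dictionary standing in for non-volatile storage are illustrative, not iDirect APIs:

```python
# Sketch of the beam switch-over steps a-f. The key point is that the beam
# choice must survive the reboot, so it is persisted to non-volatile storage
# before rebooting and re-read afterward.

nv_storage = {}                      # stands in for non-volatile memory
log = []

def compute_best_beam(maplet, location):
    # pick the beam with the highest quality value at this location
    return max(maplet[location], key=maplet[location].get)

def switch_beam(maplet, location):
    best = compute_best_beam(maplet, location)      # a. compute best beam
    nv_storage["best_beam"] = best                  # b. save to NV storage
    log.append("reboot")                            # c. reboot
    beam = nv_storage["best_beam"]                  # d. re-read after reboot
    log.append(f"antenna -> {beam}")                # e. re-point the antenna
    log.append(f"join {beam}")                      # f. join the new beam
    return beam

maplet = {("47N", "30W"): {"beam-A": 11.5, "beam-B": 14.2}}  # quality figures
assert switch_beam(maplet, ("47N", "30W")) == "beam-B"
```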
Mapless Operations
This scenario describes the events that occur during operations when a modem is not receiving
beam mapping information from the NMS.
1. While operational in a beam, the remote periodically asks the map server for a maplet. The
remote does not attempt to switch to a new beam unless one of the following conditions is
true:
a. The remote drops out of the network.
b. The remote receives a maplet indicating that a better beam exists.
c. The satellite drops below the minimum look elevation defined for that beam.
2. If not acquired, the remote selects a visible, usable beam based only on satellite longitude
and attempts to switch to that beam.
3. After five minutes, if the remote is still not acquired, it marks the new beam as unusable
and selects the best beam from the remaining visible, usable beams in the options file. This
step is repeated until the remote is acquired in a beam, or all visible beams are marked as
unusable.
4. If all visible beams are unusable, the remote marks them all as usable, and continues to
attempt to use each beam in a round-robin fashion as described in step 3.
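A sketch of this mapless fallback, assuming “based on satellite longitude” means choosing the beam whose satellite longitude is nearest the remote’s own longitude:

```python
# Sketch of the mapless fallback described above: without maplet data the
# remote selects among visible, usable beams by satellite longitude, marks a
# beam unusable after a failed attempt, and when every beam has been marked
# unusable it clears the marks and keeps cycling.

def next_beam(beams, unusable, current_longitude):
    """Pick the visible beam whose satellite longitude is nearest the remote.

    `beams` maps beam name -> satellite longitude (degrees); `unusable` is
    the set of beams that have failed recently.
    """
    candidates = {b: lon for b, lon in beams.items() if b not in unusable}
    if not candidates:            # all beams failed: mark all usable again
        unusable.clear()
        candidates = dict(beams)
    return min(candidates, key=lambda b: abs(candidates[b] - current_longitude))

beams = {"east": -30.0, "central": -61.0, "west": -101.0}
failed = set()
assert next_beam(beams, failed, -60.0) == "central"
failed.add("central")                     # five-minute attempt failed
assert next_beam(beams, failed, -60.0) == "east"
failed.update({"east", "west"})           # every beam now marked unusable
assert next_beam(beams, failed, -60.0) == "central"  # marks cleared, retry
```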
2. If the remote loses network connectivity for five minutes, it marks the current beam as
unusable and selects a new beam based on the maplet.
3. Any beam marked as unusable remains unusable for an hour or until all beams are marked as
unusable.
4. If only the current beam is visible, the remote will not attempt to switch from that beam,
even after losing connectivity for five minutes.
Error Recovery
This section describes the actions taken by the modem under certain error conditions.
1. If the remote cannot communicate with the antenna and is not acquired into the network, it
will reboot after five minutes.
2. If the antenna is initializing, the remote waits for the initialization to complete. It will not
attempt to switch beams during this time.
FEATURE DESCRIPTION
Prior to iDS 7.X releases, you were limited to configuring your primary NMS server to back up
your network configuration database to a backup NMS server, with both NMS servers (primary
and backup) typically located at the same teleport. This NMS Backup feature provided database
redundancy in case of NMS server failure, but provided no redundancy in the event of an RF
interruption at the Primary Teleport that caused the remotes to lose the downstream carrier.
The new Hub Geographic Redundancy feature builds on the previously developed Global NMS
feature (see iDS Release 7.0 Features), and the existing dbBackup/dbRestore utility. You
configure the Hub Geographic Redundancy feature by defining all the network information for
both the Primary and Backup Teleports in the Primary NMS. All remotes are configured as
roaming remotes and they are defined identically in both the Primary and Backup Teleport
network configurations.
Only iNFINITI remotes can currently participate in Global NMS networks. Since the backup
teleport feature also uses the Global NMS capability, this feature is also restricted to iNFINITI
remotes.
During normal (non-failure) operations, carrier transmission is inhibited on the Backup Teleport.
During failover conditions (when roaming network remotes fail to see the downstream carrier
through the Primary Teleport NMS) you can manually enable the downstream transmission on the
Backup Teleport, allowing the remotes to automatically (after the configured default wait
period of five minutes) acquire the downstream transmission through the Backup Teleport NMS.
Deploying this feature requires the following:
• A separate IP connection (at least 128Kbps) between the Primary and Backup Teleport NMS
servers for database backup and restore operations. A higher-rate line can be employed to
reduce the database archive time.
• The downstream carrier characteristics for the Primary and Backup Teleports MUST be
different: at least one of the FEC, frequency, frame length, or data rate values must
differ.
• On a periodic basis, backup and restore your NMS configuration database between your
Primary and Backup Teleports. See the NMS Redundancy and Failover Technical Note for
complete NMS redundancy procedures.
For example, to set the wait period to 10 minutes, use the following custom key (the value is
specified in seconds):
net_state_timeout=600
For further configuration information, see “Chapter 5, Defining Network Components, Adding a
Backup Teleport” of the iBuilder User Guide.
15 CARRIER BANDWIDTH OPTIMIZATION
This chapter describes carrier bandwidth optimization and carrier spacing. It includes:
• “Overview,” which describes how reducing carrier spacing increases overall available
bandwidth.
• “Increasing User Data Rate,” which provides an example of how you can increase user data
rates without increasing occupied bandwidth.
• “Decreasing Channel Spacing to Gain Additional Bandwidth,” which provides an example of
how you can free occupied bandwidth for an additional carrier.
OVERVIEW
The Field Programmable Gate Array (FPGA) firmware uses optimized digital filtering, which
reduces the amount of satellite bandwidth required for an iDirect carrier. Instead of a 40%
guardband between carriers, the guardband may now be reduced to as low as 20% on both the
broadcast Downstream channel and the TDMA Upstream. Figure 31 on page 104 shows an overlay
of the original spectrum and the optimized spectrum.
This optimization translates directly into a cost savings for existing and future networks
deployed with iDirect NetModems.
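To illustrate the saving, occupied bandwidth can be modeled as the carrier symbol rate multiplied by a channel-spacing factor (1.4 for a 40% guardband, 1.2 for 20%). The sketch below assumes QPSK (2 bits per symbol) and FEC rate 0.793; it is a simplified model for illustration, not an iDirect tool.

```python
def occupied_bw_hz(info_rate_bps, fec_rate, spacing, bits_per_symbol=2):
    """Occupied bandwidth = symbol rate x channel-spacing factor."""
    symbol_rate = info_rate_bps / (fec_rate * bits_per_symbol)  # symbols/sec
    return symbol_rate * spacing

# A 1 Mbps carrier at FEC rate 0.793 (QPSK):
old = occupied_bw_hz(1_000_000, 0.793, 1.4)  # 40% guardband
new = occupied_bw_hz(1_000_000, 0.793, 1.2)  # 20% guardband
print(f"{old/1e3:.1f} kHz -> {new/1e3:.1f} kHz "
      f"({(1 - new/old)*100:.1f}% less spectrum)")
```

For this carrier the occupied bandwidth drops from about 882.7 kHz to about 756.6 kHz, a spectral saving of roughly 14%.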
The spectral shape of the carrier is not the only factor contributing to the guardband
requirement. Frequency stability parameters of a system may result in the need for a guardband
of slightly greater than 20% to be used. iDirect complies with the adjacent channel interference
specification in IESS 308 which accounts for adjacent channels on either side with +7dB higher
power.
Be sure to consult the designer of your satellite link prior to changing any carrier parameters to
verify that they do not violate the policy of your satellite operator.
A consequence of choosing this option is that increasing the bit rate of the carrier to fill the
extra bandwidth requires slightly more power. Increasing the bit rate by 15% would result in an
additional 0.5 dB of power. Be sure to consult the provider of your link budget prior to adjusting
the bit rate of your carriers.
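As a first-order check of the power figure above: if transmit power scales in proportion to bit rate (constant Eb/N0), the increase in dB is 10·log10 of the rate ratio. This is a rough estimate only, not a substitute for a link budget.

```python
import math

def power_delta_db(rate_ratio):
    """dB increase needed when power scales linearly with bit rate."""
    return 10 * math.log10(rate_ratio)

# A 15% bit-rate increase gives about 0.6 dB under this simple model,
# on the order of the ~0.5 dB figure cited in the text.
print(f"{power_delta_db(1.15):.2f} dB")
```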
Frequency stability in the system may limit the amount of bit rate increase by increasing the
guardband requirement.
The example that follows illustrates a scenario applicable to a system with negligible frequency
stability concerns. It shows how the occupied bandwidth does not increase when the user data
rate increases. In this example, FEC rate 0.793 with 4kbit Turbo Product Code is used.
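The arithmetic behind this example can be sketched as follows, assuming QPSK (2 bits per symbol); the function name is illustrative, not part of any iDirect software. Holding the occupied bandwidth fixed while reducing the spacing factor from 1.4 to 1.2 raises the symbol rate, and therefore the user data rate, by about 17%.

```python
def info_rate_bps(occupied_hz, spacing, fec_rate, bits_per_symbol=2):
    """User data rate that fits in a fixed occupied bandwidth."""
    symbol_rate = occupied_hz / spacing
    return symbol_rate * bits_per_symbol * fec_rate

occupied = 882_724          # Hz: a 1 Mbps carrier at 1.4 channel spacing
old_rate = info_rate_bps(occupied, 1.4, 0.793)   # ~1.000 Mbps
new_rate = info_rate_bps(occupied, 1.2, 0.793)   # ~1.167 Mbps
print(f"{old_rate/1e6:.3f} Mbps -> {new_rate/1e6:.3f} Mbps in the same bandwidth")
```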
The system compensates for remote transmitter frequency offset using an automatic frequency
control algorithm. Any additional instability must be accommodated by additional guardband.
The frequency references to the hub transmitter and to the satellite itself are generally very
stable so the main source of frequency instability is the downconverter at the hub. This is
because the automatic frequency control algorithm uses the hub receiver’s estimate of
frequency offset to adjust each remote transmitter frequency. Hub stations which use a
feedback control system to lock their downconverter to an accurate reference may have
negligible offsets. Hub stations using a locked LNB will have a finite frequency stability range.
Another reason to add guardband is to account for frequency stability of other carriers directly
adjacent on the satellite which are not part of an iDirect network. Be sure to review this
situation with your satellite link designer before changing carrier parameters.
The example that follows accounts for a frequency stability range in systems using equipment
with more significant stability concerns. Given the “Current Carrier Parameters” from the
previous example and a total frequency stability of +/-5kHz, compute the new carrier
parameters:
Solution:
• Subtract the total frequency uncertainty from the available bandwidth to determine the
amount of bandwidth left for the carrier (882.724kHz – 10kHz = 872.724kHz).
• Divide this result by the minimum channel spacing (872.724 / 1.2 = 727.270kHz).
• Use the result as the carrier symbol rate and compute the remaining parameters.
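The solution steps above can be checked numerically. The helper below is illustrative only, assuming QPSK (2 bits per symbol) and the FEC rate 0.793 from the previous example.

```python
def new_carrier(available_hz, uncertainty_hz, spacing, fec_rate=0.793):
    """Apply the solution steps: reserve the frequency uncertainty,
    then size the symbol rate to the minimum channel spacing."""
    usable = available_hz - uncertainty_hz      # bandwidth left for the carrier
    symbol_rate = usable / spacing              # new carrier symbol rate
    info_rate = symbol_rate * 2 * fec_rate      # QPSK: 2 bits per symbol
    return symbol_rate, info_rate

# 882.724 kHz available, +/-5 kHz total stability (10 kHz uncertainty):
sym, info = new_carrier(882_724, 10_000, 1.2)
print(f"symbol rate {sym/1e3:.3f} ksym/s, info rate {info/1e6:.3f} Mbps")
```

This reproduces the 727.270 kHz symbol rate from the solution and yields a user data rate of about 1.153 Mbps.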
Therefore, a network operator may take advantage of the new carrier bandwidth optimization
by reworking their frequency plan such that excess bandwidth is available for use by another
carrier.
For example, consider an iDirect network with a user data (information) rate of 5Mbps on the
downstream and three upstream carriers of 1Mbps each. FEC rate 0.793 with 4kbit TPC is used
for all carriers in this example. Figure 32 on page 107 shows that an additional Upstream carrier
may be added by reducing the channel spacing of the existing carriers.
QOS ENHANCEMENTS
Beginning with iDS Release 7.1.1, additional burst bandwidth is assigned evenly among all
remotes in the network by default. All available burstable bandwidth (BW) is equally divided
between all remotes requesting additional BW, regardless of already allocated CIR.
Prior to this release, a remote in a highly congested network would often not get burst
bandwidth above its CIR. For example, consider a network with a 3 Mbps upstream and three
remotes, R1, R2, and R3. R1 and R2 are assigned a CIR of 1 Mbps each and R3 has no CIR. If all
remotes request 2 Mbps each, 1 Mbps is given to R3, making the total used BW 3 Mbps. In this
case, R1 and R2 receive no additional BW.
Beginning with this release, using the same example network, the additional 1Mbps BW is evenly
distributed by giving each remote an additional 333 Kbps. The default configuration is to allow
even bandwidth distribution. You can, however, use custom keys to configure your network to
operate as in legacy releases. Custom keys are applied in iBuilder at the network level by
modifying the network and entering the custom keys in the Custom tab of the Modify
Configuration Object window. To configure your network to operate in the legacy mode, apply
the custom keys to the downstream and upstream.
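The even-distribution behavior described above can be sketched as follows. This is illustrative code, not the iDirect scheduler: CIR is granted first, then the remaining bandwidth is split equally among remotes still requesting more, and it assumes the total CIR does not exceed the upstream capacity.

```python
def allocate(total_bps, demands, cirs):
    """Grant CIR first, then split leftover bandwidth evenly among
    remotes whose demand is not yet met (capped at each demand)."""
    grants = [min(d, c) for d, c in zip(demands, cirs)]
    leftover = total_bps - sum(grants)
    active = [i for i in range(len(demands)) if demands[i] > grants[i]]
    while leftover > 1e-9 and active:
        share = leftover / len(active)
        for i in list(active):
            extra = min(share, demands[i] - grants[i])
            grants[i] += extra
            leftover -= extra
            if demands[i] - grants[i] < 1e-9:
                active.remove(i)
    return grants

# 3 Mbps upstream; R1 and R2 have 1 Mbps CIR, R3 has none; all request 2 Mbps.
print(allocate(3e6, [2e6, 2e6, 2e6], [1e6, 1e6, 0]))
```

With the example network, each remote receives an extra 333 Kbps: R1 and R2 get about 1.333 Mbps each and R3 gets about 333 Kbps, matching the distribution described above.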
If you are using frequency hopping mode, use the following custom key to configure the
upstream:
[INROUTE_GROUP_#]
legacy_fairness = 1
Where,
# = the inroute group ID number.
If you are using carrier grooming mode, use the following custom key to configure the upstream:
[INROUTE_#]
legacy_fairness = 1
Where,
# = the inroute ID number.
For detailed information about QoS, refer to “QoS Implementation Principles” on page 37.
Further QoS configuration procedures can be found in “Chapter 8, Creating and Managing QoS
Profiles, Adding a Service Level” of the iBuilder User Guide.