
DESCRIPTION

1551-ANZ 211 71/2 Uen A

CPS, CENTRAL PROCESSOR SUBSYSTEM


Contents

1 General
2 Functions
2.1 Survey
2.2 Program Execution and Data Handling
2.3 Software Kernel
2.4 Function Change
2.5 System Backup Copy
2.6 Loading
2.7 Output of Software Units
2.8 Size Alteration
2.9 Program Correction
2.10 Program Test
2.11 Processor Load Measurements
2.12 Maintenance Statistics
2.13 Product Administration
2.14 Audit Functions
2.15 Signal Linking and Symbol Translation
2.16 Program Change
2.17 Administration of AXE parameters
3 Products
3.1 Software
3.2 Hardware
4 Hardware
4.1 General
4.2 Hardware APZ 212 33
4.3 Hardware APZ 212 33 C
5 Characteristics
5.1 System Limits
5.2 Capacity

Acronyms and Abbreviations

1 General
The subsystem CPS implements the principles of program control and data handling in the central processor that are used by the application system and the rest of APZ. CPS includes the central processor with its memories and all the central operating system functions. I/O and maintenance functions are not included in CPS.

CPS interacts with RPS through the bus RPB. Communication with the MAS hardware takes place through the maintenance bus AMB and the test bus CTB. Communication with the Inter Platform Network (IPN) takes place through Ethernet.

CPS is duplicated for reasons of reliability. In the normal state the two processor sides work independently of each other but in synchronism. The bus UMB is used for updating and comparison between the sides.

The CPS software interacts with the other subsystems through normal software signals. Interruption signals are used to activate micro programs in CPS from the other subsystems.

2 Functions
2.1 Survey
CPS contains the following functions:
- Program execution and data handling in the central processor.
- CPS software kernel, i.e. service functions which are called from other functions.
- Function change, i.e. replacement, addition and deletion of function blocks.
- Handling of the system backup copy for use in case of a serious system fault.
- Loading and removal of software units, and store administration.
- Copying of software units to an external medium.
- Size alteration of data files.
- Program correction, i.e. modification of the program code in software units loaded in APZ.
- Tools for program test.
- Measurement of the processor load.
- Collection of maintenance statistics.
- Product administration.
- Audit functions.
- Signal linking and symbol translation.
- Program change.
- Administration of AXE parameters.

2.2 Program Execution and Data Handling


CPS is used for storage of the central software units, which are products in the hierarchical product structure of the AXE system. The central software units are handled as independent products, both in the "physical" handling (e.g. loading, deletion, replacement) and in the "logical" handling (e.g. program interaction, data accessing).

The parts of a central software unit are stored in different stores, all of which belong to CPS. The program code is stored in the program store PS, and variables are stored in the data store DS. In order to implement the applicable principles of program interwork, and to facilitate new allocations and re-allocations of programs and variables, use is made of the reference store RS. RS stores start addresses and other information required for the addressing of programs and data.

The execution of the central software units takes place in the central processor CP, which is divided into a number of function blocks. The program execution and the data handling are controlled by the micro program MIP. MIP is divided into two parts, MIPI and MIPS, dealing with different tasks in CP. MIPI contains, among other things, micro programs that implement the machine instructions of the system. Functions to administer the signal and job handling are included in MIPS. The micro program is aided in that some of the function blocks in CP can independently fetch instructions from the program store, make address calculations for the data store, and fetch data from DS. Special functions have also been implemented in the micro program: functions that are time-critical, functions that are difficult to solve in programs, and functions that increase the capacity of the system considerably compared with program solutions.

Some of the program execution functions are implemented in software, for example timing and certain supervision functions. These functions are implemented in function block JOB, the job monitor. JOB also contains certain variables which are used by the micro program during program execution, for example the storage of delayed signals and the tables for regular requests of the user programs.
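The role of the reference store in this scheme can be illustrated with a minimal sketch (the class, field names and address values below are invented for illustration; the real RS layout is internal to APZ):

```python
# Sketch of reference-store addressing. Block numbers and addresses are
# invented; the real RS format is APZ-specific.

class ReferenceStore:
    """Maps a block number to the start addresses of the block's program
    code in the program store (PS) and its variables in the data store (DS)."""

    def __init__(self):
        self._table = {}  # block number -> (PS start address, DS start address)

    def register(self, block_no, ps_start, ds_start):
        self._table[block_no] = (ps_start, ds_start)

    def relocate(self, block_no, new_ps_start):
        # Re-allocating a program only requires updating its RS entry;
        # callers keep addressing the block through its block number.
        ps_start, ds_start = self._table[block_no]
        self._table[block_no] = (new_ps_start, ds_start)

    def resolve(self, block_no):
        return self._table[block_no]

rs = ReferenceStore()
rs.register(block_no=7, ps_start=0x1000, ds_start=0x8000)
rs.relocate(block_no=7, new_ps_start=0x2000)
assert rs.resolve(7) == (0x2000, 0x8000)  # code moved, data untouched
```

The point of the indirection is the one stated above: programs and data can be newly allocated or re-allocated without patching any references held by other blocks.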

2.3 Software Kernel


Some functions in CPS are service routines which are called by software inside and outside CPS. These routines are implemented in the function blocks JOB, LARI, LAD, LABA, KEED, RTS and BIN and may be grouped as follows:
- Time measurement functions (job table and time queues) (JOB).
- Real time clock and calendar functions (JOB).
- Arithmetic functions (JOB).
- Interface to central system tables (LARI).
- Allocation of dynamic and communication buffers (LAD).
- Loading of program store cache (LABA).
- Handling of System States and System Events (KEED).
- Run-time support for HLPlex applications (RTS, BIN).

2.4 Function Change


The following types of function change have aids and operational instructions that belong to CPS:
- Addition of central software.
- Removal of central software.
- Replacement of central software:
  - data is not transferred
  - data is transferred:
    - data conversion not necessary
    - data conversion.
- Replacement of regional software and/or hardware.
- Replacement of central software and regional software and/or hardware.

For all these function changes, use is made of the fact that the central processor with its stores and the regional processors are duplicated. The central processor is synchronously duplicated, and the regional processor pairs work under load sharing. Both the central processor and the regional processors can work in single machine operation. The control of single/double machine operation is handled by the maintenance subsystem MAS.

When adding central software, the machine sides of the central processor are first separated, and the separated side is used as standby in case of a fault. The loading takes place by means of the loading system (see below). The activation of the new function can be made with a local start or with a system restart, depending on how much the new function interacts with the existing system. After some time of test operation, the central processor is once more taken into parallel operation.

When deleting central software, the machine sides of the central processor are likewise first separated, and the separated side is used as standby in case of a fault. The deletion of the unwanted function is made with the loading system. After some time of test operation, the central processor is once more taken into parallel operation.

When changing the central software, two methods are used. With both methods, the machine sides of the central processor are first separated. If the central processor stores have room for both the old and the new version of the software, loading and switching in to the new program version are made in the operating machine side. The separated side is used as standby in case of a fault. When switching in to the new software, variable data can be transferred between the old and the new software. Possible conversion of data is made according to the data conversion information that has been loaded earlier. After some time of test operation, the central processor is once more taken into parallel operation.

For more extensive replacement of central software, all the preparations for the function change are instead made in the separated side, and the side in operation is left alone. The separated side is used for loading, deletion, replacement and transfer of (exchange) data from the operating side. Data transfer is made via the Maintenance Unit (MAU) in subsystem MAS and normally also via the Update and Match Bus (UMB). When the separated side is completely prepared, side change and data transfer are made. Possible conversion of data is made according to the loaded data conversion information. Then a system restart takes place, and the new software takes over the operation. The other machine side, with the old software, is now standby in case of a fault. After some time of test operation, the central processor is once more taken into parallel operation.

When replacing regional software and/or hardware, the whole load is first transferred to one of the regional processors in the pair. The replacement is then made in the other regional processor. The complete load is transferred to the regional processor where the replaced functions are placed, and then follows some time of test operation. If the function works in a satisfactory way, the replacement is also made in the other regional processor, and normal load sharing is introduced again.

When replacing central software together with regional software and/or hardware, the machine sides of the central processor are first separated, and the load on all the regional processor pairs is transferred to one of the regional processors. The regional processors with load are connected to the operating side of the central processor; regional processors without load are connected to the separated side. Replacement of the central software and the regional software and/or hardware is now made in the separated, passive system side according to the previously described function change procedure. When the new system side is completely prepared, a side change is made, followed by data transfer and system restart; the new side takes over the operation and the old side is standby in case of a fault. After some time of test operation, the regional software and/or hardware in the old side is replaced, and the system is once more taken into parallel operation.

The function change functions are implemented in the set of parts Function Change of CP (FCCP) in CPS and in the maintenance subsystem (MAS). Simultaneous changes in CP, RP and EMRP are done in co-operation with RPS-B and RPS-M. The loading system in CPS is also used for function changes.

2.5 System Backup Copy

A system backup copy is provided as security for use in the case of a serious system error. The system backup copy is a copy of the contents of the central processor's stores. When the maintenance subsystem MAS considers that the existing software is no longer functional (several system restarts take place consecutively within a brief period of time), MAS sends an order to the bootstrap loader to reload the system with the system backup copy.

The system backup copy is located on a CP file or in a dedicated area in the main store which acts as a fast cache for the external medium. If a reload attempt is considered unsuccessful, the system can automatically try to load an older system backup copy, if so defined by command.

An output of a new system backup copy must always be made after major system changes such as function changes and size alterations. The following contradictory requirements apply to the system backup:
- The system must start up safely.
- The information loss must be kept to a minimum.

In order to meet the first requirement, reloading should take place with the oldest (best proven) system backup copy possible. The second requirement means that the system backup copy should be as young as possible. These two incompatible requirements have resulted in a compromise: output data has been subdivided into two types, data which can control the program path, and data of purely recording type (for example, charging data). Both these data types occur in two versions on the external system backup copy (one version only in the main store backup) and are automatically output at times determined for the exchange. Programs and reference information are output only after major system changes such as function changes and size alterations.

If reloading takes place, the following information is loaded to the CP stores:
- The youngest version of the data of purely recording type.
- The oldest version of the data which can control the program path (the youngest when taken from the backup copy in the main store).
- Program and reference information.
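The selection rule above can be sketched as follows (a minimal model with invented names and version labels; the real generation handling is performed by the backup and loading functions):

```python
# Sketch of the reload selection rule. Two generations of each data type
# exist on the external system backup copy; the main store backup keeps one.

def select_reload_versions(generations, from_main_store=False):
    """generations: data type -> list of versions, oldest first."""
    # Recording-type data: always the youngest, to minimise information loss.
    recording = generations["recording"][-1]
    if from_main_store:
        # Only one (the youngest) path-controlling version is kept here.
        path_controlling = generations["path_controlling"][-1]
    else:
        # External copy: the oldest (best proven) version, for a safe start.
        path_controlling = generations["path_controlling"][0]
    return {"recording": recording, "path_controlling": path_controlling}

gens = {"recording": ["R1", "R2"], "path_controlling": ["P1", "P2"]}
print(select_reload_versions(gens))
# {'recording': 'R2', 'path_controlling': 'P1'}
```

This makes the compromise concrete: information loss is minimised for recording data, while start-up safety is maximised for data that can steer the program path.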

The system backup copy function is implemented in the set of parts Backup of CP (BUCP) and in the function block LAB.

2.6 Loading
CPS contains loading functions for:
- Initial loading at system start.
- Reloading of the system backup copy in case of a serious system fault.
- Loading during function change.

Initial loading at system start can be made with a system backup copy from another exchange or from a system test plant. Initial loading of the system backup copy is made by a PROM-stored function block called LAB. Reloading of the system backup copy in case of a serious system fault is made by LAB upon order from the maintenance subsystem MAS. At reloading, the youngest available system backup copy on line is normally selected, but the system can also be configured to automatically use an older, more proven, system backup copy if the reloading of a previous backup copy is not successful. After a successful reloading, the command log associated with the system backup copy is executed automatically.

Relocatable software units are loaded in the case of function change. Both additional loading and loading for replacing software units can be made. There are also functions to load the tables that are used for variable conversions during a function change. The loading functions also include functions to delete software units and for store administration.

The functions are implemented in the set of parts Loading Functions in CP (LOCP). The loading information is stored on a CP file.

2.7 Output of Software Units


Output of software units can be made to a CP file. A central software unit that is output receives a form that agrees with an output made with the programming system APS. The output made in APZ, however, also contains the program corrections and the variable data which the software unit had at the time of the output. A software unit that has been output can then be loaded again in the usual manner. The output function is implemented in function block BUC.

2.8 Size Alteration


When extending an exchange, the data files of certain central software units must usually also be extended. CPS contains functions to perform size extension and size reduction of data files without disturbing the operation. A data file can contain a number of variables where each variable has an equal number of records (= individuals). All the data files in the system are grouped in a number of numbered size alteration cases. Global size alteration cases concern data files belonging to several software units, whereas local size alteration cases only concern one software unit.

A size alteration is initiated by command or by an application program. The size alteration takes place in interaction with the file-owning software unit, in such a manner that the file-owning software unit answers questions, for example whether the size alteration is permitted, and also learns when the new file size can be used. The physical size alteration in the store takes place by means of the store-administering functions of the loading system. The size alteration function is implemented in the set of parts Size Alteration of Data Files (SACP).
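The dialogue with the file-owning software unit can be sketched as follows (a hypothetical interface with invented names; the real interaction uses APZ-internal software signals):

```python
# Sketch of the size-alteration dialogue. The interface is invented; the
# real interaction between SACP and the file owner uses software signals.

class FileOwner:
    """Stands in for the file-owning software unit."""

    def __init__(self, max_records):
        self.max_records = max_records   # invented application limit
        self.usable_size = 0

    def permit(self, new_size):
        # The owner answers whether the alteration is permitted.
        return new_size <= self.max_records

    def size_changed(self, new_size):
        # The owner is told when the new file size can be used.
        self.usable_size = new_size

def alter_size(owner, new_size):
    if not owner.permit(new_size):
        return False
    # ...physical re-allocation via the loading system's store administration...
    owner.size_changed(new_size)
    return True

owner = FileOwner(max_records=1000)
assert alter_size(owner, 500) is True and owner.usable_size == 500
assert alter_size(owner, 2000) is False and owner.usable_size == 500
```

The design point is that the physical store handling and the logical approval are separated: the loading system moves the data, while the owning block keeps the authority over when and whether the new size is taken into use.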

2.9 Program Correction


In case of an error in central software, the normal action is to replace the faulty software unit by a corrected version with a function change. If the fault is serious enough to require immediate correction, a program correction can be introduced. CPS contains a program correction system for this purpose. It is implemented in the set of parts Program Correction in CP (PCCP). The corrections are introduced by command in the assembler language. The main use of the program correction system is in system test plants.

2.10 Program Test


The program test system is used for tests and fault finding in system test plants and in operating exchanges. The program test system uses micro programs (MIP) and hardware (CP) for its functions. The program testing in CP is fully integrated with the program testing in RP and EMRP.

The functions are mainly based on supervision and tracing of specified events. When a supervised or traced event occurs, predefined tasks are executed. Such a task can for example be storage of important data for a later printout; the printout is received automatically. Supervision and tracing are obtained by setting trace bits in the program and reference stores and in a hardware unit in CP. These trace bits are scanned continuously by the micro program, which checks whether a trace bit has been set. This can be done without loss of capacity, since the micro program and the hardware often work in parallel during normal program execution, and the micro program must in any case wait for activities such as store accesses.

Supervision and tracing, as well as the tasks, are set by command. A number of supervision types can be selected; for each supervision type, one or several tasks are selected from a task library. When the traced software units are forlopp adapted, the starting point for the tracing or supervision is indicated, and the test system then automatically traces the forlopp. If the traced software units are not forlopp adapted, trace or supervision commands are used for each software unit of interest.
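The trace-bit mechanism can be modelled with a small sketch (a simplified software model with an invented instruction stream; in APZ the scanning is done by the micro program in parallel with normal execution):

```python
# Simplified software model of trace-bit driven supervision. In APZ the
# trace bits are scanned by the micro program; here an interpreter loop
# stands in for it.

def execute(program, trace_bits, task):
    """Run 'program'; whenever the trace bit for an address is set,
    execute the predefined task for that address."""
    trace_results = []
    for address, instruction in enumerate(program):
        if trace_bits.get(address):          # trace bit set for this address?
            trace_results.append(task(address, instruction))
        # ...normal execution of the instruction would happen here...
    return trace_results

program = ["LOAD", "ADD", "STORE", "JUMP"]   # invented instruction stream
trace_bits = {2: True}                       # supervise address 2
log = execute(program, trace_bits,
              task=lambda addr, instr: "trace: %s at %d" % (instr, addr))
print(log)  # ['trace: STORE at 2']
```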

The program test system is implemented in the set of parts Test System in CP (TECP) and function block TETM.

2.11 Processor Load Measurements


In order to verify that the system fulfills the capacity requirements and that the system acts correctly in extreme load situations, CPS contains aids for various load measurements on the central and regional processors. Examples of measurements are the extent to which the processors are loaded and the extent to which various buffers are filled. The time consumed by a specified job can also be measured. The functions are initiated by commands and are implemented in function blocks MEI, MEM, MEO and MEHW.
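As a minimal illustration of a processor load figure, load can be expressed as the fraction of a measurement interval spent executing jobs (the function and figures below are invented; the real measurements in MEI, MEM, MEO and MEHW are more elaborate):

```python
# Minimal illustration of a processor load figure: the fraction of a
# measurement interval spent on jobs. All figures are invented.

def processor_load(busy_intervals, total_time):
    """busy_intervals: durations (same unit as total_time) spent on jobs."""
    return 100.0 * sum(busy_intervals) / total_time  # load in per cent

print(processor_load([12.0, 8.0, 5.0], total_time=100.0))  # 25.0
```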

2.12 Maintenance Statistics


The Maintenance Statistics function collects status information and information about events which have happened in the system. The collected data can be used to evaluate performance indices. The function is controlled by the support processor, SP. Contents of variables can be transferred to the SP, where the data is processed. The variables to be transferred are described in DIDs, Data Interface Descriptions. Examples of variables transferred in CPS are counters of System Events and System State changes as reported to block KEED. Examples of maintenance statistics collected in CPS are:
- Number of restarts.
- Number of bit faults.
- Accumulated system stop time.
- Accumulated time for blocked CP.
- Memory sizes.

The Maintenance Statistics function is implemented in blocks LAVS and MEMS and uses KEED functions.

2.13 Product Administration


The Product Administration function checks that identities of software units and corrections are correct. The Product Administration function is implemented in block PACR.

2.14 Audit Functions


Audit functions detect and inform operators about errors and system states which require manual intervention. The audit function group consists of two categories of blocks. The first category contains blocks which are common to all audit functions, such as I/O handling, supervision, and administration of the audit function handlers. The second category consists of handlers which detect different types of errors in the CP.

When an audit function detects an error or another state that requires manual intervention, an alarm may be raised. The alarm ceases either after an operator has taken the appropriate actions, after correction of an error, or after a reloading of the system. Three different audit functions exist today:
- A function to detect uncontrolled writings in the program store.
- A function to supervise the utilization of the data files in a size alteration case and the utilization of the different stores.
- A function to provide control of the exchange build level.

The audit functions are implemented in the set of parts Audit Functions in CP (AFCP).

2.15 Signal Linking and Symbol Translation


This function is used when software units are loaded to and output from the system. Before the software units are loaded into the system, not all references are resolved: references to signals are given as symbolic names instead of absolute numbers. At loading, the symbolic references are resolved, and each signal is given a global signal number that is later used in all references. When software units are output from the system in relocatable format, this function is also used; in this case, the signal numbers are translated back to symbolic names.

The function to administer the global signal numbers and to translate between symbols and global signal numbers is implemented in the set of parts LOCP. The linking function, a function to list information associated with the linking of central software units, and a function to perform a consistency check of the signal symbol table and the Global Signal Distribution Tables are also implemented in the set of parts LOCP.
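The translation between symbolic names and global signal numbers can be sketched as follows (an invented table layout and signal name; the real tables are administered in LOCP):

```python
# Sketch of signal linking and symbol translation. The table layout and
# the signal name are invented; the real tables belong to LOCP.

class SignalTable:
    def __init__(self):
        self._number_by_name = {}
        self._name_by_number = {}
        self._next_number = 1

    def link(self, name):
        """Resolve a symbolic signal name to its global signal number,
        assigning a new number on first use (done at loading)."""
        if name not in self._number_by_name:
            number = self._next_number
            self._next_number += 1
            self._number_by_name[name] = number
            self._name_by_number[number] = name
        return self._number_by_name[name]

    def unlink(self, number):
        """Translate a global signal number back to its symbolic name
        (done when a unit is output in relocatable format)."""
        return self._name_by_number[number]

table = SignalTable()
number = table.link("CONNECT")           # resolved at loading
assert table.link("CONNECT") == number   # same name -> same global number
assert table.unlink(number) == "CONNECT" # translated back at output
```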

2.16 Program Change


The Program Change function enables faults in one or more central software units to be corrected by exchanging the complete faulty software unit with the complete corrected software unit without a system restart. Information about central software units that have programs loaded for Program Change can also be printed.

The Program Change function is implemented in the set of parts Program Change in CP (PXCP) and in function block PXZMD.

Note: The function is blocked for use. The command PXSUL, received in block LACI, is not supported in this version of CPS.

2.17 Administration of AXE parameters


This function provides a standard mechanism for examining and changing the values of AXE parameters that define the properties of an exchange, for example, an optional feature in the Mobile Telephony System. An updated parameter value is checked against predefined limits or a list of permissible values before being accepted and distributed to the central software units that use the parameter. The function is intended for maintenance staff and customers' technicians. The Administration of AXE parameters function is implemented in the set of parts Administration of AXE parameters (PARCP).
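The value check described above can be sketched as follows (a hypothetical function with invented limits and values; the real limits and permissible values come from the parameter definitions in PARCP):

```python
# Sketch of the AXE parameter value check. Limits and values are invented.

def check_parameter(value, limits=None, permitted=None):
    """Accept a new value only if it lies within predefined limits or
    belongs to a list of permissible values."""
    if limits is not None:
        low, high = limits
        return low <= value <= high
    if permitted is not None:
        return value in permitted
    return False  # no definition -> reject

assert check_parameter(15, limits=(0, 100)) is True
assert check_parameter(150, limits=(0, 100)) is False
assert check_parameter(2, permitted={1, 2, 4}) is True
assert check_parameter(3, permitted={1, 2, 4}) is False
```

Only a value that passes this check would then be distributed to the central software units that use the parameter.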

3 Products
3.1 Software
3.1.1 General
The software is organized into the following blocks and sets of parts. The sets of parts group together related blocks and functions as follows:

AFCP   Audit Functions in CP
BUCP   Backup of CP
FCCP   Function Change of CP
LOCP   Loading Functions in CP
PARCP  Administration of AXE parameters
PCCP   Program Correction in CP
PXCP   Program Change in CP
SACP   Size Alteration of Data Files
TECP   Test System in CP

Each block in the sets of parts is fully described in the description of the corresponding set of parts product. The blocks are also briefly described in this document for reference.

3.1.2 AFCP, Audit Functions in CP


The AFCP set of parts contains the functions and blocks associated with detecting and informing operators about errors and system states which require manual intervention. AFCP contains the blocks AFBLA, AFCO, AFIO, AFMC and AFUS.

3.1.2.1 AFBLA, Audit Functions Build Level Administration

AFBLA contains the routines which administer the logging of exchange build level functions:

- All changes to the CP program store contents are automatically recorded and can be printed by command.
- Irregularities, intentional or unintentional, in the software build are detected.
- Recorded changes can be output periodically at predefined intervals.

The function detects changes of the central software units and the regional software units stored in the central processor's program store, and corrections inserted in these programs.

3.1.2.2 AFCO, Audit Functions Controller

AFCO contains routines which control the execution of the audit functions:
- Distribution of audit test requests to an audit function handler.
- Distribution of audit test results to the correct receiver.
- Prevention of interference phenomena.
- Regular starting of background jobs.

It is possible to request an audit test by command or by user request. A user request interrupts tests which have been initiated by command.

3.1.2.3 AFIO, Audit Functions I/O

AFIO contains the following I/O software for the audit functions:

- Reception of commands to the different audit functions.
- Printout of test results.
- Printout of alarms.

3.1.2.4 AFMC, Audit Functions, Main Store Checksums

AFMC performs checksumming of the contents of the program store. Every CP software unit and every regional software unit stored in the central processor's program store is checksummed. The checksums are calculated when the system is initialized and are updated whenever the contents of the program store change, for example at the introduction of new software units, new regional software units, or program corrections.

3.1.2.5 AFUS, Audit Functions, Utilization Supervision

AFUS supervises the utilization of the data files belonging to a size alteration case. A user can set a utilization limit by command; when that limit is reached, an alarm is issued.
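The checksum bookkeeping in AFMC can be illustrated with a minimal sketch (CRC-32 is used here only as a stand-in; the actual checksum algorithm is not specified in this description, and the code words are invented):

```python
# Sketch of AFMC-style checksum bookkeeping. CRC-32 stands in for the
# (unspecified) checksum algorithm; the code words are invented.

import zlib

def checksum(code_words):
    return zlib.crc32(bytes(code_words))

unit_code = [0x12, 0x34, 0x56]        # code of one software unit
stored = checksum(unit_code)          # calculated at system initialization

# The audit passes as long as the program store is unchanged.
assert checksum(unit_code) == stored

# A change (e.g. an inserted program correction) is detected ...
unit_code[1] = 0x35
assert checksum(unit_code) != stored

# ... and the stored checksum is then updated to match the new contents.
stored = checksum(unit_code)
assert checksum(unit_code) == stored
```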

3.1.3 BUCP, Back Up of CP


The BUCP set of parts contains the functions and blocks for the backup functions. BUCP contains the blocks BUAP, BUCF, BUCFS, BUCG, BUCL, BUFTPD, BUMS, BUO, BUS, BUTFTPD, LABS, LACPIPN, LACPRP, LARPAP, LASTOC, LATSE, LOGB and TAPEEMU.

3.1.3.1 BUAP, Backup Functions, AP Part

BUAP decides which backup generation to reload if a reload occurs, and performs renaming of CP files at backup generation handling. It also handles commands for setting and printing generation handling parameters. BUAP is allocated in the Adjunct Processor (AP). In APG30, BUAP co-operates with the blocks BUCG and BUTFTPD; in APG40, with BUCG and TAPEEMU.

3.1.3.2 BUCF, Backup Functions, Conversion of File

BUCF handles commands for conversion of a disk-stored backup copy between hard disk and magnetic tape or flexible disk. BUCF is allocated in the CP and co-operates with the block BUCFS.

3.1.3.3 BUCFS, Backup Functions, Conversion of Files in SP

BUCFS performs conversion of a disk-stored backup copy between hard disk and magnetic tape, flexible disk or optical disk. The backup copy is converted to or from a number of tapes, optical disks or flexible disks. BUCFS is allocated in the SP and co-operates with the blocks BUCF and BUCG.

3.1.3.4 BUCG, Backup Functions, Generation Handling

BUCG handles commands for conversion of backup files between hard disk and optical disk or magnetic tape; in this case the whole backup copy is contained on one disk or tape. BUCG also contains commands for changing file names at generation handling of hard disk backup files, and for setting and printing generation handling parameters. BUCG is allocated in the CP and co-operates with the blocks BUCFS and BUAP.

3.1.3.5 BUCL, Backup Command Log Interface

BUCL is the interface between the backup function and the command log. It handles creation of new command log subfiles and the automatic execution of the relevant parts of the command log after a reload. BUCL is allocated in the CP and co-operates with the block LOGB.

3.1.3.6 BUMS, Backup in Main Store

BUMS creates a system backup copy in the Backup Area (BUA) in the main store and continuously checks the contents of the BUA. If the function dump to main store is in use at output of the system backup copy, the contents of the CP stores are copied to the BUA, and the BUA contents are then copied to an external file. BUMS supports this function with a service that copies data from the BUA to a buffer; this service is used by block BUO. BUMS issues alarms at lack of storage in the BUA or if the contents of a BUA are invalid. BUMS is allocated in the CP and co-operates with the blocks BUS and BUO.

3.1.3.7 BUO, Output of System Backup Copy

Function block BUO contains functions for the output of a system backup copy. BUO deals with all interwork with the File Management Subsystem (FMS); the output functions are, however, completely controlled from block BUS. BUO also handles the alarm functions in connection with the system backup copy. BUO is allocated in the CP and co-operates with the blocks BUS and BUMS.

3.1.3.8 BUS, Administration of System Backup Copy

Function block BUS administers the output of a system backup copy. The system backup copy is used in the case of system errors so serious that a restart with reloading of the entire system is required. The system backup copy is loaded by function block LAB. BUS orders BUCL to create or execute command log subfiles. BUS is allocated in the CP and co-operates with the blocks BUO, BUMS and BUCL.

3.1.3.9 BUFTPD, Backup Functions, FTP Protocol

BUFTPD implements an FTP protocol used for loading. BUFTPD is allocated in the AP. In APG40 (parallel RP bus configuration), BUFTPD co-operates with the blocks BUAP and LACPRP.

3.1.3.10 BUTFTPD, Backup Functions, TFTP Protocol

BUTFTPD implements a TFTP protocol used for loading. BUTFTPD is allocated in the AP. In APG30, BUTFTPD co-operates with the blocks BUAP and LATSE; in APG40 (parallel RP bus configuration), with LACPRP and TAPEEMU.

3.1.3.11 LABS, Loading Administration, Bootstrap Loader in SP

LABS handles reading of the backup copy from disk and converts the data to the tape format used by the loader LAB. LABS also decides which backup generation to reload if a reload occurs. LABS is allocated in the SP and co-operates with the block LAB.

3.1.3.12 LACPIPN, Loading Administration for IPN

LACPIPN provides an interface between LAB and the FTP client, FTPC, and an interface for handling signals for initial loading. Signals containing commands from the CP block LAB are interpreted and forwarded to the AP via the FTP client.

3.1.3.13 LACPRP, Loading Administration, CP RP Signalling Communication

LACPRP provides an interface between LAB and TFTP: a conversion of RP signals to file transfer requests. LACPRP is allocated in the RPG for the parallel RP bus configuration in APG40, and co-operates with the blocks LAB and BUTFTPD.

3.1.3.14 LARPAP, Loading Administration, RP AP Signalling Communication

LARPAP provides an interface between LAB and the AP: a conversion of RP signals to the Windows NT interface. LARPAP is allocated in the AP for the serial RP bus configuration in APG40, and co-operates with the blocks LAB and TAPEEMU.

3.1.3.15 LASTOC, Loading Administration, Signalling Terminal for Open Communication

LASTOC implements an interface between LAB and LATSE: a conversion of RP signals to method calls. Data being loaded is also divided into RP signals by this block. LASTOC is allocated in the RPG/RPG2 for APG30, and co-operates with the blocks LAB and LATSE.

3.1.3.16 LATSE, Loading Administration, Tape Stream Emulator

LATSE implements a tape stream emulator that converts data to the tape format used by the loader LAB. LATSE is allocated in the RPG/RPG2 for APG30, and co-operates with the blocks LASTOC and BUTFTPD.

3.1.3.17 LOGB, Command Log Handling

LOGB handles the logging of subscriber commands and commands associated with a command logging category. Only commands that are fully or partly executed are logged. The logging function is activated (or deactivated) via operator command. The log is used as a backup log for restoration of the system after a system reload. LOGB is allocated in the CP and co-operates with the block BUCL.

3.1.3.18 TAPEEMU, Tape Stream Emulator

TAPEEMU implements a function that emulates the tape format by adding tape headers. TAPEEMU is allocated in the AP. In the APG40 parallel RP bus configuration, TAPEEMU co-operates with the blocks BUAP and BUTFTPD, and in the APG40 serial RP bus configuration with the blocks BUAP and LARPAP.
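The interplay between LOGB (logging only executed commands) and BUCL (automatic execution of the log after a reload) can be sketched as follows (the log structure and command strings are invented; the real blocks use CP-internal mechanisms):

```python
# Sketch of command logging and replay. The log structure and the command
# strings are invented; LOGB and BUCL are CP-internal.

class CommandLog:
    def __init__(self):
        self._entries = []

    def log(self, command, executed):
        # LOGB logs only commands that were fully or partly executed.
        if executed:
            self._entries.append(command)

    def replay(self, execute):
        # BUCL executes the relevant part of the log after a reload.
        for command in self._entries:
            execute(command)

log = CommandLog()
log.log("COMMAND1;", executed=True)
log.log("COMMAND2;", executed=False)   # failed command: not logged

replayed = []
log.replay(replayed.append)
print(replayed)  # ['COMMAND1;']
```

Replaying only commands that actually took effect is what lets the log restore the exchange data to its state at the moment of the last backup-plus-log, without repeating commands that never changed anything.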

3.1.4 FCCP, Function Change of CP

The FCCP set of parts contains the functions and blocks associated with addition, removal and replacement of software units. FCCP contains the blocks FCA, FCBC, FCC, FCD, FCI, FCL, and FCT.

3.1.4.1 FCA, Function Change, Administration

Function block FCA contains the administrative functions for most of the function change function. FCA has interwork with the block for data conversion, FCD.

3.1.4.2 FCBC, Function Change, Block Creation

Function block FCBC is responsible for the creation of the temporary blocks created during data conversion of variables. FCBC co-operates with and is controlled by FCD.

3.1.4.3 FCC, Function Change, Copying of Data

Function block FCC has signal-initiated functions for preparation and execution of data copying between the CP sides, used at function change according to the side switch method.

3.1.4.4 FCD, Function Change, Data Conversion

Function block FCD is used for converting the variable values in connection with a function change of CP software units. FCD ensures that the variables in the new software units are allocated and assigned values. Tables for variable data conversion are loaded directly to variables in FCD.

3.1.4.5 FCI, Function Change, Initiation

Function block FCI contains the command and printout interface for the function change function.

3.1.4.6 FCL, Function Change, Kernel

Function block FCL contains routines for reading and writing in the program store (PS), the data store (DS) and the reference store (RS). Other APZ machine dependent functions are also assembled in this block.

3.1.4.7 FCT, Function Change, Data Transfer

Function block FCT contains functions for signalling between the CP sides in connection with data transfers between separated CP sides. The signalling can take place via the MAU or via an ordinary RP. The latter case is mainly used at processor replacement.

FCT contains the central software unit FCTU and the regional software unit FCTR. FCTR is used when a function change is carried out in which one APZ type is replaced by another. In these cases data is sent via an RP which is alternately connected to the operating side and the separated side.
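The variable data conversion that FCD performs at function change can be sketched as follows. The table format and all names here are assumptions made for the example, not FCD's actual interfaces: every variable of the new unit is first allocated with a default value, then values named in the conversion table are carried over from the old unit.

```python
# Illustrative sketch of variable data conversion at function change.
# The table format and names are assumptions, not FCD's real interfaces.

def convert_variables(old_vars, conversion_table, new_defaults):
    """Build the new unit's variable set: every new variable is first
    allocated with its default value, then values named in the conversion
    table are carried over (optionally transformed) from the old unit."""
    new_vars = dict(new_defaults)
    for old_name, (new_name, transform) in conversion_table.items():
        if old_name in old_vars and new_name in new_vars:
            new_vars[new_name] = transform(old_vars[old_name])
    return new_vars
```

This mirrors the requirement in the text that all variables in the new software unit end up both allocated and assigned values, whether or not the old unit had a counterpart.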

3.1.5 LOCP, Loading Functions in CP


The LOCP set of parts contains the functions and blocks associated with the loading of central software units. LOCP contains the blocks BUC, LAAT, LACI, LACO, LACONV, LADS, LADSD, LADSD1, LAFI, LAL, LALI, LALT, LAMEAS, LAP, LAPS, LAPSD, LARS, LARSD, LASYMB, and LINKCP.

3.1.5.1 BUC, Output of Central Software Units

This function block carries out a command-initiated output of relocatable CP software units and variable data conversion tables, in the same format as when they are output from APS. BUC is mainly intended for use in a system testing plant.

3.1.5.2 LAAT, Loading Administration, Allocation Test

LAAT checks that the storage layouts are consistent before output of a system backup copy. If the storage contents are not correct, an alarm is issued and the automatic updating of the backup information is inhibited. An allocation test can also be performed on both the storage contents and the backup information contents when ordered by command. A listing of any anomalies found in the layout is then obtained.

3.1.5.3 LACI, Loading, Command Entry

LACI handles all interwork with alphanumeric functions in connection with loading of central software units. All command entries and all printouts are thus dealt with by LACI. Control is then normally handed over to LACO for the continued loading procedure. Purely listing functions are, however, not contained in LACI.

3.1.5.4 LACO, Loading, Controlling Functions

LACO contains the controlling functions in connection with loading of central software units. When any loading function is to be carried out, LACO normally takes over control from the command-receiving block, LACI. Apart from this, LACO has interwork with the file-administering block LAFI, with the store-administering blocks LAPS, LADS and LARS, and with block LARI, which handles the system tables.

3.1.5.5 LACONV, Loading Administration, Binary Code Conversion

LACONV performs the translation of executable ASA binary code from the standard binary-code-compatible format to an APZ-specific format, and vice versa, when a central software unit (block) is loaded or output in relocatable format, and at the loading and printing of program code corrections. LACONV isolates all APZ-specific binary format knowledge for a particular APZ from the rest of the operating system.

3.1.5.6 LADS, Administration of the Data Store

All handling of storage areas in the data store DS is dealt with by block LADS on receipt of an order from block LACO.

3.1.5.7 LADSD, Loading Administration, CP-Dependent Data Store Handler

The DS in an APZ 212 33 can be partitioned into three parts: a very high-speed access part implemented by mapping a Level 2 cache to the first 8 MW16 part of the DS; and/or a high-speed access part implemented using Static RAM (SRAM) circuits; and/or a normal-speed access part implemented using Dynamic RAM (DRAM) circuits.

When ordered, LADSD allocates variable areas in the DS so that the most frequently accessed variables are located in the fastest access memory, in order to optimise the variable-access performance of an exchange. LADSD also arranges the allocated areas in the fastest access memory so that the areas for RELOAD-marked variables are located at the high-address end of that memory.

3.1.5.8 LADSD1, Loading Administration, CP-Dependent DS Statistics Handler

LADSD1 assists block LADSD to maintain and store selection criteria data for the allocation of the most frequently accessed variables in the SRAM of the data store.

3.1.5.9 LAFI, Loading, File Handling

Based on knowledge of the principal organization of the loading format, block LAFI deals with all interwork with AXE file systems when reading load information from tape. The loading takes place on receipt of an order from block LACO, and is made to a dynamic buffer which is then handed over to LACO.

3.1.5.10 LAL, Store Listing

LAL lists the disposition of the contents of the stores in the central processor. The functions are ordered by means of commands.

3.1.5.11 LALI, Loading Administration, Listing of Linking Information

LALI lists information associated with the linking of central software units, e.g. CP-CP signal cross-references and the contents of the global signal distribution tables. The listings are ordered by means of commands.

3.1.5.12 LALT, Loading Administration, Linking Information Test

LALT provides a function to perform a consistency check of the contents of the signal sending tables, the signal distribution tables, the signal symbol table and the global signal distribution tables. The check function is ordered by means of a command.

3.1.5.13 LAMEAS, Loading Administration, Measurement

Block LAMEAS measures and normalizes the number of variable and program store accesses made by each active block in the system. These measurements are then used by various APZ functions to reconfigure the system in order to optimise its performance.

3.1.5.14 LAP, Listing of Product Information

LAP lists information on the central software products which are loaded in the system. The functions are ordered by means of commands.

3.1.5.15 LAPS, Administration of Program Store

All handling of storage areas in the program store PS is dealt with by block LAPS on receipt of an order from block LACO.

3.1.5.16 LAPSD, Loading Administration, CP-Dependent Program Store Handler

LAPSD reorganizes the central software units in the program store in order to minimize APZ hardware access overheads between SDT lookup and initial program code execution.

3.1.5.17 LARS, Administration of the Reference Store

All handling of storage areas in the reference store RS is dealt with by block LARS on receipt of an order from block LACO.

3.1.5.18 LARSD, Loading Administration, CP-Dependent Reference Store Handler

LARSD reorganizes the base address tables in the reference store in order to minimize the time it takes the APZ hardware to access those tables.

3.1.5.19 LASYMB, Administration of Symbols

LASYMB administers the storing, retrieval and removal of symbols for central software units.
The symbolic information includes the name and attributes of signals and the names of variables allocated in DS.

The symbol tables are updated during the loading, function change and removal of central software units. New signal and variable symbols may also be created by command.

3.1.5.20 LINKCP, Linking of CP Software Units

LINKCP creates and updates the linking information for central software units during loading (i.e. adding new software), function change method I (i.e. replacing software), removal of software, and when creating insignal corrections.
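The linking information maintained by LINKCP can be pictured as a table that resolves each sent signal to its receiving block and entry point. The sketch below is a deliberately simplified model with invented names; the real signal sending and global signal distribution tables are considerably more elaborate.

```python
# Simplified model of signal linking: resolve each sent signal to its
# receiving block and entry point. Names and structures are illustrative.

def link_signals(senders, receivers):
    """senders: {block: [signal, ...]}; receivers: {signal: (block, entry)}.
    Returns the distribution table and a list of unresolved signals."""
    table, unresolved = {}, []
    for block, signals in senders.items():
        for sig in signals:
            if sig in receivers:
                table[(block, sig)] = receivers[sig]
            else:
                unresolved.append((block, sig))
    return table, unresolved
```

The unresolved list corresponds to the kind of inconsistency that the LALT check function is there to detect.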

3.1.6 PARCP, Administration of AXE parameters


The PARCP set of parts contains the functions and blocks associated with administration of AXE parameters. The parameter data base is stored in tables within the Database Subsystem (DBS). PARCP contains the blocks PARA, PARTAB1 and PARTAB2.

3.1.6.1 PARA, Parameter Administration

Block PARA is the administering block in PARCP. PARA contains the AXE parameter administrative data and co-ordinates all updating and distribution of AXE parameter values. All operations on AXE parameters are initiated with generic commands provided by DBS. A command to enable and disable the access for changing certain types of parameter values is also provided.

3.1.6.2 PARTAB1

Block PARTAB1 holds DBS tables to store information required by block PARA.

3.1.6.3 PARTAB2

Block PARTAB2 holds DBS tables to store information required by block PARA.
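A minimal model of PARA's role, including the ability to enable and disable change access for certain types of parameter values, might look as follows. The class and method names are invented for the example; the real interface consists of generic DBS commands, not method calls.

```python
# Minimal model of AXE parameter administration: values are updated through
# one co-ordinating point, and change access can be disabled per parameter
# type. All names here are illustrative assumptions.

class ParameterStore:
    def __init__(self):
        self._values = {}
        self._locked_types = set()

    def set_access(self, ptype, enabled):
        """Enable or disable changes for a whole type of parameters."""
        if enabled:
            self._locked_types.discard(ptype)
        else:
            self._locked_types.add(ptype)

    def update(self, name, ptype, value):
        if ptype in self._locked_types:
            raise PermissionError(f"changes to {ptype} parameters are disabled")
        self._values[name] = value

    def get(self, name):
        return self._values[name]
```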

3.1.7 PCCP, Program Correction in CP


The PCCP set of parts contains the functions and blocks associated with program corrections in the CP. PCCP contains the blocks PCA, PCI, PCS and PCT.

3.1.7.1 PCA, Program Correction, Administration

When a program correction is being entered in a CP software unit, the ordering command is received by PCI, which hands over control to PCA.

PCA then loads the correction identification and administers the loading of the desired correction. This is carried out in conversational mode with the operator and in interaction with the superior unit, PCT.

When program code is listed, the initiating command is received by PCI, which hands over control to PCA. PCA administers and carries out the printout in interaction with the superior unit, PCT.

3.1.7.2 PCI, Program Correction, Input

PCI deals with the command reception for all program correction commands to CP units. When a system restart takes place, PCI ensures that all corrections in the CP are given a defined state.

3.1.7.3 PCS, Program Correction, Signal Handling

PCS is used when a signal correction is entered. The signal is given in symbolic form and is translated to a global signal number with the aid of PCS. PCS has an interface towards LASYMB to perform the symbol translation.

3.1.7.4 PCT, Program Correction, Table Translation

PCT is used together with PCA and PCI in connection with corrections in CP software units. PCT includes the tables and routines required for the program correction translation of the assembler code ASA210C to machine code and vice versa.
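Conceptually, entering a program correction replaces machine-code words in a loaded software unit; keeping the original words is one way a correction can later be listed or removed. A simplified sketch, with the program modelled as a plain list of instruction words (an assumption made for the example, not the actual PCCP mechanism):

```python
# Conceptual sketch of a program correction: overwrite words at an address
# and keep the original words so the correction can be listed or undone.

def apply_correction(program, address, new_code):
    """Return (patched_program, original_words)."""
    end = address + len(new_code)
    original = program[address:end]
    return program[:address] + new_code + program[end:], original

def remove_correction(program, address, original_words):
    """Restore the saved words, undoing apply_correction."""
    end = address + len(original_words)
    return program[:address] + original_words + program[end:]
```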

3.1.8 PXCP, Program Change in CP


The PXCP set of parts contains the functions and blocks associated with Program Change in the CP. PXCP contains the blocks PXZA, PXZD and PXZS.

3.1.8.1 PXZA, Program Change, Administration

PXZA contains the Program Change administrative data and co-ordinates all the Program Change operations. All Program Change operations are initiated with generic commands provided by the Database Management Subsystem (DBS).

3.1.8.2 PXZD, Program Change, Data Difference Handler

PXZD contains information about the differences between the data-stored variables for two versions of a central software unit. The function block checks that the data-stored variables for the two versions are compatible. It also optimizes the amount of Reference Store (RS) and Data Store (DS) needed for a software unit loaded for Program Change.

3.1.8.3 PXZS, Program Change, Signal Difference Handler

PXZS contains information about the differences between the signal interfaces for two versions of a central software unit. The function block checks that the signal interfaces for the two versions are compatible.
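One way to picture the PXZS compatibility check: the new version of a software unit must still receive every signal that the old version received, otherwise senders would be left without a receiver. The sketch below is a deliberately simplified stand-in for the real interface comparison, which the description does not detail.

```python
# Simplified stand-in for the PXZS signal interface comparison: verify that
# the new version still receives every signal of the old version.

def signal_interfaces_compatible(old_received, new_received):
    """Return (compatible, signals missing in the new version)."""
    missing = sorted(set(old_received) - set(new_received))
    return not missing, missing
```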

3.1.9 SACP, Size Alteration of Data Files


The SACP set of parts contains the functions and blocks associated with size alteration of data files. SACP contains the blocks SAFCI, SAFCO, SAFH, and SAFTAB1.

3.1.9.1 SAFCI

SAFCI receives the commands for handling and printing of size alterations. SAFCI also inserts the Size Alteration Function Orders in the Size Alteration Function Queue for execution. The order is then sent to SAFH for execution.

3.1.9.2 SAFCO

SAFCO controls and schedules all manual and automatic size alteration tasks to function block SAFH. It also administers the Size Alteration Event Database.

3.1.9.3 SAFH

SAFH executes the size alterations. The alterations are carried out in collaboration between CPS and the file-owning blocks. The data files whose size can be changed are grouped in system-defined size alteration cases.

3.1.9.4 SAFTAB1

SAFTAB1 owns the DBS tables in the Size Alteration Event Database.
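A size alteration can be sketched as growing or shrinking a record array: growing appends default-initialised records, and shrinking is only safe if the removed records are unused. This is a conceptual model only; the actual SAFH execution is signal-based and co-operates with the file-owning block.

```python
# Conceptual model of a data file size alteration. The "default value means
# unused record" convention is an assumption made for the example.

def alter_file_size(records, new_size, default):
    """Grow by appending default records; refuse to shrink over used ones."""
    if new_size >= len(records):
        return records + [default] * (new_size - len(records))
    if any(r != default for r in records[new_size:]):
        raise ValueError("records in use beyond the new size")
    return records[:new_size]
```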

3.1.10 TECP, Test System in CP


The TECP set of parts contains the functions and blocks associated with program testing in the CP. TECP contains the blocks TED, TEI, TEM, TEO, TET and TEV.

3.1.10.1 TED, Program Testing, Data Handling

TED contains functions for data handling in the Program Test System (PTS).

3.1.10.2 TEI, Program Testing, Input

TEI contains the command analysis for the commands included in PTS. TEI administers the linkage between a PTS testing individual and the typewriter or display terminal at which the operator works. TEI also receives commands for specified setting and removal of tracing/supervision. The analysis of these commands is not carried out in TEI; instead, control is handed over to the affected function block in the program test system. TEI also contains service functions used by other PTS blocks. Functions for administration of the observation alarm are also included in TEI.

3.1.10.3 TEM, Program Testing, Monitor

The program testing monitor TEM contains the controlling and supervising functions used in connection with program testing with the aid of PTS. TEM supervises testing of the interwork between a number of software units in real time. Each program tester is allocated a testing individual. Four testing individuals can be seized simultaneously in the program test system.

3.1.10.4 TEO, Program Testing, Output

TEO is the software unit for output functions in the program testing system. TEO executes printouts on receipt of orders via signals from the remainder of the program testing system. These printouts can either be command-initiated or caused by tracings which have occurred.

3.1.10.5 TET, Program Testing, Extended Tracing

TET contains command analysis functions for the commands used in connection with detailed setting of tracing/supervision and the actions to take when a supervised event occurs. In addition, commands for removing supervision/tracing are analyzed. When the command analysis has been executed, the program testing monitor is ordered by means of a signal to carry out the requested function.

3.1.10.6 TEV, Program Testing, Variable Handling

TEV contains a command analysis function for the variable-handling commands included in PTS.

3.1.11 Blocks not included in any set of parts


3.1.11.1 BIN, Dustbin for signals

BIN is part of the APZ run-time support for function blocks designed with High Level Plex (HLPlex). BIN receives HLPlex events (signals) with no receiver. It also takes part in the allocation of a unique RTS block instance to each Application Module (AM) during restart.

3.1.11.2 JOB, Job Monitor

JOB contains the following functions:

y Job table: periodic signals can be sent by means of the job table to the various program blocks, at intervals determined by the respective blocks.
y Time queues: program signals can be sent with a specified time delay with the aid of the operating system's time queues.
y Clock and calendar: JOB contains the following functions concerning the program clock and calendar:
  - Stepping the clock.
  - Synchronization to an external time reference, if any.
  - Setting the clock by means of a command.
  - Adjusting the clock by means of a command.
  - Reading the clock by means of a command.
  - Handling of multiple time zones.
  - Setting the day category by means of a command.
  - Listing day categories.
  - Reading the program clock.
  - Reading the weekday and the day category.
  - Checking whether a time specification is reasonable and whether it has expired.
  - Adding a time interval to the current time.
  - Fetching a time specification in a command.
  - Fetching a date specification in a command.
  - Adding a number of dates to a specified date.
y Supervision: the following supervision actions are carried out by software in JOB:
  - Supervision of program execution for both high-priority and low-priority jobs.
  - Check of job buffer delay times.
y Information on processor load and job buffer load.
y Restart measures and arithmetic functions.
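The job table and time queues above can be sketched as a clock-driven scheduler: periodic entries fire at block-defined intervals, while one-shot delayed signals wait in a time-ordered queue. This is a conceptual model with invented names, not JOB's actual implementation.

```python
import heapq

class JobTable:
    """Sketch of JOB's job table (periodic signals) and time queues
    (one-shot delayed signals), driven by a clock tick."""

    def __init__(self):
        self.now = 0
        self._periodic = []   # (interval, block)
        self._queue = []      # heap of (due_time, signal)

    def register_periodic(self, block, interval):
        self._periodic.append((interval, block))

    def send_delayed(self, signal, delay):
        heapq.heappush(self._queue, (self.now + delay, signal))

    def tick(self):
        """Advance the clock one step; return the signals due this tick."""
        self.now += 1
        due = [b for (i, b) in self._periodic if self.now % i == 0]
        while self._queue and self._queue[0][0] <= self.now:
            due.append(heapq.heappop(self._queue)[1])
        return due
```

For example, a block registered with interval 2 receives its periodic signal on every second tick, while a signal delayed by 3 ticks is delivered exactly once.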

JOB consists of one Central Processor (CP) program unit, JOBU, and two Regional Processor (RP) program units, JOBR. The RP units handle the administration of the two optional reference clocks, URC (UTC Reference Clock) and RTU (Reference Time Unit).

The regional software unit JOBR implements a software clock that is synchronized to an external time source providing UTC (Universal Time Coordinated) time. On request from JOBU, JOBR provides UTC time in a format suitable for JOBU. The clock synchronization in JOBR is implemented with NTP (Network Time Protocol). The NTP part of JOBR consists of code which has been ported from the freeware program xntpd.

The RTU hardware is connected to the system via an RP. It is a stand-alone clock with its own oscillator. The RTU is controlled by a regional software unit, JOBR, running in that RP.

3.1.11.3 KEED, Event Distribution

KEED handles system states and system events.

System states are used to avoid interference between different activities. They are set and reset with signals from a user. A set request may be refused if any of a number of other system states is already set. Any user can request to be informed whenever a system state is changed.

System events are used to spread information about events that have occurred. Users first have to request to be informed when certain system events occur. Other users report these events to KEED, which distributes the information to the blocks concerned.

3.1.11.4 LAB, Loading Administration, Bootstrap Loader

LAB performs reloading of backup information at system restart with reload, or when ordered by command, from an external medium or the main store. LAB also performs initial loading ordered by command. The block belongs to the hardware and is located in a read-only memory together with the micro program. Prior to the reloading, LAB is transferred to the program store for execution.

LAB executes on the CP and co-operates with the blocks LABS in the SP, LASTOC in the APG30, and LARPAP (serial RP bus configuration), LACPRP (parallel RP bus configuration) or LACPIPN (IPN-Ethernet configuration) in the APG40.

3.1.11.5 LABA, Loading of Program Store Cache

LABA loads and administers the Program and Base Address Table Cache (PBC), which is part of the Program and Reference Store Cache Memory (PRSCM). Frequently executed blocks and their corresponding Base Address Tables (BATs) are loaded into the PBC to improve the execution time for their ASA code and thereby increase the total system capacity.
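The event distribution performed by KEED can be sketched as a subscribe/notify mechanism: users first register interest in certain system events, and reported events are then forwarded to every subscriber. All names here are illustrative; the system-state handling with its refusal rules is omitted from the sketch.

```python
# Sketch of KEED-style event distribution. Names are invented for the
# example; the real mechanism is signal-based.

class EventDistributor:
    def __init__(self):
        self._subscribers = {}   # event -> [block, ...]
        self.delivered = []      # (block, event), recorded for inspection

    def subscribe(self, block, event):
        """A user block requests to be informed when `event` occurs."""
        self._subscribers.setdefault(event, []).append(block)

    def report(self, event):
        """Another user reports the event; inform the blocks concerned."""
        for block in self._subscribers.get(event, []):
            self.delivered.append((block, event))
```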
3.1.11.6 LAD, Dynamic Buffers

LAD administers two memory banks: one for dynamic buffers and one for communication buffers.

Dynamic storage allocation from a special bank in the data store is used both for improving the storage economy and for simplifying the interwork between function blocks, by making it possible to transfer large quantities of information with one signal. Storage allocation can be made to dynamic and communication buffers. Dynamic and communication buffers are addressed without base addresses and can be used internally within a function block for storing data, or for transferring data between function blocks. Communication buffers have shorter access times and a larger memory bank than dynamic buffers.

3.1.11.7 LARI, Loading, Reference Information

LARI has knowledge of the central processor's addressing principles, of where various areas are located in the stores, and of how these areas are built up. LARI also contains the interface to the central processor's hardware. As a result, changes in the hardware, such as a new type of processor, and changes in the addressing principles mainly affect block LARI only. Other blocks access this information via functionally structured signals to LARI.

LARI contains the following service functions, which can be called from other blocks:
y Reading/writing in system tables.
y Translation between sector type on the loading medium (the loading format) and information type.
y Knowledge of which store-handling blocks (LAPS, LADS, LARS) handle a certain information type.
y Calculation of the size of the storage area for a certain information type.
y Reading/writing in special registers.

3.1.11.8 LAVS, Variable Statistics Output

LAVS transfers data directly from other blocks' variables to SP files. The variables to be transferred are described in Data Interface Descriptions (DIDs). Based on this information, user programs send signals to LAVS indicating which variables shall be output. The data output by LAVS is then used by subsystem STS to produce formatted performance statistics of the system.

3.1.11.9 MEHW, IPU Performance Measurements

The IPU performance measurement function makes it possible to measure different characteristics in the IPU hardware.

3.1.11.10 MEI, Load Measurement, Input

MEI analyses the commands for processor load measurements and orders MEM to carry out the measurements.

3.1.11.11 MEM, Load Measurement, Monitor

MEM collects and stores data on processor load measurements. When a measurement is concluded, MEM sends the collected data to MEO for printout.

3.1.11.12 MEMS, Maintenance Statistics

MEMS counts system events and system state changes. Upon request from the SP, it also notifies LAVS about these counters so that they will be sent to the SP for statistics processing.

3.1.11.13 MEO, Load Measurement, Output

MEO prints the results of load measurements. Most measurements can be obtained as a time function or in the form of a histogram. MEO can also specify which measurements are in progress.

3.1.11.14 MILO, Micro Program Backup Loading

Block MILO is used as a storage of the APZ 212 30 micro program code. Block MILO is optional.

3.1.11.15 MILO1, Micro Program Backup Loading

Block MILO1 is used as a storage of the APZ 212 33 micro program code. At a restart the micro program code is transferred from variables in MILO1 to the store where the micro program is executed. The variables containing the micro program are checksummed regularly to detect possible corruption. If a fault is found, an alarm is issued and the periodic updating of the system backup copy is blocked. Block MILO1 is optional. If it is not present, the micro program code is loaded from PROM instead.

3.1.11.16 PACR, Product Administration

PACR has a function which checks that the identities of software units and corrections are correct. The function is called with a command. The command is intended for use in files with program correction commands and at function change.

3.1.11.17 PXZMD, Program Change, Machine Dependent Routines

PXZMD performs the actual Program Change by directly updating the reference table and base address table positions for the changed software unit. Consequently, a different version of this function block is required for each APZ platform. These operations use low-level, machine-dependent assembler routines to optimize the speed of the Program Change. PXZMD logically belongs to PXCP but is not included in PXCP since it is target machine dependent.

3.1.11.18 RTS, Run-time Support for HLPlex

Block RTS is the basic APZ run-time support for function blocks designed with HLPlex. It contains support to set up logical connections, that is event paths, between different HLPlex function blocks in different Application Modules (AMs). RTS also supports active signal reception with buffering of events. RTS is used in 20 block instances with the names RTS00 to RTS19. One RTS instance is allocated to each AM.

3.1.11.19 TETM, Program Testing, Trace Measures

TETM is the interface function towards the hardware and micro program regarding program tracing. If a traced event occurs, the micro program initiates a start in TETM on trace level. After analysis of the type of tracing and preservation of data in some CP registers, TETM initiates the measures ordered by the operator, in co-operation with the other blocks in PTS. TETM also contains service functions for setting/resetting trace bits for the different trace types when tracing is initiated or terminated. TETM logically belongs to TECP but is not included in TECP since it is target machine dependent.
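The periodic audit that MILO1 performs over the stored micro program can be sketched as a checksum comparison. The additive 16-bit checksum below is an illustrative stand-in, since the actual algorithm is not specified in this description; a mismatch corresponds to the case where an alarm is issued and the periodic backup updating is blocked.

```python
# Illustrative micro program audit. The additive 16-bit checksum is an
# assumed stand-in for the unspecified real algorithm.

def checksum(words):
    """Sum of 16-bit words, truncated to 16 bits."""
    return sum(words) & 0xFFFF

def microcode_intact(words, stored_checksum):
    """True if the stored micro program still matches its checksum."""
    return checksum(words) == stored_checksum
```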

3.2 Hardware
The following eight blocks in CPS are implemented in hardware or microprogram:

IPU   Instruction Processor Unit
SPU   Signal Processor Unit
MAI   Maintenance Interface
RPH   Regional Processor Handler
MIPI  Micro Program Instruction Processor
MIPS  Micro Program Signal Processor
RTU   Reference Time Unit

RTU is an external time reference and is machine independent. CPS also includes the required mechanical equipment.

3.2.1 General
The function of the central processor, CP, is to control a telecommunications system connected to APZ through the regional processor system RPS, by using programs in the program store (PS), data in the data store (DS) and references in the reference store (RS). The CP interacts with RPS through the block RPH and the RP bus RPB.

The work of the processor can be separated into two different parts, namely execution of instructions and job administration. The instruction execution implies execution of unbroken sequences of operations, where the main work consists of address calculations, plausibility checks, store accesses and data manipulations. The job administration consists mainly of signal handling, signal conversion and signal buffer handling. The instruction execution is single-stream work, whereas the job administration to a great extent deals with giving priority to and transferring data packets.

In order to enable hardware solutions that are well adapted to these two different types of work, as well as to provide a high CP capacity, the CP is implemented with an instruction processor (IPU) for instruction execution and a signal processor (SPU) for the job administration. IPU and SPU can in this manner each be adapted to one distinct type of work.

The instruction processor IPU is supplied with jobs from the signal processor. While IPU is executing a job, the signal processor prepares a new job for IPU. When IPU reaches the end of a job (End of Program), it shall be possible to begin the execution of a new job immediately.

The instruction processor is optimized for fast instruction processing. IPU therefore has direct access to the program store, the data store and the reference store. A read-ahead principle is used for the fetching of new instructions. The store interaction is synchronous, with refresh of the stores controlled from IPU.
The signal processor SPU shall administer the job execution in the CP and shall continuously provide the instruction processor with jobs for execution. The signal processor fetches jobs from the regional processors in RPS, as well as from the instruction processor in the CP when the instruction processor executes transmission of buffered signals. SPU stores the fetched jobs in buffers and then takes them out successively, in order of priority, for execution in the instruction processor. Signals to RPS are transferred by SPU from IPU to RPH. SPU has access to a separate store for signal handling. The need for fetching signals from the instruction processor and the regional processor bus interfaces is signalled to SPU with interrupt signals.

The CP is controlled by microprograms, which has made possible a very powerful instruction list especially adapted to telecommunications. Some of the central functions of the operating system are also implemented with micro programs.

The program execution is based on the program block structure of the system. The CP fetches instructions in program blocks, where the actual program block is indicated by its block number in the block number register BNR. The position of the program block in the program store is indicated by the program start address register PSAR, which is loaded with the program start address PSA of the block from the reference store. The relative instruction address within a block is indicated by the instruction address register NIAR. When an instruction has been fetched, the CPU executes the functions corresponding to the instruction code by using the micro program.

IPU contains special functions for variable addressing, variable handling and protection against unauthorized writing in variables. The CP contains special functions for the job handling of the operating system. SPU contains a store for the job buffers and the job table. Registers for linking and a number of time counters for time measuring are placed in IPU and SPU.

The CP has a priority system consisting of 4 program levels that can be interrupted, and a number of microprogram levels. Each program level contains a complete set of process registers, so that no storage of process registers is necessary when changing program level. Each microprogram level is associated with a fixed micro program, for example a micro program for handling RP signals. No process registers are used on a microprogram level.

The CP contains special functions for tracing on hardware level, microprogram level and program level.
For example, there is a jump registration memory, JAM, for registration of all program jumps, and a signal sequence memory, SSM, for registration of sent signals.

The CP contains some maintenance functions that interact with the maintenance system MAS. The maintenance functions are:
y Working state logic. MAS assigns the CPU the working state EX (executive) or SB (standby). In the SB side, the CPU is also assigned one of the substates WO (working), UP (updating), SE (separated) or HA (halted).
y Supervisory functions. Side-indicating supervision is made, which directly indicates the faulty side. Side-comparing supervision is made, and MAS is informed if a mismatch occurs. There is also a total supervision that the program execution is performed in a reasonable manner.
y A direct path between the sides, the UMB bus, used for updating, comparison between the CP sides, and data transfer during function change. It is also used to transfer the clock of the EX side to the SB side during parallel operation.
y Fault correction mechanism for DS, PS and RS data.
y An access path from MAS for processor test functions.
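The block-relative addressing described above can be sketched as a two-step lookup: BNR selects the block, the reference store yields its program start address PSA (loaded into PSAR), and NIAR gives the relative instruction address within the block. The stores are modelled here as plain dictionaries for illustration.

```python
# Sketch of CP instruction fetch: block number -> PSA via the reference
# store, then instruction at PSAR + NIAR in the program store.

def fetch_instruction(reference_store, program_store, bnr, niar):
    psar = reference_store[bnr]        # PSA for the block, from RS
    return program_store[psar + niar]  # absolute program store address
```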

3.2.2 IPU, Instruction Processor Unit


3.2.2.1 General

IPU is a function block in CPS. It executes ASA instructions and receives jobs or signals from the SPU (Signal Processor Unit). The data store (DS), the reference store (RS) and the program store (PS) are included in the IPU. IPU has interwork with the SPU, the MAU, the twin IPU (for updating and match reasons), the microprogram MIPI and the ASA instructions.

IPU is implemented on two different boards and three application specific circuits (ASICs). ASICs are used to gain high capacity but also to obtain a compact design. The ASICs are:

IPC - Instruction Processor Circuit. IPC contains several functions and is therefore divided into 10 modules.
UBC - Update Bus Buffer Circuit.
DSC - Data Store Control Circuit.

The boards are:

IPU - contains IPC, UBC, PRS DRAM, PRS SRAM and the BOOT Memory (BOOTM).
DSU - contains the store that holds the data. Two types of DSU exist: DSU_D, a DRAM board that can store 512 MW16, and DSU_S, an SRAM board that can store 32 MW16.

3.2.2.2 Functions in IPU

The following main functions exist in IPU:

Stores DS and PRS: Two types of stores exist in the CP.

The data store (DS) is implemented physically with a word length of 32 bits. The store also has 8 additional bits per word which are used as an error correction code. The code is generated in DSU on writing; on reading from the store a check is made and possibly a correction in DSU. With this code it is possible to correct all 1-bit faults and to detect most 2-bit faults. Other faults are detected through the side-comparing supervision. The addressing of DS is done from the IPC circuit on the IPU board. DS is connected to IPU through the DSB bus. The maximum size of DS from an addressing viewpoint is 8192 MW16. From an implementation viewpoint, however, the actual maximum size is usually less.

The variables of the central software units are stored in the data store (DS). The store is addressed with a 32-bit word length. Physically, DS can be implemented with DSU_D boards, DSU_S boards or a combination of the two. DSU_D has a longer access time than DSU_S; CPS software allocates the variables depending on capacity demands.

PS and RS are physically stored in the Program and Reference Store (PRS), located on the IPU board. The physical word length of PRS is 128 bits.

The logical structure of the AXE system is reflected in the reference store (RS), which stores the addresses and sizes of all store areas in the central processor. The reference store contains the connection between the program code of a function block and its variables. RS also contains a number of tables used by the operating system. The word length is 32 bits and the store is addressed with a 32-bit word length.

CP software units are stored in a separate store, the program store (PS), which is physically located in PRS with its own bus to the Program and Reference Store Unit (PRSU) in IPC. This allows autonomous fetching of instructions without interaction with other modules in IPU. The maximum addressable size is 256 MW16. A maximum of 4 k blocks is handled, where each block can have a size of at most 64 kW16. The memory is addressed with a 32-bit word length. The first 16 kW of the address area in PS is reserved for the block LAB, which is stored together with MIP in a PROM; prior to execution, LAB is copied to this area.

In addition to PRS there is also a Program and Reference Store Cache Memory (PRSCM), implemented in SRAM. The most frequently used blocks and their Base Address Tables are loaded into PRSCM by software.

DS is placed together with IPU in the CPUM magazine.

DSU_D: DSU_D is a memory board with dynamic RAM components. DSU_D has the size 256 MW x 40 bits or 1024 MW x 40 bits (32 data bits and 8 bits for error correction code), depending on the memory component generation. DSU_D is organized as sixteen banks of sixteen or sixty-four MW32 each. There is a maximum of eight DSU_D boards for DS in the standard magazine, and a maximum of two DSU_D boards in the APZ 212 33 C magazine.

DSU_S: DSU_S is a memory board with static RAM components. DSU_S has the size 8, 16, 32 or 64 MW x 40 bits (32 data bits and 8 bits for error correction code), depending on the memory component generation and on whether 8 or 16 banks are used. DSU_S is organized as eight or sixteen banks of one or four MW32 each. There is a maximum of eight DSU_S boards for DS in the standard magazine. A combination of DSU_D and DSU_S is also possible, up to eight boards in total. The DSU_S board is not used in the APZ 212 33 C magazine. Which variables are stored in DSU_S is controlled by CPS.

Register Memory (RM): RM contains four sets of 160 registers (32 bits each), one set per priority level (RML). There is also a non-level-divided area of 224 registers (RMN). 64 registers on each priority level are accessible from a user program; all registers, including these 64, are accessible by the microprogram. RM is implemented in the module RMU in the IPC circuit.

Link register (LNKR): The link register (LNKR) is the store for return addresses in connection with linked jump instructions. It can store up to 24 linked jumps in normal operation; on Trace level it is possible to store 32 linked jumps. LNKR is implemented in the RMU in the IPC circuit. It uses the same physical memory as RMN, from address 224 to 255.

Prefetch PS: There is a prefetch function for instructions from PRS. Up to four 128-bit words can be fetched and decoded in advance. The prefetch detects conditional jumps and fetches the first instruction in the jump branch. Unconditional jumps (JLN) are taken care of by the prefetch function and are not visible to the rest of the IPU. The most important reason for prefetch is to detect an a-parameter instruction early, so that the access to DS can start. The prefetch function is implemented in the modules Program and Reference Store Unit (PRSU) and Assembler to microprogram Translation Unit (ATU) in the IPC circuit.

Prefetch DS: As soon as the ATU detects an a-parameter instruction, ATU requests BAT data from PRSU, and the attributes are stored in a queue for the Instruction Queue Controller (IQC), which calculates the logical address of the variable. IQC can hold up to 8 instructions in flight, so the latency of variable reads can be hidden while other instructions are executed.

Level one cache (L1C): A small cache memory is implemented in PRSU to hold 128 W16 of instructions for the current scope of instructions. The cache is completely controlled by hardware.
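Functionally, LNKR behaves like a bounded stack of return addresses. A toy model of that behaviour (the class and method names are our own, not CPS terminology):

```python
class LinkRegister:
    """Toy model of LNKR: a bounded stack of return addresses for
    linked jump instructions (24 entries in normal operation,
    32 on Trace level)."""

    def __init__(self, trace_level=False):
        self.depth = 32 if trace_level else 24
        self.stack = []

    def linked_jump(self, return_address):
        # a linked jump pushes its return address before branching
        if len(self.stack) >= self.depth:
            raise OverflowError("LNKR capacity exceeded")
        self.stack.append(return_address)

    def return_jump(self):
        # the matching return pops the most recent return address
        return self.stack.pop()
```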
Data store cache: A small cache memory is implemented in DSH to hold 16 W32 of data for the most recently used variables. Access to small variables within the same 32-bit word is therefore faster. The cache is completely controlled by hardware.

ASA decoding: Decoding of the ASA instructions is hard-coded in the ASIC IPC. To make it possible to implement new instructions without changing the ROM, the parameter decoding for such instructions must be done by the microprogram.

Jump Address Memory (JAM): On every change in the consecutive program flow, the executed Instruction Address (IA) is stored in the Jump Address Memory. Both the 'from' address and the 'to' address are stored. If there is a change of program block, the block number of the old block, together with the program level, is written to JAM.
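As a toy software model of this JAM behaviour (the 1024-position size and the one- and two-position costs are stated in the following paragraphs; the wrap-around overwrite of the oldest entries is our assumption, typical for such trace memories):

```python
from collections import deque

class JumpAddressMemory:
    """Toy model of JAM: 1024 positions recording program-flow changes.
    A jump inside a block takes one position (from- and to-address);
    a block change takes two (old block number + level, then the jump).
    We assume the oldest entries are overwritten when the memory is full."""

    def __init__(self, positions=1024):
        self.mem = deque(maxlen=positions)   # oldest entries fall out

    def record_jump(self, from_ia, to_ia):
        self.mem.append(("jump", from_ia, to_ia))

    def record_block_change(self, old_block, level, from_ia, to_ia):
        self.mem.append(("block", old_block, level))
        self.mem.append(("jump", from_ia, to_ia))
```

A memory like this gives maintenance personnel the recent control-flow history leading up to a fault, which is what makes it useful for post-mortem analysis.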

JAM has a size of 1024 positions. Each change of the program flow inside a block requires one position in JAM; a block change requires two positions.

Control Memory (CM): The Control Memory is located in two different places. For the first microinstruction of every ASA instruction, the Instruction decode Control Memory (ICM) is used. ICM is 66 bits wide and 512 words deep and belongs to the function module ATU in IPC. The microcode for the continuation of the ASA instructions and for other microprograms is stored in the Control Memory (CM), which is 56 bits wide and 16 k words deep. CM is also a part of ATU in IPC.

Boot Memory (BOOTM): To be able to start the CP, the microprogram, together with other programs, is stored in a PROM (BOOTM). At reset (GRS), ICM and CM are loaded from BOOTM. ICM and CM can also be loaded by MIP from variables in a block in ordinary DS. The rest of BOOTM holds program blocks that are loaded into PS by MIP, at addresses 0 to 16 k in PS. These blocks are used to load the rest of the system from IO.

Support for tracing: Trace on two Instruction Addresses (IA) is supported by the hardware. Trace on jumps and Every Instruction Trace are also supported. There is special hardware to detect the Trace Enable bit when the Reference Table is used and when there is outsignal trace. Note that the execution time when tracing is active can increase drastically if the trace condition is fulfilled. If the trace flag for every-instruction-address trace, every-jump trace or instruction-address trace is activated, there is no capacity loss as long as the trace condition is not fulfilled. Trace on a variable results in capacity loss only for the traced variable, both for reading and writing. For forlopp trace and EP trace there is a small capacity loss, as for outsignal and insignal trace.

Support for measurements: Hardware support is provided so that measurements can be made on jobs, job levels and block numbers.
To measure load on jobs and job levels there are three counters of wrap-around type. They are stepped every 100 nanoseconds while the level matches. If no jobs exist, the microprogram stops the counters; they start as soon as there is a job on the concerned levels. The counters are 32 bits wide and can be reset by microprogram order.

For supervision of execution time on C and D level there are two counters. They are stepped every 100 ns and are of wrap-around type. These counters are 32 bits wide each and can be reset by microprogram order.

To measure load on blocks, there are two ways to count the time during which a specified block is executed: either a counter per C or THL level, or a counter that is stepped on THL only. For the first two counters a mask register exists, so that a sequence of blocks can be measured simultaneously. The first two counters are used by the measurement function; the last is used for measurement of block load when deciding which variables to allocate to DSU_S. The load measurement function is implemented in the LMU module in IPC.
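A reader of these wrap-around counters must difference two samples modulo 2^32 before converting ticks to time. A sketch of such a helper (the helper itself is ours, not part of CPS; the 100 ns tick and 32-bit width are from the text above):

```python
TICK_NS = 100      # counter step, per the text
WRAP = 1 << 32     # the counters are 32 bits wide

def busy_ratio(sample_start, sample_end, interval_ns):
    """Fraction of an observation interval during which the counter
    was running, computed so that a counter wrap between the two
    samples still yields the correct tick count."""
    ticks = (sample_end - sample_start) % WRAP
    return (ticks * TICK_NS) / interval_ns
```

The modulo arithmetic is what makes wrap-around counters cheap in hardware: no overflow handling is needed at the source, as long as the observation interval is shorter than one full wrap (about 429 seconds at 100 ns per tick).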

General measurements can be done on the utilization of resources in IPU by setting special registers in the LMU block.

Updating and match function: IPU sends data to, or receives data from, the twin IPU, to be used for comparison and updating. Comparison is made on one of three possible sources:

1. A signature of the microprogram addresses CMAB0 and CMAB1, the condition signals IQCND0 and IQCND1, the internal data buses DE0DB and DE1DB, the program store data bus parity PRSDBP, the SPU interface, the data store interface and data from the Branch Prediction Memory (BPM).
2. The complete DE0DB.
3. The complete DE1DB.

The source is selected by MIP, by setting control bits in the Update and Match Unit (UMU) in IPC.

Program supervision: The following checks are supported by hardware:

- a-parameter equal to zero.
- a-parameter greater than or equal to the number of base addresses.
- Block State (BS) not active.
- Local Signal Number (LSN) zero, or greater than the Number of Signals IN to the block (NSIN).
- Signal Sending Pointer (SSP) zero, or greater than the Number of Signals OUT of the block (NSOUT).
- Multiple signal indicator (M) in the SST does not match the signal sending instruction type.
- Subvariable length greater than the variable length.
- Subvariable addressing outside the variable.
- Index greater than or equal to 2^Q, if Q is not zero.
- Pointer greater than or equal to the Number of Records in the variable (NRR).
- The Program Handling Check Circuit must be triggered (set to zero) within 70 ms.
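For illustration, a few of these checks can be rendered in software (all parameter and function names here are our own; the real checks are wired into the IPC hardware and raise errors without costing execution time):

```python
def supervise(a_param, n_base_addresses, lsn, nsin, index, q):
    """Illustrative software rendering of some of the hardware checks.
    Returns a list of the violated checks (empty if all pass)."""
    faults = []
    if a_param == 0:
        faults.append("a-parameter equal to zero")
    elif a_param >= n_base_addresses:
        faults.append("a-parameter >= number of base addresses")
    if lsn == 0 or lsn > nsin:
        faults.append("LSN zero or greater than NSIN")
    if q != 0 and index >= (1 << q):
        faults.append("index >= 2^Q")
    return faults
```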

3.2.3 SPU, Signal Processing Unit


3.2.3.1 General

SPU is a function block in CPS. It has the overall responsibility for all job handling in the CPU. This means that SPU serves IPU with the next job to be executed, according to priority. SPU activities include job buffer administration, RP signal handling and job table scanning. SPU also interworks with MAS and with the error signalling part of the CP test system (CPT) in the maintenance unit (MAU). Due to the high signalling demands, SPU is divided into two units working in parallel: a master unit and a slave unit. These units are microprogram controlled and run at half the IPU clock speed. In SPU, the signal data transport is handled by autonomous hardware units for capacity reasons.

3.2.3.2 Main functions in SPU

The SPU is responsible for the following functions or activities:


- Administration of IPU job queues: receive IPU signals and store them in job buffers, and prepare and feed IPU with buffered signals, according to priority, at the appropriate moment.
- Scanning of job tables for time-controlled jobs.
- Storing signals received from RPHB in job buffers for later execution in IPU.
- Transfer of signals coming from IPU directly to RPHB. All signals going to RPHB are queued in RPHI.
- Communication with the maintenance subsystem (MAS) via MAI.
- Update and match functions between the twin SPUs.
- Execution of self-test programs whenever in the idle state.
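The job-buffer administration described above can be modelled as one FIFO queue per priority level, served highest level first. A toy sketch (class and level names are our own; the real SPU implements this in hardware and microcode):

```python
from collections import deque

class JobBufferModel:
    """Toy model of SPU job administration: one FIFO per priority
    level; IPU is always fed the oldest job on the highest
    non-empty level. The level names here are illustrative."""

    def __init__(self, levels=("TRACE", "HIGH", "C", "D")):
        self.levels = levels                      # highest priority first
        self.queues = {lv: deque() for lv in levels}

    def store(self, level, signal):
        # signals arriving from IPU or the RPH side are buffered per level
        self.queues[level].append(signal)

    def next_job(self):
        # feed IPU according to priority
        for lv in self.levels:
            if self.queues[lv]:
                return lv, self.queues[lv].popleft()
        return None                               # idle: run self tests
```

Returning `None` when all queues are empty corresponds to the idle state in which SPU runs its self-test programs.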

3.2.3.3 SPU structure

The SPU functionality is divided into two separate main functions, called SPU-Master and SPU-Slave. The master part has the task of handling the connection to IPU and of storing (administrating) signals in the job buffer memory (JBM). The slave is responsible for sending signals to, and receiving signals from, the regional processor handler. Both master and slave have their own microprogram controlled kernels (MKU and SKU).

The slave part consists of the following main function units:

- RP handler interface (RPHI), establishing the interface to RPHB, with the subfunctions: RP handler bus interface unit (RPIU), the physical interface to RPHB; RPHI memory control unit (RPMU), for temporary storage of incoming and outgoing signals and of tables such as the central RP table; and RPHI signal interface unit (RSIU), responsible for the signal transfer to JBMU and from IPI.
- Slave kernel unit (SKU), a microprogram controlled unit that supervises the slave part of the SPU.
- SPU maintenance interface unit slave (SMUS, connection to the twin SPU and MAI), for maintenance tasks such as supervision of both CP sides (match), the update procedure, error collection and tracing of internal states.
- SPU test interface slave (STIS), the Micro Instruction Trace Unit (MIT) interface and support for hardware tests in the SPU slave.

The master part consists of the following main function units:

- IPU interface (IPI), for signal transfer between IPU and JBMU/RPHI in SPU.
- Job buffer memory unit (JBMU), to store the prioritized signals coming from the IPU or RPH side.
- Master kernel unit (MKU), a microprogram controlled unit that supervises the master part of SPU.
- SPU maintenance interface unit master (SMUM, connection to the twin SPU and MAI), for maintenance tasks such as supervision of both CP sides (match), the update procedure, error collection, tracing of internal states, data exchange with MAI and support for communication between the microcode in the two CP sides in CP state EXSBSE.
- SPU test interface master (STIM), the MIT interface and support for hardware tests in the SPU master.

3.2.4 Maintenance Interface


MAI is the interface in the CP towards MAU. The interworking is carried out via the buses AMB, MAB and CTB and is asynchronous. MAI includes the following functions:

- Error registration and error signalling to MAU.
- Interface logic for the test access ports in the CPU.
- CPU working state logic.
- Clock generation and clock switching functions.
- Program supervision and reset functions, including bootstrap loading.
- Logic for sending CPT signals between MAU and SPU.
- Logic for handling interrupts from CPT and MAU.
- Logic to support DMA accesses in MAU.
- Logic to support clock stops in the CPU.
- Interface logic for reading the Printed Board Assembly (PBA) ID.
- Interface logic for MAI indication.
- Supervision of fans and power.
- Indications on the CDU panel.

The maintenance interface is realized on two circuit boards: POWC in the CPUM, and CDU on top of the cabinet.

3.2.5 RPH, Regional Processor Handler


RPH contains functions for communication with the RPS subsystem and the Inter Platform Network (IPN). It is divided into an A side and a B side, which work synchronously, connected to the respective CP sides. The communication with RPS takes place through the RPB bus; the communication with IPN takes place through Ethernet. The main functions of RPH are:

- To receive an RP signal from IPU/SPU and to handle the reformatting and transmission of the signal to the addressed RP according to the RP bus protocol.
- To scan the connected RPs, to receive a signal when a scanned RP has a signal to send, and to reformat and send this signal to SPU/IPU.
- To connect the CP to IPN via an Ethernet controller.

RPH is realized as one magazine, RPHM, connected to SPU via the RPH bus. The magazine contains a DC/DC converter and the following PBAs:

- RPIO, an interface board between SPU and RPH.
- RPBI-P, an interface board to the parallel RP bus (RPB-P). This board handles two independent RP buses. It is possible to fit up to 16 RPBI-P in RPHM.
- RPBI-S, an interface board to the serial RP bus (RPB-S). This board handles four independent RP buses. It is possible to fit up to 8 RPBI-S in RPHM.
- IPNA, an interface board to IPN. It is possible to fit up to 2 IPNA in RPHM.
- IPNX, an Inter-Platform Network Switch. It is possible to fit 1 IPNX in RPHM.

Note: The maximum number of RP buses and IPN connections is 32. RPBI-P and RPBI-S boards can be freely mixed within the limit on the number of RP buses. The total number of RPBI-P, RPBI-S and IPNA boards cannot exceed 16.
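The limits in the note can be checked mechanically. A sketch (the function and its treatment of IPNX are ours; bus counts per board type are from the board descriptions above):

```python
def rphm_config_valid(n_rpbi_p, n_rpbi_s, n_ipna, n_ipnx=0):
    """Check an RPHM board mix against the stated limits:
    each RPBI-P serves 2 RP buses and each RPBI-S serves 4;
    at most 32 buses, at most 16 RPBI-P/RPBI-S/IPNA boards in
    total, at most 2 IPNA and at most 1 IPNX."""
    buses = 2 * n_rpbi_p + 4 * n_rpbi_s
    boards = n_rpbi_p + n_rpbi_s + n_ipna
    return buses <= 32 and boards <= 16 and n_ipna <= 2 and n_ipnx <= 1
```

Both extreme configurations named in the text (sixteen RPBI-P, or eight RPBI-S) land exactly on the 32-bus limit, which is why the two board types can be mixed freely only within that limit.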

3.2.6 MIP, Micro Program


The function of MIP is to control the functions in the CP. For each decoded instruction and each received interrupt signal, a microprogram sequence activates and executes functions in the CP according to the instruction or interrupt signal. The microprogram is divided into two blocks, MIPI and MIPS, for IPU and SPU respectively. MIP mainly contains the following functions:

MIPI: Microprogram for all machine instructions in CPS. The following types of instructions are available:

- Instructions to read from a store.
- Instructions to write to a store.
- Register instructions.
- Arithmetic and logical instructions.
- Local jump instructions.
- Signal handling instructions.
- Instructions to end a program.
- Search instructions.
- Operating system instructions.

The operating system instructions execute different special functions used by the operating system.

MIPI also contains the microprogram for RP signal reception, as well as auxiliary routines, for example for tracing and fault checking. An RP signal is taken care of by IPU once it has been buffered by SPU; by addressing the central RP table, IPU obtains the information required to start the job.

MIPS: The function of SPU is to transmit signals to and from IPU and RPS and to handle the job buffers. It also scans the job table. The work of SPU is controlled by a number of microprograms initiated by interrupt signals associated with a certain priority level. When no interrupt signals are present, routine tests are executed.

IPU and SPU contain microprograms for the following maintenance functions:
- Storage and dumping of error data.
- Side indication tests.
- Routine test and updating.
- System restart.
- Reconfiguration.
- Actions on general reset.

3.2.7 RTU, Reference Time Unit


The reference time unit (RTU) is an optional block in CPS. RTU is used to provide a high-precision clock reference for the system clock. It is connected to the EM bus in an RP and has a signal interface to the block JOB. JOB owns the system clock, and the system clock gets its time reference from RTU when RTU is installed. RTU is not placed in the CP cabinet; it is placed together with either an APZ RP or an ordinary APT RP. RTU is realized in one magazine, RTUM, and contains the following board:

- RCLU, which contains the clock circuits and a crystal oven. The crystal oven provides frequency stability. An interface towards the EM bus is also provided.

4 Hardware
4.1 General
The hardware for CPS is implemented in two hardware versions:

APZ 212 33, Standard Version: Implemented in a cabinet in building practice BYB 501. It houses two CPU magazines, one for each side (A and B), as well as two RPH magazines, one for each side.

APZ 212 33 C, Compact Version: A CPUM of GEM size, which houses the two CP sides, A and B.

4.2 Hardware APZ 212 33


The APZ 212 33 Standard Version is implemented in a cabinet in building practice BYB 501. It houses two CPU magazines, one for each side (A and B), as well as two RPH magazines, one for each side. The CPU magazine is equipped with (per CP side):

- Three processor boards (IPU, SPU, and POWC including MAI).
- One central power board (POU).
- Up to eight main store (DS) memory boards (STUs) of DRAM and/or SRAM type.
- One board position reserved for the microinstruction trace unit board (BRU), with microinstruction trace (MIT) functionality, to be used temporarily for the analysis of complicated hardware faults.

The maintenance board (MAU) is allocated in the B side. Each RPH magazine comprises one RPH interface board (RPIO) and can be equipped with a maximum of sixteen RPH boards of parallel type (RPBI-P), each supporting an interface to two RPB branches, or a maximum of eight RPH boards of serial type (RPBI-S), each supporting an interface to four RPB-S branches. RPH boards of the parallel and serial types can be freely mixed. It is possible to fit up to 2 IPNA (interface boards to IPN) in RPHM. The total number of duplicated RPB branches, both serial and parallel, is limited to 32. The total number of RPBI-P, RPBI-S and IPNA boards cannot exceed 16. The structure of the hardware and the layout of the magazine group can be seen in the figures below.

Figure 1: APZ 212 33 CP Hardware Structure

AMB - AMU Bus
BRU - Bus Recording Unit
CPU - Central Processor Unit
CTB - Central Processor Test Bus
DSU - Data Store Unit
IPN - Inter Platform Network
IPU - Instruction Processor Unit
MAB - Maintenance Bus
MAI - Maintenance Unit Interface
MAS - Maintenance Subsystem
MAU - Maintenance Unit
MIT - Micro-Instruction Trace
POWC - Power Control Unit
PRS - Program and Reference Store
PTB - Processor Test Bus
RPB - Regional Processor Bus (parallel or serial)
RPH - Regional Processor Handler
SPU - Signal Processor Unit
UMB - Update and Match Bus

Figure 2: APZ 212 33 CP CABINET

CCU - Cross Connection Unit
CDU - CP Display Unit
CPUM - Central Processor Unit Magazine
MAUM - Maintenance Unit Magazine
RPHM - RP Bus Handler Magazine

Figure 3: APZ 212 33 CPU Magazine, CPUM

Figure 4: RP Processor Handler Magazine, RPHM

4.3 Hardware APZ 212 33 C


The APZ 212 33 Compact Version is implemented in a single magazine, which houses the two CP sides, A and B. The CPU magazine is equipped with (per CP side):

- Three processor boards (IPU, SPU, and POWC including MAI).
- Two main store (DSU_D) DRAM memory boards.
- One IPNAX board (Inter-Platform Network Adapter and Switch), an interface board to IPN and an IPN switch.
- Three RPH boards of serial type (RPBI-S), each supporting an interface to ten RPB-S branches.
- One board position reserved for the microinstruction trace unit board (BRU), with microinstruction trace (MIT) functionality, to be used temporarily for the analysis of complicated hardware faults.

The maintenance board (MAU) is allocated in the B side. The layout of the magazine can be seen in Figure 5 below. An external RPH magazine can be used in place of the IPNAX boards and the three RPBI-S boards. In this extended configuration, a REB board replaces the IPNAX board in each CP side, and RPBI-S cannot be used in the CPUM. Each external RPH magazine comprises one RPH interface board (RPIO) and can be equipped with a maximum of sixteen RPH boards of parallel type (RPBI-P), each supporting an interface to two RPB branches, or a maximum of eight RPH boards of serial type (RPBI-S), each supporting an interface to four RPB-S branches. RPH boards of the parallel and serial types can be freely mixed. It is possible to fit up to 2 IPNA (interface boards to IPN) in RPHM. The total number of duplicated RPB branches, both serial and parallel, is limited to 32. The total number of RPBI-P, RPBI-S and IPNA boards cannot exceed 16.

Figure 5: APZ 212 33 C CPU Magazine, CPUM

5 Characteristics
5.1 System Limits
For a complete list of the system limits, please see the Function Specification 'System Limits'. The major limits are listed below.

Table 1: APZ 212 33 and APZ 212 33 C system limits

DS volume: 4 GW16
PS volume: 96 MW16
RS volume: 16 MW32
Block size (including correction area): 64 kW16
Number of blocks: 4095
Number of insignals per block: 4080
Number of RP bus branches: 32
Number of RPs per RP bus branch: 32
Number of RPs: 1024
Number of CM per RP: 16

5.2 Capacity
APZ 212 33 is available in one capacity version only.

6 Acronyms and Abbreviations


AMU - Automatic Maintenance Unit
AP - Adjunct Processor
APS - AXE Process Support
APT - Telephony Part of AXE (Switching System)
APZ - AXE Control System
BAT - Base Address Table
BRU - Bus Recording Unit
CCU - Cross Connection Unit
CDU - CP Control and Display Unit
CPUM - Central Processor Unit Magazine
DBS - Database Subsystem
DS - Data Store
FTP - File Transfer Protocol
IPN - Inter Platform Network
IPNA - Inter Platform Network Adapter
IPNAX - Inter-Platform Network Adapter and Switch
IPNX - Inter-Platform Network Switch
IPU - Instruction Processor Unit
GEM - Generic Ericsson Magazine
MAI - Maintenance Unit Interface
MAS - Maintenance Subsystem
MAU - Maintenance Unit
MIP - Micro Program
MIT - Micro-Instruction Trace (logic analyser)
PBC - Program and Base Address Table Cache
POU - Power Unit
POWC - Power Control Unit
PRS - Program and Reference Store
PRSCM - Program and Reference Store Cache Memory
PS - Program Store
RPBI - Regional Processor Bus Interface
RPH - Regional Processor Handler
RPS - Regional Processor Subsystem
RPUM - Regional Processor Unit Magazine
RS - Reference Store
SP - Support Processor
SPU - Signal Processor Unit
TFTP - Trivial File Transfer Protocol
UMB - Update and Match Bus