COURSE OUTLINES
Analogue Signals: Analogue signals are what we encounter every day of our lives.
Speech is an analogue signal, and varies in amplitude (volume) and frequency (pitch).
The main characteristics of analogue signals are:
(i) Amplitude: This is the strength of the signal. It can be expressed in a number
of different ways (for example, in volts or decibels). The higher the amplitude, the stronger (louder) the
signal.
(ii) Frequency: This is the rate of change the signal undergoes every second,
expressed in Hertz (Hz), or cycles per second. A 30 Hz signal changes thirty times a
second. In speech, we refer to it as the number of vibrations per second, as the air that
we expel from our mouth pushes the surrounding air at a specific rate. One cycle is
defined as the interval from a point on the wave to the next identical point. The number of cycles
per second determines the frequency of the signal.
Digital Signals: Digital signals are the language of modern day computers. Digital
signals normally comprise only two states. These are expressed as ON or OFF, 1 or 0
respectively.
Simplex: Data in a simplex channel is always one way. Simplex channels are not often
used because it is not possible to send back error or control signals to the transmit end.
An example of a simplex channel in a computer system is the interface between the
keyboard and the computer, in that key codes need only be sent one way from the
keyboard to the computer system.
Half Duplex: A half duplex channel can send and receive, but not at the same time. It is
like a one-lane bridge where two-way traffic must give way in order to cross. Only one
end transmits at a time; the other end receives.
Full Duplex: Data can travel in both directions simultaneously. There is no need to
switch from transmit to receive mode as in half duplex. It is like a two-lane bridge on a
two-lane highway.
Serial: Each bit is sent over a single wire, one after the other. No separate signal lines are used
to convey the clock (timing information). There are two ways in which timing information is
encoded with the signal so that the sender and receiver are synchronised (working on
the same data at the same time). If no clock information were sent, the receiver would
misinterpret the arriving data (due to bits being lost or sampled at the wrong rate).
Parallel transmission is obviously faster, in that all bits are sent at the same time,
whereas serial transmission is slower, because only one bit can be sent at a time.
Parallel transmission is very costly for anything except short links.
Synchronous Serial Transmission: In synchronous transmission, the line idle state is
changed to a known character sequence (7E), which is used to synchronise the receiver
to the sender. The start and stop bits are removed, and each character is combined with
others into a data packet. The data packet is prefixed with a header field, and suffixed
with a trailer field which includes a checksum value (used by the receiver to check for
errors in sending). The header field is used to convey address information (sender and
receiver), packet type and control data.
The data field contains the user’s data (if the data cannot fit in a single packet, it is split
across multiple numbered packets) or control data. Generally, it has a fixed size. The trailer field
contains checksum information which the receiver uses to check whether the packet
was corrupted during transmission.
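A minimal sketch of this framing in Python (the 7E synchronising character and the header / data / checksum-trailer layout follow the description above; the field widths and the simple additive checksum are illustrative assumptions, not any particular link protocol):

    # Build and check a simple synchronous-style packet (illustrative only).
    SYNC = 0x7E           # known synchronising character from the text
    DATA_FIELD_SIZE = 16  # assumed fixed size of the data field

    def build_packet(sender: int, receiver: int, seq: int, payload: bytes) -> bytes:
        if len(payload) > DATA_FIELD_SIZE:
            raise ValueError("payload too large: split it across numbered packets")
        data = payload.ljust(DATA_FIELD_SIZE, b"\x00")   # pad to the fixed field size
        header = bytes([sender, receiver, seq])          # addresses and packet number
        checksum = sum(header + data) & 0xFF             # trailer: one-byte checksum
        return bytes([SYNC]) + header + data + bytes([checksum])

    def check_packet(packet: bytes) -> bool:
        body, trailer = packet[1:-1], packet[-1]
        return (sum(body) & 0xFF) == trailer             # receiver's error check

    pkt = build_packet(sender=1, receiver=2, seq=0, payload=b"HELLO")
    assert check_packet(pkt)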
Baud Rate: Baud rate is the reciprocal of the duration of the shortest signal element; it is a measure of the
number of line changes which occur every second. For a binary signal of 20 Hz, this is
equivalent to 20 baud (there are 20 changes per second).
Bits Per Second: This is the number of data bits transmitted every second. Where a
binary signal is being used, this is the same as the baud rate. When the signal is
changed to another form, it will not be equal to the baud rate, as each line change can
represent more than one bit (for example, two or four bits).
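A small worked sketch of this relationship, assuming a signal form in which each line change carries several bits:

    # Bit rate = baud rate x bits carried per line change.
    def bits_per_second(baud: int, bits_per_change: int) -> int:
        return baud * bits_per_change

    print(bits_per_second(20, 1))   # binary signal: 20 baud carries 20 bits per second
    print(bits_per_second(20, 4))   # four bits per change: 20 baud carries 80 bits per second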
Protocols: A protocol is a set of rules which governs how data is sent from one point to
another. In data communications, there are widely accepted protocols for sending data.
Both the sender and receiver must use the same protocol when communicating. BY
CONVENTION, THE LEAST SIGNIFICANT BIT IS TRANSMITTED FIRST.
(i) Asynchronous Protocols: Asynchronous systems send data bytes between the
sender and receiver. Each data byte is preceded with a start bit, and suffixed with a stop
bit. These extra bits serve to synchronise the receiver with the sender. Transmission of
these extra bits (2 per byte) reduces data throughput. Synchronisation is achieved for
each character only. When the sender has no data to transmit, the line is idle and the
sender and receiver are NOT in synchronisation. Asynchronous protocols are suited for
low speed data communications.
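A minimal sketch of this character framing, assuming the common one start bit, eight data bits and one stop bit format (the start-bit value of 0 and stop-bit value of 1 are assumed; the least-significant-bit-first order follows the convention stated above):

    # Frame one byte for asynchronous transmission: start bit, 8 data bits, stop bit.
    def frame_byte(value: int) -> list:
        data_bits = [(value >> i) & 1 for i in range(8)]   # least significant bit first
        return [0] + data_bits + [1]                       # start bit + data + stop bit

    print(frame_byte(0x41))   # 10 line bits carry one 8-bit character: 20% framing overhead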
Datel (data over dial-up telephone circuit): The public communications carrier
provides a dial-up line and modem for the user. The line may be used for speech or
data, but not at the same time. The circuit is non-permanent and is switched. The user
dials the number, and when the other end modem replies, flicks a switch which
connects the modem and disconnects the telephone. It is suited to low speed,
intermittent, low volume data requirements. Datel operated at 300 bits per second and is no
longer offered.
Leased Lines: A leased line is a permanent non-switched end to end connection. Data
is sent from one end to the other. There is no requirement to send routing information
along with the data. Leased lines provide an instant guaranteed method of delivery.
They are suited to high volume, high speed data requirements. The cost of the line
(which is leased per month) is offset against that of toll or other rental charges. In
addition, the leased line offers a significantly higher data transmission rate than the
datel circuit. Very high speeds can be achieved on leased lines. The cost varies, and
goes up according to the capacity (speed in bits per second) that the customer requires.
Packet Switched Circuits: Connection is made to the public communications carrier
packet network. This is a special network which connects users who send data
grouped in packets. Packet technology is suited to medium speed, medium volume data
requirements. It offers cheaper cost than the datel circuit, but for large volumes, is more
expensive than the leased circuit. Special hardware and software are required to
packetize the data before transmission, and depacketize the data on arrival. This adds
to the cost (though this equipment can be leased). Packet switched circuits exist for the
duration of the call.
ISDN: ISDN refers to a new form of network which integrates voice, data and image in
a digitised form.
It offers the customer a single interface to support integrated voice and data traffic.
Rather than using separate lines for voice (PABX system) and data (leased lines), ISDN
uses a single digital line to accommodate these requirements.
The basic circuit provided in ISDN is the 2B+D circuit. This is two B channels (each of
64 Kbits per second) and one D channel (of 16 Kbits per second), giving 144 Kbits per second in total.
The D channel is normally used for supervisory and signalling purposes, whilst the B channels are used for voice and data.
ISDN is well suited to transferring a reasonable amount of data in a short time. It is a dial-up,
on-demand service. Many radio stations use ISDN to transfer songs, and companies
use it for tasks such as dialling up a mail server at another site periodically during
the day to transfer email. ISDN is not suited to permanent 24-hour connections; it
becomes too costly if used for extended periods. A leased line is a far better option if a
permanent connection is required.
Host Computer: The host computer provides a centralised computing environment for
the execution of application programs. Terminals and peripheral devices are connected
to it via front-end processors. It is a large-scale mainframe computer with associated
disk and tape storage devices.
Terminal: A terminal is an input device used by a user. It consists of a keyboard and
console screen. Most terminals are non-intelligent, in that all processing is performed by
the host computer. Data typed at the keyboard is sent to the host computer for
processing, and screen updates are sent from the host computer to the console screen
of the terminal. Terminals are generally connected via a serial link, though it is now
becoming popular to connect terminals via network connections like token ring or
ethernet. A terminal is dependent upon the host computer. It cannot work as a
standalone; all processing is done by the host computer.
Local: Local refers to the location being the same site as the host computer. For
instance, local processing means that all processing is done locally, on site, without the
need for sending the data somewhere else to be processed. Often, it will be located in
the same room or building.
Remote: Remote refers to the location being an external location other than the
present physical site. To support terminals in different cities, these remote terminals are
connected to the host computer via data communication links. In general, we could say
that a remote terminal is connected via a data communication link (utilising a modem).
Acoustic Modem: The acoustic modem interfaces to the telephone handset. The
acoustic modem picks up the sound signals generated by the earphone of the
telephone handset, and generates sound which is picked up by the mouthpiece of the
telephone handset. The advantage of this type of modem is its easy interface to
existing phones. Its disadvantage is that, because the signals are coupled acoustically, it
is suitable only for very low data rates of up to about 300 bits per second.
The relationship between design and specification is a close one. Although the process of
setting out a requirement specification as the basis of a contract is a separate activity,
formalizing that specification may be part of the design process. In practice, the
designer iterates between specification and design. The design processes involve
describing the system at a number of different levels of abstraction. As a design is
decomposed, errors and omissions in earlier stages are discovered. These feed back to
allow earlier design stages to be refined. It is common to begin the next stage before a
stage is finished simply to get feedback from the refinement process.
This process is repeated for each sub-system until the components identified can be
mapped directly into programming language components such as packages,
procedures or functions.
Structured methods have been applied successfully in many large projects. They can
deliver significant cost reductions because they use standard notations and ensure that
designs follow a standard form. The use of structured methods normally involves
producing large amounts of diagrammatic design documentation. Although there are a
large number of methods, they have much in common and usually support some or all
of the following views of a system:
(1) A data flow view where the system is modelled using the data
transformations which take place as the data is processed.
(2) An entity–relation view which is used to describe the logical data
structure being used.
Particular methods supplement these with other system models such as state transition
diagrams, entity life histories, which show how each entity is transformed as it is
processed, and so on. Most methods suggest that a centralized repository for system
information (a data dictionary) should be used, but this is only really feasible with
automated tool support. There is a variety of methods and no one method is
demonstrably superior to another. A mathematical method (such as the method for long
division) is a strategy which, if adopted, will always lead to the same result. The term
‘structured method’ suggests, therefore, that designers should normally generate similar
designs from the same specification.
In practice, the guidance given by the methods is informal so this situation is unlikely.
These ‘methods’ are really standard notations and embodiments of good practice. By
following these methods and applying the guidelines, a reasonable design should
emerge but designer creativity is still required to decide on the system decomposition
and to ensure that the design adequately captures the system specification.
There are three types of notation which are widely used for design documentation:
(1) Graphical Notations These are used to display the relationships
between the components making up the design and to relate the design to the
real–world system it is modelling. A graphical view of a design is an abstract view
and is most useful for giving an overall picture of the system.
Generally, all of these different notations should be used in describing a system design.
The architecture and the logical data design should be described graphically,
supplemented by design rationale and further informal or formal descriptive text. The
interface design, the detailed data structure design and the algorithm design are best
described using a PDL. Descriptive rationale may be included as embedded comments.
Some work has been carried out to establish design quality metrics to determine
whether or not a design is a ‘good’ design. These have mostly been developed in
conjunction with structured design methods such as Yourdon’s structured design.
Quality characteristics are equally applicable to object-oriented and function-oriented
design. Because of the inherent nature of object-oriented designs, it is usually easier to
achieve maintainable designs because information is concealed within objects.
However, as is discussed in each of the following sections, inheritance in
object-oriented systems can compromise design quality.
2.3.1 Cohesion
The cohesion of a component is a measure of how well it fits together. A component
should implement a single logical function or should implement a single logical entity. All
of the parts of the component should contribute to this implementation. If the component
includes parts which are not directly related to its logical function (for example, if it is a
grouping of unrelated operations which are executed at the same time), it has a low
degree of cohesion. Constantine and Yourdon (1979) identify seven levels of cohesion,
in order of increasing strength from lowest to highest.
· Coincidental Cohesion The parts of a component are not related
but are simply bundled into a single component.
These cohesion classes are not strictly defined and Constantine and Yourdon illustrate
each by example. It is not always easy to decide under what cohesion category a unit
should be classed. Constantine and Yourdon’s method is functional in nature and it is
obvious that the most cohesive form of unit is the function. However, a high degree of
cohesion is also a feature of object-oriented systems. Indeed, one of the principal
advantages of this approach to design is that the objects making up the system are
naturally cohesive.
A cohesive object is one where a single entity is represented and all of the operations on
that entity are included with the object. For example, an object representing a compiler
symbol table is cohesive if all of the functions such as ‘Add a symbol’, ‘Search table’,
and so on, are included with the symbol table object. Thus, a further class of cohesion
might be defined along these lines; a sketch of such a cohesive object follows.
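A minimal sketch of a cohesive object, assuming a simple dictionary-backed symbol table (the operation names follow the examples above; everything else is illustrative):

    # One entity (the symbol table) together with all of the operations on it.
    class SymbolTable:
        def __init__(self):
            self._symbols = {}                     # representation hidden inside the object

        def add_symbol(self, name, **attributes):  # 'Add a symbol'
            self._symbols[name] = attributes

        def search(self, name):                    # 'Search table'
            return self._symbols.get(name)

        def delete_symbol(self, name):
            self._symbols.pop(name, None)

    table = SymbolTable()
    table.add_symbol("count", type="integer", scope="local")
    print(table.search("count"))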
2.3.2 Coupling
Coupling is related to cohesion. It is an indication of the strength of interconnections
between program units. Highly coupled systems have strong interconnections, with
program units dependent on each other. Loosely coupled systems are made up of units
which are independent or almost independent. As a general rule, modules are tightly
coupled if they make use of shared variables or if they interchange control information.
Constantine and Yourdon call this common coupling and control coupling. Loose
coupling is achieved by ensuring that, wherever possible, representation information is
held within a component and that its data interface with other units is via its parameter
list. If shared information is necessary, the sharing should be controlled as in an Ada
package. Constantine and Yourdon call this data coupling and stamp coupling.
Other coupling problems arise when names are bound to values at an early stage in the
development of the design. For example, if a program is concerned with tax
computations and a tax rate of 30% is encoded as a number in the program, that
program is coupled with the tax rate. Changes to the tax rate require changes to the
program. If the program reads in the tax rate at run-time, it is easy to accommodate rate
changes. Perhaps the principal advantage of object-oriented design is that the nature of
objects leads to the creation of loosely coupled systems. It is fundamental to
object-oriented design that the representation of an object is concealed within that
object and is not visible to external components. The system does not have a shared
state and any object can be replaced by another object with the same interface.
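A minimal sketch of the contrast described above (the 30% rate comes from the example; the function names and the file used to hold the run-time rate are illustrative assumptions):

    TAX_RATE_FILE = "tax_rate.txt"   # assumed location of the externally held rate

    # Tightly coupled: the component is bound to one tax rate at design time.
    def tax_hard_coded(income):
        return income * 0.30         # changing the rate means changing the program

    # Loosely coupled: the rate is read at run time and passed through the parameter list.
    def tax(income, rate):
        return income * rate

    def read_tax_rate(path=TAX_RATE_FILE):
        with open(path) as f:
            return float(f.read())

    print(tax(42000.0, 0.30))        # rate supplied by the caller, e.g. from read_tax_rate()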
2.3.3 Understandability
Changing a design component implies that the person responsible for making the
change understands the operation of the component. This understandability is related to
a number of component characteristics:
2.3.4 Adaptability
If a design is to be maintained, it must be readily adaptable. Of course, this implies that
the component should be loosely coupled. As well as this, however, adaptability means
that the design should be well-documented, the component documentation should be
readily understandable and consistent with the implementation, and that the
implementation should be written in a readable way. An adaptable design should have
a high level of visibility. There should be a clear relationship between the different levels
in the design. It should be possible for a reader of the design to find related
representations such as the structure chart representing a transformation on a data flow
diagram.
It should be easy to incorporate changes made to the design in all design documents. If
this is not the case, changes made to a design description may not be included in all
related descriptions. The design documentation may become inconsistent. Later
changes are more difficult to make (the component is less adaptable) because the
modifier cannot rely on the consistency of design documentation. For optimum
adaptability, a component should be self-contained. A component may be loosely
coupled in that it only cooperates with other components via message passing. This is
not the same as being self-contained, as the component may rely on other components,
such as system functions or error handling functions. Adaptations to the component
may involve changing parts of the component which rely on external functions so the
specification of these external functions must also be considered by the modifier.
This simple adaptability is one reason why object-oriented languages are so effective
for rapid prototyping. However, for long-lifetime systems, the problem with inheritance is
that as more and more changes are made, the inheritance network becomes
increasingly complex. Functionality is often replicated at different points in the network
and components are harder to understand. Experience of object-oriented programming
has shown that the inheritance network must be periodically reviewed and restructured
to reduce its complexity and functional duplication. Clearly, this adds to the costs of
system change.
The high cost and the difficulties of developing software systems are well known.
Accordingly, as the costs of computer hardware have decreased, it has become
cost-effective to provide individual software engineers with automated tools to support
the software development process. Although it is possible to operate CASE tools in
conjunction with application systems, it is generally the case that software development
is best supported on a separate system which is called a software development
environment (SDE).
(1) The software development environment may include hardware tools. For example,
microprocessor development systems usually include an in-circuit emulator as an
integral part of the environment.
(2) The software development environment is specific rather than general.
It should support the development of a particular class of systems, such as
microprocessor systems, artificial intelligence systems, and so on. The
environment need not support all types of application systems development.
SDEs usually run on a host computer system (or network) and the software is
developed for a separate target computer. There are several reasons why the
host–target model is the most appropriate for software development environments:
(1) In some cases, the application software under development may be
for a machine with no software development facilities.
There are many different types of software development environment which are now in
use. Environments may be classified into three major groups:
(1) Programming Environments These are environments which are
principally intended to support the processes of programming, testing and
debugging. Their support for requirements definition, specification and software
design is limited.
Within each of these broad environment classes, there are various types of
environment. These will be discussed in the appropriate section below. While it is
convenient to classify environments as above, in practice the boundaries between the
different classes of environment are not clear cut. As programming environments are
populated with tools, their facilities resemble those of software engineering
environments. As CASE workbenches mature and are made available on more powerful
hardware, they offer increasing support for programming and testing, and so also
resemble software engineering environments.
(4) Testing And Debugging Tools These might include test drivers,
dynamic and static program analyzers and test output analysis programs.
Debugging on the host of programs executing on the target should be supported
if possible.
Interpretative environments are excellent for software prototyping. Not only
do they provide an expressive, high-level language for system development, they also
allow the application software developer to use the facilities of the environment in their
application. Typically, interpretative environments provide support for incremental
compiling and execution, persistent storage, and system browsers which allow the
source code of the environment to be examined. A development of interpretative
environments is artificial intelligence (AI) development systems such as ART and KEE
which are based on an interpretative environment (typically a LISP environment) and
add facilities to simplify the production of AI systems. The added facilities may include
truth maintenance systems, object-oriented facilities and mechanisms for knowledge
representation and rule application.
However, this class of environment is not well suited to the development of large,
long-lifetime systems. The environment and the system are closely integrated so
system users must install the environment as well as the application systems. This
increases the overall cost of the system. Both the environment and the system have to
be maintained throughout the lifetime of the application, by which time the environment
may be obsolete. Structure–oriented environments were originally developed for
teaching programming (Teitelbaum and Reps, 1981) as they allowed an editing system
to be constructed which trapped syntax errors in the program before compilation.
Rather than operate on the text of a program, these structure editors operate on an
abstract syntax tree decorated with semantic information.
CASE workbench systems are designed to support the analysis and design stages of
the software process. These systems are oriented towards the support of graphical
notations such as those used in the various design methods. They are either intended
for the support of a specific method, such as JSD or the Yourdon method, or support a
range of diagram types, encompassing those used in the most common methods.
Typical components of a CASE workbench are:
(2) Design analysis and checking facilities which process the design
and report on errors and anomalies. These may be integrated with the editing
system so that the user may be informed of errors during diagram creation.
(3) Query language facilities which allow the user to browse the
stored information and examine completed designs.
CASE workbench systems, like structured methods, have been mostly used in the
development of data processing systems, but they can also be used in the development
of other classes of system. Chikofsky and Rubenstein (1988) suggest that productivity
improvements of up to 40% may be achieved with the use of such systems. They also
suggest that, as well as these improvements, the quality of the developed systems is
higher with fewer errors and inconsistencies and the developed systems are more
appropriate to the user’s needs.
Since Martin’s critique of these systems, some of the deficiencies have been addressed
by the system vendors. This has been simplified by the fact that CASE workbenches
are mostly available on IBM PC or PC–compatible hardware or on workstation UNIX
systems. Some workbenches allow data transfer to and from common applications,
such as word processors, available on these systems. Many systems now support
high-quality PostScript (Adobe System Inc., 1978) output to laser printers.
The user of a CASE workbench is provided with support for parts of the design process,
but designs are not automatically linked to other products such as source code, test
data suites and user documentation. It is not easy to create and track multiple versions
of a design, nor is there a straightforward way to interface the workbench with external
configuration management tools. The problems of integrating CASE workbenches with
a configuration management system are a result of their closed nature and the tight
integration around a defined data structure. As a standard for CASE information is
developed and published, it will become possible to interface other tools to workbenches.
This means that CASE workbenches will become open rather than closed environments.
They will evolve into software engineering environments.
Integrated project support environments (IPSEs) are ‘open’ environments which may be
tailored to support development in a number of different programming languages using
different design methods. Examples of IPSEs include ISTAR (Dowson, 1987), ECLIPSE
(Bott, 1989) and the Arcadia environment (Taylor et al., 1988).
(3) It becomes possible to build more powerful software tools because
these tools can make use of the relationships recorded in the OMS.
Historically, the term ‘maintenance’ has been applied to the process of modifying a
program after it has been delivered and is in use. These modifications may involve
simple changes to correct coding errors, more extensive changes to correct design
errors or drastic rewrites to correct specification errors or to accommodate new
requirements. It is impossible to produce systems of any size which do not need to be
maintained. Over the lifetime of a system, its original requirements will be modified to
reflect changing needs, the system’s environment will change and errors, undiscovered
during system validation, may emerge. Because maintenance is unavoidable, systems
should be designed and implemented so that maintenance problems are minimized.
· Perfective maintenance.
· Adaptive maintenance.
· Corrective maintenance.
Perfective maintenance means changes which improve the system in some way without
changing its functionality. Adaptive maintenance is maintenance which is required
because of changes in the environment of the program. Corrective maintenance is the
correction of previously undiscovered system errors. A survey by Lientz and Swanson
(1980) discovered that about 65% of maintenance was perfective, 18% adaptive and
17% corrective.
Coding errors are usually relatively cheap to correct; design errors are more expensive
as they may involve the rewriting of several program components. Requirements errors
are the most expensive to repair because of the redesign which is usually involved. The
maintenance process is triggered by a set of change requests from system users or
management. The costs and impact of these changes are assessed and, assuming it is
decided to accept the proposed changes, a new release of the system is planned. This
release will usually involve elements of adaptive, corrective and perfective maintenance.
The changes are implemented and validated and a new version of the system is
released. The process then iterates with a new set of changes proposed for the new
release.
Maintenance has a poor image among software engineers. It is seen as a less skilled
process than program development and, in many organizations, maintenance is
allocated to inexperienced staff. This negative image means that maintenance costs are
probably increased because staff allocated to the task are less skilled than those
involved in system design.
Boehm (1983) suggests several steps that can be taken to improve maintenance staff
motivation:
The second law states that, as a system is changed, its structure is degraded.
Additional costs, over and above those of implementing the change, must be accepted if
the structural degradation is to be reversed. The maintenance process might include
explicit restructuring activities aimed at improving the adaptability of the system.
The third law is, perhaps, the most interesting and the most contentious of Lehman’s
laws. It suggests that large systems have a dynamic of their own that is established at
an early stage in the development process. This determines the gross trends of the
system maintenance process and limits the number of possible system changes.
Maintenance management cannot do whatever it wants as far as changing the system
is concerned. Lehman and Belady suggest that this law is a result of fundamental
structural and organizational factors.
As changes are made to a system, these changes introduce new system faults which
then require more changes to correct them. Once a system exceeds some minimal size
it acts in the same way as an inertial mass. It inhibits major change because these
changes are expensive to make and result in a system whose reliability is degraded.
The number of changes which may be implemented at any one time is limited.
Furthermore, large systems are inevitably produced by large organizations. These
organizations have their own internal bureaucracies which slow down the process of
change and which determine the budget allocated to a particular system. Major system
changes require organizational decision making and changes to the project budget.
Such decisions take time to make and, during that time, other, higher priority system
changes may be proposed. It may be necessary to shelve the changes to a later date
when the change approval process must be reinitiated. Thus, the rate of change of the
system is partially governed by the organization’s decision making processes.
Lehman’s fourth law suggests that most large programming projects work in what he
terms a “saturated” state. That is, a change to resources or staffing has imperceptible
effects on the long-term evolution of the system. Of course, this is also suggested by
the third law which suggests that program evolution is largely independent of
management decisions. This law confirms that large software development teams are
unproductive as the communication overheads dominate the work of the team.
Lehman’s laws are really hypotheses and it is unfortunate that more work has not been
carried out to validate them. They seem sensible and maintenance management
should not attempt to circumvent them but should use them as a basis for planning the
maintenance process. It may be that business considerations require them to be
ignored at any one time. In itself, this is not impossible but management should realize
the likely consequences for future system change and the high costs involved if such
changes are proposed.
Maintenance costs vary widely from one application domain to another. On average,
they seem to be between two and four times development costs for large embedded
software systems. For business application systems, a study by Guimaraes (1983)
showed that maintenance costs were broadly comparable with system development
costs. It is usually cost-effective to invest time and effort when designing and
implementing a system to reduce maintenance costs. Increasing development costs will
result in an overall decrease in system costs if the percentage saving in maintenance
costs is comparable with the percentage development cost increase.
Maintenance costs are difficult to estimate. The difficulties arise because these costs
are related to a number of product, process and organizational factors. Some of the
factors which are unrelated to the engineering techniques used for software
development are:
(4) Program Validation and Testing Generally, the more time and
effort spent on design validation and program testing, the fewer errors in the
program. Consequently, corrective maintenance costs are minimized.
As systems age, maintenance costs more. Old systems may be written in programming
languages which are no longer used for new development or may have been developed
using design methods which have been supplanted by newer techniques. Special
provision may have to be made to train staff members to maintain these programs. This
problem is becoming more severe as newer languages like Ada and C++ take over from
older languages such as Fortran and as object-oriented development replaces
function-oriented development.
Using data gathered from 63 projects in a number of application areas, Boehm (1981)
established a formula for estimating maintenance costs. This is part of the COCOMO
software cost estimation model. Boehm’s maintenance cost estimation is calculated in
terms of a quantity called the annual change traffic (ACT), which he defines as follows:
Boehm’s estimation method for maintenance costs uses the ACT and the estimated or
actual development effort in person-months to derive the annual effort required for
software maintenance. This is computed as follows:
AME and SDT are the annual maintenance effort and the software development time
and the units of each are person-months (p.m.). Notice that this is a simple linear
relationship. Say a software project required 236 person-months of development
effort and it was estimated that 15% of the code would be modified in a typical year; the
basic maintenance effort estimate is then:
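Assuming the linear relationship AME = ACT × SDT, this works out as 0.15 × 236 = 35.4 person-months per year.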
This formula gives a rough approximation of maintenance costs and is a basis for
computing a more precise figure based on the attribute multipliers. These take into
account process, project, product and personnel factors. The maintenance cost
estimate may be refined by judging the importance of each factor affecting the cost and
selecting the appropriate cost multiplier. The basic maintenance cost is then multiplied
by each multiplier to give the revised cost estimate.
For example, say in the above system the factors having most effect on maintenance
costs were reliability (RELY), which had to be very high, the availability of support staff
with language and application experience (AEXP and LEXP), which was also high, and
the use of modern programming practices for system development (very high). These
might have multipliers as follows:
RELY 1.10
AEXP 0.91
LEXP 0.95
MODP 0.72
By applying these multipliers to the initial cost estimate, a more precise figure may be
computed:
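A minimal sketch of the calculation, assuming the basic relationship AME = ACT × SDT (the 236 person-month development effort, the 15% annual change traffic and the four multipliers are taken from the example above):

    act = 0.15                   # annual change traffic (from the example)
    sdt = 236.0                  # development effort in person-months
    basic_ame = act * sdt        # assumed basic form: AME = ACT x SDT, about 35.4 p.m.

    multipliers = {"RELY": 1.10, "AEXP": 0.91, "LEXP": 0.95, "MODP": 0.72}
    refined_ame = basic_ame
    for value in multipliers.values():
        refined_ame *= value     # apply each cost multiplier in turn

    print(round(basic_ame, 1), round(refined_ame, 1))   # roughly 35.4 and 24.2 person-months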
The reduction in estimated costs has come about partly because experienced staff are
available for maintenance work but mostly because modern programming practices had
been used during software development. As an illustration of their importance, if modern
programming practices are not used and other factors are unchanged, the development
cost is increased to 53.53 p.m. and the maintenance cost estimate is also much larger:
Structured development techniques are not yet universal and their use only became
widespread in the 1980s. Much of the maintenance effort in an organization is devoted
to old systems which have not been developed using these techniques. This is at least
part of the reason for high maintenance costs. Usually, different parts of the system will
have different ACTs, so a more precise figure can be derived by estimating the initial
development effort and annual change traffic for each software component. The total
maintenance effort is then the sum of these individual component efforts.
A problem with algorithmic maintenance cost estimation is that it is difficult to allow for the
fact that the software structure degrades as the software ages. Using the original
development time as a key factor in maintenance cost estimation introduces
inaccuracies as the software loses its resemblance to the original system. As with
development cost estimation, algorithmic cost modeling is most useful as a means of
option analysis. It should not be expected to predict costs accurately. A cost estimation
model which takes into account factors such as programmer experience, hardware
constraints, software complexity, and so on, allows decisions about maintenance to be
made on a rational rather than an intuitive basis.
As in development cost estimation, algorithmic cost models are useful because they allow
relative rather than absolute costs to be assessed. It may be impossible to predict the
annual change traffic of a system to any degree of accuracy but the inaccuracy is
reflected in all computations. Although the absolute cost figure may be out of step with
reality, the information allows management to make reasoned decisions about how best
to allocate resources to the maintenance process.
5.0 DOCUMENTATION
All large software systems, irrespective of application, have a prodigious amount of
associated documentation. Even for moderately sized systems, the documentation will
fill several filing cabinets. For large systems, it may fill several rooms. A high proportion
of software process costs is incurred in producing this documentation. Management should
therefore pay as much attention to documentation and its associated cost as to the
development of the software itself.
(3) They should provide information for management to help them plan,
budget and schedule the software development process.
(4) Some of the documents should tell users how to use and administer
the system.
(3) Standards These are documents which set out how the
process is to be implemented. They may be developed from organizational,
national or international standards and expressed in great detail.
(5) Memos and Electronic Mail Messages These record the details
of everyday communication between managers and development engineers.
Although of interest to software historians, most of this information is of little real use
after it has gone out of date and there is not normally a need to preserve it after the
system has been delivered. Of course, there are some exceptions to this. For example,
test schedules are of value during software evolution as they act as a basis for
re-planning the validation of system changes. Design rationale should be kept for
maintenance engineers, but this is usually not explicitly recorded and is difficult to find
among the many working papers produced in the course of a project.
(1) End-users use the software to assist with some task. This may be
flying an aircraft, managing insurance policies, writing a book, and so on.
They want to know how the software can help them. They are not interested
in computer or administration details.
To cater for these different classes of user and different levels of user expertise, there
are at least five documents (or perhaps chapters in a single document) which should be
delivered with the software system.
The functional description of the system outlines the system requirements and briefly
describes the services provided. This document should provide an overview of the
system. Users should be able to read this document with an introductory manual and
decide if the system is what they need.
The system reference manual should describe the system facilities and their usage,
should provide a complete listing of error messages and should describe how to recover
from detected errors. It should be complete. Formal descriptive techniques may be
used. The style of the reference manual should not be unnecessarily pedantic and
turgid, but completeness is more important than readability.
Often, pressure of work means that this modification is continually set aside until
finding what is to be changed becomes very difficult indeed. The best solution to this
problem is to support document maintenance with software tools which record
document relationships, remind software engineers when changes to one document
affect another and record possible inconsistencies in the documentation.
(1) All documents, however short, should have a cover page which
identifies the project, the document, the author, the date of production, the
type of document, configuration management and quality assurance
information, the intended recipient of the document, and the confidentiality
class of the document. It should also include information for document
retrieval (an abstract or keywords) and a copyright notice.
(2) Documents which are more than a few pages long should be divided
into chapters with each chapter structured into sections and subsections. A
contents page should be produced listing these chapters, sections and
sub-sections. A consistent numbering scheme for chapters, sections and
sub-sections should be defined, and chapters should be individually page
numbered (the page number should be chapter-page). This simplifies
document change as individual chapters may be replaced without reprinting
the whole document.
(3) If a document contains a lot of detailed reference information it
should have an index. A comprehensive index allows information to be
discovered easily and can make a badly written document usable. Without an
index, reference documents are virtually useless.
(4) If a document is intended for a wide spectrum of readers who may
have differing vocabularies, a glossary should be provided which defines the
technical terms and acronyms used in the document.
Document structures are often defined in advance and set out in documentation
standards. This has the advantage of consistency although it can cause problems.
Standards may not be appropriate in all cases and an unnatural structure may have
to be used if standards are thoughtlessly imposed.
Software quality assurance (QA) is closely related to the verification and validation
activities carried out at each stage of the software life cycle. Indeed, in some
organizations there is no distinction made between these activities. However, QA and the
verification and validation process should be distinct activities. QA is a management
function, and verification and validation are technical software development processes.
An important role of the QA team is to provide support for the whole life cycle. Quality
should be engineered into the software product and quality should be the driver of the
software process. Quality is not an attribute which can be added to the product and, at
all stages of development, achieving high quality results should be the goal. The function of
QA should be to support quality achievement, not to judge and criticize the development
team. Bersoff ( 1984) provides a good working definition of QA:
Buckley and Poston (1984) point out that some of these quality criteria may have no
relevance for a particular product. For example, it may be possible to transfer a system
from a microcomputer to a mainframe (portability) but this may not be sensible or
desirable.
Quality planning should begin at an early stage in the software process. A quality plan
should set out the desired product qualities and should define how these are to be
assessed. A common error made in QA is the assumption that there is a common
understanding of what “high quality” software actually means. No such common
understanding exists. Sometimes different software engineers work to ensure that
particular, but different, product attributes are optimised. The quality plan should clearly
set out which quality attributes are most significant for the product being developed. It
may be that efficiency is paramount and other factors have to be sacrificed to achieve
this. If this is set out in the plan, the engineers working on the development can
cooperate to achieve this. The plan should also define the quality assessment process;
there is little point in trying to attain some qualities if there is no standard way of
assessing whether that quality is present in the product.
The quality plan should set out which standards are appropriate to the product and to the
process and (if necessary) define the plan for developing these standards. Although the
plan may not actually include details of particular standards, it should reference these
and the quality management should ensure that the standards documents are generally
available.
However, process quality is clearly important and part of the QA function is ensuring the
quality of the process. This involves:
(2) Monitoring the development process to ensure that the standards are
being followed.
(3) They assist in continuity where work carried out by one person is
taken up and continued by another. Standards ensure that all engineers within an
organisation adopt the same practices so that the learning effort when starting
new work is reduced.
The need to adhere to standards is often resented by software engineers. They see
standards as bureaucratic and irrelevant to the technical activity of software
development. Although they usually agree about the value of standards in general,
engineers often find good reasons why standards are not appropriate to their particular
project. Product standards such as standards setting out program formats, design
documentation and document structures are often tedious to follow and to check.
Unfortunately, these standards are sometimes written by staff who are remote from the
software development process and who are not aware of modern practices. Thus the
standards may appear to be out of date and unworkable.
To avoid these problems, the QA organization must be adequately resourced and must
take the following steps:
(1) Involve software engineers in the development of product standards. They should
understand the motivation behind the standard development and be committed to these
standards. The standards document should not simply state a standard to be
followed but should include a rationale of why particular standardization
decisions have been made.
To avoid the difficulties and resentment that inappropriate standards may have, the
project manager and the QA team should decide, at the beginning of a project, which of
the standards in the handbook should be used without change, which should be
modified and which should be ignored. It may also be the case that new standards have
to be created in response to a particular project requirement (for example, standards for
formal specifications if these have not been used in previous projects) and these must
be allowed to evolve in the course of the project.
A quality review need not involve a detailed study of individual system components. It
may be more concerned with the validation of component interaction and with
determining whether a component or document meets the user requirements. The remit
of the review team is to detect errors and inconsistencies and point them out to the
designer or document author.
The review team should include project members who can make an effective
contribution but, at the same time, it should not be too large. For example, if a
sub-system design is being reviewed, designers of related sub-systems should be
included in the review team. They may bring important insights into sub-system
interfaces which could be missed if the sub-system is considered in isolation.
There are two ways to approach review team selection. A small team of three or four
people may be selected as principal reviewers. They are responsible for checking and
reviewing the document being reviewed. The remainder of the team may be composed
of others who feel that they have a contribution to make. These other team members
may not be involved in reviewing the whole document but may concentrate on those
parts which affect their work. During the review, any team member may comment at any
time.
The technical review itself should be relatively short (two hours at most) and involves
the document author ‘walking through’ the document with the review team. A member of
the review team should be appointed chairman and they are responsible for organizing
the review. Another should be responsible for recording all decisions made during the
review. At least one review team member (often the chairman) should be a senior
designer who can take responsibility for making significant technical decisions.
Depending on the organization, the type of document being reviewed and the individuals
concerned, this may mean presenting an overview on a blackboard or may involve a
more formal presentation using prepared slides. During this process the team should
note problems and inconsistencies. On completion of the review, the actions are noted
and forms recording the comments and actions are signed by the designer and the
review chairman. These are then filed as part of the formal project documentation. If
only minor problems are discovered, a further review may be unnecessary. The
chairman is responsible for ensuring that the required changes are made. If major
changes are necessary, it is normal practice to schedule a further review.
In the course of the review, all of the comments made should be considered along with
any other written submissions to the review team. Some of the comments may be
incorrect and can be disregarded. The others should be classed under one of three
categories:
(1) No action The review discovered some kind of anomaly but it
was decided that this was not critical and the cost of rectifying the problem was
unjustified.
(2) Refer for repair The review detected a fault and it is the
responsibility of the designer or document originator to correct the fault.
Some of the errors discovered in a review may be errors in the software specifications
and requirements definition. Requirements errors must be reported to the software
contractor and the impact of changing the requirements or the specification must be
evaluated. The requirement may be changed. However, if the change involves
large-scale modification of the system design, it may be most cost-effective to live with
the fault and instruct the design team to design around rather than correct that error.
As well as being part of the QA process, management usually requires that designs be
‘signed off’, which means that approval must be given to the design before further work
can go ahead. Normally, this approval is given after the review. This is not the same as
certifying the design to be fault free. The review chairman may approve a design for
further development before all problems have been repaired if the problems detected do
not seriously affect further development.
Finally, reviews are valuable for training purposes. They offer an opportunity for
designers to explain their design to other project engineers. Newcomers to the project
or designers who must interface with the system being designed may attend the review
as observers as it offers an excellent opportunity to learn about the system design.