
Automation in Requirements Analysis And Specification

The following contribution deals with automation in requirements analysis and specification. There are many ways to check the content of software requirements: inspections, reviews, walkthroughs, etc. However, the attempt to automate the checking of requirement specifications has only come up in the past years as a result of the progress in automated natural language analysis. Text processors parse natural language texts, and that makes it possible to also parse requirement documents. The three main objectives of automation in requirements analysis are a) extracting test cases for requirements-based tests, b) checking the requirements document against a set of rules, and c) measuring the quality, quantity and complexity of the requirements. Automatic analysis of text is a means of fulfilling these goals. The author began work on this subject as early as 2001 in a project in Vienna to test a large scale investment banking system against its requirements and has been working on it ever since.

Introduction: Requirement analysis in software engineering takes into account those tasks that determine the needs or conditions to develop a new product. Requirements analysis can be a tiring process in which many delicate skills are used. New systems change the environment and the relationships between people, so it is important to identify all the stakeholders. Analysts can employ several techniques to elicit the requirements from the customer. Historically, this has included such things as holding interviews, holding focus groups (more aptly named in this context requirements workshops) and creating requirements lists.

Software requirements engineering: There are two main steps involved in requirements engineering:
1) Elicit: find out what customers really want. The person who keeps track of requirement analysis is called the system analyst. He collects data pertaining to the product and writes the SRS document (software requirements specification). The SRS document is then reviewed by the customer.
2) Document (requirement analysis): this consists of requirement gathering and then discussing the requirements with the customer.

Essential parts of an SRS: Introduction: the purpose of this document and its intended audience; what will be produced; the application, its benefits, objectives and goals.
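Since such an SRS outline is itself checkable content, a first automation step can be sketched mechanically. The required section names and the sample document below are illustrative assumptions for demonstration, not part of any standard or tool described here.

```python
# Hypothetical sketch: check an SRS for required introductory sections.
# Section names and the sample document are assumptions, not a standard.

REQUIRED_SECTIONS = [
    "Purpose",
    "Intended Audience",
    "Product Scope",
    "Objectives and Goals",
]

def missing_sections(srs: dict) -> list:
    """Return the required sections that are absent or empty."""
    return [name for name in REQUIRED_SECTIONS
            if not srs.get(name, "").strip()]

srs_document = {
    "Purpose": "Describe the billing subsystem to be built.",
    "Intended Audience": "Developers, testers, product owners.",
    "Objectives and Goals": "",   # present but empty -> reported as missing
}

print(missing_sections(srs_document))  # -> ['Product Scope', 'Objectives and Goals']
```

A real checking tool would parse the document text to find the sections, but the reporting step would look much like this.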
Future prospects in RE

The progress in automated text analysis, i.e. text mining, has opened up a host of opportunities to better control the quality of requirement documents. It does not alleviate the necessity of a manual review of the requirements, but now this review can concentrate on the content (Parnas & Weiss, 1995). The checking of formalities can be done by a tool. Besides that, an analysis tool can provide essential measures for assessing the quality of the document and for estimating implementation costs. Up until now software measurement has been focused on the design and code. Now it can be extended to the requirement documents. Defining new requirement metrics above and beyond what has been described in this paper will become an important part of the software measurement field. There will also be a tighter connection between requirements and testing. Now that it has become feasible, more and better test cases with links to the business objects, processes and rules will be taken automatically from the requirements and fed into the test case database. There is still a gap between the logical test cases and the physical test data, but in time this gap too will be overcome and it will become possible to validate a system directly against its requirements (Sneed, 2008).

In the future more requirement analysis tools will appear on the scene. At the moment they are more in an experimental stage. Requirement metrics need to be calibrated based on feedback from the projects where the tools are used (Selby, 2009). In addition, the requirement writing rules need to be extended and more precisely checked. There is a definite need for more research on how to measure and assure the quality of requirements. With respect to Lord Kelvin, only when we are able to express the quality of requirement documents in numbers will we be able to compare and assess them (Kelvin, 1867). There is an even greater need for research on how to estimate project costs based on the requirements. The costing methods currently propagated are far from being mature. Therefore, this work can only be seen as a first step in trying to measure the quality of requirements.

Benefits of requirement analysis: The question comes up as to what the benefits of requirement metrics are and what consequences should be drawn from the deficiency reports. In the case of the deficiency report, the answer is clear. It is intended to be the basis for a requirement review. It brings to light the obvious formal weaknesses in the requirement document so that the document reviewers can concentrate on the essential problems in the content of the document. If there are many formal deviations from the standards, this indicates that the requirement document should be revised before it can be accepted. In addition, the deficiency report has an educational purpose. It shows the requirement writers what they have done wrong so that they can improve their requirement writing skills. Just as programmers learn from code deficiency reports, requirement engineers need feedback on the quality of their work.

In the case of the requirement metrics, there is no direct benefit to the requirement engineers, other than to compare their scores on one document with the scores on another. The benefits are more for the product owners or project managers. The requirement specification is the main source of information for making project cost estimations (Sneed, 2014). Object-points, function-points and use case points are standard size measures taken from the requirement document to project the size of the software to be developed. Normally they would have to be counted manually, but here they are automatically derived from the requirement document. The complexity metrics can be useful in assessing the costs of requirement engineering. They indicate requirement bloat and redundancy, for instance whether the size of the text is appropriate to the content contained therein. The quality metrics are indicators for the degree of ripeness the requirement document has attained. If the document has low completeness and consistency ratings, it is a sign that it is not really finished. The incomplete and inconsistent text parts will have an effect on the quality of the product as a whole. A low changeability metric indicates that the document will be more costly to maintain. The testability metric is an indicator for the effort required to test the system. A system with many conditional actions requires more test cases than a system with mainly unconditional actions. Finally, the conformity metric shows to what degree the requirement writers have followed the prevailing rules. Rules can be crossed out, in which case they will not be checked, but those rules which are kept should be adhered to. The degree of conformity to the rules is the best indicator for the uniformity of requirement documents, provided that is a goal of the user organization.

Natural language requirements checking tool: Regarding tools for the automated quality assurance of software requirements, one must distinguish between tools which process formatted requirement specifications stored in tables, XML documents or requirement databases, such as RequisitePro from IBM, and tools which process plain texts. There are numerous tools of the former type on the commercial market, but these tools are proactive in nature. They force the requirement writer to formulate the requirements in a specific way. A good example is RAT, the Requirements Authoring Tool from the ReUse Company in Madrid. RAT uses a set of agreed-upon boilerplates and leads you, step by step, suggesting the next term of your requirement, always ensuring the right grammar. While guiding the user, RAT collects some useful metrics on what the user is writing, including the detection of inconsistencies, coupled requirements, ambiguous requirements, non-atomic requirements, use of the wrong verb tense, mode or voice, consistent use of measurement units, etc. This is a practical approach, provided the requirements are not written yet. It forces the user to work in a prescribed template [see: www.reusecompany.com].

The second category of tools, to which SoftAudit belongs, presumes that the requirements have already been written. The most that can be done is to add keywords or mark up the document. These tools presume some kind of natural language processing (NLP). Significant progress has been made in the past years in parsing natural language texts, making it possible to recognize grammatical patterns and to count grammar types. Text mining has become a frequently cited technique used for extracting selected information from all kinds of text [Miller, 2005]. It is obvious that this technique can also be applied to natural language requirement texts. Mora and Denger have reported on their work in automatically measuring requirement documents [Mora & Denger, 2003], and in the ICSE proceedings there is a contribution from Wilson, Rosenberg and Hyatt on the automated quality analysis of natural language requirement documents [Wilson et al., 1997]. Wilson's tool uses nine quality indicators for requirements specification: Imperatives, Continuances, Directives, Options, Weak Phrases, Size, Specification Depth, Readability and Text Structure. At the 2008 conference on requirements engineering, Castro, Duan, Cleland and Mobasher from DePaul University presented a promising approach to text mining for soliciting requirements [Castro et al., 2008].

This early research proves that natural language text analysis is coming along, but that it is still in the experimental phase. The Sophist GmbH has developed such a tool, but up until now it has only been used internally in projects. Most of the other tools of this type have remained at the universities or research institutes where they were developed. An exception is the Requirements Assistant offered by IBM [see: http://www.requirementsassistant.nl]. This tool was developed to aid reviewers in reviewing a requirement document in interactive mode. It analyzes the text sentence by sentence and highlights contradictory and unclear terms. It can also track cross references to check if they are really satisfied. The human reviewer is practically guided through the text. The emphasis is less on measuring the requirements than on reviewing the structure and content of the requirements for deficiencies. To this end it is very useful, but it presupposes a human reviewer who has the time to go through the requirements step by step. The persons responsible for estimating the costs of a project based on the requirements will prefer a tool which can provide a quick and cheap insight into the overall size, complexity and quality of a requirement document before the manual reviewing process begins. This of course does not preclude a manual review. That is still necessary, but only after the decision has been made to proceed or not.

The approach of Olga Ormandjieva and her associates at Concordia is a very promising approach to measuring the quality of requirements in terms of their unambiguity and comprehensibility [Ormandjieva, 2007]. A similar tool has been proposed by Fabbrini and his associates at the University of Pisa in Italy. That tool, with the name QuARS (Quality Analyzer for Requirements Specification), syntactically parses natural language sentences using the MINIPAR parser. The quality indicators are based mostly on specific keywords, rather than on more general classes of words. In this respect it is similar to the tool described here [Fabbrini, 2001].

The tool which comes closest to what is described here was developed within a research project at Concordia University in Montreal. It is part of a bigger project aimed at applying NLP techniques to the RE process. The tool not only classifies sentences but also extracts values of feature indicators likely to make a sentence ambiguous or unambiguous in terms of understanding, using a metric for the degree of difficulty. The Feature Extractor tool then feeds the sentences one by one to the Stanford Parser for POS tagging and syntax parsing. In so doing, the values of the indicators are counted for each sentence. In the end the natural language requirement document is graded according to its understandability, conceptual consistency and its adherence to formal rules.

In summary, it can be stated that there are now a number of tools to choose from for analyzing natural language requirements; however, they are all goal and context dependent. The right choice depends on what information one wants to get out of the requirements, for what purpose, and how the requirements are formulated. There are many factors to be considered.

Natural language requirements metrics:

As Tom Gilb once put it, metrics say quality better than words (Gilb, 2008). If the quality of requirements is to be evaluated, the requirements should be measured. Unfortunately, requirement measurement is still at a rudimentary state compared to design and code measurement. It is true that Gilb addressed the subject of requirements measurement in his first book on software metrics in 1976. Gilb perceived requirement documents from the viewpoint of the HIPO method as a tree structure with nodes and levels. Each elementary node was seen as a single requirement with inputs and outputs. To measure the size and complexity of the requirement trees, he suggested counting the nodes, the breadth and the depth of the tree as well as the number of inputs and outputs of each node. Unfortunately Gilb's book was published in Sweden and hardly anyone took notice of it (Gilb, 1976).

In the USA it was not until 1984 that an article appeared on the subject of requirement measurement. Boehm was concerned with the quality of the requirements and in particular with the requirement characteristics given in Chapter 2:

Completeness,
Consistency,
Feasibility and
Testability.

To measure completeness Boehm suggested counting the number of TBDs, the number of undefined references, the number of required but missing text items, the number of missing functions, the number of missing constraints and the number of missing prerequisites. To measure consistency he proposed counting the number of contradictory statements, the number of deviations from existing standards and the number of references to non-existing documents. To measure feasibility he suggested counting the number of requirements that could only be implemented with great effort, the number of requirements whose implementation effort could not be predicted at all, the number of high risk requirements and the sum of the risk factors. To measure testability Boehm proposed counting the number of quantified non-functional requirements, the number of testable functional requirements with clear pass/fail criteria and the number of visible intermediate results. These counts were then compared with the total number of counted items to come up with a quality score. Of course these counts had to be done manually by a human inspector. Automated requirement measurement was at that time out of the question (Boehm, 1984).

In his book on requirements management Christof Ebert makes a case for measuring requirement documents even if they are only in natural language. He maintains that the requirements should be counted, their degree of completeness determined, the number of standards violations counted and the business value of each requirement computed. Only by quantifying the requirement quality and quantity is it possible to estimate the implementation effort and to calculate a return on investment (Ebert, 2005). If software projects are to be value driven, this is a definite prerequisite.

Ebert categorizes requirement document types as being informal, structured, semi-formal or formal. To be measurable, requirements should at least be structured and contain semi-formal elements. This makes it possible for natural language text analyzers not only to count elements but also to check rules. One of the leading pioneers in this field is the author and lecturer Chris Rupp. Rupp has not only proposed how to structure natural language requirements but also how to measure them. In her book on requirements management Rupp proposes 25 rules for writing prose requirements, including such rules as that all requirements should be uniquely identified and that every requirement sentence should have a subject, an object and a predicate (Rupp, 2007). She also suggests how requirements can be measured. In subsequent articles she and her coworkers have expounded upon this subject and proposed the following requirement metrics (Rupp & Recknagel, 2006):

Ambiguity
Activeness
Classification
Identification
Readability

Unambiguousness is measured by counting the requirements written according to a predefined template so that they cannot be misinterpreted.
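This template-counting idea can be illustrated with a short sketch. The template pattern, the sample requirements and the scoring function below are assumptions for demonstration only; they are not taken from Rupp's actual rule set or from any of the tools discussed.

```python
import re

# Illustrative sketch of template-based unambiguity counting in the spirit
# of Rupp's rules. A requirement counts as template-conformant here if it
# names the system as subject and uses a "shall" clause with a predicate
# and an object. The pattern is a hypothetical example, not Rupp's template.
TEMPLATE = re.compile(r"^The system shall \w+ .+\.$")

def unambiguity_score(requirements: list) -> float:
    """Fraction of requirements written according to the predefined template."""
    conformant = sum(1 for r in requirements if TEMPLATE.match(r))
    return conformant / len(requirements)

reqs = [
    "The system shall reject invalid login attempts.",
    "The system shall log every transaction.",
    "Transactions might be logged somehow.",      # weak phrase, no template
    "The system shall archive closed accounts.",
]

print(unambiguity_score(reqs))  # 3 of 4 conform -> 0.75
```

A production checker would of course allow several agreed-upon boilerplates and combine this score with the other metrics named above, but the counting principle remains a simple ratio of conformant to total requirements.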

Requirement Documents check: The need to control the quality of requirement documents was seen already in the 1970s by Boehm and others. In a much cited article Boehm demonstrates that the earlier defects are uncovered, the cheaper it is to remove them. The costs of correcting requirement errors in production are 10-20 times greater than the costs of correcting them at the time the requirements are written, and requirement errors account for at least 40% of all errors (Boehm, 1975). This is the main justification for investing in the quality assurance of the requirements. Of course, the requirements cannot be tested, but they can be analyzed and checked for consistency and completeness. The first work on controlling textual requirement documents goes back to the 1970s, when Gilb proposed reviewing requirement documents according to a specific set of rules (Gilb, 1976). At the same time Fagan at IBM was introducing inspection techniques not only for inspecting code but also for inspecting requirement documents (Fagan, 1986). In the Ballistic Missile Defense Project, Alford was leading a tool supported effort to verify the correctness of the requirement nets (Alford, 1977).

Some years later, in 1984, Boehm published an article on the verification and validation of requirement documents at TRW (Boehm, 1984). Like much of what Boehm wrote, this article became an inspiration to all those responsible for ensuring the quality of requirement documents. According to Boehm and his associates at TRW there are four basic criteria for checking requirement specifications. These are:

Completeness,
Consistency,
Feasibility and
Testability.

According to this early work of Boehm, Completeness is the degree to which all of the system elements are described. There should be no TBDs, no nonexistent references, no missing attributes, no missing functions and no missing constraints. Consistency is the degree to which the descriptions coincide with one another. If an object is referenced by a function, then that object should also be defined. There should be no unsatisfied references, no contradictory relationships and no missing links. Boehm emphasizes the importance of traceability. Feasibility is the degree to which the requirements can be implemented within the planned time and budget. The number of functions and the level of quality must remain within the limits set by the cost estimation. Also, the risks involved should not exceed the estimated risk limits. Testability is the degree to which the system can be tested with a minimum of effort. That means that there should be a limited number of external interfaces, a limited number of data exchanged and a limited number of possible states. To be testable, a requirement document should be well structured and have a minimum set of structural elements.

In 1990 Martin and Tsai expanded the earlier inspection techniques by introducing N-fold inspection for requirements analysis (Martin & Tsai, 1990). The requirements inspected were those of the Amtrak rail traffic control system. The inspection teams used the checklist approach suggested by Fagan. The crux of this wide scale experiment was that each team finds different faults, since each team has another view of the problem. Each team alone detected less than 15% of the known faults, but by joining the results of all teams more than 70% of the faults were reported. This only confirms the old saying that more eyes see more than one. It also confirms that potential errors in the future software system can be recognized already in the requirements if those requirements are structured, as they were here, and one looks close enough.

Boehm continued his work as a professor at the University of Southern California, where he pursued a number of research projects, one of which was on discovering conflicts in non-functional requirements. It is known that software quality goals can conflict with one another. For instance, runtime performance can conflict with usability as well as with reliability and security. Testability can conflict with performance and usability. Portability can conflict with usability, performance and maintainability. The goal of the research reported on in 1996 was to use a knowledge base and automated extraction of the non-functional requirements to identify quality requirement conflicts. The method was applied to the requirements for the National Library of Medicine MEDLARS and was astoundingly successful. Over half of the quality problems that later came up in operating the system could be detected as the result of contradictory requirements. Most of these contradictions could not be recognized by the human eye, but in using automated techniques to process the requirement texts, they were detected. The significance of this research is that it introduced, for the first time, the automated analysis of textual requirements (Boehm & Hoh, 1996).

Victor Basili and his associates at the University of Maryland picked up the early work of Boehm, Gilb and Fagan and began experimenting with it to demonstrate how effective inspection methods are in finding defects in documents. The experiment was done on the requirement specification of the cruise missile. For it they used several groups of students using three different inspection methods: ad hoc, checklists and scenarios. Using scenarios to systematically walk through the functionality turned out to be the most effective way of discovering defects. Using checklists helped to discover minor defects and formal errors in the construction of the requirement document, but it was even weaker than the ad hoc method in detecting serious problems. The scenario-based approach could detect up to 45% of all known faults, the ad hoc method 31% and the checklist-based method only 24% (Basili et al., 1995). This indicates that simulation of the application solution proposed in the requirements is the best approach to finding serious faults. Applied to modern day business requirement documents, that means walking through the business processes and their use cases.

In recent years, work on the automated analysis of requirement documents has exploded. The term text mining is used to denote the technology of extracting information from natural language texts. Several papers have been published and a book written by T. Miller on the techniques of text mining (Miller, 2005). Some approaches first convert the text to a markup language, as done at the University of Madrid (Sierra et al., 2008), whereas others extract information from the text directly (Hayes et al., 2006). In either case, the result is information which can be further processed to determine the quality of the requirement document. Advanced tools even recognize contradictions and missing attributes as well as violations of the rules of requirement writing (Sneed, 2007).

It may not be possible for a quality assurance agent to judge the quality of the business solution. That can only be done by a domain expert in reviewing the document. It is the task of quality assurance to ensure that the description of the proposed business solution is precise, complete and consistent. One should not forget that most errors which later come up in the software are not because of implementing the wrong solution, but because of implementing the right solution wrong. The majority of errors found in system testing are errors of omission, i.e. something is missing which should be there. That was the finding in the Clean Room approach to quality assurance (Sheerer, 1996). The cause of such errors is inadequate requirement specifications. The missing function, data or link was already missing in the requirements document, only it was not noticed there among the several hundred pages of monotonous text. By putting that text in a structured, marked-up format, the contradictions and missing elements become more obvious. Besides, that makes it possible to use text parsers to analyze the texts, thereby relieving the human controller of the boring task of checking each and every entity attribute and cross reference.

Completeness and consistency checks as well as the test of preciseness can and should be automated. For instance, it is possible to check if an object is input or output to some use case and that a use case has at least one input and one output. Furthermore, it is possible to automatically check if all required attributes of a use case are defined, if every business rule is applied in at least one use case and that every use case fulfills at least one requirement.
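The cross-reference checks just listed can be sketched as a small routine over a simplified requirements model. The data layout and the finding messages are illustrative assumptions, not the schema of SoftAudit or of any other tool mentioned here.

```python
# Minimal sketch of the cross-reference checks described above, over a
# simplified requirements model (an assumption for illustration). Each use
# case names its inputs, outputs, the business rules it applies and the
# functional requirements it fulfills.

use_cases = {
    "OpenAccount":  {"inputs": ["CustomerData"], "outputs": ["Account"],
                     "rules": ["R1"], "fulfills": ["F1"]},
    "CloseAccount": {"inputs": ["Account"], "outputs": [],
                     "rules": [], "fulfills": ["F2"]},
}
business_rules = ["R1", "R2"]
functional_requirements = ["F1", "F2", "F3"]

def check_model(use_cases, rules, requirements):
    """Report use cases without I/O, unapplied rules and unfulfilled requirements."""
    findings = []
    # every use case needs at least one input and one output
    for name, uc in use_cases.items():
        if not uc["inputs"] or not uc["outputs"]:
            findings.append(f"use case {name} lacks an input or an output")
    # every business rule must be applied in at least one use case
    applied = {r for uc in use_cases.values() for r in uc["rules"]}
    for r in rules:
        if r not in applied:
            findings.append(f"business rule {r} is applied in no use case")
    # every functional requirement must be fulfilled by at least one use case
    fulfilled = {f for uc in use_cases.values() for f in uc["fulfills"]}
    for f in requirements:
        if f not in fulfilled:
            findings.append(f"requirement {f} is fulfilled by no use case")
    return findings

for finding in check_model(use_cases, business_rules, functional_requirements):
    print(finding)
```

Run on the sample model, this reports the missing output of CloseAccount, the unapplied rule R2 and the unfulfilled requirement F3, which is exactly the kind of deficiency list a requirement checking tool would hand to the reviewers.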

Inversely, it can be confirmed that every functional requirement is fulfilled by at least one use case. Incompleteness and inconsistency are signs that the business solution has not been thought through. In addition, it is possible to automatically check the preciseness of statements in that they conform to the rules of requirement writing. The requirement document needs to be brought up to a minimum quality level before it is accepted for implementation, and automated analysis can help achieve this. By submitting documents to an automated requirement checking tool, a significant number of potential errors can be detected, in particular those errors that result from an incomplete and inconsistent description (Wilson et al., 1997).

References:

Alford, M. (1977). A Requirements Engineering Methodology for Realtime Processing Requirements. IEEE Trans. on S.E., Vol. 3, No. 1, Jan. 1977, p. 60
Basili, V. et al. (1995). Comparing Detection Methods for Software Requirements Inspections: A Replicated Experiment. IEEE Trans. on S.E., Vol. 21, No. 6, June 1995, p. 563
Beck, K. et al. (2001). Manifest für Agile Softwareentwicklung, <agilemanifesto.org/iso.de>, 2001
Boegh, J. (2008). A new Standard for Quality Requirements. IEEE Software Magazine, March 2008, p. 57
Boehm, B. (1975). The high costs of software. In: Practical Strategies for Developing Large Software Systems, Ed. E. Horowitz, Addison-Wesley, Reading, MA., 1975, p. 3
Boehm, B. (1984). Verifying and Validating Software Requirements. IEEE Software Magazine, Jan. 1984, p. 75
Boehm, B. & Hoh, I. (1996). Identifying Quality Requirement Conflicts. IEEE Software Magazine, March 1996, p. 25
Castro-Herrera, C., Duan, C., Cleland-Huang, J. & Mobasher, B. (2008). Using Data Mining and Recommender Systems to facilitate large-scale, Open and Inclusive Requirements Elicitation Processes. Proc. of 16th Int. Conference on Requirements Engineering, IEEE Computer Society Press, Barcelona, Sept. 2008, p. 33
Cockburn, A. (2002). Agile Software Development. Addison-Wesley, Reading, MA.
Ebert, C. (2005). Systematic Requirements Management. dpunkt.verlag, Heidelberg
El Emam, K. & Birk, A. (2000). Validating the ISO/IEC 15504 Measure of Software Requirements Analysis Process Capability. IEEE Trans. on S.E., Vol. 26, No. 6, June 2000, p. 541
Ewusi-Mensah, K. (2003). Software Development Failures. M.I.T. Press, Cambridge
Fabbrini, F., Fusani, M., Gnesi, S. & Lami, G. (2001). An Automatic Quality Evaluation for Natural Language Requirements. Proceedings of the Seventh International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ'01), Interlaken, Switzerland, June 4-5, 2001
Fagan, M. (1986). Advances in Software Inspections. IEEE Transactions on Software Engineering, Vol. 12, No. 7, July 1986, p. 744
Gilb, T. (1976). Software Metrics. Studentlitteratur, Lund, Sweden
Gilb, T. (2008). Metrics say Quality better than Words. IEEE Software Magazine, March 2008, p. 64
Hall, A. (1990). Seven Myths of Formal Methods. IEEE Software Magazine, Sept. 1990, p. 11
Hayes, J. et al. (2006). Advancing Candidate Link Generation for Requirements Tracing. IEEE Trans. on S.E., Vol. 32, No. 1, Jan. 2006, p. 4
IEEE (1998). ANSI-IEEE Standard 830: Recommended Practice for Software Requirement Specifications. IEEE Standards Office, New York
ISO (1994). ISO/IEC Standard 9126: Software Product Evaluation. International Organization for Standardization, Geneva
ISO (2002). ISO/IEC 15288: Information Technology - Life Cycle Management - System Life Cycle Process. International Organization for Standardization, Geneva
ISO (2007). ISO/IEC Standard 25030-2007: Software Engineering - Software Product Quality Requirements and Evaluation (SQuaRE) - Quality Requirements. International Organization for Standardization, Geneva
Jacobson, I. et al. (1992). Object-oriented Software Engineering - A Use Case Driven Approach. Addison-Wesley, Reading, MA.
Kelvin, W.T. (1867). Treatise on Natural Philosophy. British Encyclopedia of Science, Glasgow
Martin, J. & Tsai, W.T. (1990). N-Fold Inspection: A Requirements Analysis Technique. Comm. of ACM, Vol. 33, No. 2, Feb. 1990, p. 225
Miller, T.W. (2005). Data and Text Mining - A Business Application Approach. Prentice-Hall, Upper Saddle River, N.J.
Mora, M. & Denger, C. (2003). Requirements Metrics. IESE-Report No. 096.03/Version 1.0, October 1, Brüssel
Ormandjieva, O., Hussain, I. & Kosseim, L. (2007). Toward a Text Classification System for the Quality Assessment of Software Requirements Written in Natural Language. Proceedings of SOQUA'07, September 3-4, 2007, Dubrovnik, Croatia
Parnas, D. (1977). The Use of Precise Specifications in the Development of Software. IFIP Congress-77, Toronto, Sept. 1977
Parnas, D. & Weiss, D. (1995). Review Techniques for Assessing Requirement Quality. Comm. of ACM, Vol. 38, No. 3, March 1995, p. 319
Pernul, G. et al. (2009). Analyzing Requirements for Virtual Business Alliances - the Case of SPIKE. Int. Conf. on Digital Business (DIGIBIZ 2009), London, UK, June 2009
Pohl, K. (2007). Requirements Engineering. dpunkt.verlag, Heidelberg
Pohl, K. & Rupp, C. (2009). Basiswissen Requirements Engineering - Aus- und Weiterbildung zum Certified Professional for Requirements Engineering, Foundation Level nach IREB-Standard. dpunkt.verlag, Heidelberg
Porter, A., Votta, L. & Basili, V. (1995). Comparing Detection Methods for Software Requirement Specifications: A Replicated Experiment. IEEE Transactions on S.E., Vol. 21, No. 6, 1995
Rich, C. & Waters, R. (1988). The Programmer's Apprentice. IEEE Computer Magazine, Nov. 1988, p. 19
Robertson, S. & Robertson, J. (1999). Mastering the Requirements Process. Addison-Wesley, Harlow, G.B.
Rupp, C. (2007). Requirements Engineering and Management. Hanser Verlag, Munich/Vienna
Rupp, C. & Cziharz, T. (2011). Mit Regeln zu einer besseren Spezifikation. Informatik-Spektrum, Vol. 34, No. 3, 2011, p. 255
Rupp, C. & Recknagel, M. (2006). Messbare Qualität in Anforderungsdokumenten. Objektspektrum, Nr. 4, August 2006, p. 24
Scheer, A.-W. (2005). Von Prozessmodellen zu lauffähigen Anwendungen - ARIS in der Praxis. Springer Verlag, Berlin, 2005
Selby, R. (2009). Analytics-driven Dashboards enable leading Indicators for Requirements and Designs of Large-Scale Systems. IEEE Software Magazine, Jan. 2009, p. 41
Sheerer, S. et al. (1996). Experience using Cleanroom Software Engineering. IEEE Software Magazine, May 1996, p. 69
Sierra, J. et al. (2008). From Documents to Applications using Markup Languages. IEEE Software Magazine, March 2008, p. 68
Sneed, H. (2007). Testing against Natural Language Requirements. 7th Int. Conference on Software Quality (QSIC2007), Portland, Oct. 2007
Sneed, H. (2008). Bridging the Concept to Implementation Gap in Software Testing. 8th Int. Conference on Software Quality (QSIC2008), Oxford, Oct. 2008
Sneed, H. (2009). The System Test. Hanser Verlag, Munich/Vienna
Sneed, H. (2014). Anforderungsbasierte Aufwandsschätzung. GI Management Rundbrief - Management der Anwendungsentwicklung und -wartung, Vol. 20, No. 1, April 2014, p. 62
Van Lamsweerde, A. & Letier, E. (2000). Handling Obstacles in Goal-Oriented Requirements Engineering. IEEE Trans. on S.E., Vol. 26, No. 10, Oct. 2000, p. 978
Wiegers, K. (2005). More about Software Requirements - Thorny Issues and Practical Advice. Microsoft Press, Redmond, Wash.
Wilson, W.M. et al. (1997). Automated Quality Analysis of Natural Language Requirement Specifications. 19th Int. Conference on Software Engineering (ICSE 1997), May 1997, p. 158
Wing, J. (1990). A Specifier's Introduction to Formal Methods. IEEE Computer Magazine, Sept. 1990, p. 8
