
Match the problem domains in GROUP I with the solution technologies in GROUP II

GROUP I GROUP II
(P) Service oriented computing (1) Interoperability
(Q) Heterogeneous communicating systems (2) BPMN
(R) Information representation (3) Publish-find-bind
(S) Process description (4) XML
(A) P-1, Q-2, R-3, S-4
(B) P-3, Q-4, R-2, S-1
(C) P-3, Q-1, R-4, S-2
(D) P-4, Q-3, R-2, S-1

Answer: (C)
The following figure represents access graphs of two modules M1 and M2. The filled circles represent
methods and the unfilled circles represent attributes. If method m is moved to module M2 keeping the
attributes where they are, what can we say about the average cohesion and coupling between modules in
the system of the two modules?

(A) There is no change.


(B) Average cohesion goes up but coupling is reduced.
(C) Average cohesion goes down and coupling also reduces.
(D) Average cohesion and coupling increase.

Answer: (A)

Explanation: Answer is “No Change”


Cohesion refers to the degree to which the elements of a module belong together.
Coupling is the manner and degree of interdependence between software modules
Coupling between M1 and M2 = (Number of external links) /
(Number of modules)
= 2/2
= 1
Cohesion of a module = (Number of internal links) /
(Number of methods)

Cohesion of M1 = 8/4 = 2
Cohesion of M2 = 6/3 = 2

After moving method m to M2, we get the following:

Coupling = 2/2 = 1
Cohesion of M1 = 6/3 = 2
Cohesion of M2 = 8/4 = 2
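The computation above can be checked with a minimal Python sketch (the link and method counts are read off the access graphs in the figure, which is not reproduced here):

```python
# Cohesion/coupling arithmetic for the access-graph question.
# Moving method m to M2 simply swaps the modules' internal-link
# and method counts, so the averages are unchanged.

def cohesion(internal_links, methods):
    return internal_links / methods

def coupling(external_links, modules):
    return external_links / modules

# Before the move: M1 has 4 methods / 8 internal links, M2 has 3 / 6.
before = (cohesion(8, 4), cohesion(6, 3), coupling(2, 2))
# After moving m: M1 has 3 methods / 6 internal links, M2 has 4 / 8.
after = (cohesion(6, 3), cohesion(8, 4), coupling(2, 2))

print(before == after)  # True: no change in cohesion or coupling
```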

A company needs to develop a strategy for software product development for which it has a choice of two
programming languages L1 and L2. The number of lines of code (LOC) developed using L2 is estimated
to be twice the LOC developed with L1. The product will have to be maintained for five years. Various
parameters for the company are given in the table below.

Parameter                          Language L1      Language L2
Man-years needed for development   LOC/10000        LOC/10000
Development cost per man-year      Rs. 10,00,000    Rs. 7,50,000
Maintenance time                   5 years          5 years
Cost of maintenance per year       Rs. 1,00,000     Rs. 50,000

Total cost of the project includes cost of development and maintenance. What is the LOC for L1 for
which the cost of the project using L1 is equal to the cost of the project using L2?
(A) 4000
(B) 5000
(C) 4333
(D) 4667

Answer: (B)

Explanation: Let LOC of L1=x, so LOC of L2=2x


Now,
(x/10000)*1000000 + 5*100000 = (2x/10000)*750000 + 5*50000
Solving for x, we get x =5000
Source: http://clweb.csa.iisc.ernet.in/rahulsharma/gate2011key.html
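As a quick check, the cost equality can be verified numerically (costs in rupees, taken from the table above):

```python
# Total project cost = development cost + 5 years of maintenance.
def cost_l1(x):          # x = LOC written in L1
    return (x / 10000) * 1_000_000 + 5 * 100_000

def cost_l2(x):          # LOC in L2 is twice the LOC in L1
    return (2 * x / 10000) * 750_000 + 5 * 50_000

# Both costs are linear in x: 100x + 500000 = 150x + 250000.
x = (500_000 - 250_000) / (150 - 100)
print(x)                              # 5000.0
print(cost_l1(x) == cost_l2(x))       # True
```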
A company needs to develop digital signal processing software for one of its newest inventions. The
software is expected to have 40000 lines of code. The company needs to determine the effort in person-
months needed to develop this software using the basic COCOMO model. The multiplicative factor for
this model is given as 2.8 for the software development on embedded systems, while the exponentiation
factor is given as 1.20. What is the estimated effort in person-months?
(A) 234.25
(B) 932.50
(C) 287.80
(D) 122.40

Answer: (A)

Explanation: In the Constructive Cost Model (COCOMO), the formula for effort applied is
Effort Applied (E) = a_b * (KLOC)^(b_b) [person-months]
= 2.8 x (40)^1.20
= 2.8 x 83.65
= 234.25
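A one-line check of the basic COCOMO arithmetic, using the constants given in the question:

```python
# Basic COCOMO, embedded mode: Effort = a_b * KLOC ** b_b
a_b, b_b = 2.8, 1.20   # multiplicative and exponentiation factors
kloc = 40              # 40000 lines of code = 40 KLOC
effort = a_b * kloc ** b_b
print(round(effort, 2))  # roughly 234 person-months, matching option (A)
```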
Which one of the following is NOT desired in a good Software Requirement Specifications (SRS)
document?
(A) Functional Requirements
(B) Non-Functional Requirements
(C) Goals of Implementation
(D) Algorithms for Software Implementation

Answer: (D)

Explanation: The software requirements specification document is a complete description of the
behavior of the system to be developed, and may include a set of use cases that describe the
interactions users will have with the software. It also contains non-functional requirements,
which impose constraints on the design or implementation (such as performance engineering
requirements, quality standards, or design constraints). (Source: Wiki)
An SRS document should clearly document the following aspects of a system: Functional Requirements,
Non-Functional Requirements and Goals of implementation (Source: Fundamentals of Software
Engineering by Rajib Mall)
A software test engineer is assigned the job of doing black box testing. He comes up with the following
test cases, many of which are redundant.

Which one of the following options provides the set of non-redundant tests, using the equivalence class
partitioning approach from the input perspective, for black box testing?
(A) T1,T2,T3,T6
(B) T1,T3,T4,T5
(C) T2,T4,T5,T6
(D) T2,T3,T4,T5

Answer: (C)
Explanation:
T2, T4, T5 and T6 belong to different equivalence classes, so together they form a non-redundant (optimal) test suite.
In black box testing we focus only on the inputs and outputs of the software system, without using knowledge of its internals.

Equivalence partitioning is a software testing technique that divides the input data of a software unit into
partitions of equivalent data from which test cases can be derived.

T1 and T2 check the same condition (a = 0), so we select only one of them.

T3 and T4 both satisfy b^2 - 4ac = 0, so we select only one of them.

T5 covers b^2 - 4ac > 0.

T6 covers b^2 - 4ac < 0.

So, (C) is the answer.


The cyclomatic complexity of each of the modules A and B shown below is 10. What is the cyclomatic
complexity of the sequential integration shown on the right hand side?

(A) 19
(B) 21
(C) 20
(D) 10

Answer: (A)

Explanation:
Cyclomatic Complexity of module = Number of decision points + 1

Number of decision points in A = 10 - 1 = 9


Number of decision points in B = 10 - 1 = 9
Cyclomatic Complexity of the integration = Number of decision points + 1
= (9 + 9) + 1
= 19
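The same reasoning in code form (a minimal sketch of the decision-point arithmetic):

```python
# Sequential integration of two modules: decision points add up,
# and the "+1" is counted only once for the combined module.
def cc_from_decisions(decisions):
    return decisions + 1   # cyclomatic complexity = decision points + 1

d_a = 10 - 1   # decision points in A, given CC(A) = 10
d_b = 10 - 1   # decision points in B, given CC(B) = 10
print(cc_from_decisions(d_a + d_b))  # 19
```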
What is the appropriate pairing of items in the two columns listing various activities encountered in a
software life cycle?
P. Requirements Capture 1.Module Development and Integration
Q. Design 2.Domain Analysis
R. Implementation 3.Structural and Behavioral Modeling
S. Maintenance 4.Performance Tuning
(A) P-3, Q-2, R-4, S-1
(B) P-2, Q-3, R-1, S-4
(C) P-3, Q-2, R-1, S-4
(D) P-2, Q-3, R-4, S-1

Answer: (B)
Explanation: Module Development and Integration is clearly implementation work
Performance Tuning is clearly maintenance work
Domain Analysis is clearly Requirements Capture
The following program is to be tested for statement coverage:
begin
if (a == b) {S1; exit;}
else if (c == d) {S2;}
else {S3; exit;}
S4;
end
The test cases T1, T2, T3 and T4 given below are expressed in terms of the properties satisfied by the
values of variables a, b, c and d. The exact values are not given.
T1 : a, b, c and d are all equal
T2 : a, b, c and d are all distinct
T3 : a = b and c != d
T4 : a != b and c = d
Which of the test suites given below ensures coverage of statements S1, S2, S3 and S4?
(A) T1, T2, T3
(B) T2, T4
(C) T3, T4
(D) T1, T2, T4

Answer: (D)

Explanation:
T1 checks S1

T2 checks S3

T4 checks S2 and S4
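The coverage claim can be verified by modelling the program as a function that records which statements execute (the concrete input values below are representative choices satisfying each test case's properties):

```python
# Executable model of the program under test: returns the set of
# statements (S1..S4) reached for given a, b, c, d.
def covered(a, b, c, d):
    hit = set()
    if a == b:
        hit.add("S1")
        return hit        # exit
    elif c == d:
        hit.add("S2")     # falls through to S4
    else:
        hit.add("S3")
        return hit        # exit
    hit.add("S4")
    return hit

# T1: all equal; T2: all distinct; T4: a != b and c == d
suite = covered(1, 1, 1, 1) | covered(1, 2, 3, 4) | covered(1, 2, 3, 3)
print(sorted(suite))  # ['S1', 'S2', 'S3', 'S4']
```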
The coupling between different modules of a software is categorized as follows:
I. Content coupling
II. Common coupling
III. Control coupling
IV. Stamp coupling
V. Data coupling
Coupling between modules can be ranked in the order of strongest (least desirable) to weakest (most
desirable) as follows:
(A) I-II-III-IV-V
(B) V-IV-III-II-I
(C) I-III-V -II-IV
(D) IV-II-V-III-I

Answer: (A)
Which of the following statements are TRUE?
 I. The context diagram should depict the system as a single bubble.
 II. External entities should be identified clearly at all levels of DFDs.
 III. Control information should not be represented in a DFD.
 IV. A data store can be connected either to another data store or to an external entity.
(A) I and II
(B) II and III
(C) I and III
(D) I, II and III

Answer: (C)

Explanation: Only statements (I) and (III) are correct.

Statement (II) is not correct because external entities interacting with the system should be represented
only in the context diagram; they should not appear in DFDs at any other level.
Statement (IV) is not correct because a data store cannot be connected directly to another data store or
to an external entity; data must flow through a process.

So, option (C) is true.
Consider the following statements about the cyclomatic complexity of the control flow graph of a
program module. Which of these are TRUE?
I. The cyclomatic complexity of a module is equal to the maximum number of
linearly independent circuits in the graph.
II. The cyclomatic complexity of a module is the number of decisions in the
module plus one,where a decision is effectively any conditional statement
in the module.
III.The cyclomatic complexity can also be used as a number of linearly
independent paths that should be tested during path coverage testing.
(A) I and II
(B) II and III
(C) I and III
(D) I, II and III

Answer: (B)

Explanation:
TRUE: The cyclomatic complexity of a module is the number of decisions in the
module plus one,where a decision is effectively any conditional statement
in the module.
TRUE: The cyclomatic complexity can also be used as a number of linearly
independent paths that should be tested during path coverage testing.

Match the following:


1) Waterfall model                        a) Specifications can be developed incrementally
2) Evolutionary model                     b) Requirements compromises are inevitable
3) Component-based software engineering   c) Explicit recognition of risk
4) Spiral development                     d) Inflexible partitioning of the project into stages
(A) 1-a, 2-b, 3-c, 4-d
(B) 1-d, 2-a, 3-b, 4-c
(C) 1-d, 2-b, 3-a, 4-c
(D) 1-c, 2-a, 3-b, 4-d

Answer: (B)
Explanation:
 Waterfall model: we cannot go back to a previous phase once we have proceeded to the next, so the
partitioning of the project into stages is inflexible.
 Evolutionary model: the specification evolves along with the system, so it can be developed incrementally.
 Component-based software engineering: a reuse-based approach to defining, implementing and composing loosely
coupled independent components into systems; requirements compromises are inevitable because the requirements
must be adapted to the available components.
 Spiral model: each loop of the spiral explicitly includes a risk-analysis phase.
 Phases: Planning, Risk Analysis, Engineering and Evaluation
Which one of the following is TRUE?
(A) The requirements document also describes how the requirements that are listed in the document are
implemented efficiently.
(B) Consistency and completeness of functional requirements are always achieved in practice.
(C) Prototyping is a method of requirements validation.
(D) Requirements review is carried out to find the errors in system design

Answer: (C)

Explanation: Only option (c) is TRUE.

The development of software begins once the requirements document is 'ready'. One of the
objectives of this document is to check whether the delivered software system is acceptable. For
this, it is necessary to ensure that the requirements specification contains no errors and that it
specifies the user's requirements correctly. Also, errors present in the SRS will adversely affect
the cost if they are detected later in the development process or when the software is delivered to
the user. Hence, it is desirable to detect errors in the requirements before the design and
development of the software begins. To check all the issues related to requirements,
requirements validation is performed.

In the validation phase, the work products produced as a consequence of requirements


engineering are examined for consistency, omissions, and ambiguity. The basic objective
is to ensure that the SRS reflects the actual requirements accurately and clearly. Other
objectives of the requirements document are listed below.
1. To certify that the SRS contains an acceptable description of the system to be
implemented
2. To ensure that the actual requirements of the system are reflected in the SRS
3. To check the requirements document for completeness, accuracy, consistency,
requirement conflicts, conformance to standards and technical errors.
Requirements validation is similar to requirements analysis as both processes review
the gathered requirements. Requirements validation studies the 'final draft' of the
requirements document while requirements analysis studies the 'raw requirements'
from the system stakeholders (users). Requirements validation and requirements
analysis can be summarized as follows:
1. Requirements validation: Have we got the requirements right?
2. Requirements analysis: Have we got the right requirements?
Various inputs such as requirements document, organizational knowledge, and
organizational standards are shown. The requirements document should be formulated
and organized according to the standards of the organization. The organizational
knowledge is used to estimate the realism of the requirements of the system.
The organizational standards are specified standards followed by the organization
according to which the system is to be developed.

The output of requirements validation is a list of problems and agreed actions for those
problems. The list of problems indicates the problems encountered in
the requirements document during the requirements validation process. The agreed
actions list displays the actions to be performed to resolve the problems
depicted in the problem list.

Requirements Review
Requirements validation determines whether the requirements are substantial to design
the system. The problems encountered during requirements validation are listed below.
1. Unclearly stated requirements
2. Conflicting requirements are not detected during requirements analysis
3. Errors in the requirements elicitation and analysis
4. Lack of conformance to quality standards.
To avoid the problems stated above, a requirements review is conducted, which
consists of a review team that performs a systematic analysis of the requirements. The
review team consists of software engineers, users, and other stakeholders who examine
the specification to ensure that the problems associated with consistency, omissions,
and errors are detected and corrected. In addition, the review team checks whether the
work products produced during the requirements phase conform to the standards
specified for the process, project, and the product.
At the review meeting, each participant goes over the requirements before the meeting
starts and marks the items which are dubious or need clarification. Checklists are often
used for identifying such items. Checklists ensure that no source of errors, whether
major or minor, is overlooked by the reviewers. A 'good' checklist consists of the
following.
1. Is the initial state of the system defined?
2. Is there a conflict between one requirement and the other?
3. Are all requirements specified at the appropriate level of abstraction?
4. Is the requirement necessary or does it represent an add-on feature that may not be
essentially implemented?
5. Is the requirement bounded and has a clear defined meaning?
6. Is each requirement feasible in the technical environment where the product or system
is to be used?
7. Is testing possible once the requirement is implemented?
8. Are requirements associated with performance, behavior, and operational
characteristics clearly stated?
9. Are requirements patterns used to simplify the requirements model?
10. Are the requirements consistent with the overall objective specified for the
system/product?
11. Have all hardware resources been defined?
12. Is the provision for possible future modifications specified?
13. Are functions included as desired by the user (and stakeholder)?
14. Can the requirements be implemented in the available budget and technology?
15. Are the resources of requirements or any system model (created) stated clearly?
The checklists ensure that the requirements reflect users' needs and provide
groundwork for design. Using the checklists, the participants specify the list of potential
errors they have uncovered. Lastly, the requirements analyst either agrees to the
presence of errors or states that no errors exist.

Other Requirements Validation Techniques


A number of other requirements validation techniques are used either individually or in
conjunction with other techniques to check the entire system or parts of the system. The
selection of the validation technique depends on the appropriateness and the size of the
system to be developed. Some of these techniques are listed below.
1. Test case generation: The requirements specified in the SRS document should be
testable. The test in the validation process can reveal problems in the requirement. In
some cases test becomes difficult to design, which implies that the requirement is
difficult to implement and requires improvement.
2. Automated consistency analysis: If the requirements are expressed in the form of
structured or formal notations, then CASE tools can be used to check the consistency of
the system. A requirements database is created using a CASE tool that checks the entire
requirements in the database using rules of method or notation. The report of all
inconsistencies is identified and managed.
3. Prototyping: Prototyping is normally used for validating and eliciting new
requirements of the system. This helps to interpret assumptions and provide an
appropriate feedback about the requirements to the user. For example, if users have
approved a prototype, which consists of graphical user interface, then the user interface
can be considered validated.
In the context of modular software design, which one of the following combinations is desirable?
(A) High cohesion and high coupling
(B) High cohesion and low coupling
(C) Low cohesion and high coupling
(D) Low cohesion and low coupling

Answer: (B)
Explanation: Coupling is the manner and degree of interdependence between software modules.
Cohesion refers to the degree to which the elements of a module belong together.
In a good software design, it is always desirable to have less interaction among modules (low
coupling).
Advantages of high cohesion (or "strong cohesion") are:
1) Reduced module complexity (they are simpler, having fewer operations).
2) Increased system maintainability, because logical changes in the domain affect fewer modules,
and because changes in one module require fewer changes in other modules.
3) Increased module reusability, because application developers will find the component they need
more easily among the cohesive set of operations provided by the module.
Cohesion is increased if:

 The functionalities embedded in a class, accessed through its methods, have much in common.
 Methods carry out a small number of related activities, by avoiding coarsely grained or unrelated sets of
data.
While in principle a module can have perfect cohesion by only consisting of a single, atomic element – having a
single function, for example – in practice complex tasks are not expressible by a single, simple element. Thus a
single-element module has an element that either is too complicated, in order to accomplish a task, or is too
narrow, and thus tightly coupled to other modules. Thus cohesion is balanced with both unit complexity and
coupling.

Types of cohesion
Cohesion is a qualitative measure, meaning that the source code to be measured is examined using a rubric to
determine a classification. Cohesion types, from the worst to the best, are as follows:
Coincidental cohesion (worst)
Coincidental cohesion is when parts of a module are grouped arbitrarily; the only relationship between
the parts is that they have been grouped together (e.g. a “Utilities” class). Example:

/*
Groups: The function definitions
Parts: The terms on each function
*/
Module A{
/*
Implementation of r(x) = 5x + 3
There is no particular reason to group functions in this way,
so the module is said to have Coincidental Cohesion.
*/
r(x) = a(x) + b(x)
a(x) = 2x + 1
b(x) = 3x + 2
}

Logical cohesion
Logical cohesion is when parts of a module are grouped because they are logically categorized to do
the same thing even though they are different by nature (e.g. grouping all mouse and keyboard input
handling routines).
Temporal cohesion
Temporal cohesion is when parts of a module are grouped by when they are processed - the parts are
processed at a particular time in program execution (e.g. a function which is called after catching an
exception which closes open files, creates an error log, and notifies the user).
Procedural cohesion
Procedural cohesion is when parts of a module are grouped because they always follow a certain
sequence of execution (e.g. a function which checks file permissions and then opens the file).
Communicational/informational cohesion
Communicational cohesion is when parts of a module are grouped because they operate on the same
data (e.g. a module which operates on the same record of information).
Sequential cohesion
Sequential cohesion is when parts of a module are grouped because the output from one part is the
input to another part like an assembly line (e.g. a function which reads data from a file and processes
the data).
Functional cohesion (best)
Functional cohesion is when parts of a module are grouped because they all contribute to a single
well-defined task of the module (e.g. Lexical analysis of an XML string). Example:

/*
Groups: The function definitions
Parts: The terms on each function
*/
Module A {
/*
Implementation of arithmetic operations
This module is said to have functional cohesion because
there is an intention to group simple arithmetic operations
on it.
*/
a(x, y) = x + y
b(x, y) = x * y
}

Module B {
/*
Module B: Implements r(x) = 5x + 3
This module can be said to have atomic cohesion. The whole
system (with Modules A and B as parts) can also be said to have
functional
cohesion, because its parts both have specific separate purposes.
*/
r(x) = [Module A].a([Module A].b(5, x), 3)
}

Perfect cohesion (atomic)


Example.

/*
Groups: The function definitions
Parts: The terms on each function
*/
Module A {
/*
Implementation of r(x) = 2x + 1 + 3x + 2
It's said to have perfect cohesion because it cannot be reduced any
more than that.
*/
r(x) = 5x + 3
}

Although cohesion is a ranking type of scale, the ranks do not indicate a
steady progression of improved cohesion. Studies by various people
including Larry Constantine, Edward Yourdon, and Steve McConnell
indicate that the first two types of cohesion are inferior;
communicational and sequential cohesion are very good; and functional
cohesion is superior.
While functional cohesion is considered the most desirable type of
cohesion for a software module, it may not be achievable. There are cases
where communicational cohesion is the highest level of cohesion that can
be attained under the circumstances.
Match the following
List-I List-II
A. Condition coverage 1. Black-box testing
B. Equivalence class partitioning 2. System testing
C. Volume testing 3. White-box testing
D. Alpha testing 4. Performance testing

Codes:
A B C D
(a) 2 3 1 4
(b) 3 4 2 1
(c) 3 1 4 2
(d) 3 1 2 4
(A) a
(B) b
(C) c
(D) d

Answer: (C)

Explanation: White Box Testing tests internal structures or workings of an application. It covers
following
Control flow testing
Data flow testing
Branch testing
Statement coverage
Decision coverage
Modified condition/decision coverage
Prime path testing
Path testing
So conditional coverage must be in White-Box Testing.
Black-box testing is a method of software testing that examines the functionality of an application without
peering into its internal structures or workings. Typical black-box test design techniques include:
Decision table testing
All-pairs testing
Equivalence partitioning
Boundary value analysis
Cause–effect graph
Error guessing
Volume testing is performance testing with a specific (typically large) volume of data.
Alpha testing is system testing performed by potential users/customers or an independent test team at the
developers' site.
Consider the following C program segment.
while (first <= last)
{
if (array [middle] < search)
first = middle +1;
else if (array [middle] == search)
found = True;
else last = middle – 1;
middle = (first + last)/2;
}
if (first < last) notPresent = True;
The cyclomatic complexity of the program segment is __________.
(A) 3
(B) 4
(C) 5
(D) 6

Answer: (C)

Explanation: The cyclomatic complexity of a structured program is defined with reference to the
control flow graph of the program, a directed graph containing the basic blocks of the program, with an
edge between two basic blocks if control may pass from the first to the second. The complexity M is then
defined as

M = E − N + 2P,
where
E = the number of edges of the graph.
N = the number of nodes of the graph.
P = the number of connected components.
Source: http://en.wikipedia.org/wiki/Cyclomatic_complexity
For a single program (or subroutine or method), P is always equal to 1. So a simpler formula for a single
subroutine is

M = E − N + 2
For the given program, the control flow graph is:

E = 13, N = 10.
Therefore, E - N + 2 = 5.
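The formula is easy to evaluate directly:

```python
# Cyclomatic complexity from the control-flow graph: M = E - N + 2P.
def cyclomatic(edges, nodes, components=1):
    # components defaults to 1, the usual case for a single subroutine
    return edges - nodes + 2 * components

print(cyclomatic(13, 10))  # 5, matching option (C)
```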
A variable x is said to be live at a statement Si in a program if the following three conditions hold
simultaneously:
1. There exists a statement Sj that uses x
2. There is a path from Si to Sj in the flow
graph corresponding to the program
3. The path has no intervening assignment to x
including at Si and Sj

The variables which are live both at the statement in basic block 2 and at the statement in basic block 3 of
the above control flow graph are
(A) p, s, u
(B) r, s, u
(C) r, u
(D) q, v

Answer: (C)

Explanation: Live variable analysis is useful in compilers to find variables in each program that may be
needed in future.
As per the definition given in question, a variable is live if it holds a value that may be needed in the
future. In other words, it is used in future before any new assignment.

A Software Requirements Specification (SRS) document should avoid discussing which one of the
following?
(A) User interface issues
(B) Non-functional requirements
(C) Design specification
(D) Interfaces with third party software

Answer: (C)
Explanation: A software requirements specification (SRS) is a description of a software system to be
developed, laying out functional and non-functional requirements, and may include a set of use cases that
describe interactions the users will have with the software. (Source Wiki)
Design Specification should not be part of SRS.

Consider the basic COCOMO model where E is the effort applied in person-months, D is the
development time in chronological months, KLOC is the estimated number of delivered lines of code (in
thousands) and ab, bb, cb, db have their usual meanings. The basic COCOMO equations are of the form.
(A) E = ab(KLOC) exp(bb), D = cb(E) exp(db)
(B) D = ab(KLOC) exp(bb), E = cb(D) exp(db)
(C) E = ab exp(bb), D = cb(KLOC) exp(db)
(D) E = ab exp(db), D = cb(KLOC) exp(bb)

Answer: (A)

Explanation: In basic COCOMO, the following equations hold:

Effort Applied (E) = a_b * (KLOC)^(b_b) [person-months]

Development Time (D) = c_b * (Effort Applied)^(d_b) [months]

People Required (P) = Effort Applied / Development Time [count]

Consider a software program that is artificially seeded with 100 faults. While testing this program, 159
faults are detected, out of which 75 faults are from those artificially seeded faults. Assuming that both real
and seeded faults are of same nature and have same distribution, the estimated number of undetected real
faults is ____________.
(A) 28
(B) 175
(C) 56
(D) 84

Answer: (A)

Explanation:
Total faults detected = 159
Real faults detected among all detected faults = 159 - 75
= 84
Since probability distribution is same, total number of real
faults is (100/75)*84 = 112

Undetected real faults = 112- 84 = 28


Another Solution :
75% of faults are detected because 75 artificially seeded faults are detected out of 100.
Given that the total faults detected = 159
=> Real faults detected among all detected faults = 159 – 75= 84
Since probability distribution is same, total number of real faults is (100/75)*84 = 112
Therefore undetected real faults = 112-84 = 28.
So, option (A) is correct one.
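The estimate can be reproduced in a few lines (the key assumption, stated in the question, is that seeded and real faults have the same detection ratio):

```python
# Fault-seeding (error-seeding) estimate of residual faults.
seeded_total, seeded_found = 100, 75
detected_total = 159

real_found = detected_total - seeded_found              # 84 real faults detected
real_total = real_found * seeded_total / seeded_found   # 84 * (100/75) = 112
undetected_real = real_total - real_found               # 112 - 84
print(undetected_real)  # 28.0
```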

Consider a software project with the following information domain characteristic for calculation of
function point metric.
Number of external inputs (I) = 30
Number of external output (O) = 60
Number of external inquiries (E) = 23
Number of files (F) = 08
Number of external interfaces (N) = 02
It is given that the complexity weighting factors for I, O, E, F and N are 4, 5, 4, 10 and 7, respectively. It
is also given that, out of fourteen value adjustment factors that influence the development effort, four
factors are not applicable, each of he other four factors have value 3, and each of the remaining factors
have value 4. The computed value of function point metric is ____________
(A) 612.06
(B) 212.05
(C) 305.09
(D) 806.9

Answer: (A)

Explanation: Function point metrics provide a standardized method for measuring the various functions
of a software application.
The value of the function point metric FP = UFP * VAF

Here,
UFP: Unadjusted Function Point count
VAF: Value Adjustment Factor

UFP = 4*30 + 60*5 + 23*4 + 8*10 + 7*2 = 606

VAF = (TDI * 0.01) + 0.65


Here TDI is the Total Degree of Influence:
TDI = 4*0 + 4*3 + 6*4 = 36

VAF = (TDI * 0.01) + 0.65

= 36*0.01 + 0.65
= 0.36 + 0.65
= 1.01

FP = UFP * VAF
= 606 * 1.01
= 612.06

The formula for VAF is 0.65 + [sum(C_i) / 100], where i runs from 1 to 14, one term per adjustment factor.

Given: four factors are not applicable (contributing 0), each of another four factors has value 3
(contributing 4*3 = 12), and each of the remaining 14 - 8 = 6 factors has value 4 (contributing 6*4 = 24).

So VAF = 0.65 + [4*3 + 6*4]/100 = 1.01
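The whole function-point computation, restated as a short sketch (the category names EI/EO/EQ/ILF/EIF are the standard function-point terms for the five counts in the question):

```python
# Function point metric: FP = UFP * VAF,
# VAF = 0.65 + 0.01 * (sum of the 14 degree-of-influence ratings).
counts  = {"EI": 30, "EO": 60, "EQ": 23, "ILF": 8, "EIF": 2}
weights = {"EI": 4,  "EO": 5,  "EQ": 4,  "ILF": 10, "EIF": 7}
ufp = sum(counts[k] * weights[k] for k in counts)   # 606

ratings = [0] * 4 + [3] * 4 + [4] * 6   # 4 N/A, 4 rated 3, 6 rated 4
vaf = 0.65 + 0.01 * sum(ratings)        # 0.65 + 0.36 = 1.01
fp = ufp * vaf
print(round(fp, 2))  # 612.06
```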


The values of McCabe’s Cyclomatic complexity of Program-X, Program-Y and Program-Z respectively
are
(A) 4, 4, 7
(B) 3, 4, 7
(C) 4, 4, 8
(D) 4, 3, 8

Answer: (A)

Explanation:
The cyclomatic complexity of a structured program[a] is defined
with reference to the control flow graph of the program, a directed
graph containing the basic blocks of the program, with an edge
between two basic blocks if control may pass from the first to the
second. The complexity M is then defined as.

M = E − N + 2P,
where

E = the number of edges of the graph.


N = the number of nodes of the graph.
P = the number of connected components.

Source: http://en.wikipedia.org/wiki/Cyclomatic_complexity

For the first program X: E = 11, N = 9, P = 1, so M = 11 - 9 + 2*1 = 4

For the second program Y: E = 10, N = 8, P = 1, so M = 10 - 8 + 2*1 = 4
For the third program Z: E = 22, N = 17, P = 1, so M = 22 - 17 + 2*1 = 7

In a software project, COCOMO (Constructive Cost Model) is used to estimate

(A) effort and duration based on the size of the software


(B) size and duration based on the effort of the software
(C) effort and cost based on the duration of the software
(D) size, effort and duration based on the cost of the software

Answer: (A)

Explanation: The basic COCOMO equations take the form

Effort Applied (E) = a_b * (KLOC)^(b_b) [person-months]

Development Time (D) = c_b * (Effort Applied)^(d_b) [months]

As can be observed from the above equations, effort and duration are on the L.H.S., whereas KLOC (lines of
code), which measures program size, is on the R.H.S. Hence, the answer is (A).
A software organization has been assessed at SEI CMM Level 4. Which of the following does the
organization need to practice beside Process Change Management and Technology Change Management
in order to achieve Level 5?

(A) Defect Detection


(B) Defect Prevention
(C) Defect Isolation
(D) Defect Propagation

Answer: (B)
In SEI CMM, the 5 maturity levels are Initial -> Repeatable -> Defined -> Managed -> Optimizing,

where the goal of the 5th level is to prevent the occurrence of defects.

A software project involves execution of 5 tasks T1, T2, T3, T4 and T5 of duration 10, 15, 18, 30 and 40
days, respectively. T2 and T4 can start only after T1 completes. T3 can start after T2 completes. T5 can
start only after both T3 and T4 complete. What is the slack time of the task T3 in days?

(A) 0
(B) 3
(C) 18
(D) 30

Answer: (A)

Explanation: Given durations T1 = 10, T2 = 15, T3 = 18, T4 = 30, T5 = 40.

T3: EST = 10 + 15 = 25 (T1 and T2 must complete first)
T5: EST = max(10 + 15 + 18, 10 + 30) = 43
T3: LST = 43 - 18 = 25
Slack time of T3 = LST - EST = 25 - 25 = 0
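The same EST/LST computation can be done programmatically; a minimal Python sketch of the forward and backward passes over the five tasks:

```python
durations = {"T1": 10, "T2": 15, "T3": 18, "T4": 30, "T5": 40}
preds = {"T1": [], "T2": ["T1"], "T3": ["T2"], "T4": ["T1"], "T5": ["T3", "T4"]}
order = ["T1", "T2", "T3", "T4", "T5"]  # a topological order

# Forward pass: earliest start (EST) and earliest finish (EFT)
est, eft = {}, {}
for t in order:
    est[t] = max((eft[p] for p in preds[t]), default=0)
    eft[t] = est[t] + durations[t]

# Backward pass: latest finish (LFT) and latest start (LST)
succs = {t: [s for s in preds if t in preds[s]] for t in preds}
lst, lft = {}, {}
for t in reversed(order):
    lft[t] = min((lst[s] for s in succs[t]), default=eft[t])
    lst[t] = lft[t] - durations[t]

slack = {t: lst[t] - est[t] for t in order}  # slack of T3 comes out 0
```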

Consider the following program module:


int module1(int x, int y) {    /* computes gcd(x, y) by repeated subtraction */
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
What is Cyclomatic complexity of the above module?
(A) 1
(B) 2
(C) 3
(D) 4
Answer: (C)

Explanation: There are 2 decision points in this module:

1. the while condition, and
2. the if condition

Hence cyclomatic complexity = number of decision points + 1 = 2 + 1 = 3.


Assume that the delivered lines of code L of a software is related to the effort E in person months and
duration t in calendar months by the relation L = P * (E/B)^(1/3) * t^(4/3), where P and B are two constants for the
software process and skills factor. For a software project, the effort was estimated to be 20 person months
and the duration was estimated to be 8 months. However, the customer asked the project team to complete
the software project in 4 months. What would be the required effort in person months?

(A) 10
(B) 40
(C) 160
(D) 320

Answer: (D)

Explanation:
Given,
Initial Effort in Person, E1 = 20 and Initial time, T1 = 8 months
Final Effort in Person, E2 = ?
Final time, T2 = 4 months
Since L, P and B are unchanged, equating the two expressions:
P * (E1/B)^(1/3) * t1^(4/3) = P * (E2/B)^(1/3) * t2^(4/3)
=> E2 = E1 * (t1/t2)^4 = 20 * (8/4)^4 = 320.
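The algebra reduces to a one-liner; a quick Python check:

```python
E1, t1, t2 = 20, 8, 4
# With L, P and B fixed: E1^(1/3) * t1^(4/3) = E2^(1/3) * t2^(4/3).
# Cubing both sides and rearranging gives E2 = E1 * (t1 / t2)^4.
E2 = E1 * (t1 / t2) ** 4  # 20 * 2^4 = 320
```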

A software was tested using the error seeding strategy in which 20 errors were seeded in the code. When
the code was tested using the complete test suite, 16 of the seeded errors were detected. The same test
suite also detected 200 non-seeded errors. What is the estimated number of undetected errors in the code
after this testing?
(A) 4
(B) 50
(C) 200
(D) 250

Answer: (B)

Explanation: Error seeding, as the name implies, seeds the code with some known errors; that is, artificial
errors are deliberately introduced into the program. The number of these seeded errors detected in the
course of the standard testing procedure is determined. These values, in conjunction with the number of
unseeded errors detected, can be used to predict:
• The number of errors remaining in the product.
• The effectiveness of the testing strategy.
Let N be the total number of defects in the system and let n of these defects be
found by testing.
Let S be the total number of seeded defects, and let s of these defects be found
during testing.
n/N = s/S
or
N = S × n/s
Defects still remaining after testing = N–n = n×(S – s)/s =200*(20-16)/16=50
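The estimator is easy to wrap in a small Python function (it is a capture-recapture style estimate):

```python
def undetected_errors(seeded, seeded_found, real_found):
    """Estimate errors left after testing from an error-seeding experiment.

    Total real errors N = S * n / s; undetected = N - n.
    """
    total = seeded * real_found / seeded_found
    return total - real_found

# 20 errors seeded, 16 of them found, 200 real (non-seeded) errors found
remaining = undetected_errors(20, 16, 200)  # -> 50.0
```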

What is the availability of a software with the following reliability figures?


Mean Time Between Failure (MTBF) = 25 days
Mean Time To Repair (MTTR) = 6 hours

(A) 1%
(B) 24%
(C) 99%
(D) 99.009%

Answer: (D)

Explanation: Mean time between failures is not simply the average time something works before failing;
it is the average time between failures: MTBF = total uptime / number of breakdowns.
Mean time to repair is the average time taken to repair something: MTTR = total downtime / number of
breakdowns.
Availability = total uptime / (total uptime + total downtime)
= MTBF / (MTBF + MTTR) * 100
= 25*24 / (25*24 + 6) * 100
= 99.009%
Therefore, the answer is (D).
This solution is contributed by Shashank Shanker khare
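A quick Python check of the computation (converting MTBF to hours so the units match MTTR):

```python
def availability(mtbf_hours, mttr_hours):
    """Availability as a percentage: uptime / (uptime + downtime)."""
    return mtbf_hours / (mtbf_hours + mttr_hours) * 100

# MTBF = 25 days = 25 * 24 hours; MTTR = 6 hours
a = availability(25 * 24, 6)  # ~99.0099 %
```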

The Function Point (FP) calculated for a software project are often used to obtain an estimate of Lines of
Code (LOC) required for that project. Which of the following statements is FALSE in this context.

(A) The relationship between FP and LOC depends on the programming language used to implement the
software.
(B) LOC requirement for an assembly language implementation will be more for a given FP value, than
LOC for implementation in COBOL
(C) On an average, one LOC of C++ provides approximately 1.6 times the functionality of a single LOC
of FORTRAN
(D) FP and LOC are not related to each other
Answer: (D)

Explanation: FP and LOC are related; the LOC needed per function point depends on the implementation
language. As language levels go up, fewer statements are required to code one function point: COBOL
may require about 105 statements per function point, while PHP needs only about 67. Statement (D),
which claims FP and LOC are unrelated, is therefore the FALSE one.
process, the Mean Time To Repair (MTTR) increased by 5 days.
What is the MTBF of the enhanced software

(A) 205 days


(B) 300 days
(C) 500 days
(D) 700 days

Answer: (C)

Explanation:
Availability = MTBF / (MTBF + MTTR)
Before: 0.9 = 200 / (200 + MTTR) => MTTR = 22.22 days
After: 0.95 = MTBF / (MTBF + 22.22 + 5) => MTBF = 517.2 days, nearest to option (C)
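A Python sketch of the same two steps. The starting figures (MTBF = 200 days at 90% availability, rising to 95% while MTTR grows by 5 days) are assumptions read off the explanation, since the question text above is truncated:

```python
# Figures assumed from the (truncated) question: MTBF = 200 days at 90%
# availability; availability then rises to 95% while MTTR grows by 5 days.
mtbf1, avail1, avail2 = 200, 0.90, 0.95

mttr1 = mtbf1 / avail1 - mtbf1         # 0.9 = 200/(200 + MTTR) -> ~22.22 days
mttr2 = mttr1 + 5
mtbf2 = avail2 * mttr2 / (1 - avail2)  # 0.95 = MTBF/(MTBF + mttr2) -> ~517 days
```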

In a data flow diagram, the segment shown below is identified as having transaction flow characteristics,
with p2 identified as the transaction center

A first level architectural design of this segment will result in a set of process modules with an associated
invocation sequence. The most appropriate architecture is

(A) p1 invokes p2, p2 invokes either p3, or p4, or p5


(B) p2 invokes p1, and then invokes p3, or p4, or p5
(C) A new module Tc is defined to control the transaction flow. This module Tc first invokes pl and then
invokes
(D) A new module Tc is defined to control the transaction flow. This module Tc invokes p2. p2 invokes
p1, and then invokes p3, or p4, or p5

Answer: (D)

Explanation: In transaction mapping, a new transaction controller module Tc is introduced at the top of
the program structure. Tc invokes the transaction center p2; p2 first invokes the incoming-flow module p1
and then dispatches to exactly one of p3, p4 or p5 according to the transaction type.

The cyclomatic complexity of the flow graph of a program provides

(A) an upper bound for the number of tests that must be conducted to ensure that all statements have been
executed at most once
(B) a lower bound for the number of tests that must be conducted to ensure that all statements have been
executed at most once
(C) an upper bound for the number of tests that must be conducted to ensure that all statements have been
executed at least once
(D) a lower bound for the number of tests that must be conducted to ensure that all statements have been
executed at least once

Answer: (C)

In the Spiral model of software development, the primary determinant in selecting activities in each
iteration is
(A) Iteration size
(B) Cost
(C) Adopted process such as Rational Unified Process or Extreme Programming
(D) Risk

Answer: (D)

Explanation: Spiral model is used to discover all risks associated as early as possible.

Find if the following statements in the context of software testing are TRUE or FALSE.
(S1) Statement coverage cannot guarantee execution of loops in a program under test.

(S2) Use of independent path testing criterion guarantees execution of each loop in a program under test
more than once.
(A) True, True
(B) True, False
(C) False, True
(D) False, False

Answer: (A)
Explanation: S1 is true because statement coverage only requires every statement to execute at least once,
so it cannot guarantee how (or how often) a loop iterates. S2 is true because the independent (basis) path
criterion includes a path that traverses the loop's back edge, forcing each loop to execute more than once.

Which of the following are NOT considered when computing function points for a software project?
 (O1) External inputs and outputs
 (O2) Programming language to be used for the implementation
 (O3) User interactions
 (O4) External interfaces
 (O5) Number of programmers in the software project
 (O6) Files used by the system
(A) O2, O3
(B) O1, O5
(C) O4, O6
(D) O2, O5

Answer: (D)

Explanation: Unadjusted FP is computed from the numbers of external inputs, external outputs, user
inquiries, files and external interfaces. Neither the programming language (O2) nor the number of
programmers (O5) enters into the computation.
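As an illustration, an unadjusted FP count is just a weighted sum over those five item kinds. The weights below are the standard average-complexity values and are an assumption, since the question does not supply them:

```python
# Average-complexity weights from the standard function point table
# (an assumption -- the question does not supply the weights)
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
           "files": 10, "interfaces": 7}

def unadjusted_fp(counts):
    """counts maps each of the five item kinds to how many the system has."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

ufp = unadjusted_fp({"inputs": 3, "outputs": 2, "inquiries": 1,
                     "files": 1, "interfaces": 0})
```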

A software project plan has identified ten tasks with each having dependencies as given in the following
table:
Task Depends On
T1 –
T2 T1
T3 T1
T4 T1
T5 T2
T6 T3
T7 T3, T4
T8 T4
T9 T5, T7, T8
T10 T6, T9
Answer the following questions:
(Q1) What is the maximum number of tasks that can be done concurrently?
(Q2) What is the minimum time required to complete the project, assuming that each task requires one
time unit and there is no restriction on the number of tasks that can be done in parallel ?
(A) 5, 5
(B) 4, 5
(C) 5, 4
(D) 4, 4

Answer: (B)
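Both answers can be checked with a small level-by-level pass over the dependency table; a Python sketch assuming unit task durations and unlimited parallelism:

```python
deps = {"T1": [], "T2": ["T1"], "T3": ["T1"], "T4": ["T1"], "T5": ["T2"],
        "T6": ["T3"], "T7": ["T3", "T4"], "T8": ["T4"],
        "T9": ["T5", "T7", "T8"], "T10": ["T6", "T9"]}
order = ["T1", "T2", "T3", "T4", "T5", "T6", "T7", "T8", "T9", "T10"]

# Each task runs one time unit after all its predecessors finish
slot = {}
for t in order:  # already a topological order
    slot[t] = 1 + max((slot[p] for p in deps[t]), default=0)

# Group tasks by time slot to see how many run concurrently
levels = {}
for t, s in slot.items():
    levels.setdefault(s, []).append(t)

max_parallel = max(len(ts) for ts in levels.values())  # Q1 -> 4 (T5,T6,T7,T8)
min_time = max(slot.values())                          # Q2 -> 5
```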
A software engineer is required to implement two sets of algorithms for a single set of matrix operations
in an object oriented programming language; the two sets of algorithms are to provide precisions of 10^-3
and 10^-6, respectively. She decides to implement two classes, Low Precision Matrix and High Precision
Matrix, providing precisions 10^-3 and 10^-6 respectively. Which one of the following is the best
alternative for the implementation?
 (S1) The two classes should be kept independent.
 (S2) Low Precision Matrix should be derived from High Precision Matrix.
 (S3) High Precision Matrix should be derived from Low Precision Matrix.
 (S4) One class should be derived from the other; the hierarchy is immaterial.
(A) S1
(B) S2
(C) S3
(D) S4

Answer: (B)
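A minimal sketch of why (B) is attractive: the high precision class carries the (expensive, general) algorithms, and the low precision subclass reuses them wholesale, overriding only the tolerance. The Newton-iteration entry below is purely illustrative; the question never specifies the actual matrix algorithms:

```python
class HighPrecisionMatrix:
    """Operations iterate until the residual falls below TOLERANCE (1e-6)."""
    TOLERANCE = 1e-6

    def sqrt_entry(self, x):
        # Newton iteration on one entry, standing in for the real (unspecified)
        # matrix algorithms; stops once within self.TOLERANCE of the answer.
        guess = x or 1.0
        while abs(guess * guess - x) > self.TOLERANCE:
            guess = (guess + x / guess) / 2
        return guess

class LowPrecisionMatrix(HighPrecisionMatrix):
    """Inherits every algorithm unchanged; only the stopping rule loosens."""
    TOLERANCE = 1e-3
```

Deriving Low Precision Matrix from High Precision Matrix means nothing in the base class needs to be rewritten; the subclass merely relaxes the tolerance.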
Which of the following requirement specifications can be validated?
(S1) If the system fails during any operation, there should not be any loss of data
(S2) The system must provide reasonable performance even under maximum load conditions
(S3) The software executable must be deployable under MS Windows 95, 2000 and XP
(S4) User interface windows must fit on a standard monitor's screen
(A) S4 and S3
(B) S4 and S2
(C) S3 and S1
(D) S2 and S1

Answer: (C)

Explanation: S2 and S4 cannot be validated: what counts as "reasonable performance", and what is a
"standard" monitor size? S1 and S3 state checkable conditions, so they can be validated.
Software does not wear-out in the traditional sense of the term, but software does tend to deteriorate as it
evolves, because :
(A) Software suffers from exposure to hostile environments.
(B) Defects are more likely to arise after software has been used often.
(C) Multiple change requests introduce errors in component interactions.
(D) Software spare parts become harder to order.

Answer: (C)

Explanation: Software doesn't wear out, but as the environment and requirements change, repeated
change requests gradually introduce errors in component interactions and make it less effective.
Option (C) is correct.

Software re-engineering is concerned with:


(A) Re-constructing the original source code from the existing machine (low – level) code program and
modifying it to make it more user – friendly.
(B) Scrapping the source code of a software and re writing it entirely from scratch.
(C) Re-organising and modifying existing software systems to make them more maintainable.
(D) Translating source code of an existing software to a new machine (low – level) language.

Answer: (C)

Explanation: Software Re- engineering is the examination and alteration of a system to reconstitute it in
a new form. In other words, Software Re- engineering is a process of software development which is done
to improve the maintainability of a software system.
Option (C) is correct.

Which of the following is not a key issue stressed by an agile philosophy of software engineering ?
(A) The importance of self-organizing teams as well as communication and collaboration between team
members and customers.
(B) Recognition that change represents opportunity.
(C) Emphasis on rapid delivery of software that satisfies the customer.
(D) Having a separate testing phase after a build phase.

Answer: (D)

Explanation: List the key issues stressed by an agile philosophy of software engineering:
 The importance of self-organizing teams
 Communication and collaboration between team members and customers
 Recognition that change represents opportunity
 Emphasis on rapid delivery of software that satisfies the customer
Option (D) is correct.

What is the normal order of activities in which traditional software testing is organized?
(a) Integration Testing
(b) System Testing
(c) Unit Testing
(d) Validation Testing

(A) (1)
(B) (2)
(C) (3)
(D) (4)

Answer: (B)

Explanation: Traditional software testing is organized in the following sequence:

1. Unit Testing
2. Integration Testing
3. Validation Testing
4. System Testing
So, option (B) is correct.

Which of the following testing techniques ensures that the software product runs correctly after the
changes during maintenance ?
(A) Path Testing
(B) Integration Testing
(C) Unit Testing
(D) Regression Testing

Answer: (D)

Explanation: Path testing is an approach to testing where you ensure that every path through a program
has been executed at least once.
Integration testing is the phase in software testing in which individual software modules are combined
and tested as a group.
Unit testing is a level of software testing where individual units/ components of a software are tested. The
purpose is to validate that each unit of the software performs as designed. A unit is the smallest testable
part of any software.
Regression testing is a type of software testing that ensures that previously developed and tested
software still performs the same way after it is changed or interfaced with other software.
Option (D) is correct.

Which of the following statements about ERP system is true ?


(A) Most ERP software implementations fully achieve seamless integration.
(B) ERP software packages are themselves combinations of separate applications for manufacturing,
materials, resource planning, general ledger, human resources, procurement and order entry.
(C) Integration of ERP systems can be achieved in only one way.
(D) An ERP package implemented uniformly throughout an enterprise is likely to contain very flexible
connections to allow changes and software variations.

Answer: (B)

Explanation: Enterprise resource planning (ERP) software packages are combinations of separate
applications used by business firms for manufacturing, materials, resource planning, general ledger,
human resources, procurement and order entry.
So, option (B) is correct.

Which of the following is not a Clustering method ?


(A) K – Mean method
(B) Self Organizing feature map method
(C) K – nearest neighbor method
(D) Agglomerative method

Answer: (C)

Explanation: k-means clustering is a method of vector quantization, originally from signal processing,
that is popular for cluster analysis in data mining.
Self-Organizing Map Self Organizing Map (SOM) provides a data visualization technique. SOM also
represents clustering concept by grouping similar data together.
k-nearest neighbors algorithm (k-NN) is a non-parametric method used for classification and regression.
Agglomerative hierarchical clustering is a bottom-up clustering method where clusters have sub-clusters,
which in turn have sub-clusters, etc.
Option (C) is correct.

Consider the following:

A. Condition Coverage                p. Black box testing
B. Equivalence Class partitioning    q. System testing
C. Volume Testing                    r. White box testing
D. Beta Testing                      s. Performance testing

Matching A, B, C, D in the same order gives:


(A) r, p, s, q
(B) p, r, q, s
(C) s, r, q, p
(D) q, r, s, p

Answer: (A)

Explanation: Condition coverage is a white box technique (r); equivalence class partitioning is a black
box technique (p); volume testing exercises the system under large data volumes, a performance concern
(s); beta testing is performed by end users on the complete system (q).
With respect to CRT, the horizontal retrace is defined as:
(A) The path an electron beam takes when returning to the left side of the CRT.
(B) The path an electron beam takes when returning to the right side of the CRT.
(C) The technique of turning the electron beam off while retracing.
(D) The technique of turning the electron beam on/off while retracing.

Answer: (A)

Explanation: In computer graphics Horizontal Retrace is defined as: The path an electron beam takes
when returning to the left side of the CRT.
So, option (A) is correct.
The three aspects of Quantization, programmers generally concerned with are:
(A) Coding error, Sampling rate and Amplification
(B) Sampling rate, Coding error and Conditioning
(C) Sampling rate, Aperture time and Coding error
(D) Aperture time, Coding error and Strobing

Answer: (C)

Explanation: The three aspects of quantization, programmers generally concerned with are
1. Sampling rate
2. Aperture time
3. Coding error
So, option (C) is correct.
