
Software Testing Basics

1. In which Software Life cycle phase does testing occur?


Ans : There are a number of phases in the SDLC. They are:

1.Requirement

2.Analysis

3.Design

4.Code

5.Unit Test

6.Integration Test

7.System Test

8.UAT Test

9.Production/Release/Maintenance

Dedicated testing starts at the Unit Test phase, which is done by the developers, and continues through the Integration, System and UAT phases.
2. Can you explain PDCA cycle and where does testing fit?
Ans : PDCA means PLAN , DO , CHECK , ACT.

Plan: - Define your goal and the plan of how you will achieve that goal.
Do / Execute: - Execute according to the plan and strategy decided during the plan stage.
Check: - Check / test to make sure that we are moving according to plan and getting the desired results.
Act: - If any issues are found during the check step, take appropriate corrective action and revise the plan.
So developers and other stakeholders of the project do the "plan and build" parts of the cycle, while testers do the check part.
3. What is the difference between white box, black box and gray box testing?

Ans : White box : Testing based on an analysis of the Internal structure of the
component or system.

Black box : Testing, either functional or non-functional, without reference to the internal structure of the component or system.
Eg : a calculator - we give input and check the output.

Gray box : Combination of white box and black box.


The tester designs the test input data with some knowledge of the internal structure (for example, with input from the development team) and executes the tests through the external interface; this is known as gray box testing.

4. Define Defect?

Ans: A flaw in a component or system that can cause the component or system
to fail to perform its required function.

Figure: - Defect and the failure

Eg : an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

5. What is the difference between Defect and Failure?

Ans : Defect : An error found by the tester in the component or system; a flaw that can cause it to fail to perform its required function.

Failure : The deviation of the actual result from the expected result when the defective component or system is executed; a defect encountered at run time manifests as a failure.

6. What are the broader categories of defects?


Ans : There are mainly three categories of defect:-

Wrong: - The requirements have been implemented incorrectly. This defect is a variance from the given specification.

Missing: - A requirement given by the customer was not implemented. This is a variance from the specification, an indication that the specification was not implemented or that a requirement of the customer was not noted properly.

Extra: - A requirement incorporated into the product that was not given by the end customer. This is always a variance from the specification, but it may be an attribute desired by the user of the product. However, it is considered a defect because it is a variance from the existing requirements.

Figure: - Broader classification of defects

7. What is the difference between Verification and Validation?


Ans : Verification: Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled (are we building the product right?).

Validation : Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled (are we building the right product?).

8. How does testing affect risk?


Ans : A risk is a condition that can result in a loss. In many scenarios a risk can only be controlled, not eliminated completely. A defect normally converts into a risk. For instance, let's say you are developing an accounting application and the tax calculation is wrong. There is a strong possibility that this will lead to the risk of the company running at a loss. But if this defect is controlled, we can either remove this risk completely or minimize it. The diagram below shows how a defect gets converted into a risk and how, with proper testing, it can be controlled.

9. Does Increase in testing always mean good to the project?


Ans: No, an increase in testing does not always mean good for the product, company or project. In real test scenarios, out of 100% of the test plans only about 20% are critical from the business angle. Running those critical test plans will assure that the testing is adequate, rather than running the full 100% of the test plans again and again. Below is the graph which explains the impact of under-testing and over-testing. If you under-test a system the number of defects will increase, but on the contrary if you over-test a system your cost of testing will increase. Even if your defects come down, your cost of testing shoots up.

10. As a manager what process did you adopt to define testing policy?

Ans : Below are the important steps to define a testing policy in general, but they can change according to how you implement it in your organization. Let's understand in detail the steps of implementing a testing policy in an organization.
Definition: - The first thing any organization needs to do is define one unique definition of testing within the organization, so that everyone has the same mindset.
How to achieve: - How are we going to achieve our objective? Will there be a testing committee, will there be compulsory test plans which need to be executed, etc.?
Evaluate: - After testing is implemented in a project, how do we evaluate it? Are we going to derive metrics such as defects per phase or per programmer? Finally, it's important to let everyone know how testing has added value to the project.
Standards: - Finally, what are the standards we want to achieve by testing? For instance, we can define that more than 20 defects per KLOC will be considered below standard and code review should be done for the same.
Figure: - Establishing a testing policy
11. Should testing be only after build and execution?
Ans : No, it is not necessary that testing be done only after the build and execution; in most life cycles, testing begins from the design phase.

Figure: - Modern way of testing

12. Are number of defects more in design phase or coding phase?

Ans : The design phase is more error prone than the execution phase. One of the most frequent defects which occur during design is that it does not cover the complete requirements of the customer. Second, a wrong or bad architecture and technical decisions make the next phase, execution, more prone to defects. Because the design phase drives the execution phase, it is the most critical phase to test. The testing of the design phase can be done by good reviews. On average, 60% of defects occur during the design phase and 40% during the execution phase.

Figure: - Phase wise defect percentage

13. What kind of inputs do we need from the end user to start proper testing?
Ans: Test data is required from the end user so that proper, valid records can be entered into the database for testing. The test data should be the exact kind of records the end users will enter when they use the application.

14. What is the difference between Latent and Masked Defect?

Ans : Latent bug: An uncovered or unidentified bug which has existed in the system over a period of time is referred to as a latent bug. The bug may exist in the system for one or more versions of the software and may also be identified after its release.

The problems caused by latent bugs do not cause damage as of now; they are just waiting to reveal themselves later.

One good example of a latent bug is the cause of the Y2K problem. In the beginning the year was given only 2 numeric digits, but it actually needs 4. The problem prevailed in the system for a long time, was identified later and then fixed. The problem did not cause damage all of a sudden; it was triggered only by the year 2000, which certainly needs 4 numeric digits.

It is very difficult to identify a latent bug by using conventional testing techniques; it can be identified by doing a code review or by usability testing which foresees forthcoming problems.
Masked defect means:

A masked defect is a defect which hides another defect that has not been detected. That is, if there is an existing defect which has not been found, it prevents another defect from being reproduced; the other defect is masked by the previous defect.

Figure: - Latent and Masked defect

Eg : Suppose you are testing the Help section of an application and there is a defect in one link, say Add Customer, but this defect was not found by QA and it went live. Behind that broken Add Customer link there is another defect, say in the help for adding a cell number on the Add Customer page.

This Add cell number defect is a masked defect: it is masked by the Add Customer defect and cannot be detected until the first defect is fixed.

15. A defect which could have been removed during initial stage is removed in later
stage how does it affect cost?
Ans : Cost will increase if defects are found in later stages, as it will need rework from the beginning. This increases the amount of work to be done, the resources needed to complete the additional tasks, the time spent on it, the meetings and, of course, the cost.

16. In testing can you explain the concept of work bench?

Ans : In order to understand testing methodology we need to understand the concept of a workbench. A work bench is a way of documenting how a specific activity has to be performed. A work bench is described in terms of phases, steps and tasks.

There are the following sections in every work bench:-

Input: - Every task needs some defined input and entrance criteria. So for every work bench we need defined inputs. Input forms the first step of the work bench.
Execute: - This is the main task of the work bench which will transform the input into the expected output.
Check: - Check steps assure that the output after execution meets the desired result.
Production output: - If the check is right, the production output forms the exit criteria of the workbench.
Rework: - During the check step, if the output is not as desired then we need to start again from the execute step.

Figure: - Details phases in workbench


In real scenarios a project is not made of one work bench but of many connected work benches. A work bench gives you a way of organized thinking to perform any kind of task with proper testing. You can visualize every software phase as a work bench with execute and check steps. The most important point to note is that if we visualize any task as a work bench, by default we have the check part in the task. The figure below shows how every software phase can be visualized with the concept of a workbench. Let us understand the work bench concept in a detailed fashion:-
Requirement phase work bench: - Input is the customer's requirement, we execute the task of writing a requirement document, we check if the requirement document addresses all the customer needs, and the output is the requirement document.
Design phase work bench: - Input is the requirement document, we execute the
task of preparing a technical document, review / check is done to see if the
design document is technically correct and addresses all the requirements
mentioned in the requirement document and output is a technical document.

Execution phase work bench: - This is the actual execution of the project. Input
is the technical document; execution is nothing but implementation / coding
according to the technical document and output of this phase is the
implementation / source code.
Testing phase work bench: - This is the testing phase of the project. Input is the
source code which needs to be tested; execution is executing the test case and
output is the test results.
Deployment phase work bench: - This is the deployment phase. There are two inputs for this phase: one is the source code which needs to be deployed, and the other is the test results on which the deployment decision depends. The output of this phase is that the customer gets the product which he can now start using.
Maintenance phase work bench: - Input to this phase is the deployment
results, execution is implementing change request from the end customer, check
part is nothing but running regression testing after every change request
implementation and output is a new release after every change request
execution.
Figure: - Workbench and software life cycles

17. What's the difference between Alpha and Beta testing?

Ans : Alpha Testing : It is a form of UAT performed by end user at developer site
in a controlled environment.

Beta Testing : It is also a form of UAT, performed by end users at one or more customer sites in an uncontrolled environment.

18. Can you explain the concept of defect cascading?

Ans : Defect cascading - In the software development life cycle, when a defect introduced in one phase is not identified, it percolates to subsequent phases without getting noticed, resulting in an increase in the number of defects.
19. Can you explain how one defect leads to other defects?

Ans : Defect Cascading is a defect which is caused by other defect. So one


defect triggers another defect. For instance, in the accounting application below there is one defect which leads to negative taxation. The negative taxation defect affects the Ledger, which in turn affects four other modules.

Figure: - Defect cascading

Example – In a system there is an addition formula to be used on screen 1, and its output is used on screens 2 and 3 for further calculation, but by mistake the developer enters a multiplication sign instead of a plus sign; then the output on every screen will be wrong, as shown in the sketch below.
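The following is a small, hypothetical Python sketch of this cascade (the function names and numbers are made up for illustration): screen 1 should add two values but multiplies them by mistake, and screens 2 and 3 silently reuse that wrong output.

def screen1_total(a, b):
    return a * b          # defect: should be a + b

def screen2_tax(a, b, rate=0.1):
    return screen1_total(a, b) * rate                 # wrong, because it trusts screen 1

def screen3_grand_total(a, b):
    return screen1_total(a, b) + screen2_tax(a, b)    # wrong for the same reason

print(screen1_total(2, 3))        # expected 5, actual 6
print(screen2_tax(2, 3))          # expected 0.5, actual about 0.6
print(screen3_grand_total(2, 3))  # expected 5.5, actual about 6.6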

20. Can you explain what is Usability testing?

Ans : Testing to determine the extent to which the software product is


understood, easy to learn, easy to operate and attractive to the users under
specified conditions.

21. What are the different strategies of rollout to the end users?
Ans : Rollout is the last phase of any software development , a final check before
a successful deployment.

The different strategies of Rollout to the end user are


• Train the end user by creating a training program

• Assess or evaluate the technical skill of the end users

• Providing administrator rights to the user

• Intimating the user about the final release of the system.

22. Can you explain requirement traceability and its importance?

Ans : Tracing requirements from development to testing verifies that each


functional requirement used in the development matches those established for
test cases. This lets testing teams double-check that all required system features
are tested.
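A minimal sketch of a requirement traceability matrix in Python follows; the requirement and test case IDs are made up for illustration. Each requirement maps to the test cases that cover it, and an empty list flags an untested requirement.

traceability = {
    "REQ-001 Login with valid credentials": ["TC-01", "TC-02"],
    "REQ-002 Lock account after 3 failed attempts": ["TC-03"],
    "REQ-003 Password reset by email": [],   # gap: not covered yet
}

for requirement, test_cases in traceability.items():
    status = ", ".join(test_cases) if test_cases else "NOT COVERED"
    print(f"{requirement}: {status}")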

23. What is the difference between Pilot and Beta testing?

Ans : Pilot testing – It is a real-world test done by a group of users before the final deployment to find as many defects as possible. The main purpose of pilot testing is to catch potential problems before they become costly mistakes.

Beta testing – It is the testing done by end users before the final release when the
development and testing are essentially completed. The purpose is to find final
problems and defects.
Figure: - Pilot and Beta testing

24. How will you do a risk analysis during software testing?

Ans : The objective of performing risk analysis as part of test planning is to help allocate limited test resources to those software components that pose the greatest risk to the organization. Testing minimizes software risks. To make software testing most effective, it is important to ensure that all the high risks associated with the software will be tested first.

Risk analysis can be done in the following steps

1. Identify the risk analysis team

2. Identify risks - Creation of risk scenarios and checklist

3. Estimate the magnitude of each risk by using a formula, e.g. risk exposure = likelihood x impact (see the sketch after this list)

4. Select testing priorities - rank the risk and act on it.
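A minimal Python sketch of steps 3 and 4 follows, assuming a simple exposure formula (exposure = likelihood x impact on a 1-5 scale) and made-up component names; real projects may use different scales or formulas.

components = [
    {"name": "Tax calculation", "likelihood": 4, "impact": 5},
    {"name": "Report printing", "likelihood": 2, "impact": 2},
    {"name": "User login",      "likelihood": 3, "impact": 4},
]

# Step 3: estimate the magnitude of each risk.
for c in components:
    c["exposure"] = c["likelihood"] * c["impact"]

# Step 4: rank the risks so the riskiest components are tested first.
for c in sorted(components, key=lambda c: c["exposure"], reverse=True):
    print(f'{c["name"]}: exposure {c["exposure"]}')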

25. How do you conclude which section is most risky in your application?
Ans : During requirements analysis the critical sections are identified for testing. They are the screens or business scenarios used most by the customers in real time.

26. What does entry and exit criteria mean in a project?


Ans : Entry criteria – It ensures that the proper environment is in place to start the test process of a project, e.g. all hardware/software platforms are successfully installed and functional, and the test plan and test cases are reviewed and signed off.

Exit criteria - It ensures that the project is complete before exiting the test stage, e.g. planned deliverables are ready, high severity defects are fixed, and documentation is complete and updated.

Figure: - Entry and exit criteria

27. On what basis is the Acceptance plan prepared?

Ans : In any project the acceptance document is normally prepared using the following inputs. This can vary from company to company and from project to project.
Requirement document: - This document specifies what exactly is needed in the
project from the customer perspective.
Input from customer: - This can be discussions, informal talks, emails etc.

Project plan: - Project plan prepared by the project manager also serves as a
good input to finalize your acceptance test .

28. What's the relation between environment reality and test phases?

Ans : Environment reality becomes more important as test phases start moving
ahead . For instance during unit testing you need the environment to be least
real, but at the acceptance phase you should have a 100% real environment, or
we can say it should be the real environment .
29. What are different types of verifications?

Ans : The different types of verification are Walkthrough, Reviews, and


Inspection.

30. What's the difference between Inspections and Walkthroughs?


Ans : A 'walkthrough' is an informal meeting for evaluation or informational
purposes. Little or no preparation is usually required.

An 'inspection' is more formalized than a 'walkthrough', typically with 3-8 people


including a moderator, reader, and a recorder to take notes. The subject of the
inspection is typically a document such as a requirements spec or a test plan,
and the purpose is to find problems and see what's missing, not to fix anything.

Figure: - Walkthrough and Inspection

31. Can you explain regression testing and confirmation testing?

Ans : Confirmation testing (re-testing): After the bug is fixed or the build is modified, we verify that the previously failing functionality now works, by re-running the tests that originally found the defect.
Regression testing: After the bug is fixed, testing the application to check whether the fix has affected the remaining functionality of the application or not.

32. What do you mean by coverage and what are the different types of coverage
techniques?
Ans : Coverage is a form of white box testing activity. It describes the extent to which the code has been tested. The following are the types of coverage techniques:

• Statement Coverage - Execute all statements at least once.


• Decision Coverage- Execute each decision direction at least once.

• Condition Coverage - Execute each condition within a decision with all possible outcomes at least once.

• Decision/Condition Coverage - Execute each decision direction and each condition outcome at least once (a combination of decision and condition coverage). Treat all loop iterations as two-way conditions, exercising the loop zero times and one time.

• Multiple Condition Coverage - Execute all possible combinations of condition outcomes in each decision.

A small sketch contrasting statement and decision coverage is shown below.
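The small Python sketch below contrasts statement and decision coverage; apply_discount() is a made-up function used only for illustration.

def apply_discount(total, is_member):
    discount = 0
    if is_member:                 # the decision
        discount = total * 0.1
    return total - discount

# Statement coverage: this single test executes every statement
# (the 'if' body runs), yet only one outcome of the decision is exercised.
assert apply_discount(100, True) == 90

# Decision coverage additionally needs a test where the decision is False.
assert apply_discount(100, False) == 100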

33. How does fundamentally a coverage tool work?

Ans : Coverage tools are used to test the structure/logic/code of a program.

A coverage tool monitors or instruments the source code of a program and records which parts of it are executed when the tests run. The coverage criteria need to be defined first, based on the following:

1. Functional coverage - identifies how many functions were executed

2. Statement or line coverage - identifies how many lines of source code were executed

3. Condition/decision coverage - identifies how many decision and loop conditions were executed

4. Entry and exit coverage - identifies functions or procedures executed from beginning to end

5. Path coverage - identifies whether all possible paths from a given starting point in the code have been executed.

A minimal sketch of the core idea is shown below.
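The sketch below shows the core idea in Python using the built-in sys.settrace hook: hook into execution, record which lines ran, then report them. It is a simplified illustration only; real coverage tools add branch, function and path coverage on top of the same idea.

import sys

executed_lines = set()

def tracer(frame, event, arg):
    # record every executed line as (file, line number)
    if event == "line":
        executed_lines.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer

def absolute(x):
    if x < 0:
        return -x
    return x

sys.settrace(tracer)
absolute(5)            # only the positive path is executed
sys.settrace(None)

print(f"{len(executed_lines)} source lines were executed")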
34. What is configuration management?

Ans : The dynamic nature of most business activities causes software or system
changes. Configuration management also known as change control is process to
keep track of all changes done throughout the software life cycle. It involves
coordination and control of requirements, code, libraries, design, test efforts,
documentation etc. The primary objective of CM is to get the right change
installed at the right time.

35. Can you explain the concept of baseline in software development?

Ans : A baseline is the point at which some deliverable produced during the software engineering process is put under formal change control. Every baseline should have a date of when it was baselined, a description of what the baseline represents, and a version control number.

36. What are the different test plan documents in project?


Ans : The test plan documents that are prepared during a project are
Problem reporting documents
Change Requests documents
Deliverable documents
Test incident documents
Test summary documents
Test case documents.

37. How do test documents in a project span across software development life
cycle?
Ans : The test documents start with the Test Plan and Test Strategy documents in the planning phase. Later we come out with the high/low level scenarios document, and then the test cases. Here we also prepare a traceability document mapping requirements to test cases, etc. Last, we create a closure document and a lessons learnt document.

38. Can you explain inventories?

Ans : They are lists of items to be tested which have a meaning or purpose.

39. How do you do Analysis and design for testing projects?

Ans : Analysis and design for testing projects depends on project planning and
scope and can be done as following

• Creating Test cases

• Identifying test condition and scenarios

• Documentation of RTM and other Test deliverables

• Creating Test data

• Conducting Reviews

• Use of metrics like percentage of requirements covered by test conditions.

40. Can you explain calibration?


Ans : Calibration is a part of the ISO 9001 quality model. It includes tracing the accuracy of the devices used in production, development and testing. The devices used must be maintained and calibrated to ensure that they are working in good order. The records are maintained in the quality system database. Each record includes

• Tracking number

• Equipment description, type, model

• Location

• Calibration Intervals

• Calibration procedure

• Calibration history

• Calibration Due

41. Which test cases are written first: white box or black box?

Ans : Black box test cases are written at initial stage based on requirement
documents and project plan.

42. Can you explain Co-habiting software?

Ans : Applications installed on a machine (PC, mainframe, client-server) that also serves as a host for other applications, sharing common files and resources on the machine, are termed cohabiting software. Cohabiting software resides in the test environment but does not interact with the application being tested.

43. What different impact rating's you have used in your project?

Ans : Normally the impact rating for defects is classified into three types:

Minor : Very low impact; does not affect operations on a large scale.

Major : Affects operations on a very large scale.

Critical : Brings the system to a halt; a complete show stopper.

44. Can you explain what a test log is?

Ans : Test log is a document which contains information about the passed and
failed test cases.
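A minimal sketch of what a test log record might contain follows; the field names and values are hypothetical, not a prescribed format.

import datetime

test_log = [
    {"test_case": "TC-01", "status": "PASS", "executed_by": "tester1",
     "executed_on": datetime.date(2024, 1, 15), "remarks": ""},
    {"test_case": "TC-02", "status": "FAIL", "executed_by": "tester1",
     "executed_on": datetime.date(2024, 1, 15), "remarks": "Defect D-101 raised"},
]

passed = sum(1 for entry in test_log if entry["status"] == "PASS")
print(f"{passed}/{len(test_log)} test cases passed")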

45. Explain SDLC (Software development Life Cycle) in detail?

Ans : SDLC (Software Development Life Cycle) is a common software development model used to develop a new system or re-engineer an existing one.

This model follows a logical sequence of stages/phases; the output of one stage becomes the input for the next stage.

The following are the steps

Planning - The project management plan and other planning documents are developed.

Requirements Analysis - Functional requirements documents and user requirement documents are developed.

Design - Detailed requirements are transformed into complete, detailed system designs.

Implementation - The development environment is set up and the system designs are converted into code.

Integration and testing - The code is integrated and tested for defects.

Operations and maintenance - Operate and maintain the system in the production environment and, post delivery, in the customer environment.

46. Can you explain waterfall model?

Ans : The waterfall model is a software development approach in which the phases flow downwards one by one. The whole process of development is divided into separate phases; the next step/process does not start unless the previous step/process is completed.

The waterfall model consists of the following phases:


• Requirements gathering and Analysis

• System Design

• System Coding

• Integration and system test

• Implementation

• Operations and maintenance.

47. Can you explain big-bang waterfall model?

Ans : The waterfall model is divided into the big bang waterfall model and the phased waterfall model. In the big bang model all stages are frozen one at a time as the work flows down. The following are the stages of the big bang waterfall model:

• Requirement Analysis

• Design

• Implementation

• Testing

• Integration
• Maintenance

Figure: - SDLC in action (Waterfall big bang model)

48. Can you explain phased waterfall model?

Ans : It is a type of waterfall model where the project is divided into small phases
and delivered at intervals by different teams.

Different Teams work in parallel for each small phase and integrates at the end of
the project.

49. Explain Iterative model, Incremental model, Spiral model, Evolutionary model
and V-Model?
Ans : Incremental model - It is a non-integrated development model. This model divides the work into chunks and one team can work on many chunks. It is more flexible.

Evolutionary model - It is a more customer-focused model. In this model the software is divided into small units which are delivered to the customers earlier.

V-Model - Each development stage is paired with a corresponding test activity:
• Requirement stage - Acceptance test

• Specification stage - System test documents

• Design stage - Integration test

• Coding stage - Unit test

50. Explain Unit testing, Integration tests, System testing and Acceptance testing?

Ans : Unit testing - Testing performed on a single, stand-alone module or unit of code.

Integration tests - Testing performed on groups of modules to ensure that data and control are passed properly between modules.

System testing - Testing a predetermined combination of tests that, when executed successfully, meets requirements.

Acceptance testing - Testing to ensure that the system meets the needs of the
organization and the end user or customer (i.e., validates that the right system
was built).
51. What’s the difference between system and acceptance testing?

Ans : UAT vs. ST:

Objective of UAT: To test the application from the end users' perspective and to verify the business rules.
Objective of ST: To test the application for the correctness of how the application has been built and how its interfaces work.

Reference document for UAT: BRD
Reference document for ST: SRS or FS

Environment for UAT: Simulated live environment (production site)
Environment for ST: Test environment (developer's site)

Data used for UAT: Simulated live data
Data used for ST: Dummy data

System Testing – 1) A generic term that differentiates various types of higher order testing from unit testing; 2) a predetermined combination of tests that, when executed successfully, satisfy IT management that the system meets requirements.

Acceptance testing - Testing to ensure that the system meets the needs of the
organization and the end user or customer (i.e., validates that the right system
was built).
Figure: - V model cycle flow

52. Which is the best model?

Ans : The best model for testing depends on your company's 1) projects, 2)
resources, 3) budget, and 4) time allotted for testing.

Example : Many consider the Agile model to be the best, but a lot of companies use the V-Model because it suits their projects and constraints best.

53. What group of teams can do software testing?

Ans : When it comes to testing, everyone can be involved, right from the developer to the project manager to the customer. But below are the different types of team groups which can be present in a project.

Isolated test team: - There is a special team of testers which does only testing. The testing team is not tied to any one project. It is like having a pool of testers in an organization, who are picked up on demand by a project and pushed back to the pool after completion. This approach is costly, but the benefit is that we get a different angle of thinking from a group which is isolated from development. But yes, because it is a completely isolated team, it definitely comes at a cost.

Outsource: - In this we contract an external supplier, hire testing resources and have them do the testing for our project. Again, the coin has two sides. The good part is that resource handling is done by the external supplier, so you are freed from worrying about resources leaving the company, people management, etc. The bad side of the coin is that, because they are outsourced vendors, they do not have domain knowledge of your business. Second, at the initial stage you need to train them in the domain, which is again an added cost.

Inside test team: - In this approach we have a separate team which belongs to the project. The project allocates a separate budget for testing and this testing team works on this project only. The good side is that you have a dedicated team, and because they are involved in the project they have good knowledge of it. The bad part is that you need to budget for them; in short, it increases the project cost.

Developers as testers: - In this approach the developers of the project perform the testing activity. The good part of this approach is that developers have a very good idea of the inner details, so they can perform a good level of testing. The bad part is that because the developer and the tester are the same person, there is no different angle, so it is very likely that many defects will be missed.

QA / QC team: - In this approach the quality team is involved in testing. The good part is that because the QA team is involved, good quality of testing can be expected. The bad part is that the QA and QC teams of any organization are also involved with a lot of other activities, which can hamper the testing quality of the project. The diagram below shows the different team approaches.

Figure: - Types of teams

Testing techniques

54. Can you explain boundary value analysis?

Ans : This technique helps to create test cases around the boundaries of the valid data. Usually the values passed are the exact boundary values, plus or minus 1 at the lower boundary and plus or minus 1 at the higher boundary. This is a technique to check that the software handles boundaries correctly.
For example, if you are writing test cases for the condition that age should be greater than 18 and less than 35, then we have to write test cases for 17, 18, 19 and 34, 35, 36 (see the sketch below).
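A small Python sketch of these boundary test cases follows, assuming a hypothetical is_valid_age() function standing in for the application under test.

def is_valid_age(age):
    # stand-in for the rule: valid when age is greater than 18 and less than 35
    return 18 < age < 35

def test_lower_boundary():
    assert is_valid_age(17) is False
    assert is_valid_age(18) is False   # the boundary itself is invalid
    assert is_valid_age(19) is True

def test_upper_boundary():
    assert is_valid_age(34) is True
    assert is_valid_age(35) is False   # the boundary itself is invalid
    assert is_valid_age(36) is False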

55. What is BV in software testing?

Ans : Boundary value analysis is a methodology for designing test cases that concentrates software testing effort on cases near the limits of valid ranges. Boundary value analysis is a method which refines equivalence partitioning; it generates test cases that highlight errors better than equivalence partitioning. The trick is to concentrate software testing efforts at the extreme ends of the equivalence classes: at those points where input values change from valid to invalid, errors are most likely to occur. As well, boundary value analysis broadens the portions of the business requirement document used to generate tests; unlike equivalence partitioning, it takes into account the output specifications when deriving test cases.

56. Can you explain Equivalence partitioning?

Ans : This technique helps to narrow down the possible test cases using equivalence classes. An equivalence class is one which accepts the same type of input data. A few test cases for each equivalence class help to avoid exhaustive testing (see the sketch below).
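A small Python sketch of equivalence partitioning for the same age rule follows, using the same hypothetical is_valid_age() stand-in: one representative value per class instead of testing every possible age.

def is_valid_age(age):
    return 18 < age < 35

partitions = {
    "below valid range": (10, False),   # any age up to 18 behaves the same
    "valid range":       (25, True),    # any age from 19 to 34
    "above valid range": (50, False),   # any age from 35 upwards
}

for name, (representative, expected) in partitions.items():
    assert is_valid_age(representative) is expected, name
print("one test per equivalence class passed")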
57. Can you explain how state transition diagram can be helpful during testing?

Ans : It lists all possible state transition combinations, not just the valid ones, and so unveils combinations that were not identified, documented or dealt with in the requirements. It is beneficial to discover these defects before coding begins (see the sketch below).
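A minimal Python sketch follows, using a made-up order workflow: listing every (state, event) pair derives tests for the valid transitions and exposes the invalid combinations the requirements may never have mentioned.

valid_transitions = {
    ("new", "pay"):         "paid",
    ("paid", "ship"):       "shipped",
    ("shipped", "deliver"): "delivered",
}

states = ["new", "paid", "shipped", "delivered"]
events = ["pay", "ship", "deliver"]

for state in states:
    for event in events:
        target = valid_transitions.get((state, event))
        if target is None:
            # invalid combination: the requirements should say what happens here
            print(f"test that '{event}' is rejected in state '{state}'")
        else:
            print(f"test transition {state} --{event}--> {target}")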

58. Can you explain random testing?

Ans : A black box test design technique where test cases are selected, possibly
using a pseudo – random generation algorithm, to match an operational profile.
This technique can be used for testing non-functional attributes such as reliability
and performance.

59. Can you explain monkey testing?

Ans : Monkey Testing - Testing the application by entering data randomly, without any test case, methodology or plan. The intention is to break the system. Also known as destructive testing.

60. What is negative and positive testing?

Ans : Negative testing – It is a testing technique to verify how the system behaves in situations where it is expected to fail or reject the input.

Eg : Enter numbers in a field which accepts only alphabets.

Positive testing – It is a testing technique to verify that the system functions as expected with valid input. Also known as happy path testing.

Eg : Enter alphabets in a field which accepts only alphabets.

A small sketch of one positive and one negative test follows.
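The sketch below shows one positive and one negative test for such a field, assuming a hypothetical set_name() validator that raises ValueError on bad input, and that pytest is available for the negative case.

import pytest

def set_name(value):
    # stand-in for the field under test: alphabets only
    if not value.isalpha():
        raise ValueError("name must contain alphabets only")
    return value

def test_positive_alphabets_accepted():
    assert set_name("John") == "John"

def test_negative_numbers_rejected():
    with pytest.raises(ValueError):
        set_name("1234")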

61. Can you explain exploratory testing?


Ans : It is a testing technique where the tester learns about the system while testing. It is an informal software test that is not based on formal test plans or test cases.

62. What exactly are semi-random test cases?

Ans : As the name suggests, semi-random testing is nothing but controlling random testing and removing redundant test cases. So what we do is generate random test cases and apply equivalence partitioning to those test cases, which in turn removes the redundant cases, thus giving us semi-random test cases (see the sketch below).
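A minimal Python sketch of the idea follows, reusing the age rule from the earlier questions: generate random ages, then keep only one representative per equivalence class (below range, in range, above range) to remove the redundant cases.

import random

def classify(age):
    if age <= 18:
        return "below"
    if age < 35:
        return "valid"
    return "above"

random.seed(42)                                       # reproducible run
random_cases = [random.randint(0, 60) for _ in range(50)]

semi_random_cases = {}
for age in random_cases:
    semi_random_cases.setdefault(classify(age), age)  # keep first hit per class

print(f"50 random cases reduced to {len(semi_random_cases)}: {semi_random_cases}")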

63. Can you explain the concept of orthogonal array?

Ans : Orthogonal arrays are two-dimensional arrays of numbers in which choosing any two columns gives an even distribution of all pair-wise combinations of values.

They are useful for detecting pair-wise defects and can reduce redundancy.

64. Can you explain pair-wise defect fundamental?

Ans : An orthogonal array is a two-dimensional array in which, if we choose any two columns in the array, all the combinations of numbers will appear in those columns. The figure below shows a simple L9 (3^4) orthogonal array. In this, the 9 indicates that it has 9 rows, the 4 indicates that it has 4 columns, and the 3 indicates that each cell contains a 1, 2 or 3. Choose any two columns, say columns 1 and 2. They contain the combination values (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3). If you look closely, these values cover all the value pairs in the array. Compare the values with the combinations of columns 3 and 4, and they will also fall into some pair. This is applied in software testing to help us eliminate duplicate test cases.
Figure: - Sample orthogonal array

Now let’s try to apply orthogonal array in actual testing field. Let’s say we have a

scenario in which we need to test mobile handset with different plan type, term
and size. So below are the different situations:-

· Handset ( Nokia , 3G and Orange)

· Plan type ( 4x400 , 4x300 and 2x270)

· Term ( Long term , short term and mid term)

· Size ( 3 ,4 and 5 inch)

So we will have the following testing combinations.

· Each handset should be tested with every plan type, term and size.

· Each plan type should be tested with every Handset, term and size.

· Each size should be tested with every handset , plan type and term

So now you must be thinking that means we have 81 combinations, but we can
test all these conditions with only 9 test cases. Below is the orthogonal array for
the same.
Figure: - Orthogonal array in actual testing

Orthogonal array is very useful because most defects are pair wise defects and
with orthogonal array we can reduce redundancy to a huge extent.
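A Python sketch of the L9 (3^4) array applied to the handset example above follows; nine rows cover every pairwise combination of the four factors instead of the 3^4 = 81 exhaustive combinations. The array below is a standard L9 layout written out by hand.

from itertools import combinations

l9 = [  # each row holds an index (0, 1 or 2) into the level list of each factor
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

factors = {
    "handset": ["Nokia", "3G", "Orange"],
    "plan":    ["4x400", "4x300", "2x270"],
    "term":    ["long term", "short term", "mid term"],
    "size":    ["3 inch", "4 inch", "5 inch"],
}

names = list(factors)
tests = [{names[i]: factors[names[i]][row[i]] for i in range(4)} for row in l9]

# Verify the pairwise property: every pair of columns shows all 9 value pairs.
for a, b in combinations(range(4), 2):
    assert len({(row[a], row[b]) for row in l9}) == 9

for test in tests:
    print(test)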

65. Can you explain the concept of decision tables?

Ans : A decision table lists causes and effects in a matrix. Each column represents a unique combination.

It is divided into four quadrants (a small sketch follows the list below):

1. Condition

2. Condition alternatives/combinations

3. Action

4. Action entries
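A small Python sketch of a decision table follows, using a made-up login rule: the conditions form the top of each column and the expected action sits below; each column becomes one test case.

decision_table = [
    # conditions (valid user, valid password) -> expected action
    {"valid_user": True,  "valid_password": True,  "action": "show home page"},
    {"valid_user": True,  "valid_password": False, "action": "show error"},
    {"valid_user": False, "valid_password": True,  "action": "show error"},
    {"valid_user": False, "valid_password": False, "action": "show error"},
]

for i, rule in enumerate(decision_table, start=1):
    conditions = {k: v for k, v in rule.items() if k != "action"}
    print(f"Test {i}: conditions {conditions} -> expected '{rule['action']}'")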
66. How did you define severity ratings in your project?

Ans : Severity defines how severe the impact of the defect is. It can be rated as critical, major or minor.

"Critical" means there is a serious impact on the project or on further testing.


“Major” causes an output of the software to be incorrect or stops further testing.
“Minor” means something is wrong, but it does not directly affect the user of the
system or further testing, such as a documentation error or cosmetic GUI error.

Software process

67. What is a Software process?

Ans : A process is a series of steps to solve a problem. The figure below shows a pictorial view of how an organization has defined a way to solve any risk problem. In the diagram we have shown two branches: one is what exactly a process is, and the second branch shows a sample risk mitigation process for an organization. For instance, the risk mitigation process below defines what steps any department should follow to mitigate a risk.

· Identify the risk of the project by discussion, proper requirement gathering and
forecasting.

· Once you have identified the risks, prioritize which risk has the most impact and should be tackled on a priority basis.


· Analyze how the risk can be solved by proper impact analysis and planning.

· Finally using the above analysis we mitigate the risk.

Figure: - Software Process

68. What are the different cost elements involved in implementing process in an
organization?

Ans : Salary : This forms the major component of the cost of implementing any process - the salary of the employees. Normally, while implementing a process in a company, the organization either recruits people full time or shares a resource part time for implementing the process.

Consultants : If the process is new, it can also involve engaging consultants, which is again an added cost.

Training cost : Employees of the company also have to undergo training in order to implement the new process.

Tools : In order to implement the process, the organization will also need to buy tools, which again need to be budgeted for.
Figure: - Cost of implementing process

69. What is a model?

Ans : Model is nothing but best practices followed in an industry to solve issues
and problems. Models are not made in a day but are finalized and realized by
years of experience and continuous improvements.

Figure: - Model

E.g. CMMI, MBNQA

70. What is maturity level?

Ans : A maturity level is a well-defined evolutionary stage for achieving a mature


software process. Each level contains a set of goals that, when satisfied,
stabilizes specific aspects of the software development process.

Eg CMMI levels.
71. Can you explain the concept of process area in CMMI?

Ans : The following are the different process areas in CMMI:


1. Initial Process: This process involves software configuration management, software quality assurance, software subcontract management, software project tracking and oversight, software project planning and requirements management.
2. Defined Process: This process involves peer reviews, intergroup coordination, software product engineering, integrated software management, training program, organization process definition and organization process focus.
3. Managed Process: This process involves software quality management and quantitative process management.
4. Optimizing Process: Process change management, technology change management, and defect prevention.
72. Can you explain the concept of tailoring?

Ans : Tailoring is implemented in CMMI as the process of describing changes or adaptations to the organizational-level process for use on a particular project. Every project is different, and hence the organization's process definition may need to be amended. Tailoring guidelines describe what can and what cannot be modified, and identify the process components that are allowable for modification.

73. What is CMMI and what's the advantage of implementing CMMI in an


organization?

Ans : CMMI (Capability Maturity Model Integration) is a process improvement approach which helps organizations improve and evaluate their performance.

It is a framework that enables an organization to "build the right products right".

Advantages of CMMI are

1. Organizational activities get explicitly linked to business objectives

2. Integrates system engineering and software engineering into product


engineering

3. Provides more detailed coverage of the product life cycle


4. Increased visibility into organizational activities helps ensure the product or service meets the customers' requirements

5. Focus on Requirement management, QA

6. Keeps down cost

7. It is efficient and flexible as users select model representation as per


their business objectives.

74. What’s the difference between implementation and Institutionalization?

Ans : They are the techniques used in CMMI implementation.

Implementation - It is a task performed according to a process. This is the initial stage, when the organization implements any new process.

Institutionalization - It is the task performed according to an organization


standard. It is an ongoing process of implementation.

75. What are different models in CMMI?

Ans : The following are the different models in CMMI

1. CMMI for Services, V1.2 - this model is designed for service provider organizations that want to improve their ability to establish, manage and deliver services.

2. CMMI for Acquisition, V1.2 - this model is designed for acquisition organizations that want to improve their ability to acquire products and services.

3. CMMI for Development, V1.2 - this model is designed for development organizations that want to improve their ability to develop products and services.

76. Can you explain staged and continuous models in CMMI?

It is an organizational decision to choose the representation which suits their business type; quality models are designed for specific purposes. The two representations are:

Staged model - It uses predefined sets of process areas to define an improvement path for the organization as a whole, measured in maturity levels. Each maturity level is further decomposed into a number of process areas.

Continuous model - It allows the organization to select individual process areas and improve them independently, measured in capability levels.

77. Can you explain the different maturity levels in staged representation?

Maturity level of a process defines the nature and maturity present in the organization.
These levels help to understand and set a benchmark for the organization.
• Level 1 Initial – Processes are characterized as chaotic and ad-hoc, heroic
efforts required by individuals to successfully complete projects. A few Processes
are in place; successes may not be repeatable.

• Level 2 Managed – The organization has installed basic management controls. Software project tracking, requirements management, realistic planning, and configuration management processes are in place. At level 2, organizations are characterized as disciplined because they gain the ability to successfully repeat planning and tracking.

• Level 3 Defined- standard software development and maintenance processes


are integrated throughout an organization; a Software Engineering Process
Group is in place to oversee software processes, and training programs are used
to ensure understanding and compliance. Cost, schedule and functionality are
under control.

• Level 4 Quantitatively Managed – Processes are integrated as whole, metrics are


used to track productivity, processes and products. Project performance is
predictable and quality is consistently high.

• Level 5 Optimizing - The focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required. Projects strive to improve process capability and process performance.
78. Can you explain capability levels in continuous representation?

A capability level is a well-defined evolutionary plateau describing the organization's


capability relative to a process area. A capability level consists of related specific and
generic practices for a process area that can improve the organization's processes
associated with that process area. Each level is a layer in the foundation for continuous
process improvement.
Thus, capability levels are cumulative, i.e., a higher capability level includes the
attributes of the lower levels.
In CMMI models with a continuous representation, there are six capability levels
designated by the numbers 0 through 5.
• 0 - Incomplete
• 1 - Performed
• 2 - Managed
• 3 - Defined
• 4 - Quantitatively Managed
• 5 - Optimizing
A short description of each capability level is as follows:
Capability Level 0: Incomplete
An "incomplete process" is a process that is either not performed or partially performed.
One or more of the specific goals of the process area are not satisfied and no generic
goals exist for this level since there is no reason to institutionalize a partially performed
process.
This is tantamount to Maturity Level 1 in the staged representation.
Capability Level 1: Performed
A Capability Level 1 process is a process that is expected to perform all of the
Capability Level 1 specific and generic practices. Performance may not be stable and
may not meet specific objectives such as quality, cost, and schedule, but useful work
can be done. This is only a start, or baby-step, in process improvement. It means that
you are doing something but you cannot prove that it is really working for you.
Capability Level 2: Managed
A managed process is planned, performed, monitored, and controlled for individual
projects, groups, or stand-alone processes to achieve a given purpose. Managing the
process achieves both the model objectives for the process as well as other objectives,
such as cost, schedule, and quality. As the title of this level indicates, you are actively
managing the way things are done in your organization. You have some metrics that are
consistently collected and applied to your management approach.
Remember: metrics are collected and used at all levels of the CMMI, in both the staged
and continuous representations. It is a bitter fallacy to think that an organization can
wait until Capability Level 4 to use the metrics.

Capability Level 3: Defined


A capability level 3 process is characterized as a "defined process." A defined process is
a managed (capability level 2) process that is tailored from the organization's set of
standard processes according to the organization's tailoring guidelines, and contributes
work products, measures, and other process-improvement information to the
organizational process assets.
Capability Level 4: Quantitatively Managed
A capability level 4 process is characterized as a "quantitatively managed process." A
quantitatively managed process is a defined (capability level 3) process that is
controlled using statistical and other quantitative techniques. Quantitative objectives for
quality and process performance are established and used as criteria in managing the
process. Quality and process performance is understood in statistical terms and is
managed throughout the life of the process.
Capability Level 5: Optimizing
An optimizing process is a quantitatively managed process that is improved, based on
an understanding of the common causes of process variation inherent in the process. It
focuses on continually improving process performance through both incremental and
innovative improvements. Both the defined processes and the organization's set of
standard processes are targets of improvement activities.
Capability Level 4 focuses on establishing baselines, models, and measurements for
process performance. Capability Level 5 focuses on studying performance results
across the organization or entire enterprise, finding common causes of problems in how
the work is done (the process[es] used), and fixing the problems in the process. The fix
would include updating the process documentation and training involved where the
errors were injected.

79. Which model should we use and under what scenarios?

System Engineering - This model can be used for the development of a total system.
Software Engineering - This model can be used for the development of software systems.

80. How many process areas are present in CMMI and in what classification do they fall
in?

There are a total of 22 key process areas in CMMI. Ratings are awarded for levels 2 through 5.
Maturity Level 2 - Managed
• Project Requirement Management
• Project Planning
• Process & Product Quality Assurance
• Project Monitoring & control
• Measurement & Analysis

• Configuration management
• Supplier Agreement Management

Maturity Level 3 - Defined


• Requirement Development

• Technical solution
• Product Integration
• Verification
• Validation
• Organization process Focus
• Organizational process Definition
• Organizational Training
• Risk Management
• Decision Analysis & resolution
• Integrated Project management
Maturity Level 4 - Quantitatively managed
• Organization process performance
• Quantitative project Management
Maturity Level 5 - Optimizing
• Causal Analysis and Resolution
• Organizational Innovation & Deployment

81. What is the difference between the levels in CMMI?

The continuous representation consists of capability levels, while the staged


representation consists of maturity levels. The main difference between these two types
of levels is the representation they belong to and how they are applied:

* Capability levels, which belong to a continuous representation, apply to an


organization’s process-improvement achievement in individual process areas. There are
six capability levels, numbered 0 through 5.
* Maturity levels, which belong to a staged representation, apply to an organization’s
overall process-improvement achievement using the model. There are five maturity
levels, numbered 1 through 5. Each maturity level comprises a set of goals that, when
satisfied, improve processes. Maturity levels are measured by the achievement of the
goals that apply to a set of process areas.

82. What different sources are needed to verify authenticity for CMMI implementation?

Ans : There are three different sources from which an appraiser can verify whether the organization followed the process or not.

Instruments: - A survey or questionnaire provided to the organization, project or individuals before starting the assessment, so that the appraiser knows some basic details of the project beforehand.

Interviews: - A formal meeting between the appraiser and one or more members of the organization in which they are asked questions, and the appraiser makes judgments based on those interviews. During the interview the member represents some process areas or a role which he performs in the context of those process areas. For instance, the appraiser may interview a tester or programmer, asking him indirectly what metrics he has submitted to his project manager. By this the appraiser gets a fair idea of the CMMI implementation in that organization.

Documents: - A written work product which serves as evidence that a process is followed. It can be a hard copy, word document, email or any other type of written official proof.

Below is a pictorial view of the sources used to verify how compliant the organization is with CMMI.

Figure: - Different data source for verification

83. Can you explain SCAMPI process?


SCAMPI is an acronym for Standard CMMI Appraisal Method for Process Improvement.
A SCAMPI assessment must be led by an SEI Authorized SCAMPI Lead Appraiser.
SCAMPI is supported by the SCAMPI Product Suite, which includes the SCAMPI
Method Description, maturity questionnaire, work aids, and templates. Currently,
SCAMPI is the only method that can provide a rating, the only method recognized by
the SEI, and the method of most interest to organizations.
There are 3 SCAMPI methods
• SCAMPI class A Appraisal
• SCAMPI class B Appraisal
• SCAMPI class C Appraisal

84. How is appraisal done in CMMI?


The Appraisal Program oversees the quality and consistency of the SEI's process
appraisal technology and encourages its effective use. Its four main functions include
communications to the appraisal community; appraisal quality control; training,
authorizing, certifying, and providing resources for Lead Appraisers and Team Leaders;
and monitoring and reporting appraisal results.
Through the SEI Appraisal Program, the highest quality candidates are selected and
trained as Lead Appraisers. Ongoing training and resources are provided for Lead
Appraisers.
85. Which appraisal method class is the best?

Appraisal Classes
For benchmarking against other organizations, appraisals must result in consistent
ratings. The SEI has developed a document to assist in identifying or developing
appraisal methods that are compatible with the CMMI Product Suite. This document is
the Appraisal Requirements for CMMI (ARC).
SEI Appraisal Classes
The ARC describes a full benchmarking class of appraisal as Class A. Other CMMI-
based appraisal methods might be more appropriate for a given set of sponsor needs,
including self-assessments, initial appraisals, quick-look or mini-appraisals, incremental
appraisals, and external appraisals.
Thus, a particular appraisal method is declared an ARC Class A, B, or C appraisal
method. This designation implies the sets of ARC requirements that the method
developer has addressed when designing the method.
The SCAMPI family of appraisals includes Class A, B, and C appraisal methods.

SCAMPI A is the most rigorous method and the only method that can result in a rating.
SCAMPI B provides options in model scope, but the characterization of practices is
fixed to one scale and is performed on implemented practices.
SCAMPI C provides a wide range of options, including characterization of planned
approaches to process implementation according to a scale defined by the user.
Using SCAMPI B, every practice in the appraisal scope is characterized on a three
point scale indicating the risk of CMMI goal satisfaction if the observed practices were
deployed across the organizational unit. Model scope is not limited to the Process Areas
but could include sets of related practices.
SCAMPI C can be scoped at any level of granularity and the scale can be tailored to the
appraisal objectives, which might include the fidelity of observed practices to model/goal
achievement or the return on investment to the organization from implementing
practices.
Reliability, rigor, and cost might go down from A to B to C, but risk might go up.
Characteristics of Appraisal Classes

Characteristic                   Class A   Class B   Class C
Amount of objective evidence     High      Medium    Low
Ratings generated                Yes       No        No
Resource needs                   High      Medium    Low
Team size                        Large     Medium    Small

86. Can you explain the importance of PII in SCAMPI?

Ans : PII stands for Practice Implementation Indicators. The concept is based on the fundamental assumption that the performance of an activity or the implementation of a practice will always result in "footprints" that are attributable to that activity or practice.


PII is the Practice Implementation Indicator. As the name suggests, a PII serves as an indicator or evidence that a certain practice that supports a goal has been implemented. A PII could be a document and serve as proof.

87. Can you explain implementation of CMMI in one of the Key process areas?

A Process Area is a cluster of related practices in an area that, when implemented


collectively, satisfy a set of goals considered important for making significant
improvement in that area. All CMMI process areas are common to both continuous and
staged representations.
The continuous representation enables the organization to choose the focus of its
process improvement efforts by choosing those process areas, or sets of interrelated
process areas, that best benefit the organization and its business objectives. Although
there are some limits on what an organization can choose because of the dependencies
among process areas, the organization has considerable freedom in its selection.
Once you select the process areas, you must also select how much you would like to
improve the processes associated with those process areas (i.e., select the appropriate
capability level). Capability levels and generic goals and practices support the
improvement of processes in individual process areas.
Conversely, you will see that the staged representation encourages you to always look
at process areas in the context of the maturity level to which they belong. The process
areas are organized by maturity levels to reinforce this concept. When you use a
process area, you use the entire process area: all goals and all practices.
The CMMI Process Areas (PAs) can be grouped into the following four categories to
understand their interactions and links with one another regardless of their defined level:
• Process Management
• Project Management
• Engineering
• Support
Each process area is defined by a set of goals and practices. There are two categories
of goals and practices:
• Generic goals and practices: They are part of every process area.
• Specific goals and practices: They are specific to a given process area.
A process area is satisfied when company processes cover all of the generic and
specific goals and practices for that process area.

88. Explanation of all process areas with goals and practices?

89. Can you explain the process areas?


The following are the different process areas, grouped by maturity level:

1. Repeatable (Level 2): software configuration management, software quality assurance, software subcontract management, software project tracking and oversight, software project planning and requirements management.
2. Defined (Level 3): peer reviews, intergroup coordination, software product engineering, integrated software management, training program, organization process definition and organization process focus.
3. Managed (Level 4): software quality management and quantitative process management.
4. Optimizing (Level 5): process change management, technology change management, and defect prevention.


The different process areas can also be grouped as follows:

Project Management concepts: This consists of Project planning, Project monitoring and control, Risk management, Process and product quality assurance, Configuration management, Supplier agreement management, Integrated supplier management, and Measurement and analysis.
Engineering concepts: This includes Requirements development, Technical solution, Requirements management, Product integration, Verification, Validation, and Decision analysis and resolution.
Process Management concepts: This includes Organizational process definition and Integrated project management.
Integrated teaming concepts: This includes Integrated project management, Organizational environment for integration, and Integrated teaming.
Quantitative management concepts: This includes Quantitative project and process management.
Optimizing concepts: This includes Causal analysis and resolution, and Organizational innovation and deployment.
90. What is six sigma?

Motorola developed a concept called Six Sigma. Six Sigma focuses on defect rates, as
opposed to percent performed correctly.
“Sigma” is a statistical term meaning one standard deviation. Six Sigma means six
standard deviations. At the Six Sigma statistical level, only 3.4 items per million are
outside of the acceptable level. Thus, the Six Sigma quality level means that out of
every one million items/opportunities 999,996.6 will be correct, and not more than 3.4
will be defective.
Sigma level          | Defects per million opportunities
Level 1              | 690,000
Level 2              | 308,537
Level 3              | 66,807
Level 4              | 6,210
Level 5              | 233
Level 6 (Six Sigma)  | 3.4
Six Sigma at many organizations simply means a measure of quality that strives for
near perfection. Six Sigma is a disciplined, data-driven approach and methodology for
eliminating defects (driving toward six standard deviations between the mean and the
nearest specification limit) in any process -- from manufacturing to transactional and
from product to service.
The statistical representation of Six Sigma describes quantitatively how a process is
performing. To achieve Six Sigma, a process must not produce more than 3.4 defects
per million opportunities. A Six Sigma defect is defined as anything outside of customer
specifications. A Six Sigma opportunity is then the total quantity of chances for a defect.
Process sigma can easily be calculated using a Six Sigma calculator.
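As a rough illustration (not part of the original text), the sketch below computes defects per million opportunities (DPMO) and converts it to an approximate process sigma using Python's standard library and the conventional 1.5-sigma shift; the defect and opportunity counts are made-up values.

```python
from statistics import NormalDist

def process_sigma(defects, units, opportunities_per_unit, shift=1.5):
    """Approximate process sigma from defect counts (assumes the usual 1.5-sigma shift)."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    yield_fraction = 1 - dpmo / 1_000_000          # fraction of opportunities without a defect
    return dpmo, NormalDist().inv_cdf(yield_fraction) + shift

# Hypothetical example: 34 defects in 10,000 units, each with one opportunity for a defect
dpmo, sigma = process_sigma(defects=34, units=10_000, opportunities_per_unit=1)
print(f"DPMO = {dpmo:.1f}, approx. sigma level = {sigma:.2f}")
```

With these made-up numbers the result is a DPMO of 3,400 and a sigma level of roughly 4.2; plugging in 3.4 DPMO gives back the Six Sigma level of 6.0.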
The fundamental objective of the Six Sigma methodology is the implementation of a
measurement-based strategy that focuses on process improvement and variation
reduction through the application of Six Sigma improvement projects. This is
accomplished through the use of two Six Sigma sub-methodologies: DMAIC and
DMADV. The Six Sigma DMAIC process (define, measure, analyze, improve, control) is
an improvement system for existing processes falling below specification and looking for
incremental improvement. The Six Sigma DMADV process (define, measure, analyze,
design, verify) is an improvement system used to develop new processes or products at
Six Sigma quality levels. It can also be employed if a current process requires more
than just incremental improvement. Both Six Sigma processes are executed by Six
Sigma Green Belts and Six Sigma Black Belts, and are overseen by Six Sigma Master
Black Belts.
According to the Six Sigma Academy, Black Belts save companies approximately
$230,000 per project and can complete four to six projects per year. General Electric, one
of the most successful companies implementing Six Sigma, has estimated benefits on
the order of $10 billion during the first five years of implementation. GE first began Six
Sigma in 1995 after Motorola and Allied Signal blazed the Six Sigma trail. Since then,
thousands of companies around the world have discovered the far reaching benefits of
Six Sigma.
Many frameworks exist for implementing the Six Sigma methodology. Six Sigma
Consultants all over the world have developed proprietary methodologies for
implementing Six Sigma quality, based on the similar change management philosophies
and applications of tools.
91. Can you explain the different methodology for execution and design process in SIX
sigma?
The main focus of SIX sigma is on reducing defects and variations in processes. DMAIC and DMADV are the models used in most SIX sigma initiatives. DMADV is the model for designing processes, while DMAIC is for improving existing processes.

The DMAIC model has the below five steps:-

• Define: - Determine the project goals and customer deliverables.
• Measure: - Measure the current process and collect relevant data.
• Analyze: - Analyze the data to determine the root causes of defects and variation.
• Improve: - Improve the process by eliminating the causes of defects.
• Control: - Control the future process performance so that the improvements are sustained.

The DMADV model has the below five steps:-

• Define: - Determine the project goals and the requirements of customers (external and internal).
• Measure: - Assess customer needs and specifications.
• Analyze: - Examine process options to meet customer requirements.
• Design: - Develop the process to meet the customer requirements.
• Verify: - Check the design to ensure that it is meeting customer requirements.

92. What does executive leaders, champions, Master Black belt, green belts and black
belts mean?

SIX sigma is not only about techniques, tools and statistics; it depends mainly upon people. In SIX sigma there are five key players:-

• Executive leaders
• Champions
• Master black belts
• Black belts
• Green belts

Let’s try to understand the role of each player step by step.

Executive leaders: - They are the people who actually decide that the organization needs SIX sigma. They promote it throughout the organization and ensure the organization's commitment to SIX sigma. Executive leaders are mainly the CEO or members of the board of directors; in short, they are the ones who fund the SIX sigma initiative. They should believe that SIX sigma will improve the organization's processes and that it will succeed, and they should ensure that resources get proper training on SIX sigma, understand how it will benefit the organization, and track the metrics.

Champions: - A Champion is normally a senior manager of the company. He promotes SIX sigma mainly among the business users. He understands SIX sigma thoroughly, serves as a coach and mentor, selects projects, decides objectives, dedicates resources to black belts and removes obstacles which come in the way of black belts. Historically, champions always fight for a cause; in SIX sigma they fight to remove the hurdles faced by black belts.

Master Black Belt: - This role requires the highest level of technical capability in SIX sigma. Organizations that are just starting up with SIX sigma will normally not have this expertise in-house, so outsiders are usually recruited for the role. The main role of a Master Black Belt is to train, mentor and guide. He helps the executive leaders select candidates and the right projects, teaches the basics and trains resources. Master Black Belts regularly meet with black belts and green belts to train and mentor them.

Black Belt: - A Black belt leads a team on a selected project which has to be show-cased for SIX sigma. They are mainly responsible for finding out variations and seeing how these variations can be minimized. The Master black belt basically selects a project and trains resources, but black belts are the ones who actually implement it. A Black belt normally works on projects as a team lead or project manager. They are central to SIX sigma as they actually implement SIX sigma in the organization.

Green Belt: - Green belts assist black belts in their functional areas. They are mainly on projects and work part time on the SIX sigma implementation. They apply SIX sigma methodologies to solve problems and improve processes at the bottom level. They have just enough knowledge of SIX sigma, they help to define the base of the SIX sigma implementation in the organization, and they assist the black belts in the actual implementation.

93. What are the different kinds of variations used in six sigma?

Variation is the basis of six sigma. It defines how much change is happening in the output of a process; if a process is improved, this should reduce variations. In six sigma we identify variations in the process, control them and reduce or eliminate defects.

Now let's understand how we can measure variations. There are four basic ways of measuring variations: Mean, Median, Mode and Range. Let's understand each of these in more depth for better analysis.

Figure: - Different variations in Six sigma

Mean: - With the mean, variations are measured and compared using averaging techniques. For instance, the figure below shows two weekly measures of how many computers were manufactured; we have tracked two weeks, named Week 1 and Week 2. To calculate the variation using the mean, we calculate the mean of week 1 and week 2. From the calculations below we get 5.083 for week 1 and 2.85 for week 2, so we have a variation of 2.23.
Figure: - Measuring variations by using Mean

Median: - The median value is the mid point of our range of data. The mid point can be found by taking the difference between the highest and lowest values, dividing it by two and adding the lowest value. For instance, in the figure below, for week 1 we have 4 as the lowest value and 7 as the highest value. First we subtract the lowest value from the highest value, i.e. 7 - 4, then divide by two and add the lowest value. So for week 1 the median is 5.5 and for week 2 the median is 2.9, giving a variation of 5.5 - 2.9 = 2.6.

Figure: - Median for calculating variations


Range: - Range is the spread of values in a particular data range; in short, it is the difference between the highest and lowest values in the data range. For instance, for the recorded computer data of the two weeks we have found the range values by subtracting the lowest value from the highest.

Figure: - Range for calculating variations

Mode: - Mode is the most frequently occurring value in a data range. For instance, in our computer manufacturing data, 4 is the most frequent value in week 1 and 3 is the most frequent value in week 2, so the variation between these data ranges is 1.

Figure: - Mode for calculating variations
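Since the original figures are not reproduced here, the short sketch below recomputes the four variation measures for two hypothetical weekly data sets; the numbers are illustrative and are not the exact values behind the figures. Note that the "median" follows the definition used above (midpoint between lowest and highest value), not the statistical median.

```python
from statistics import mean, mode

def describe(values):
    midpoint = min(values) + (max(values) - min(values)) / 2   # "median" as defined above
    data_range = max(values) - min(values)
    return mean(values), midpoint, mode(values), data_range

week1 = [4, 5, 7, 4, 6, 5]   # computers manufactured per day (hypothetical)
week2 = [2, 3, 3, 4, 2, 3]

for label, stats in (("Week 1", describe(week1)), ("Week 2", describe(week2))):
    m, md, mo, rng = stats
    print(f"{label}: mean={m:.2f}, midpoint-median={md:.2f}, mode={mo}, range={rng}")
```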

94. Can you explain the concept of standard deviation?

The most accurate method of quantifying variation is the standard deviation. It indicates the degree of variation in a set of measurements or a process by measuring the average spread of the data around the mean. It is a bit more complicated than the measures discussed in the previous question, but it gives more accurate information.

Below is the formula for the standard deviation: s = square root of ( sum of (X - X̄)² divided by n ). The symbol "s" stands for the standard deviation, X is an observed value, X̄ (X with a bar on top) is the arithmetic mean and n is the number of observations. The formula may look complicated, but let's break it up into steps and understand it better.

Figure: - Standard deviation formulae

The first step is to calculate the mean. This is done by adding all the observed values and dividing by the number of observations.

Figure: - Step 1 Standard deviation

The second step is to subtract the mean from each observation, square the results and then sum them. Because we square them, we will not get negative values. The figure below shows this in detail.

Figure: - Step 2 Standard deviation

In the third step we divide this sum by the number of observations, as shown in the figure.

Figure: - Step 3 Standard deviation

In the final step we take the square root, which gives the standard deviation.

Figure: - Step 4 standard deviation
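The four steps can be checked with a few lines of code. This is a minimal sketch using made-up observations and the population form of the formula (divide by n, as in the steps above).

```python
from math import sqrt

observations = [4, 5, 7, 4, 6, 5]                       # hypothetical weekly counts

mean = sum(observations) / len(observations)            # step 1: calculate the mean
squared_diffs = [(x - mean) ** 2 for x in observations] # step 2: square each deviation and sum
variance = sum(squared_diffs) / len(observations)       # step 3: divide by the number of observations
std_dev = sqrt(variance)                                # step 4: take the square root

print(f"mean={mean:.3f}, variance={variance:.3f}, standard deviation={std_dev:.3f}")
```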

95. Can you explain QFD?

Quality function deployment (QFD) is a “method to transform user demands into


design quality, to deploy the functions forming quality, and to deploy methods for
achieving the design quality into subsystems and component parts, and ultimately to
specific elements of the manufacturing process,” as described by Dr. Yoji Akao, who originally developed QFD in Japan in 1966 when he combined his work in quality assurance and quality control points with function deployment used in Value Engineering.
QFD is designed to help planners focus on characteristics of a new or existing product
or service from the viewpoints of market segments, company, or technology-
development needs. The technique yields graphs and matrices.
QFD helps transform customer needs (the voice of the customer [VOC]) into
engineering characteristics (and appropriate test methods) for a product or service,
prioritizing each product or service characteristic while simultaneously setting
development targets for product or service.

96. Can you explain FMEA?

A failure modes and effects analysis (FMEA) is a procedure in product development


and operations management for analysis of potential failure modes within a system for
classification by the severity and likelihood of the failures. A successful FMEA activity
helps a team to identify potential failure modes based on past experience with similar
products or processes, enabling the team to design those failures out of the system with
the minimum of effort and resource expenditure, thereby reducing development time
and costs. It is widely used in manufacturing industries in various phases of the product
life cycle and is now increasingly finding use in the service industry. Failure modes are
any errors or defects in a process, design, or item, especially those that affect the
customer, and can be potential or actual. Effects analysis refers to studying the
consequences of those failures.
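A common way to rank the failure modes found in an FMEA (not spelled out in the text above) is a Risk Priority Number, RPN = severity x occurrence x detection, each scored on a 1-10 scale. The sketch below uses invented failure modes and scores purely for illustration.

```python
# Each tuple: (failure mode, severity, occurrence, detection) on 1-10 scales (hypothetical scores)
failure_modes = [
    ("Payment gateway timeout", 9, 4, 3),
    ("Wrong tax calculation",   8, 3, 6),
    ("Report label truncated",  2, 5, 2),
]

ranked = sorted(
    ((name, sev * occ * det) for name, sev, occ, det in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

for name, rpn in ranked:   # the highest RPN is addressed first
    print(f"RPN {rpn:4d}  {name}")
```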

97. Can you explain X bar charts?

In industrial statistics, the X-bar chart is a type of control chart that is used to monitor
the arithmetic means of successive samples of constant size, n. This type of control
chart is used for characteristics that can be measured on a continuous scale, such as
weight, temperature, thickness etc.
For the purposes of control limit calculation, the sample means are assumed to be
normally distributed, an assumption justified by the Central Limit Theorem.
The X-bar chart is often used in conjunction with a variation chart such as the R-chart or
s-chart. The average sample range, R, or the average sample standard deviation, s,
can be used to derive the X-bar chart's control limits.
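As a rough sketch (not the only way to derive the limits; classic SPC tables use A2 and R-bar constants instead), the code below estimates X-bar control limits as the grand mean plus or minus three standard errors, using invented sample data.

```python
from statistics import mean, pstdev

# Five samples of constant size n = 4 (hypothetical weight measurements)
samples = [
    [10.1, 9.8, 10.0, 10.2],
    [9.9, 10.3, 10.1, 9.7],
    [10.0, 10.2, 9.9, 10.1],
    [10.4, 9.8, 10.0, 9.9],
    [10.1, 10.0, 10.2, 9.8],
]

n = len(samples[0])
sample_means = [mean(s) for s in samples]
grand_mean = mean(sample_means)
sigma_est = pstdev([x for s in samples for x in s])      # rough overall sigma estimate

ucl = grand_mean + 3 * sigma_est / n ** 0.5              # upper control limit
lcl = grand_mean - 3 * sigma_est / n ** 0.5              # lower control limit
print(f"grand mean={grand_mean:.3f}, UCL={ucl:.3f}, LCL={lcl:.3f}")
for i, m in enumerate(sample_means, 1):
    status = "ok" if lcl <= m <= ucl else "OUT OF CONTROL"
    print(f"sample {i}: mean={m:.3f} {status}")
```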

98. Can you explain Flow charting and brain storming?

A flowchart is a diagram displaying the sequential steps of an event, process, or


workflow. Flowcharts may be a simple high-level process flow, a detailed task flow, or
anywhere in between.
Brainstorming is a technique used to quickly generate creative or original ideas on or about a process, problem, product, or service. It is a group activity where the facilitator establishes basic ground rules and a code of conduct. All members have an equal opportunity to participate and share ideas. Ideas are listed on a board, and the process stops when ideas become redundant. Duplicate ideas are eliminated and the remaining ideas are evaluated.

99. Can you explain the concept of fish bone/ Ishikawa diagram?

The fish bone or Ishikawa diagram is one of the important concepts which can help you list down the root causes of a problem. It was conceptualized by Ishikawa, so in honor of its inventor this concept was named the Ishikawa diagram. The inputs for a fish bone diagram come from discussion and brainstorming with the people who were involved in the project. The figure below shows the structure of the Ishikawa diagram.

Below is a sample fish bone diagram. The main bone is the problem which we need to address in order to know what caused the failure. For instance, the fish bone below is constructed to find out what caused the project failure. To trace this cause we have taken four main bones as inputs: Finance, Process, People and Tools. For instance, on the People front there were many resignations → this was caused because there was no job satisfaction → this was caused because the project was a maintenance project. In the same way, causes are analyzed on the Tools front as well: no tools were used in the project → because no resource had enough knowledge about them → this happened because of a lack of planning. On the Process front, the process was ad hoc → this was because of tight deadlines → this was caused because the marketing people over-promised and did not negotiate properly with the end customer.

Once the diagram is drawn, the end bones of the fish bone signify the main causes of the project failure. From the diagram below, here's the list:-

• No training was provided for the resources regarding the tool.
• Marketing people over-promised the customer, which led to tight deadlines.
• Resources resigned because it is a maintenance project.

Figure: - Fish bone / Ishikawa diagram

100. What is meant by measure and metrics?

Measures are quantitatively defined units, for instance hours, km, etc. Metrics are basically composed of more than one measure; for instance we can have metrics like km/hr, m/s etc.


Figure: - Measure and Metrics

101. Can you explain Number of defects measure?

Number of defects is one of the measures used to measure test effectiveness. One of the side effects of the number-of-defects measure is that all bugs are not equal, so it becomes necessary to weight bugs according to their criticality level. If we use number of defects as the metric, the following are the issues:-

• The number of bugs that originally existed significantly impacts the number of bugs discovered, which in turn gives a wrong measure of the software quality.
• All defects are not equal, so defects should be weighted by criticality level to get the right software quality measure.

Below are three simple tables which show the number of defects SDLC phase wise, module wise and developer wise.


Figure: - Number of defects phase wise

Figure: - Number of defects module wise.

Figure: - Number of defects

102. Can you explain number of production defects measure?

This is one of the most effective measures. The number of defects found in production or by the customer is recorded. The only issue with this measure is that latent and masked defects may still be present, which can give us a wrong value regarding software quality.


103. Can you explain defect seeding?

Defect seeding is a technique that was developed to estimate the number of defects resident in a piece of software. It is a somewhat offline technique and is not used by everyone. The process is as follows: we inject the application with known defects and then see whether they are found or not. For instance, if we have injected 100 defects we try to get three values: first, how many seeded defects were discovered; second, how many were not discovered; and third, how many new (unseeded) defects were discovered. By using defect seeding we can predict the number of defects remaining in the system.

Figure: - Defect seeding

Let’s understand the concept of defect seeding by doing a detailed calculation and also try to understand how we can predict the number of defects remaining in a system. Below is the calculation:

• First, calculate the seed ratio using the formula: number of seeded bugs found divided by the total number of seeded bugs.
• After that, calculate the total number of defects using the formula: number of (real) defects found divided by the seed ratio.
• Finally, estimate the remaining defects by subtracting the number of defects actually found from the total number of defects calculated in the previous step.

The figure below shows a sample step-by-step calculation. You can see that first we calculate the seed ratio, then the total number of defects, and finally we get the estimated remaining defects.

Figure: - Seed calculation
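The seed-ratio arithmetic can be expressed in a few lines; the counts below are invented to mirror the steps described above.

```python
seeded_total = 100        # defects intentionally injected (hypothetical)
seeded_found = 60         # seeded defects discovered by testing
real_found = 150          # real (unseeded) defects discovered by testing

seed_ratio = seeded_found / seeded_total                    # step 1: 0.6
estimated_total_defects = real_found / seed_ratio           # step 2: 250
estimated_remaining = estimated_total_defects - real_found  # step 3: 100

print(f"seed ratio = {seed_ratio:.2f}")
print(f"estimated total defects = {estimated_total_defects:.0f}")
print(f"estimated defects remaining = {estimated_remaining:.0f}")
```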


104. Can you explain DRE?

DRE (Defect Removal Efficiency) is a powerful metric used to measure test effectiveness. From this metric we come to know how many bugs we found out of the set of bugs we could have found. Below is the formula for calculating DRE; we need two inputs for this metric: the number of bugs found during development and the number of defects detected by the end user.

DRE = (bugs found during development) / (bugs found during development + bugs found by the end user)

Figure:-DRE formulae

But the success of DRE depends on a lot of factors. Some of them are listed below:-

• The severity and distribution of bugs must be taken into account.
• We must confirm when the customer has found all the bugs. This is normally done by looking at the past history of the customer.

105. How do you measure test effectiveness?

Test effectiveness is a measure of the bug-finding ability of our tests; in short, it measures how good the tests were. Effectiveness is the ratio of bugs found during testing to the total bugs found, where the total is the sum of new defects found by the user and the bugs found in test. The figure below explains the calculation in a more pictorial format.
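Both DRE and test effectiveness reduce to the same ratio of bugs caught before release to all bugs eventually known. A minimal sketch with invented counts:

```python
def removal_efficiency(found_before_release, found_by_users):
    """DRE / test effectiveness = bugs caught in testing / total bugs known so far."""
    return found_before_release / (found_before_release + found_by_users)

# Hypothetical release: 180 bugs found during development/testing, 20 reported by end users
dre = removal_efficiency(found_before_release=180, found_by_users=20)
print(f"DRE / test effectiveness = {dre:.0%}")   # prints 90%
```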

QUESTIONS 106-120

106. Can you explain Defect age and Defect spoilage?

Defect Age is the difference in time between the date a defect is detected and the
current date (if the defect is still open) or the date the defect was fixed. It is a useful
measure of defect effectiveness.
Defect Spoilage is a metric that measures the effectiveness of defect removal activities. The defect age is often calculated as a phase age (i.e. how many phases the defect has existed). Because defects are more expensive the later they are found, it is a good idea to also calculate the defect spoilage, which is the quotient:

Spoilage = Sum of (Number of defects x defect age) / Total number of defects
Defect Age: the time, or the number of phases, for which the defect has been open.
Defect Age calculated in time: the number of hours/days the defect has been open; if the defect is fixed, then Defect Age = Date Fixed - Date Found.
Defect Age calculated in phases: Defect Fixed Phase - Defect Injection Phase.
Let’s say the software life cycle has the following phases:

1. Requirements Development
2. High-Level Design
3. Detail Design
4. Coding
5. Unit Testing
6. Integration Testing
7. System Testing
8. Acceptance Testing
If a defect is identified in ‘System Testing’ and the defect was introduced in
‘Requirements Development’, the Defect Age is 6.

Defect age is used in another metric called defect spoilage to measure the effectiveness
of defect removal activities.

Spoilage = Sum of (Number of Defects x defect age) / Total number of defects.


Low values of defect spoilage mean a more effective defect discovery process.
Defect age is a measure of the duration for which a defect remains in the product or in
other words the defect age is a measure of the duration between defect detection and
defect injection. Ideally the defect age should be zero, which means that the defect is
removed as soon as it is injected.
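A short worked example of defect age and spoilage, using the eight-phase life cycle above and invented defect counts:

```python
# (phase injected, phase detected, number of defects) using the 1-8 phase numbering above (hypothetical)
defects = [
    (1, 7, 2),   # requirements defects found in system testing: age 6
    (4, 6, 5),   # coding defects found in integration testing: age 2
    (4, 5, 3),   # coding defects found in unit testing: age 1
]

total_defects = sum(count for _, _, count in defects)
weighted_age = sum((detected - injected) * count for injected, detected, count in defects)
spoilage = weighted_age / total_defects

print(f"total defects = {total_defects}, spoilage = {spoilage:.2f}")  # lower is better
```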

107. What are good candidate for automation in testing?

Most, but not all, types of tests can be automated. Certain types of tests, such as user comprehension tests, tests that run only once, and tests that require constant human intervention, are usually not worth the investment incurred to automate. The following are examples of criteria that can be used to identify tests that are prime candidates for automation.

High path frequency – Automated testing can be used to verify the performance of
application paths that are used with a high degree of frequency when the software is
running in full production. Examples include: creating customer records.

Critical Business Processes – Mission-critical processes are prime candidates for


automated testing. Examples include: financial month-end closings, production
planning, sales order entry and other core activities. Any application with a high degree of risk associated with a failure is a good candidate for test automation.
Repetitive Testing – If a testing procedure can be reused many times, it is also a prime
candidate for automation

Applications with a Long Life Span – If an application is planned to be in production


for a long period of time, the greater the benefits are from automation.
The Test team needs to review all the test cases to determine which test cases are
good automation candidates and which should be performed manually. Analyzing what
to automate is one of the most crucial aspects of the QTLM. This QA Test Life-Cycle
Methodology outlines several guidelines for performing the automation versus manual
test analysis. The criteria used by the QTLM to gauge a test case’s suitability for
automation is outlined below:
Considering Limitations of Test automation – For instance the verification of a print
output cannot be automated. The test engineers should be cognizant of these
limitations when selecting the Test automation candidates.
Focus on Testing Goals and Objectives – The goals and objectives outlined in the
Test Strategy should be used to determine how much automation is needed and what is
the best time to automate in a test program. Eloquent test scripts can be developed at
the expense of immediate defect detection when a test engineer is responsible both for
executing manual tests and automation. Misdirected efforts might postpone the
immediate discovery of defects because the test engineer is too involved in creating
complex automated test scripts. Whenever the test cases are reviewed for their automation suitability it is important to be sensitive to the test schedule. In some cases the schedule may not permit the creation of elaborate automated scripts.
Do not Duplicate Application Program Logic – Test cases that require the
application’s logic to be duplicated should not be automated. A different test approach is
warranted for these test cases. Mimicking the logic of the application does not detect
logic errors since those errors are also propagated into the automation script that
merely mirrors the application logic.
Consider Time Required for Test Automation – During the analysis for the selection
of automation candidates, the time frame required to automate a test case should
always be considered. One rule of thumb is to consider that if it takes as much or more
effort to automate a test script for a specific test requirement as it did to code the
function, a new testing approach should be developed.
Evaluate the Reuse Potential of Automated Modules – Automating the highest risk
functionality can prove futile if the reuse potential is absent. To avoid wasting
automation efforts the test team should examine the ability to reuse the test scripts in
subsequent software releases. The expected changes to the baseline functionality in
subsequent releases should also be investigated. If the functionality represents a
complex temporary requirement that could change in the following release, automation
is unlikely to produce any return on investment.
Focus Automation on Repetitive Tasks – It is extremely beneficial to consider the
automation of repetitive and mundane tasks. This frees up test engineers for creative
testing of complex functionality. Automation lends itself to repetitive tasks very well
through loops and conditional constructs.
Focus Automation on Data Driven Tasks – High Volume Data driven test cases are
good candidates for creating robust automation scripts. The automated scripts are the
ideal means to execute the same task with various sets of data. Repeating mundane
tasks manually is an error prone process and should be avoided whenever possible.
Consider Test Requirement Risk – An incremental test automation approach should
be phased where the depth and breadth of the automation effort gradually increases
with time. The test automation candidates should be prioritized to reflect the results of
the Risk analysis conducted on the Requirements. The highest priority test cases based
on the Risk analysis of the requirements should be used to create the Top priority
automation candidates.
It is impossible to automate all testing, so it is important to determine which tests will produce the most benefit. The point of automation is not to eliminate testers but to make better use of their time. Tests that require large amounts of data to be input and tests that are run frequently, such as regression tests, are good candidates for automation. Tests that require human intervention are not. Any test that has predictable results and meets one of the following criteria is a candidate for automation:
– The test is repetitive (for example, choose every item on a Web page or repeat the test for each release).
– The test evaluates high-risk conditions.
– The test is impossible or costly to perform manually.
– The test requires multiple data values to perform the same action (i.e., a data-driven test).
– The test is a baseline test run on several different configurations.
Automating the repetitive and boring tasks frees up testers to perform more interesting and in-depth tests.
Keep an eye out for tests or areas of an application that are frequently modified; these are barriers to automation. If there is an area of your Web site that changes daily, it will be difficult to define exactly what constitutes a passing result.

108. Which automation tool have you worked and can you explain them in brief?

109. Can you explain how does load testing conceptually work for websites?

Web Load Testing involves checking how a website or service performs as the load on
it (the number and volume of requests) is increased.

There are many reasons for load-testing a Web application. The most basic type of load
testing is used to determine the Web application’s behavior under both normal and
anticipated peak load conditions. As you begin load testing, it is recommended that you
start with a small number of virtual users and then incrementally increase the load from
normal to peak. You can then observe how your application performs during this
gradually increasing load condition. Eventually, you will cross a threshold limit for your
performance objectives. For example, you might continue to increase the load until the
server processor utilization reaches 75 percent, or when end-user response times
exceed 8 seconds.
A website needs to stay up under normal and expected load conditions, but in addition
to that, the load will not remain constant, since there are several reasons why the
number of requests might suddenly spike:
• Increased popularity of the website - This could happen very quickly, for example when
a popular site links to yours (Digg and Slashdot effects). If your site is not able to cope
with the sudden onrush of requests, it can render your site unusable.
• Denial of Service Attacks - This is what happens when somebody tries to bring down
your site by making more requests than it can handle. The most common form is a
distributed denial of service attack, in which case numerous computers are used to
make requests on the site at the same time.
Here are some things to keep in mind when creating Web LoadTests:
• Test the most common behaviour that you expect from the users first
• Test behaviours that you expect will cause high loads (e.g. Operations that involve a lot
of database access might fall into this category)
• Spend at least some of the time doing exploratory testing.
The basic approach to performing load testing on a Web application is:
1. Identify the performance-critical scenarios.
2. Identify the workload profile for distributing the entire load among the key scenarios.
3. Identify the metrics that you want to collect in order to verify them against your
performance objectives.
4. Design tests to simulate the load.
5. Use tools to implement the load according to the designed tests, and capture the
metrics.
6. Analyze the metric data captured during the tests.
By using an iterative testing process, these steps should help you achieve your
performance objectives.
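A minimal, hedged sketch of the idea using only the Python standard library; the URL, user counts and thresholds are placeholders, and a real load test would normally use a dedicated tool (JMeter, LoadRunner, etc.) rather than hand-rolled code.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/"   # placeholder: the page or service under test

def one_request(_):
    start = time.time()
    with urlopen(URL, timeout=10) as response:
        response.read()
    return time.time() - start

# Ramp the load up from a small number of virtual users towards the expected peak
for virtual_users in (5, 10, 25, 50):
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        timings = list(pool.map(one_request, range(virtual_users)))
    avg = sum(timings) / len(timings)
    print(f"{virtual_users:3d} users: avg response {avg:.2f}s, worst {max(timings):.2f}s")
    if max(timings) > 8:          # example objective: responses must stay under 8 seconds
        print("Performance objective exceeded - stop ramping and analyse")
        break
```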

110. Can you explain how did you perform load testing using tool?

111. Can you explain the concept of data driven testing?

Data-driven testing (DDT) is a term used in the testing of computer software to


describe testing done using a table of conditions directly as test inputs and verifiable
outputs as well as the process where test environment settings and control are not
hard-coded. In the simplest form the tester supplies the inputs from a row in the table
and expects the outputs which occur in the same row. The table typically contains
values which correspond to boundary or partition input spaces. In the control
methodology, test configuration is "read" from a database.

In the testing of software or programs, several methodologies are available for


implementing this testing. Each of these methods co-exists because they differ in the effort required to create and subsequently maintain the tests. The advantage of data-driven testing is the ease of adding additional inputs to the table when new partitions are discovered or added to the product or System Under Test. The cost aspect makes DDT cheap for automation but expensive for manual testing. DDT should not be confused with table-driven (keyword-driven) testing, which is covered in the next question.
Methodology Overview
• Data-driven testing is the creation of test scripts to run together with their related data
sets in a framework. The framework provides re-usable test logic to reduce
maintenance and improve test coverage. Input and result (test criteria) data values can
be stored in one or more central data sources or databases, the actual format and
organization can be implementation specific.
The data comprises variables used for both input values and output verification values.
In advanced (mature) automation environments data can be harvested from a running
system using a purpose-built custom tool or sniffer, the DDT framework thus performs
playback of harvested data producing a powerful automated regression testing tool.
Navigation through the program, reading of the data sources, and logging of test status
and information are all coded in the test script.
Automated tests play back a recorded (or programmed) sequence of user actions that
cover a certain area of the tested application. To get larger coverage, you can perform
tests with different input data. Suppose, for example, you recorded actions that input
data into an application’s form. The recorded test contains only those values that you
entered during the recording and, most likely, these values do not cause errors in the
application, but other data may cause them. So, you have to run your test with different
set of input data to ensure that the application works as expected for various input
values. This testing approach is called data-driven testing.
Typically, a data-driven test performs the following operations in a loop:
1. Retrieves the portion of test data from a storage.
2. Enters the data in an application form and simulates other actions.
3. Verifies results.
4. Continues testing with the next set of input data.
Once you have successfully debugged and run your tests, you may want to see how the same test performs with multiple sets of data, which is nothing but data-driven testing. To do this you convert your test to a data-driven test and create a corresponding data table with the sets of data you want to test.
Converting your test to a data-driven test involves the following steps:
1. Adding statements to your script that open and close the data table.
2. Adding statements and functions to your test so that it will read from the data table and run in a loop while it applies each set of data.
3. Replacing fixed values in recorded statements and checkpoint statements with parameters, known as parameterizing the test.
You can convert your test to a data-driven test using the Data Driver Wizard or you can modify your script manually.
When we test our application we may want to check how it performs the same operations with multiple sets of data.
For example:-
Suppose we want to check how our application responds to multiple sets of data. Instead of recording multiple separate tests, each with its own set of data, we can create a data-driven test with a loop that runs as many times as we specify.
In fact, testing the functionality with more test cases becomes laborious and time consuming as the functionality grows. With a data-driven test we can execute the test once and figure out for which data it has failed and for which data it has passed. This feature is available in WinRunner as a Data Driven Test, where the data can be taken from an Excel sheet or Notepad.
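A minimal sketch of the idea in Python (the original discussion uses WinRunner, so this is only an illustration): the test data lives in a CSV table and one script loops over every row, comparing actual against expected output. The file layout, the login() stub and the data values are all hypothetical.

```python
import csv
import io

# In a real project this table would live in a CSV file or Excel sheet; inlined here for the sketch
TEST_DATA = """username,password,expected
admin,secret123,success
admin,wrongpass,failure
,secret123,failure
"""

def login(username, password):
    """Stand-in for the application under test."""
    return "success" if (username == "admin" and password == "secret123") else "failure"

failures = 0
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = login(row["username"], row["password"])
    if actual != row["expected"]:
        failures += 1
        print(f"FAIL: {row} -> got {actual}")
print(f"{failures} failing row(s)")
```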

112. Can you explain table-driven testing?

Keyword-driven testing and table-driven testing are interchangeable terms that refer to an application-independent automation framework. This framework requires the development of data tables and keywords, independent of the test automation tool used to execute them and of the test script code that "drives" the application-under-test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application-under-test is documented in a table as well as in step-by-step instructions for each test.

Keyword-driven testing separates the test creation process into two distinct stages: a
Planning Stage, and an Implementation Stage.
Although keyword testing can be used for manual testing, it is a technique particularly
well suited to automated testing. The advantages for automated tests are the reusability
and therefore ease of maintenance of tests that have been created at a high level of
abstraction.
The keyword-driven testing methodology divides test creation into two stages:-
• Planning Stage
• Implementation Stage

Planning Stage

A simple keyword (one action on one object), e.g. entering a username into a text field.

Object                | Action     | Data
Text field (username) | Enter text | <username>

A more complex keyword (a combination of keywords into a meaningful unit), e.g. logging in.

Object                | Action     | Data
Text field (domain)   | Enter text | <domain>
Text field (username) | Enter text | <username>
Text field (password) | Enter text | <password>
Button (login)        | Click      | One left click

Implementation Stage

The implementation stage differs depending on the tool or framework. Often,


automation engineers implement a framework that provides keywords like “check” and
“enter” . Testers or test designers (who don’t have to know how to program) write test
cases based on the keywords defined in the planning stage that have been
implemented by the engineers. The test is executed using a driver that reads the
keywords and executes the corresponding code.
Other methodologies use an all-in-one implementation stage. Instead of separating the
tasks of test design and test engineering, the test design is the test automation.
Keywords, such as “edit” or “check” are created using tools in which the necessary code
has already been written. This removes the necessity for extra engineers in the test
process, because the implementation for the keywords is already a part of the tool.
Tools such as GUIdancer and Worksoft Certify use this approach.

Pros

1. Maintenance is low in the long run:
   • Test cases are concise
   • Test cases are readable for the stakeholders
   • Test cases are easy to modify
   • New test cases can reuse existing keywords more easily
2. Keyword re-use across multiple test cases
3. Not dependent on a tool / language
4. Division of labor:
   • Test case construction needs stronger domain expertise and lesser tool/programming skills
   • Keyword implementation requires stronger tool/programming skills with relatively lower domain skill
5. Abstraction of layers

Cons

1. Longer time to market (as compared to manual testing or record-and-replay techniques)
2. Moderately high learning curve initially
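A toy driver illustrating the split between the keyword table and the code that interprets it; the keywords, objects and print-based "actions" are invented stand-ins for real UI automation calls.

```python
# Keyword table authored by test designers: (object, action, data)
LOGIN_TEST = [
    ("Text field (domain)",   "enter_text", "<domain>"),
    ("Text field (username)", "enter_text", "<username>"),
    ("Text field (password)", "enter_text", "<password>"),
    ("Button (login)",        "click",      None),
]

# Keyword implementations written once by automation engineers
def enter_text(obj, data):
    print(f"Typing {data!r} into {obj}")

def click(obj, data):
    print(f"Clicking {obj}")

KEYWORDS = {"enter_text": enter_text, "click": click}

def run(test):
    for obj, action, data in test:       # the driver reads the table and dispatches
        KEYWORDS[action](obj, data)

run(LOGIN_TEST)
```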
113. How can you perform data-driven testing using Automated QA?

114. What are the different ways of doing black box testing?

Black Box Testing: Also called behavioural testing. It is function-based testing which focuses on testing the functional requirements. The test cases are designed with the viewpoint that for a particular set of input conditions you will get a particular set of output values. It is mostly done at the later stages of testing.
Black Box Testing Methods: Following are the various ways in which black box testing is carried out:
1. Graph-Based Testing: Identifying the objects and the relationships between them, and then testing whether the relationships behave as expected or not. Graphs are used to prepare the test cases, in which objects are represented as nodes and relationships as links.
2. Equivalence Class Testing: The input domain is partitioned into different classes and test data is selected from each class. The equivalence classes represent the valid and invalid states for input conditions.
3. Boundary Value Testing: It is carried out by selecting test cases that exercise the bounding values. For example, if a field accepts values in the range (a to d), then we test the behaviour at 'a' and 'd'.
4. Comparison Testing: Also called back-to-back testing. In this method, different software teams build a product using the same specification but different technologies and methodologies. After that, all the versions are tested and their output is compared. It is not a foolproof testing method since, even if all versions give the same results, if one is incorrect all of them may be incorrect.
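A tiny sketch of boundary-value and equivalence-class selection for a hypothetical field that accepts integers from 1 to 100; the validator and expected results are invented for illustration.

```python
def accepts(value, low=1, high=100):
    """Hypothetical validator for a field accepting values in [low, high]."""
    return low <= value <= high

# Boundary values plus one representative from each equivalence class (valid / below / above)
cases = {0: False, 1: True, 2: True, 50: True, 99: True, 100: True, 101: False, -5: False, 150: False}

for value, expected in cases.items():
    assert accepts(value) == expected, f"value {value} gave unexpected result"
print("all boundary/equivalence cases passed")
```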
115. Can you explain TPA analysis?

TPA (Test Point Analysis) is a technique to estimate test effort for black box testing. The inputs for TPA are counts derived from function points (function points are discussed in more detail in the next sections).

Below are the features of TPA:-

• It is used to estimate only black box testing.
• It requires function points as an input.
• The function point analysis productivity factor covers white-box testing; it does not cover system testing or acceptance testing.

TPA has 3 important elements – size, strategy and productivity.
1) Size :-
Size of an information system is determined mainly by the number of function points
assigned to it.

Other factors are

Complexity; complexity relates to the number of conditions in a function. More


conditions almost always means more test cases and therefore a greater volume of
testing work.

Interfacing; the degree of interfacing of a function is determined by the number of


data sets maintained by a function and the number of other functions, which make use
of those data sets. Interfacing is relevant because these “other” functions will require
testing if the maintenance function is modified.

Uniformity; it is important to consider the extent to which the structure of a


function allows it to be tested using existing or slightly modified specifications, i.e. the
extent to which the information system contains similarly structured functions.
2) Strategy
The importance attached to the various quality characteristics for testing purposes and
the importance of the various subsystems and/or functions determine the test strategy.
Any requirement importance is from two perspectives: one is the user importance and
the other is the user usage. Depending on these two characteristics a requirement
rating can be generated and a strategy can be chalked out accordingly, which also
means that estimates vary accordingly.
3) Productivity
Productivity has two important aspects: environment and productivity figures.
Environmental factors define how much the environment affects a project
estimate. Environmental factors include aspects such as tools, test environments,
availability of test ware, etc. While the productivity figures depend on knowledge,
how many senior people are on the team, etc.
User-importance

The user-significance is an expression of the importance that the user attaches to a


given function relative to the other system functions.

Rating:

3 Low: the importance of the function relative to the other functions is low.

6 Normal: the importance of the function relative to the other functions is normal.

12 High: the importance of the function relative to the other functions is high.

Usage-intensity

The usage intensity has been defined as the frequency with which a certain function is
processed by the users and the size of the user group that uses the function. As with
user-

importance the usage-intensity is being determined at a user-function level.

Rating:

2 Low: the function is only used a few times per day or per week.

4 Normal: the function is being used a great many times per day

12 High: the function is used continuously throughout the day.

Interfacing
Interfacing is an expression of the extent to which a modification in a given
function affects
other parts of the system. The degree of interfacing is determined by ascertaining
first the
logical data sets (LDSs) which the function in question can modify, then the other
functions
which access these LDSs.

Complexity
The complexity of a function is determined on the basis of its algorithm. The general
structure
of the algorithm may be described using pseudo code, Nassi-Shneiderman or ordinary
text.
The complexity rating of the function depends on the number of conditions in the
function’s
algorithm.
Rating:
3 The function contains no more than five conditions.
6 The function contains between six and eleven conditions.
12 The function contains more than eleven conditions.
Uniformity (U):
This factor defines how reusable a system is. Clones and dummies come under this heading. A uniformity factor of 0.6 is assigned where there are clone functions, dummy functions or virtually unique functions reoccurring; otherwise a uniformity factor of 1 is assigned.

Df = ((Ue + Uy + I + C)/16) * U
Df = weighting factor for the function-dependent factors
Ue = user-importance
Uy = usage-intensity
I = interfacing
C = complexity
U = uniformity
Dynamic quality characteristics (Qd)

The third step is to calculate Qd. Qd, i.e, dynamic quality characteristics, have two
parts: explicit characteristics (Qde) and implicit characteristics (Qdi).Qde has five
important characteristics: Functionality, Security, Suitability,Performance, and
Portability.

Qdi defines the implicit characteristic part of the Qd. These are not standard and vary
from project to project. For instance, we have identified for this accounting application
four characteristics: user friendly, efficiency, performance, and maintainability.

Qd = Qde + Qdi
TPf = FPf * Df * Qd
TPf = number of test points assigned to the function
FPf = number of function points assigned to the function
Df = weighting factor for the function-dependent factors
Qd = weighting factor for the dynamic quality characteristics
Calculate static test points Qs
In this step we take into account the static quality characteristic of the project. This is
done by defining a checklist of properties and then assigning a value of 16 to those
properties. For this project we have only considered easy-to-use as a criteria and
hence assigned 16 to it.
Total number of test points
The total number of test points assigned to the system as a whole is calculated by
entering the data so far obtained into the following formula:
TP = ΣTPf + (FP * Qi) / 500
TP = total number of test points assigned to the system as a whole
ΣTPf = sum of the test points assigned to the individual functions (dynamic test points)
FP = total number of function points assigned to the system as a whole (minimum
value 500)
Qi = weighting factor for the indirectly measurable quality characteristics
Calculate Productivity/Skill factors
Productivity/skill factors show the number of test hours needed per test points. It’s a
measure of experience, knowledge, and expertise and a team’s ability to perform.
Productivity factors vary from project to project and also organization to organization.
For instance, if we have a project team with many seniors then productivity increases.
But if we have a new testing team productivity decreases. The higher the productivity
factor the higher the number of test hours required.
Calculate environmental Factor (E)
The number of test hours for each test point is influenced not only by skills but also by
the environment in which those resources work.
Calculate primary test hours (PT)
Primary test hours are the product of test points, skill factors, and
environmental factors. The following formula shows the concept in more detail:
Primary test hours = TP * Skill factor * E
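The formulas above can be chained together; the ratings and factors below are invented purely to show the arithmetic, not taken from a real estimate.

```python
def dynamic_test_points(fp, ue, uy, i, c, u, qd):
    df = ((ue + uy + i + c) / 16) * u        # Df: weighting factor for function-dependent factors
    return fp * df * qd                      # TPf = FPf * Df * Qd

# Two hypothetical functions: (FPf, Ue, Uy, I, C, U, Qd)
functions = [
    (20, 6, 4, 4, 3, 1.0, 1.1),
    (35, 12, 12, 6, 6, 0.6, 1.2),
]

tp_dynamic = sum(dynamic_test_points(*f) for f in functions)
total_fp, qi = 500, 16                       # FP for the whole system (minimum 500) and static quality weight
tp = tp_dynamic + (total_fp * qi) / 500      # TP = sum(TPf) + (FP * Qi) / 500

skill_factor, environment = 1.2, 1.1         # hypothetical productivity and environment factors
primary_test_hours = tp * skill_factor * environment

print(f"dynamic test points = {tp_dynamic:.1f}")
print(f"total test points   = {tp:.1f}")
print(f"primary test hours  = {primary_test_hours:.1f}")
```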

116. Can you explain in brief Function points?

A function point is a unit of measurement for software, much as an hour is a unit for measuring time. The functionality of the software is quantified in function points based on the requirements provided by the customer, primarily from the logical design. Function points measure software development and its maintenance consistently across all projects and enterprises.
Function points are used as a metric in software testing. They are used to measure the size and functionality of the software by measuring the requirements. Function points are consistent and independent of design, and they can be used in estimations, e.g. by counting the number of screens and menus in the application.

117. Can you explain the concept Application boundary?

An application boundary defines the scope of an application. A process can contain


multiple application boundaries. An application running inside one application boundary
cannot directly access the code running inside another application boundary. However,
it can use a proxy to access the code running in other application boundaries.

The application boundary considers the user's perspective. It indicates the margin between the software being measured and the end user. It helps to identify what is available externally to the end user, from the interface, to interact with the internals of the system. This helps to identify the scope of the system.

The first step in FPA is defining boundary. There are two types of major boundaries:
Internal Application Boundary

External Application Boundary

We will state the features of the external application boundary, so that the internal application boundary becomes self-explanatory.

An External Application Boundary can be identified using the following litmus tests:

• Does it have, or will it have, any other interface to maintain its data which is not developed by you? Example: your company is developing an "Accounts Application" and at the end of the accounting year you have to report to the tax department. The tax department has its own website where companies can connect and report their tax transactions. The tax department application has other maintenance and reporting screens developed by the tax department's software team; these maintenance screens are used internally by the tax department. So the tax online interface has another interface to maintain its data which is not in your scope, and thus we can identify the tax website reporting as an external application.

• Does your program have to go through a third-party API or layer? In order for your application to interact with the tax department application, your code probably has to interact through the tax department's API.

• The best litmus test is to ask yourself whether you have full access over the system. If you have full rights or command to change it, then it is an internal application boundary; otherwise it is an external application boundary.

118. Can you explain the concept of elementary process?

An elementary process is the smallest unit of any business activity. It has to have a meaning or a purpose. An elementary process is complete when the user comes to closure on the process and all the business information is in a static and complete condition.
When it is complete, the elementary process leaves the business area in a self-
consistent state. That is, the business person has come to closure on the process and
all the business information is in a static and complete condition. These elementary
processes can occur at any level, at or below level 1, within the business model.
Conversely, a complex high level process may require analysis through a number of
levels of diagram in order to break it down into functional components that are
sufficiently low level to be termed elementary processes. If there is sufficient interest in
any given process to analyze it further, then clearly it is not an elementary process.
Elementary processes can be described in elementary process descriptions (or EPD's).
These are typically about half a page of narrative and can add useful detail to a
business process model.

A software application is, in essence, a defined set of elementary processes. When these elementary processes are combined, they interact to form what we call a software system or software application. An elementary process is not totally independent, existing alone; rather, the elementary processes are woven together and become interdependent. There are two basic types of elementary processes (data in motion and data at rest) in a software application. Data in motion has the characteristic of moving data from inside to outside the application boundary, or from outside to inside. An elementary process is similar to an acceptance test case.

As said in the introduction, FPA is about breaking huge systems into smaller pieces and analyzing them. A software application is a combination of a set of elementary processes. An EP is the smallest unit of activity that is meaningful to the user; it must be self-contained and leave the application in a consistent state. When elementary processes come together they form a software application. An elementary process is not necessarily completely independent, nor can it always exist by itself, so we can define an elementary process as a small unit of self-contained functionality from the user's perspective.

119. Can you explain the concept of static and dynamic elementary process?

There are two types of elementary process:-

• Dynamic elementary process
• Static elementary process

A dynamic elementary process is a process where data moves from the internal application boundary to the external application boundary or vice-versa.
Example: an input data screen where the user inputs data into the application; data moves from the input screen into the application.
A static elementary process is a process where the application's data is maintained, either inside the application boundary or in the external application boundary.
Example: in a customer maintenance screen, maintaining customer data is a static elementary process.

120. Can you explain concept of FTR, ILF, EIF, EI, EO , EQ and GSC ?

Following are the elements of FPA.

Internal Logical Files (ILF)

Following are the points to be noted for ILF:-

• ILFs are logically related data from the user's point of view.
• They reside within the internal application boundary and are maintained through the elementary processes of the application.
• An ILF may or may not have a maintenance screen.

Caution: - Do not make the mistake of mapping a one-to-one relationship between ILFs and the technical database design; if you do, FPA can be very misleading. The main difference between an ILF and a technical database is that an ILF is a logical view while the database is a physical structure (technical design). For example, a supplier database design will have tables like Supplier, SupplierAddress and SupplierPhoneNumbers, but from the ILF point of view it is only Supplier, as logically they are all supplier details.

External Interface File (EIF)

Following are points to be noted for EIF: -

EIFs are logically related data from the user's point of view.

EIFs reside in the external application boundary.

An EIF is used only for reference purposes and is not maintained by the internal application.

An EIF is maintained by an external application.

Record Element Type (RET)

Following are points to be noted for RET: -

RETs are sub-groups of element data within an ILF or EIF.

If an ILF has no sub-group, count the ILF itself as one RET.

A group of RETs within an ILF is logically related, most probably with a parent-child relationship. Example: a supplier has multiple addresses and every address can have multiple phone numbers (see the detail image showing the database diagrams), so Supplier, SupplierAddress and SupplierPhoneNumbers are RETs.

DET (Data Element Types)

Following are the points to be noted for DET counting: -

Each DET should be user recognizable. Example: in the figure given above we have kept an auto-increment field (SupplierId) as the primary key. From the user's point of view the SupplierId field never exists at all; it exists only from the software design aspect, so it does not qualify as a DET.

A DET should be a non-recursive field in the ILF. A DET should not repeat in the same ILF; it should be counted only once.

Count foreign keys as one DET each. "SupplierId" does not qualify as a DET, but its relationship in the "SupplierAddress" table is counted as a DET. So "SupplierId_fk" in the SupplierAddress table is counted as a DET. The same holds true for "SupplierAddressId_fk".

File Type Reference (FTR)

Following are points to be noted for FTR: -

An FTR is a file or data referenced by a transaction.

An FTR should be an ILF or EIF, so count each ILF or EIF read during the process.

If the EP is maintaining an ILF, then count that as an FTR too. So by default you will always have at least one FTR in any EP.

External Input (EI)

Following are points to be noted for EI: -

It is a dynamic elementary process [for the definition see the "Static and Dynamic Elementary Process" section] in which data is received from across the external application boundary.

Example: user interaction screens, where data comes from the user interface into the internal application.

An EI may maintain an ILF of the application, but this is not a compulsory rule. Example: a calculator application does not maintain any data, but the calculator screen is still counted as an EI.

Most of the time user screens will be EIs, but again there is no hard and fast rule. Example: an import batch process running from the command line does not have a screen, but it should still be counted as an EI because it passes data from the external application boundary to the internal application boundary.

External Inquiry (EQ)

Following are points to be noted for EQ: -

It is a dynamic elementary process in which result data is retrieved from one or more ILFs or EIFs.

In this EP some input request has to enter the application boundary, and the output results exit the application boundary.

An EQ does not contain any derived data. Derived data means complex calculated data: it is not a mere retrieval but is combined with additional formulae to generate results. Derived data is not part of an ILF or EIF; it is generated on the fly.

An EQ does not update any ILF or EIF.

The EQ activity should be meaningful from the user's perspective.

The EP is self-contained and leaves the business in a consistent state.

Its DETs and processing logic are different from those of other EQs.

Simple reports form a good base as EQs.

External Output (EO)

Following are points to be noted for EO: -

It is a dynamic elementary process in which derived data crosses from the internal application boundary to the external application boundary.

An EO can update an ILF or EIF.

The process should be the smallest unit of activity that is meaningful to the end user in the business.

The EP is self-contained and leaves the business in a consistent state.

Its DETs are different from those of other EOs, which ensures that we do not count an EO twice.

EOs have derived data or formula-calculated data.

The major difference between an EO and an EQ is that in an EO derived data passes across the application boundary. Example: exporting account transactions to some external file format like XML or some other format, which external accounting software can later import. The second important difference is that an EQ returns non-derived data while an EO has derived data.

General System Characteristics (GSC) Section

This section is the most important section. All the sections discussed above are counting sections; they relate only to the application. But there are other things to be considered while making software, such as: are you going to make it an N-tier application, what performance level is the user expecting, etc. These other factors are called GSCs. They are external factors which affect the software a lot and also its cost. When you submit a function point count to a client, he will normally skip everything and come to the GSCs first. The GSCs give us something called the VAF (Value Adjustment Factor).
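
To make the use of these counts concrete, here is a minimal sketch (not part of the original answer) of how the element counts and GSCs feed a function point estimate. It assumes the commonly published IFPUG average complexity weights (EI = 4, EO = 5, EQ = 4, ILF = 10, EIF = 7) and the standard adjustment formula VAF = 0.65 + 0.01 x TDI, where TDI is the sum of the 14 GSC ratings (each rated 0-5); the element counts and GSC ratings below are made-up illustrative values.

# Hedged sketch: unadjusted and adjusted function point calculation using
# typical IFPUG "average" complexity weights. All counts are illustrative.

AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """Sum each element count multiplied by its average weight."""
    return sum(counts[element] * AVERAGE_WEIGHTS[element] for element in counts)

def value_adjustment_factor(gsc_ratings):
    """VAF = 0.65 + 0.01 * TDI, where TDI is the sum of 14 GSC ratings (0-5 each)."""
    return 0.65 + 0.01 * sum(gsc_ratings)

if __name__ == "__main__":
    counts = {"EI": 5, "EO": 3, "EQ": 4, "ILF": 2, "EIF": 1}  # hypothetical application
    gsc = [3, 2, 4, 1, 0, 3, 2, 5, 1, 2, 3, 2, 1, 0]          # 14 hypothetical GSC ratings
    ufp = unadjusted_fp(counts)
    vaf = value_adjustment_factor(gsc)
    print("Unadjusted FP:", ufp)
    print("VAF:", round(vaf, 2))
    print("Adjusted FP:", round(ufp * vaf, 1))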

121-130 MISSING

131. You have people in your team who do not meet their deadlines or do not perform. What actions will you take?

Ans: With this kind of question they want to see your delegation skills. The best answer to this question is: the job of a project manager is managing projects, not people's problems, so I will delegate this work to HR or a higher authority.

132. Are risks constant throughout the project?

Ans: Risk is high at the start of a project, but with a proper POC (proof of concept) the risk is brought under control. Good project managers always have a proper risk mitigation plan at the start of the project. As the project continues, risks are eliminated one by one, thus bringing down the overall risk.

133. Explain SDLC (Software Development Life Cycle) in detail?

Ans: The software development life cycle is the process of building an application through different phases. There are five phases: Requirement Analysis, Design, Coding, Testing and Maintenance.

Analysis: Here the company-side people and the client or customer-side people participate in a meeting called the kickoff meeting. The client provides the information, and on the company side a Business Analyst participates to gather the information from the client. The Business Analyst should be strong in domain skills, technical skills and functionality skills.

From the gathered information the Business Analyst prepares the BRS document, the Business Requirement Specification. Later the same document is also referred to as the FRD, the Functional Requirement Document.

The Project Manager prepares the SRS document, i.e. the System Requirement Specification document.

The Test Lead prepares the Test Plan document.

Later all these documents are verified by the Quality Analyst. The Quality Analyst checks for gaps or loopholes between the documents by mapping the client specification document against the Business Requirement Specification document.

The Business Analyst is again involved to prepare the Use Case document, and later all these documents are maintained as baseline documents. A baseline document is also called a stable document.

Output: The analysis outputs are the BRS, SRS, FRS, Use Case and Test Plan documents.

134. Can you explain waterfall model?

Ans: In "The Waterfall" approach, the whole process of software development is divided
into separate process phases. The phases in Waterfall model are: Requirement
Specifications phase, Software Design, Implementation and Testing & Maintenance. All
these phases are cascaded to each other so that second phase is started as and when
defined set of goals are achieved for first phase and it is signed off, so the name
"Waterfall Model". All the methods and processes undertaken in Waterfall Model are
more visible.

The stages of "The Waterfall Model" are:


Requirement Analysis & Definition: All possible requirements of the system to be developed are captured in this phase. Requirements are the set of functionalities and constraints that the end user (who will be using the system) expects from the system.

System & Software Design: The requirements are studied and the overall system design is prepared. This design, captured in design documents, serves as the input for the next phase.

Implementation & Unit Testing: On receiving the system design documents, the work is divided into modules/units and actual coding starts. The system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality; this is referred to as unit testing. Unit testing mainly verifies whether the modules/units meet their specifications.

Integration & System Testing: As specified above, the system is first divided into units which are developed and tested for their functionality. These units are integrated into a complete system during the integration phase and tested to check whether all modules/units coordinate with each other and the system as a whole behaves as per the specifications. After successful testing, the software is delivered to the customer.

Operations & Maintenance: This phase of "The Waterfall Model" is a virtually never-ending phase (very long). Generally, problems with the developed system (which were not found during the development life cycle) come up after its practical use starts, so the issues related to the system are solved after deployment. Not all problems come to light immediately; they arise from time to time and need to be solved, hence this process is referred to as Maintenance.

135. Can you explain big-bang waterfall model?


Ans: The waterfall model is also known as the Big Bang model: each module follows the waterfall cycle independently and the modules are then put together. The Big Bang model follows a sequence to develop a software application. It moves step by step to the next phase, starting from requirement analysis, followed by design, implementation, testing and finally integration and maintenance.

The big bang waterfall model delivers the complete solution once and in one go; that is why it is termed Big Bang.
Following is the approach:
• Customer provides complete overall requirements.
• Design follows
• The design is built / developed.
• The development work is tested.
• The system is implemented.
One should go with the big bang waterfall model if:
• The contract of work is completely defined and accurate.
• Requirements and acceptance criteria are completely defined and accurate.
• It is feasible to finish the work within the given constraints.
• No change in requirements is expected.
• The problem and proposed solution are both clearly understood by all stakeholders.
• No mistakes are expected in the requirements and design phases.
136. Can you explain phased waterfall model?

Ans: Unlike the big bang waterfall model, the phased model is suitable if the work can be grouped into separate units, developed by different teams, and delivered in steps rather than everything at once. Consider a system that consists of 4 subsystems, each being developed by a separate team. In the end all 4 subsystems make up one complete system, which gives the flexibility of breaking the system down into 4 parts and allowing each to be developed separately. It is more like a collection of mini-projects run by different teams.

137. Explain Iterative model, Incremental model, Spiral model, Evolutionary model and
V-Model?

Ans: Iterative model: Iterative and incremental development is a cyclic software development process developed in response to the weaknesses of the waterfall model. It starts with initial planning and ends with deployment, with cyclic interaction in between.

Incremental model: The incremental model is an evolution of the waterfall model. The product is designed, implemented, integrated and tested as a series of incremental builds. It is a popular model of software evolution used by many commercial software companies and system vendors.
The incremental software development model may be applicable to projects where:

• Software requirements are well defined, but their realization may be delayed.

• The basic software functionality is required early.

Spiral model: The spiral model is a software development process combining elements
of both design and prototyping-in-stages, in an effort to combine advantages of top-
down and bottom-up concepts.

Evolutionary model: In the evolutionary model the product is developed as a series of versions (evolutions). An initial version is built around the core requirements and given to the users; their feedback drives the next evolution, and the product grows through successive versions until an acceptable system is reached. It is well suited to projects whose requirements are not fully understood up front.

V model: A framework to describe the software development life cycle activities from
requirements specification to maintenance. The V-model illustrates how testing activities
can be integrated into each phase of the software development life cycle.

138. Explain Unit testing, Integration tests, System testing and Acceptance testing?

Ans: Unit testing: In computer programming, unit testing is a software verification and validation method in which a programmer tests whether individual units of source code are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure.
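
As a minimal illustration (not from the original answer), the sketch below unit-tests a trivial function using Python's built-in unittest module; the function and test names are hypothetical.

# Minimal sketch of a unit test: the smallest testable part here is a single function.
import unittest

def add(a, b):
    """The 'unit' under test: a trivial function."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()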

Integration test: Integration testing (sometimes called Integration and Testing,


abbreviated "I&T") is the phase in software testing in which individual software
modules are combined and tested as a group. It occurs after unit testing and before
system testing. ...

System testing: System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Its aim is to verify and validate the behaviour of the entire system against the original system objectives. Software testing is a process that identifies the correctness, completeness, and quality of software.

Acceptance testing: In engineering and its various subdisciplines, acceptance testing is black-box testing performed on a system (e.g. software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery. ...

139. What’s the difference between system and acceptance testing?

Ans: System testing: Done by QA at the development end. It is done after integration is complete, all integration P1/P2/P3 bugs are fixed and the code is frozen; no more code changes are taken. Then all the requirements are tested and all the integration bugs are verified.

UAT: Done by QA (trained to behave like end users). All the requirements are tested and the whole system is verified and validated.

140- 16 MISSING (26 questions & answers missing)

17. What is CAR (Causal Analysis and Resolution)?

The purpose of Causal Analysis and Resolution (CAR) is to identify causes of


defects and other problems and take action to prevent them from occurring in the
future.

The Causal Analysis and Resolution process area involves the following:
 Identifying and analyzing causes of defects and other problems
 Taking specific actions to remove the causes and prevent the occurrence of
those types of defects and problems in the future
The advantage of CAR is that root causes are scientifically identified and their corrective and preventive actions are carried out. CAR needs to be performed at project initiation, at the end of each phase and of the project, and on a monthly basis. A fishbone diagram is one of the ways you can do CAR.

18. What is DAR (Decision Analysis and Resolution)?

The Decision Analysis and Resolution (DAR) process is a quick and effective
method of evaluating key decisions and proposed solutions. DAR can apply to all
levels of decisions made within a program or project. Typically it is applied to
management or technical decisions that are high-risk or that have a significant
consequence later in the project. DAR ensures a controlled decision process, rather
than a reactionary one for critical choices.

The DAR process has the following benefits:

• the process is applied at critical or high-risk decision points in a project


• alternatives are enumerated that cause the team to seek non-mainstream
choices resulting in a final decision that is thoroughly considered
• the final decision is captured in one or two pages and provides a reference point
to avoid future rehashing and wasted time

19. Can you explain the concept of baseline in software development?

Configuration management is the process of managing change in hardware, software, firmware, documentation, measurements, etc. As change requires an initial state and a next state, marking significant states within a series of several changes becomes important. Identifying significant states within the revision history of a configuration item is the central purpose of baseline identification.
Typically, significant states are those that receive a formal approval status, either explicitly or implicitly (approval statuses may be marked individually, when such a marking has been defined, or signified merely by association with a certain baseline). Nevertheless, this approval status is usually recognized publicly. Thus, a baseline may also mark an approved configuration item, e.g. a project plan that has been signed off for execution. In a similar manner, associating multiple configuration items with such a baseline indicates those items as being approved.
20. What is the software you have used for project management?

21. What does a project plan consist of?

Step 1: Project Goals


A project is successful when the needs of the stakeholders have been met. A
stakeholder is anybody directly, or indirectly impacted by the project.
As a first step, it is important to identify the stakeholders in your project. It is not always
easy to identify the stakeholders of a project, particularly those impacted indirectly.
Examples of stakeholders are:
• The project sponsor.
• The customer who receives the deliverables.
• The users of the project outputs.
• The project manager and project team.
Step 2: Project Deliverables
Using the goals you have defined in step 1, create a list of things the project needs to
deliver in order to meet those goals. Specify when and how each item must be
delivered.
Add the deliverables to the project plan with an estimated delivery date. More accurate
delivery dates will be established during the scheduling phase, which is next.
Step 3: Project Schedule
Create a list of tasks that need to be carried out for each deliverable identified in step 2.
For each task identify the following:
• The amount of effort (hours or days) required to complete the task.
• The resource who will carry out the task.
Step 4: Supporting Plans
This section deals with plans you should create as part of the planning process. These
can be included directly in the plan.

Human Resource Plan

Communications Plan

Risk Management Plan

22. When do you say the project has finished?

23. Can you explain what a PMO office is?

The Project Management Office (PMO) is the department or group that defines and
maintains the standards and processes related to project management within an
organisation.

For years, IT departments have struggled to deliver projects on time and within budget.
But with today’s emphasis on getting more bang for the buck, IT has to rein in projects
more closely than ever. That challenge has led many to turn to project management
offices (PMOs) as a way to boost IT efficiency, cut costs, and improve on project
delivery in terms of time and budget.
While not a new solution, the trend toward implementing PMOs to instill much-needed
project management discipline in IT departments is spreading fast. "More people lately
have been talking to me about PMOs than they have in the last 10 years," says Don
Christian, a partner at PricewaterhouseCoopers. PMOs can help CIOs by providing the
structure needed to both standardize project management practices and facilitate IT
project portfolio management, as well as determine methodologies for repeatable
processes. The Sarbanes-Oxley Act—which requires companies to disclose
investments, such as large projects, that may affect a company’s operating performance
—is also a driver, since it forces companies to keep closer watch on project expenses
and progress. W.W. Grainger, an industrial products distributor, has a PMO that
"enables us to complete more projects on time and on budget with fewer resources,"
says Tim Ferrarell, senior vice president of enterprise systems.
But PMOs are no panacea for project challenges, including battling today’s tepid
business climate. For one thing, there is no uniform recipe for success—it’s important
that the PMO structure closely hews to a company’s corporate culture. PMOs also won’t
give organizations a quick fix or deliver immediate, quantifiable savings. And companies
with PMOs report that they don’t necessarily yield easy to use cost-saving benchmarks
and performance metrics. In a survey conducted by CIO and the Project Management
Institute (PMI), 74 percent of respondents said that lower cost was not a benefit of their
PMOs.
However, survey respondents still reported positive benefits from the formation of a
PMO, even if quantifiable ROI is elusive. Out of 450 people surveyed, 303, or 67
percent, said their companies have a PMO. Of those with a PMO, half said the PMO
has improved project success rates, while 22 percent didn’t know or don’t track that
metric, and 16 percent said success rates stayed the same. There is also a strong link
between the length of time a PMO has been operating and project success rates: The
longer the better. While 37 percent of those who have had a PMO for less than one year
reported increased success rates, those with a PMO operating for more than four years
reported a 65 percent success rate increase. The top two reasons for establishing a
PMO, according to the survey: improving project success rates and implementing
standard practices. In a finding that indicates PMOs’ importance, a survey-leading 39
percent of respondents said the PMO is a strategic entity employed at the corporate
level, meaning it sets project standards across the enterprise and is supported by upper
managers.

24. How many members in your team you have handled?

25. Is Gantt chart a project plan?

NO

A Gantt chart is a chart that depicts progress in relation to time, and is often used in planning and tracking a project.

26. Two resources are having issues how do you handle the same?

27. What is a change request?

A change request is a document containing a call for an adjustment of a system; it is of great importance in the change management process. A change request is not raised for something as minor as a wording change in a letter.
Change requests normally crop up during the client acceptance testing phase of a project. They are changes to the system that were not included within the original scope, which normally means that they are chargeable. Compare this with a bug, which is a deviation from requirements already in scope rather than a change to them.

(CR) A formally submitted artifact that is used to track all stakeholder requests (including new features, enhancement requests, defects, changed requirements, etc.) along with related status information throughout the project lifecycle. ...

28. How did you manage change request in your project?

29. Can you explain traceability matrix?

A requirement traceability matrix is a table matching the functional requirements with the prepared test cases. Its importance is to ensure that all requirements are covered and that all changes made to the requirements are tracked and covered in the test cases.

OR

The concept of a traceability matrix is very important from the testing perspective. It is a document which maps requirements to test cases. By preparing a traceability matrix, we can ensure that we have covered all the required functionalities of the application in our test cases. Some of the features of the traceability matrix:

It is a method for tracing each requirement from its point of origin, through each development phase and work product, to the delivered product.

It can indicate, through identifiers, where the requirement is originated, specified, created, tested, and delivered.

It will indicate, for each work product, the requirement(s) this work product satisfies.

It facilitates communications, helping customer relationship management and commitment negotiation.
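
As a small illustration (not part of the original answer), the sketch below represents a traceability matrix as a mapping from requirement IDs to test case IDs and reports any requirement left uncovered; all requirement and test case identifiers are hypothetical.

# Hedged sketch: a requirement-to-test-case traceability matrix as a dictionary.
traceability_matrix = {
    "REQ-001": ["TC-101", "TC-102"],  # covered by two test cases
    "REQ-002": ["TC-201"],
    "REQ-003": [],                    # not yet covered by any test case
}

def uncovered_requirements(matrix):
    """Return the requirement IDs that have no test case mapped to them."""
    return [req for req, cases in matrix.items() if not cases]

if __name__ == "__main__":
    for req in uncovered_requirements(traceability_matrix):
        print("Requirement", req, "is not covered by any test case")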

30. What is configuration management?

The dynamic nature of most business activities causes software or system changes. Configuration management, also known as change control, is the process of keeping track of all changes made throughout the software life cycle. It involves coordination and control of requirements, code, libraries, design, test efforts, documentation, etc. The primary objective of CM is to get the right change installed at the right time.

31. What is CI?


Continuous Integration is a software development practice where members of a team integrate their work frequently; usually each person integrates at least daily, leading to multiple integrations per day. Each integration is verified by an automated build (including tests) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.
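
As an illustration of the practice (not from the original answer), the sketch below shows the kind of step a CI server runs on every integration: pull the latest code, run the automated test suite, and fail the build on any error. The commands and paths are hypothetical placeholders; a real CI server would typically drive equivalent steps from its own configuration.

# Hedged sketch of an automated integration build step.
import subprocess
import sys

STEPS = [
    ["git", "pull", "--ff-only"],          # fetch the latest integrated code
    ["python", "-m", "pytest", "tests/"],  # run the automated test suite
]

def run_ci_steps(steps):
    """Run each step in order; stop and report failure on the first error."""
    for command in steps:
        result = subprocess.run(command)
        if result.returncode != 0:
            print("CI step failed:", " ".join(command))
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if run_ci_steps(STEPS) else 1)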

32. Define stakeholders?

Person, group, or organization that has direct or indirect stake in an organization


because it can affect or be affected by the organization's actions, objectives, and
policies. Key stakeholders in a business organization include creditors, customers,
directors, employees, government (and its agencies), owners (shareholders), suppliers,
unions, and the community from which the business draws its resources. Although
stake-holding is usually self-legitimizing (those who judge themselves to be
stakeholders are de facto so), all stakeholders are not equal and different stakeholders
are entitled to different considerations. For example, a firm's customers are entitled to
fair trading practices but they are not entitled to the same consideration as the firm's
employees.

33. Can you explain versioning?

In order to provide support for multiple users creating and updating large amounts of
geographic information in an enterprise geodatabase, ArcSDE provides an editing
environment that supports concurrent multiuser editing without creating multiple copies
of the data. This editing environment is called versioning.

Versioning involves recording and managing changes to a multiuser geodatabase by


creating a version of the database—an alternative, independent, persistent view of the
database that does not involve creating a copy of the data and supports multiple
concurrent editors. Versioning can only be implemented on ArcSDE geodatabases
hosted on a database management system (DBMS) platform that supports concurrent
multiuser editing. Personal geodatabases, on the other hand, which support only single-
user editing, do not support versioning.

34. Can you explain the concept of sign off?

35. How will you start a project?

Let's think again about what a project is: You might remember that we defined that a
project requires a start date in order that it can be called a project. Similarly, you could
say that a project needs a kick-off meeting or a start workshop in order to work, or in
other words, to really get off the ground. The main reason behind this is that even
though you might know what you want to achieve within your project, your team
members may not (yet): You have to explicitly communicate these facts to your team.
The people you are going to work with need at least to know what you are up to (the
goals of the project), who will be doing what (responsibilities and roles) and how
project administration and controlling will work. In addition, if the goals are not clear
enough you might also want to specify a number of "not-goals", i.e., things you
specifically do not want to be done within the scope of the project.

When deciding on overall responsibilities you should not forget that this is often more
about leadership, communication, organizational skills and trust rather than about
technical competency. Regarding project administration and controlling methods, it is
important that you choose the "right" method for your organization/project size and type:
This is often called "adaptive project management".

It is especially important that the project management methods you choose are not
oversized for your project: The people working with you need to understand why they
have to do the things they are required to do. Otherwise, they will work against you and
the methods you chose, because they think that the methods do not make sense and
that they are not really needed. On the other side, if everyone on the team
understands the methods and why they are needed, your team members will probably
even make suggestions during the project on how the process you have chosen can be
improved.

Finally, an explicit kick-off meeting or start workshop makes sense in order to really
"rally the troops", i.e., it can be used to motivate people and to formally give the "Go"
for the project.

36. What is an MOU?

A memorandum of understanding is an agreement between two parties in the form of a


legal document. It is not fully binding in the way that a contract is, but it is stronger and
more formal than a traditional gentleman's agreement. Sometimes, a memorandum of
understanding is used as a synonym for a letter of intent, particularly in private law. A
letter of intent expresses an interest in performing a service or taking part in an activity,
but does not legally obligate either party.

In international public law, a memorandum of understanding is used frequently. It has


many practical advantages when compared with treaties. When dealing with sensitive or
private issues, a memorandum of understanding can be kept confidential, while a treaty
cannot.

37. What were the deliverables in your project?

38. Can you explain your project?

39. Do you also participate in technical activities?

40. How did you manage code reviews?


41. You have a team member who does not meet his deadlines; how do you handle it?

42. Did you have project audits if yes how was it handled?

43. What is a non-conformance report (NCR)?

A report raised during a quality system audit whenever a process violates the quality system standards, policies or procedures, or fails to deliver consistent customer satisfaction and continual improvement.

Examples:

A measured part dimension is spec'd to be .250" +/- .002. It fails as measured at .257". It is a non-conforming part and must be reported.

An operator fails to follow procedure.

A regulatory report is not filed on time.

The non-conformance report includes who, what, where, and when. The report generally initiates an investigation into the root cause (why). It generally escalates to CAPA (Corrective and Preventive Action).

44. How did you estimate your project?

Below are 12 ideas for boosting the accuracy of your estimates:

Maintain an ongoing "actual hours" database of the recorded time spent on each aspect of your projects. Use the data to help estimate future projects and identify the historically accurate buffer time needed to realistically perform the work.

Create and use planning documents, such as specifications and project plans.

Perform a detailed task analysis of the work to be performed.

Use a "complexity factor" as a multiplier to determine whether a pending project is more or less complex than a previous one.

Use more than one method to arrive at an estimate, and look for a midpoint among all of them.

Identify a set of caveats, constraints, and assumptions to accompany your calculations, which would bound the conditions under which your estimates would be meaningful. (Anything that occurs outside of those constraints would be considered out of scope.)

If the proposed budget or schedule seems inadequate to do the work, propose adjusting upward or downward one or more of the four project scoping criteria: cost, schedule, quality, and features.

Consider simpler or more efficient ways to organise and perform the work.

Plan and estimate the project rollout from the very beginning so that the rollout won't become a chaotic scramble at the end. For instance, you could propose using a minimally disruptive approach, such as a pilot programme or a phased implementation.

In really nebulous situations, consider a phase-based approach, where the first phase focuses primarily on requirements gathering and estimating.

Develop contingency plans by prioritising the deliverables right from the start into "must-have" and "nice-to-have" categories.

Refer to your lessons-learned database for "20:20 foresight" on new projects, and incorporate your best practices into future estimates.

In conclusion, by using a set of proactive estimating techniques to scope, plan, and constrain your project conditions, you can dramatically improve your estimating practices, reduce and mitigate risks, and greatly increase your project success rate!

45. How did you motivate your team members?

The success of your business or organization depends largely on the people that make
up your team. Whether they are salespeople, customer service representatives,
executive managers or service providers, your team can make or break the success of
your organization or business.

Therefore, motivating your team to continually meet and exceed goals and expectations
is essential to the overall success of your organization. How to motivate your team?
Here are seven useful tips that will help you keep your team motivated and working
hard to achieve your organization's goals.

1. Set clear and realistic goals. The first essential step towards meeting your team's
goals and inspiring your team to participate and achieve, is to set clear and realistic
goals. Set both short-term and long-term goals and build on each success as a goal is
achieved.
2. Clearly communicate goals and expectations. In order for your team to meet and
exceed goals and expectations, they must have a solid understanding of what they are
working to obtain. Clearly communicate the goals and expectations that have been
established in order to set your team up for success.

3. Provide all necessary tools. Morale and motivation can only get your team so far.
To maintain a happy team that is motivated to work hard to achieve their goals, ensure
that they have the proper tools for the job. This may include equipment, training,
supplies, support or coaching.

4. Use work plans. Work plans are incredibly effective tools that act both as to do lists
complete with goals and deadlines, and well-organized agendas for follow up meetings
and check-ins.

5. Stay connected and follow up. Schedule regular check-ins with each team
member, as well as the whole team. This is a great way to help team members maintain
accountability to the team, as well as an easy way to immediately discover if anyone is
falling behind on their responsibilities or facing challenges that need to be addressed.

6. Involve your team setting goals and creating work plans. There is no better way
to establish ownership of goals and expectations than to involve your team in setting the
goals and establishing the expectations that they will then work together to meet.

7. Celebrate successes and use incentives. Celebrate both individual and team
successes whenever goals are met or exceeded. Incentives can range from small gifts
or public praise, to workplace perks and recognition. Incentives don't have to be
expensive to show your team that you appreciate them and that their efforts are noticed.

46. Did you create leaders in your team if yes how?

47. How did you confirm that your modules are resource independent?

48. Was your project show cased for CMMI or any other project process
standardization?

49. What are the functions of the Quality Assurance Group (QAG)?

The quality assurance group is typically responsible for:

A. ensuring that the output received from system processing is complete.
B. monitoring the execution of computer processing tasks.
C. ensuring that programs, program changes and documentation adhere to established standards.
D. designing procedures to protect data against accidental disclosure, modification or destruction.

The primary responsibility of a quality assurance group is option C: ensuring adherence to established standards.

50. Can you explain milestone?

A milestone is a significant event in the project, usually the completion of a major deliverable. A milestone, by definition, has a duration of zero and no effort. Milestones are essential to manage and control a project, but there is no task associated with one (although preparing a milestone can involve significant work). Usually a milestone is used as a project checkpoint to validate how the project is progressing and to revalidate the work.

Example:

Project Plan Odessa Mobile Technology Project

Project Approach

This section should outline the way you will roll out the technology, including the
highest level milestones.

For example:

Phase I: Secure agreement with vendors (L3 and Tiburon)

Phase II: Order/Install Equipment

Phase III: Install/Test Software

Phase IV: Conduct Hardware/Software Testing

Phase V: Conduct Training

Phase VI: Implement ARS/AFR

51. How did you do assessment of team members?


52. What does entry and exit criteria mean in a project?

Entry criteria – ensure that the proper environment is in place to start the test process of a project, e.g. all hardware/software platforms are successfully installed and functional, and the test plan and test cases are reviewed and signed off.

Exit criteria – ensure that the project is complete before exiting the test stage, e.g. planned deliverables are ready, high-severity defects are fixed, and documentation is complete and updated.
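
As a tiny illustration (not part of the original answer), the sketch below evaluates an exit checklist represented as a dictionary of criterion names to booleans; the criteria listed are just the examples above, and the function name is hypothetical.

# Hedged sketch: evaluating exit criteria as a simple checklist.
exit_criteria = {
    "planned deliverables are ready": True,
    "high-severity defects are fixed": False,
    "documentation is complete and updated": True,
}

def criteria_met(criteria):
    """Return (all_met, list_of_unmet_criteria)."""
    unmet = [name for name, satisfied in criteria.items() if not satisfied]
    return len(unmet) == 0, unmet

if __name__ == "__main__":
    ok, unmet = criteria_met(exit_criteria)
    if ok:
        print("Exit criteria satisfied - the test stage can be closed.")
    else:
        print("Cannot exit the test stage; unmet criteria:", ", ".join(unmet))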

53. How much are you as leader and how much are you as PM?

54. How do you handle conflicts between peers and subordinates?

55. In your team you have highly talented people how did you handle their motivation?

Highly talented people have very different values and motivation from the majority of
people. More is expected of them and they expect more in return. They are often high-
impact but high-maintenance too. They think differently (and faster). They get bored
more readily. They need different kinds of challenges. They can deal with more
complexity but are more complex in themselves. They get frustrated more readily and
express themselves readily.

They are a different kind of person - and they need a different kind of management. The
manager of a talented team needs to learn quickly how to spot and respond to talent,
how to encourage it to grow, whilst gently directing its course. The manager of talent
needs to be able to cope with the fact that certain members of the team may be in some
respects brighter and more able than they are - and they need to be comfortable about
that. The manager of a talented team needs to completely understand what role they
play in the team's success and communicate that subtly but effectively. The manager
must be respected and be the person that the talented individual is happy to be led by.

Additional information:
What do we mean by talented? Proposition: there is something that makes non-
conformity or independent-mindedness an essential ingredient in our contemporary
definition of talent. Proposition: defining talent simply as above average performance
doesn’t get us very far. By whose definition? Different people may have different
definitions? In particular, the managers and the managed may have different definitions.
Is the label ‘talent’ purely subjective? Or is there some pattern in how people define it?
Can there be an objective basis for calling someone ‘talented’? What synonyms are
used for ‘talent’? Is the definition of talent essentially contextual, a function of time and
place? Different domains are likely to define talent differently Are there some domains
where the word talented tends not to be used? When did you ever hear ‘talented’
applied to a clerk or administrator, secretary, labourer, taxi driver, tax officer…? Are
definitions of talent changing? If so, how? How has talent been defined through history?
(Any off-beat definitions?) Are there various discernible types of talent? If so, what are
the types and what do they consist of? What are the common ingredients? Is versatility
an essential ingredient in talent? Or speed of learning? 3. How do you manage these
people? Why are they difficult to manage?

Proposition: you’ve spent years empowering people; now you’ve got


empowered/talented people, you’ve got a new set of challenges! (PS This doesn’t mean
you shouldn’t empower people!) Proposition: talent can be double-edged: it’s good to
have talented people working for you, but they can be problematic to manage
Proposition: talented people tend to be highly motivated but what motivates them can
sometimes be at odds with managerial/organizational priorities and requirements.
Hence a tension to be managed. What are talented people looking for from work (these
days)? What do they value? Respect? Freedom of action? What? What values do they
tend to hold? What will they/won’t they tolerate? Any patterns here? How do you
motivate talented people? Perhaps a better question is how do you ensure that you
don’t demotivate talent? Do talented people tend to be highly motivated? What are the
problems of managing talented people? Which are the most common/most typical
problems? Examples of the challenges of managing talented people from different
domains would be interested; e.g. not just business organizations, but sport, education,
science, entertainment…

What allowances get made for talented people? With what effects? What allowances
should be made? Is Belbin’s idea of ‘allowable weakness’ more trouble than it’s worth?
What can you do about it? Manage expectations Continue to develop their talent
Manage them on the move Trust them Talk to them Get clued up How do managers
tend to tackle these problems? (What is current practice?) With what effects? (as
respectively assessed by the managers, the talent, others…) How should managers
tackle these problems? Why? What’s the rationale? What’s the evidence that other
approaches work? Manage Expectations One of the ways talent gets wasted is through
the unrealistic expectations which are held of it. (Something about the time dimension in
here.) We all think we’re talented. Or do we? Is it part of a manager’s job to let people
know that they are talented? Or that they are not? Proposition: there are higher
expectations of talent; more is expected of them and they expect more in return, that’s
an aspect of what makes them different, and perhaps the essence of what makes them
difficult to manage. So what do managers these days expect of talented people? Part of
the price of being reckoned to have talent is that you are given bigger, more difficult,
more demanding jobs AND expected to do them better and faster than others. Is one of
the ways in which talent is squandered by managers…not managing expectations about
the rate at which talent is going to develop. And what do talented people expect of their
managers?
What do managers perceive that they get? And how do they feel about that? What do
talented people perceive that they get? And how do they feel about that? Example
cases of ‘good’ and ‘bad’ manager-talent relationships? How do the parties respectively
define good and bad? Continue to develop their talent Talent is not an entity but an
incremental/emergent property…needs investment and maintenance. (Some interesting
connections with Carol Dweck’s work here.) Proposition: a key part of the role in
managing talent is to give it time and space (‘fighter cover’), and scaffold its
development. Talent is seldom a given, it has to be developed. Managers have a role in
developing (realising) rather than stifling talent. How? What are the key things to do to
develop and nurture talent, to get the most out of it? This still leaves prime, but not sole
responsibility for developing talent with the talented person themselves. Manage them
on the move

Proposition: there is something fluid, mobile, dynamic, quick about talent which requires
that it is managed in a fluid, mobile, dynamic (clued up) way. Trust them Where does
trust come into all this? Proposition: trust is an essential ingredient in the greater give
and take required to manage – and get the best out of – talented people. (Link to
Herriot’s stuff on how you can’t get innovation or change from people if they don’t trust
you…and you look to your talented people to lead innovation and change.) Talk to them
Proposition: dialogue is the key to developing the relationship and the trust. (So you’d
better be skilful at managing dialogue.) Get clued up How do the clued up angles play
into this issue of managing talent?

Proposition: one of the characteristics of talented people is that they like to put ideas
into action. Proposition: talented people tend to think differently to most people. So you
need good thinking to understand them. Proposition: talented people can handle more
complexity (and may be more complex themselves) but may consequently be harder for
others to understand Proposition: there is a lot of politics around talented people!!!!!
Proposition: talented people often express themselves differently. You need to talk their
language (and certainly not talk down to them.) Proposition: they are individuals (by
definition?) so you need to develop a tailored management approach – pay attention to
the clues 4. How does having talented people change a manager’s role? Managing
talent is not a bolt-on activity; it’s part of the day job as a manager. Proposition: having
talented people makes a manager’s role more ambiguous, more fluid, more dynamic.
Because talent will push the boundaries, the manager often experiences a lack of role
clarity, a more confused relationship than with other staff. Proposition: therefore
managers have to work harder and more continuously at the relationship with their
talented people. Paradoxically, the talented ones may take up the most time! Or may
need periodic bouts of intense attention (a different pattern of managing than other
staff.)

Proposition: managers have to negotiate and contract with talented people in a way/to
an extent that they don’t with other staff. Talented people are more demanding, though
you get more in return. Proposition: talent, people who are particularly good at
something which is of value for the organisation, present two fundamental challenges
for any manager:- balancing the value of the talented person’s individualism with the
need for control balancing the value of the talented person’s individualism with the need
for teamwork How does having talented people affect the balancing act that any
manager has to maintain between the three perennial requirements for control, co-
operation and autonomy? When do these dilemmas occur? How does this relate to the
situations where managers report finding talented people problematic? How is balance
achieved in practice? What are the options and choices? What are the consequences
and associated with each?

Co-operation Control Autonomy Proposition: it’s not all down to the manager... Talented
people have a responsibility, if they want to realise their talents fully, to recognise and
engage with (though certainly not just give in to) the demands of the context they are
working in. These demands include some requirement for control and co-operation. You
can’t have pure autonomy. It’s only on offer if you go and work for yourself (and actually
not even then!) But the manager does have a particular responsibility, which is to make
sure that is understood and not to shirk the difficult discussions (too often done as
managers accommodate difficult talents to placate and keep them on-side) How do you
manage your whole team? Can you have too much talent in your team? (reflections on
what Belbin found out about team workers and specialists) How do you deal with those
without talent? How do/should managers differentiate their role from that of their
talented people? How can a manager make a distinct and additive contribution to what
talent does? As a ‘talented’ manager what do you want from work? Do you have all the
same issues as we have been describing for talent? What are the consequences of
that? 5. What does this all mean for you? Proposition: Being effective as a manager of
talent will be a combination of acting appropriately with your team and dealing with you
own shit. (Nothing new there then!) How much do you know about your team? Who has
talent and who doesn’t? More specifically what are their capabilities, motivations etc.
What about you? How well do you manage you team now? What evidence have you
got? What do you do well/badly? Under what circumstances? Have you got talent?
What are the consequences? How are you being managed? How could you improve
your managers performance? So what are you going to do differently then? By when
etc. What will help and hinder etc. etc.

Reviews "It is an easy-to-read text, written in a conversational style, refreshingly free


from jargon and pomposity. Any managers who want to get the best from whatever
talent exists in their teams would do well to read it." People Management magazine
"Managing Talented People is worth a read. It explores important issues directly
affecting the creative industry. .. Its structure is very easy to dip into - with clear
headings, bullet-points and quotes - and has an easy-going style." Design Week
"...contains bullet points and definitions of terms, making it easy to absorb the
information." Supply Management

56. How can you balance between underperforming and outperforming people?

57. You need to make a choice between delivery and quality; what's your take?
58. Define risk?

There may be external circumstances or events that must not occur if the project is to be successful. If you believe such an event is likely to happen, then it is a risk. Identifying something as a risk increases its visibility and allows a proactive risk management plan to be put into place.

If an event is within the control of the project team, such as having testing complete by a certain date, then it is not a risk. If an event has a 100 percent chance of occurring, then it is not a risk either, since there is no "likelihood" involved (it is just a fact).

Examples of risks might be "A reorganization may result in key people being reassigned" or "The new hardware may not be able to handle the expected sales volume."

Risk Analysis:

Figure: - Risk analysis flow. Inputs such as the test plan, test item tree and risk strategy feed risk identification; identified risks are assessed (for example through testing and inspection) and placed on a risk matrix of cost and probability; risk mitigation, risk reporting and risk prediction follow, with test metrics produced along the way.

59. What is risk break down structure?

Risk Breakdown Structure

A risk breakdown structure is a hierarchical list of risks. The risk breakdown structure helps to identify and manage project risks. In RiskyProject you can define different risk breakdown structures at the project level and for each separate task or resource. Each risk is defined by its chance of occurrence, outcome (increase duration, cost, cancel task, etc.) and time of occurrence.

The risk breakdown structure is implemented in the Global Risks view as well as the Local Risks tab of RiskyProject Lite and RiskyProject Professional.

RiskyProject also has a number of risk templates. Risk templates are standard risk breakdown structures that allow you to quickly and simply add risks to tasks and projects. A number of templates are included in the RiskyProject package. In addition, it is easy to create your own templates, which you can use for your own projects.

Software Development Risk Breakdown Structure

Risks affecting the whole company/division
• Budgetary risks
• Environmental risks
• Legal risks

Resource risks
• Lack of knowledge of the specific area
• Lack of knowledge of tools
• Staff turnover
• Risks related to the competence of the management

Requirement/client relationship risks
• New or updated requirements
• Risks related to interpretation of requirements
• Results are not accepted by the client
• Risks related to communication with the client

Risks related to hardware or IT infrastructure
• Hardware performance or other parameters are not suitable for the project
• Risks related to communication infrastructure

Problems with development tools
• Selected software tools are not suitable for a particular task
• Selected third-party tools are not suitable for a particular task

Other risks
• Critical bugs are discovered
• Configuration management issues
• Chosen software architecture is not suitable
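
To make the idea concrete, here is a small sketch (not tied to RiskyProject or any particular tool) that represents a risk breakdown structure as nested data and ranks individual risks by exposure, using the common approximation exposure = probability x impact; all category names, probabilities and impact values are hypothetical.

# Hedged sketch: a risk breakdown structure as nested data. Each leaf risk
# carries a probability (0-1) and an impact estimate (e.g. delay in days).
risk_breakdown_structure = {
    "Resource risks": {
        "Staff turnover": {"probability": 0.3, "impact": 20},
        "Lack of knowledge of tools": {"probability": 0.5, "impact": 10},
    },
    "Requirement/client relationship risks": {
        "New or updated requirements": {"probability": 0.6, "impact": 15},
    },
}

def rank_risks_by_exposure(rbs):
    """Flatten the hierarchy and sort risks by exposure = probability * impact."""
    exposures = []
    for category, risks in rbs.items():
        for name, attrs in risks.items():
            exposures.append((attrs["probability"] * attrs["impact"], category, name))
    return sorted(exposures, reverse=True)

if __name__ == "__main__":
    for exposure, category, name in rank_risks_by_exposure(risk_breakdown_structure):
        print(category, "/", name, "- exposure:", round(exposure, 1))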

60. How did you plan your risk?

Risk Management

In rushing to take advantage of SMS features, organizations might overlook the risks
involved in running a technically complex implementation project that touches nearly
every component of your infrastructure.

You must actively manage any risk. To manage risks effectively, identify the risks, and
then design contingency plans for dealing with those risks.

Also, it is important to perform a risk assessment and to re-evaluate your risk


management plan after you complete each phase of the project.

Risk Analysis

To conduct a comprehensive risk analysis, use a system such as Microsoft Readiness


Framework, available through Microsoft Consulting Services.

Microsoft Readiness Framework is a guide created and used by Microsoft partners to


provide an approach for organizations in preparing their people and processes for
technology adoption. The Microsoft Readiness Framework risk model helps you
manage risks that are specific to technology readiness efforts and projects that prepare
an organization to fully adopt new technology, and to realize the business benefits
driven by this change.
Avoiding Risks

The best way to avoid risks is to plan your SMS implementation carefully. For example,
using the default settings provided by Express Setup to install SMS presents
considerable risks to your computing environment. The default settings cannot
guarantee a successful deployment for every organization. Properly planning
configuration settings before deploying SMS in your production environment is the
preferred method of performing an SMS installation.

Table 7.2 outlines some potential risks that you should be aware of before completing
your project plan.

Table 7.2 Risk Avoidance and Best Practices

Action: Deploying SMS without planning.
Risk: Hindered network infrastructure stability, reduction in available bandwidth, reduced performance due to improper server sizing, and the potential for SMS to collect data that is not valid.
Best practice: Create a project plan and follow the planning and installation guidelines in this book or in the Microsoft Solutions Framework documentation.

Action: Using the Express Setup feature to install an SMS site server without planning or considering the customizable SMS and Microsoft SQL Server settings.
Risk: Network infrastructure instability, performance issues, and productivity interruptions due to a reduction in available network bandwidth.
Best practice: For large- and medium-sized organizations, use Custom Setup unless you are evaluating SMS within a lab environment that is physically isolated from your production environment.

Action: Not testing in a lab environment before deployment.
Risk: Interoperability problems and reduced ability to provide support staff with needed skills and experience, or to eliminate the costs associated with incorrect design, which could lead to a costly redeployment.
Best practice: Thoroughly test your SMS deployment, run a pilot project, and document your results before deploying any SMS component on your production network.

Action: No use of change control or a change management system.
Risk: Inability to troubleshoot system failure if changes are not tracked.
Best practice: Develop a formal change management process and tracking system to ensure that changes are made only where necessary to fulfill objectives, and that all implications and risks are understood in advance.

Action: Not planning for recovery.
Risk: SMS data loss and a complex recovery process.
Best practice: Plan for recovery as you plan your deployment, not after you have already deployed SMS.

Action: Not understanding and planning for SMS security policies.
Risk: Security breaches - unauthorized access of client computers or malicious destruction of client computers.
Best practice: Plan for security early, so that you can ensure the security of your computing environment.

Action: Not planning for training and education.
Risk: Improper installation and use of SMS, failure to meet requirements, and poor support for end users, all of which can result in a negative reputation for SMS in the organization.
Best practice: As you assign roles to your SMS project staff and trainers, ensure that these individuals are trained in the areas of expertise needed for planning, installing, supporting, and maintaining SMS.

Action: Not planning and carrying out a good communications strategy.
Risk: Insufficient support from management, colleagues, end users, or other groups in the organization.
Best practice: Plan a schedule for informing the SMS team and all other groups of planning and deployment progress.

Change Control and Change Management

Most of your significant project design changes are likely to occur as the result of
testing. In the pre-planning phase, begin thinking about how you want to control and
manage change throughout the planning and deployment phases of the project.
Change control requires tracking and reviewing changes to your implementation plan
made during testing cycles and after deployment. Change management requires testing
potential system changes in a lab environment before implementing them in your
production environment. By identifying all affected systems and processes before a
change is implemented, you can mitigate or eliminate potential adverse effects.

61. What is DR, BCP and contingency planning?

A disaster recovery plan (DRP) - sometimes referred to as a business continuity plan


(BCP) or business process contingency plan (BPCP) - describes how an organization is
to deal with potential disasters. Just as a disaster is an event that makes the
continuation of normal functions impossible, a disaster recovery plan consists of the
precautions taken so that the effects of a disaster will be minimized and the organization
will be able to either maintain or quickly resume mission-critical functions. Typically,
disaster recovery planning involves an analysis of business processes and continuity
needs; it may also include a significant focus on disaster prevention.

Disaster recovery is becoming an increasingly important aspect of enterprise


computing. As devices, systems, and networks become ever more complex, there are
simply more things that can go wrong. As a consequence, recovery plans have also
become more complex, as Jon William Toigo (the author of Disaster Recovery
Planning) notes. For example, fifteen or twenty years ago, if there was a threat to systems
from a fire, a disaster recovery plan might consist of powering down the mainframe and
other computers before the sprinkler system came on, disassembling components, and
subsequently drying circuit boards in the parking lot with a hair dryer. Current enterprise
systems tend to be too large and complicated for such simple and hands-on
approaches, however, and interruption of service or loss of data can have serious
financial impact, whether directly or through loss of customer confidence.

Appropriate plans vary from one enterprise to another, depending on variables such as
the type of business, the processes involved, and the level of security needed. Disaster
recovery planning may be developed within an organization or purchased as a software
application or a service. It is not unusual for an enterprise to spend 25% of its
information technology budget on disaster recovery.

Nevertheless, the consensus within the DR industry is that most enterprises are still ill-
prepared for a disaster. According to the Disaster Recovery site, "Despite the number of
very public disasters since 9/11, still only about 50 percent of companies report having a
disaster recovery plan. Of those that do, nearly half have never tested their plan, which
is tantamount to not having one at all."

62. Can you explain WBS?


Introduction

Company owners and project managers use the Work Breakdown Structure (WBS) to
make complex projects more manageable. The WBS is designed to help break down a
project into manageable chunks that can be effectively estimated and supervised.

Some widely used reasons for creating a WBS include:

· Assists with accurate project organization

· Helps with assigning responsibilities

· Shows the control points and project milestones

· Allows for more accurate estimation of cost, risk and time

· Helps explain the project scope to stakeholders

A work breakdown structure is just one of many project management forms and
templates.

Constructing a Work Breakdown Structure

To start out, the project manager and subject matter experts determine the main
deliverables for the project. Once this is completed, they start decomposing the
deliverables they have identified, breaking them down to successively smaller chunks of
work.

"How small?" you may ask. That varies with project type and management style, but
some sort of predetermined “rule” should govern the size and scope of the smallest
chunks of work. There could be a two weeks rule, where nothing is broken down any
smaller than it would take two weeks to complete. You can also use the 8/80 rule, where
no chunk would take less than 8 hours or longer than 80 hours to complete.
Determining the chunk size “rules” can take a little practice, but in the end these rules
make the WBS easier to use.

Regarding the format for WBS design, some people create tables or lists for their work
breakdown structures, but most use graphics to display the project components as a
hierarchical tree structure or diagram. In the article Five Phases of Project
Management, author Deanna Reynolds describes one of many methods for developing
a standard WBS.

What is a Work Breakdown Structure Diagram?

A WBS diagram expresses the project scope in simple graphic terms. The diagram
starts with a single box or other graphic at the top to represent the entire project. The
project is then divided into main, or disparate, components, with related activities (or
elements) listed under them. Generally, the upper components are the deliverables and
the lower level elements are the activities that create the deliverables.
Information technology projects translate well into WBS diagrams, whether the project is
hardware or software based. That is, the project could involve designing and building
desktop computers or creating an animated computer game. Both of these examples
have tasks that can be completed independently of other project tasks. When tasks in a
project don’t need to be completed in a linear fashion, separating the project into
individual hierarchical components that can be allotted to different people usually gets
the job done quicker.

One common view is a Gantt chart. In a recent article, Joe Taylor, Jr. discusses the Top
Ten Benefits of a Gantt Chart.

Simple WBS Examples

Building a Desktop Computer - Say your company plans to start building desktop
computers. To make the work go faster, you could assign teams to the different aspects
of computer building, as shown in the diagram E-1 shown below. This way, one team
could work on the chassis configuration while another team secured the components.

Creating an Animated Computer Game – Now we switch to software project


management, where you start up a computer animation company. To be the first to get
your computer game on the market, you could assign teams to the different aspects of
writing, drawing and building animated computer games, as shown in diagram E-2
below. Perhaps your key programmer is also a pretty good artist. Rather than have him
divide his time and energy by trying to do both tasks, you will realize faster results if the
programmer concentrates on programming while his cousin Jenny draws the scenery.
Conclusion

At the risk of sounding melodramatic, the efficacy of a project’s Work Breakdown


Structure can determine that project’s success. The WBS provides the foundation for
project planning, cost estimation, scheduling and resource allocation, not to mention risk
management.

63. Can you explain WBS numbering?

The first number in a WBS denotes the project. For instance, in the figure 'WBS Numbering' we have shown the number '1' as the project number, which is then extended according to level (1.1, 1.2, 1.1.1, and so on). Numbering can be numeric, alphanumeric, or a combination of both. The figure 'Different Project Number' shows a project whose number is '528'.

Figure: - WBS Numbering

Figure: - Different Project Number
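
As a minimal illustration of this numbering scheme (the project structure below is hypothetical, not the one in the figures), a small Python sketch can assign WBS numbers by appending each child's position to its parent's number:

# A minimal sketch of WBS numbering: the project is "1"; each level
# appends its position, giving 1.1, 1.2, 1.2.1 and so on.
def print_wbs(name, children, number="1"):
    print(number, name)
    for i, (child_name, grandchildren) in enumerate(children.items(), start=1):
        print_wbs(child_name, grandchildren, f"{number}.{i}")

project = {
    "Requirements": {},
    "Design": {"Master screens": {}, "Transactional screens": {}},
    "Build and test": {},
}
print_wbs("Accounting project", project)
# Output:
# 1 Accounting project
# 1.1 Requirements
# 1.2 Design
# 1.2.1 Master screens
# 1.2.2 Transactional screens
# 1.3 Build and test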

64. How did you do resource allocation?

There are two steps for doing resource allocation: -


Break up the project into a WBS and extract the tasks from it. For instance, the figure 'Task from WBS' below shows how we have broken the accounting project into small sections, with the tasks at the lowest level.

Figure: - Task from WBS

Now the tasks at the lowest level are assigned to the resources. The table 'Assign task to resource' shows how the tasks are now allocated to resources.

Figure: - Assign task to resource

65. Can you explain the use of WBS?

Below is a pictorial view of numerous uses of WBS.


Figure: - Use of WBS

One of the main uses of the WBS is scheduling. The WBS forms an input to network diagrams from the scheduling aspect.

Figure: - WBS and Network

66. Can you explain network diagram?

A network diagram shows the logical relationships between project activities. A network diagram helps us in the following ways: -

It helps us understand which activities are independent of each other. For instance, you can start coding/execution of the transactional screens without the master screens being completed. This also gives another view, namely that you can execute both activities in parallel.

A network diagram also gives the list of activities which cannot be delayed. For example, we can delay the master screens of a project, but not the transactional screens.
67. What are the different types of network diagram?

We have two types of network diagrams: one is AON (Activity Networks) and the other is AOA (Arrow Networks). The figure 'Types of Network Diagrams' below shows the classification in a more visual format. CPM / CPA (Critical Path Method / Critical Path Analysis) and PERT (Program Evaluation and Review Technique) come under arrow networks. PDM (Precedence Diagrams) comes under activity networks.

Figure: - Types of Network Diagrams

68. What is the advantage of using network diagrams?

Network diagrams help us in the following ways: -

They help us find our critical / non-critical activities. If we know our critical activities, we can allocate our best people to the critical tasks and medium-performing people to the non-critical activities.

This also helps us to identify which activities we can run in parallel, thus reducing the
total project time.

69.Can you explain Arrow diagram and Precedence diagram?

ARROW DIAGRAM METHOD:

The Arrow Diagram Method is a network diagramming technique in which activities are represented by arrows.

It is used for scheduling activities in a project plan.

PRECEDENCE DIAGRAM METHOD:

The Precedence Diagram Method is a tool for scheduling activities in a project plan. It is
a method of constructing a project schedule network diagram that uses boxes, referred
to as nodes, to represent activities and connects them with arrows that show the
dependencies.

It shows critical tasks, noncritical tasks, and slack time, and it shows the relationships of the tasks to each other.

70.What are the different types of Network diagrams?

There are two types of network diagrams: one is AON (Activity Networks) and the other is AOA (Arrow Networks). The figure 'Types of Network Diagrams' shows the classification in a more visual format. CPM / CPA (Critical Path Method / Critical Path Analysis) and PERT (Program Evaluation and Review Technique) come under arrow networks. PDM (Precedence Diagrams) comes under activity networks.
71. Can you explain Critical path?

CPA / CPM (Critical Path Analysis / Method) is an effective way to analyze complex projects. A project consists of a set of activities. The critical path represents the critical set of activities needed to complete the project. The critical path helps us focus on the essential activities which are critical to running the project. Once we identify the critical activities we can devote good resources to them and prioritize accordingly.

· Critical Path Method: an algorithm for scheduling project activities.

72. Can you define EST, LST, EFT, and LFT?

CPM (Critical Path Method) uses the following times for an activity.

(EST)Early start Time is the earliest time the activity can begin.
(LST)Late start Time is the latest time the activity can begin and still allow the project to
be completed on time.
(EFT) Early finish Time is the earliest time the activity can end.
(LFT) Late finish Time is the latest time the activity can end and still allow the project to
be completed on time.
Figure: - Start and End

According to the CPM calculation, the minimum start date is 1-Jan-2009 and the maximum end date is 30-Jan-2009. Our EST, EFT, LST and LFT should fall between these limits.

Figure: - Forward Calculation

First we need to calculate EST and EFT. EST and EFT are calculated using the forward
pass methodology. Figure ‘EST and EFT’ shows how the forward calculation works. We
add "0" to the start date i.e. 1-Jan-2009 which becomes the EST of ‘Get Faculties’. ‘Get
Faculties’ task takes the 6 days and adds to EST which gives us 7-Jan-2009 which is
the EFT for ‘Get Faculties’. EFT becomes the EST of the next task i.e. ‘Buy Computers’.
Again we add the number of days of the ‘Buy Computers’ task to get its EFT, and so on. In short, EFT is calculated by adding the task duration (number of days) to the EST. The EFT of a task becomes the EST of the next task.
Figure: -EST and EFT

Figure: - Backward Calculation

73. Can you explain Float and Slack?

Float (also known as slack, total float and path float) is computed for each task by
subtracting the EFT from the LFT (or the early start from the late start). Float is the
amount of time the task can slip without delaying the project finish date. Free float is the
amount of time a task can slip without delaying the early start of any task that
immediately follows it.
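
To make the forward pass, backward pass, and float calculations concrete, here is a small Python sketch. The task names echo the example above, but the durations and dependencies are hypothetical, and day numbers are used instead of calendar dates:

# Forward/backward pass (CPM) and total float for a tiny task network.
tasks = {                         # task: (duration in days, predecessors)
    "Get faculties":  (6, []),
    "Buy computers":  (4, ["Get faculties"]),
    "Prepare course": (3, ["Get faculties"]),
    "Start batch":    (1, ["Buy computers", "Prepare course"]),
}

est, eft = {}, {}
for t, (dur, preds) in tasks.items():                 # forward pass
    est[t] = max((eft[p] for p in preds), default=0)  # (tasks listed in
    eft[t] = est[t] + dur                             #  dependency order)

project_end = max(eft.values())
lft, lst = {}, {}
for t in reversed(list(tasks)):                       # backward pass
    successors = [s for s, (_, preds) in tasks.items() if t in preds]
    lft[t] = min((lst[s] for s in successors), default=project_end)
    lst[t] = lft[t] - tasks[t][0]

for t in tasks:
    total_float = lst[t] - est[t]                     # 0 means critical path
    print(t, "EST", est[t], "EFT", eft[t], "LST", lst[t], "LFT", lft[t],
          "float", total_float)

Tasks whose float comes out as 0 ('Get faculties', 'Buy computers', 'Start batch' in this data) form the critical path; 'Prepare course' has one day of float and can slip by a day without delaying the project.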

74. Can you explain PERT?

A PERT chart is a project management tool used to schedule, organize, and coordinate tasks within a project. PERT stands for Program Evaluation and Review Technique. Unlike CPM, which uses a single duration per activity, PERT uses optimistic, most likely, and pessimistic time estimates for each activity to arrive at an expected duration.
75. Can you explain Gantt chart?

A GANTT chart is a time and activity bar chart. Gantt charts are easy-to-read charts that display the project schedule in task sequence and by the task start and finish dates. Let's consider the simple four-activity network given in the figure below.
Figure: -Simple Activity Network

Figure: -Activity Bar

The top bar shows the total activity period. Dependencies are shown by one arrow
connecting to the other arrow; we have circled how the dependencies are shown. Task
B can only start if task A is completed. A GANTT chart is a helpful way to communicate
schedule information to top management since it provides an easy-to-read visual picture
of the project activities.

Figure: -GANTT Chart

76. What is the disadvantage of Gantt chart?

It does not show clear dependencies/relationships between tasks, for instance, which
task comes first, then second, and so on. It also fails to show the critical and non-critical tasks. A GANTT chart is best used to show a summary of the whole project to top management, as it does not show detailed information for every activity.

77. What is Monte-Carlo simulation?

Monte Carlo simulation, or probability simulation, is a technique used to understand the


impact of risk and uncertainty in financial, project management, cost, and other
forecasting models.

How It Works

In a Monte Carlo simulation, a random value is selected for each of the tasks, based on
the range of

estimates. The model is calculated based on this random value. The result of the model
is recorded,

and the process is repeated. A typical Monte Carlo simulation calculates the model
hundreds or

thousands of times, each time using different randomly-selected values

In the Monte Carlo simulation, we will randomly generate values for each of the tasks,
then calculate

the total time to completion. The simulation will be run 500 times. Based on the results
of the

simulation, we will be able to describe some of the characteristics of the risk in the
model.

To test the likelihood of a particular result, we count how many times the model returned
that result in

the simulation. In this case, we want to know how many times the result was less than
or equal to a

particular number of months.

Time Number of Times (Out of 500) Percent of Total (Rounded)

12 Months 1 0%

13 Months 31 6%

14 Months 171 34%

15 Months 394 79%

16 Months 482 96%

17 Months 499 100%

18 Months 500 100%

Table: - Results of a Monte Carlo Simulation

From the Monte Carlo simulation, however, we can see that out of 500 trials using random values, the total time was 14 months or less in only 34% of the cases.

Put another way, in the simulation there is only a 34% chance – about 1 out of 3 – that
any individual

trial will result in a total time of 14 months or less. On the other hand, there is a 79%
chance that the

project will be completed within 15 months. Further, the model demonstrates that it is
extremely

unlikely, in the simulation, that we will ever fall at the absolute minimum or maximum
total values.
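
A minimal Monte Carlo schedule simulation along these lines can be sketched in a few lines of Python; the three tasks and their (optimistic, most likely, pessimistic) estimates below are hypothetical and only meant to show the mechanics:

# Monte Carlo schedule simulation: sample each task's duration at random,
# add them up, repeat many times, then read off the cumulative likelihoods.
import random

tasks = [(2, 3, 5), (3, 4, 7), (4, 5, 8)]   # (optimistic, most likely, pessimistic), months
TRIALS = 500

totals = []
for _ in range(TRIALS):
    total = sum(random.triangular(opt, pess, likely)   # random.triangular(low, high, mode)
                for opt, likely, pess in tasks)
    totals.append(total)

for months in range(12, 19):
    hits = sum(1 for t in totals if t <= months)
    print(f"<= {months} months: {hits} of {TRIALS} trials ({hits / TRIALS:.0%})")

Each run produces a table like the one above, from which you can read statements such as "there is an N% chance of finishing within 15 months".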

78. Can you explain PV, AC and EV?

PV (Planned Value):- PV is also termed as (BCWS) Budgeted cost of work scheduled.


It answers “How much do we plan to spend till this date?”. It is the budgeted cost of the work scheduled to be completed by this date.

AC (Actual Cost):- AC is also termed as ACWP (Actual Cost of Work Performed). It


answers “How much have we actually spent?”.

EV (Earned Value):- EV is also termed as BCWP (Budgeted Cost of Work Performed).


It answers “How much work has actually been completed?”.

Figure: - PV, AC and EV

79. Can you explain BCWS, ACWP and BCWP?

Ans. These are simply the older names for the earned value measures described above: BCWS (Budgeted Cost of Work Scheduled) is the same as PV, ACWP (Actual Cost of Work Performed) is the same as AC, and BCWP (Budgeted Cost of Work Performed) is the same as EV.

80. What are the derived metrics from Earned Value?

Earned Value gives us three metric views for a project.

Figure: - Earned Value Metrics

Current Progress: - This shows how we are performing in the project.


Forecasting: - Will help us answer how we will do in the project in future.
How will we catch up: - In case the project is moving behind schedule or over budget
how do we make up?

Current Progress metrics

Schedule Variance (SV)

Schedule variance is the difference between Earned value and planned value

SV = EV – PV

SV = 0 : You are right on schedule.
SV negative : You are behind schedule.
SV positive : You are ahead of schedule.

Cost Variance (CV)

Cost variance is the difference between earned value and the actual cost.

CV = EV – AC

CV = 0 : You are right on budget.
CV negative : You are over budget.
CV positive : You are under budget.
Cost performance Index (CPI)

CPI is the ratio of Earned value to Actual cost.

CPI = EV / AC

CPI = 1 : You are right on budget.
CPI less than 1 : You are over budget.
CPI greater than 1 : You are under budget.

81.Can you explain earned value with a sample?

Let’s take a small sample project. We need to make 400 breads and following is the
estimation of the project:-

We need to make 400 breads.

It will take 10 hours to make 400 breads.

Each bread will cost 0.02 $.

Total cost of making 400 breads is 400 X 0.02 = 8$.

In one hour we should be able to make 40 breads.


Below graph “Bread Project” shows the planned value and the actual cost graph.
According to the planned value, in 3 hours we will make 120 breads, in 7 hours we will make 280 breads, and finally we will complete the 400-bread target in 10 hours. As the project actually moves ahead, by the 3rd hour we have only completed 80 breads with 3$ spent.
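
Applying the earned value formulas from the previous questions to the bread project at the end of the 3rd hour (all figures taken from the example above), a short Python sketch of the arithmetic:

# Earned value for the bread project at the end of hour 3.
cost_per_bread = 0.02

pv = 120 * cost_per_bread   # planned: 120 breads by hour 3      -> 2.40 $
ev = 80 * cost_per_bread    # actually completed: 80 breads      -> 1.60 $
ac = 3.0                    # actually spent                     -> 3.00 $

sv = ev - pv                # -0.80 $ : negative, so behind schedule
cv = ev - ac                # -1.40 $ : negative, so over budget
cpi = ev / ac               # ~0.53   : less than 1, so over budget

print(f"SV = {sv:.2f}$, CV = {cv:.2f}$, CPI = {cpi:.2f}")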

82. Estimation, Metrics and Measure?

A software metric is a measure of some property of a piece of software or its specifications. Since quantitative measurements are essential in all sciences, there is a continuous effort by computer science practitioners and theoreticians to bring similar approaches to software development.

83. What is meant by measure and metrics?

84. Which metrics have you used for tracking purpose?

85. What are the various common ways of estimation?

Test Estimation will be done based on three attributes.

- Size

- Effort

- Schedule.

Size - The size of the project needs to be determined in order to estimate the testing. Size can be measured in 3 ways: 1. LOC (Lines of Code) 2. Function Points (the functions/features in the application are taken as inputs) 3. Number of Screens/Forms. While estimating the size we should also include the time required for automation and the number of configurations on which the application has to be tested.

Effort - Once the size estimation is done, the effort required is estimated. This can be done using the size estimate + productivity (the time taken for test authoring/execution per organization standards) + the amount of time it takes to test the product on multiple combinations.

Schedule - The testing is categorized into different phases, and based on the effort estimated for the project the schedule is estimated.
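
As a rough sketch of how these three attributes combine (the counts and productivity figures below are hypothetical examples, not organizational standards):

# Size -> Effort -> Schedule, using illustrative numbers.
test_cases     = 480    # size, e.g. derived from function points / screens
authoring_rate = 8      # test cases authored per person-day
execution_rate = 24     # test cases executed per person-day
configurations = 3      # the product must be tested on 3 configurations

authoring_effort = test_cases / authoring_rate                   # person-days
execution_effort = test_cases / execution_rate * configurations  # person-days
total_effort = authoring_effort + execution_effort

team_size = 4
schedule_days = total_effort / team_size
print(f"Effort: {total_effort:.0f} person-days, schedule: {schedule_days:.0f} working days")
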
86.Can you explain LOC method of estimation?

Lines of Code (LOC) method measures software and the process by which it is being
developed. Before an estimate for software is made, it is important and necessary to
understand software scope and estimate its size.

Lines of Code (LOC) is a direct approach method and requires a higher level of detail by means of decomposition and partitioning. In contrast, Function Points (FP) is an indirect approach method where, instead of focusing on the function, it focuses on the domain characteristics.

An expected value is then computed using the following formula:

EV = (Sopt + 4 x Sm + Spess) / 6

where,

EV stands for the estimation variable (the expected size).

Sopt stands for the optimistic estimate.

Sm stands for the most likely estimate.

Spess stands for the pessimistic estimate.

Example:

Problem Statement: Take the Library management system case. Software developed
for library will accept data from operator for issuing and returning books. Issuing and
returning will require some validity checks. For issue it is required to check if the
member has already issued the maximum books allowed. In case for return, if the
member is returning the book after the due date then fine has to be calculated. All the
interactions will be through user interface. Other operations include maintaining
database and generating reports at regular intervals.

Major software functions identified.

1. User interface

2. Database management

3. Report generation

For user interface

Sopt : 1800

Sm : 2000

Spess : 4000
EV for user interface

EV = (1800 + 4*2000 + 4000) / 6

EV = 2300

For database management

Sopt : 4600

Sm : 6900

Spess : 8600

EV for database management

EV = (4600 + 4*6900 + 8600) / 6

EV = 6800

For report generation

Sopt : 1200

Sm : 1600

Spess : 3200

EV for report generation

EV = (1200 + 4*1600 + 3200) / 6

EV = 1800
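
The same calculation can be scripted; this sketch just repeats the three-point estimates from the library example above and totals them:

# Three-point (expected value) size estimate: EV = (Sopt + 4*Sm + Spess) / 6
def expected_size(s_opt, s_m, s_pess):
    return (s_opt + 4 * s_m + s_pess) / 6

modules = {
    "User interface":      (1800, 2000, 4000),
    "Database management": (4600, 6900, 8600),
    "Report generation":   (1200, 1600, 3200),
}

total = 0
for name, (opt, likely, pess) in modules.items():
    ev = expected_size(opt, likely, pess)
    total += ev
    print(f"{name}: {ev:.0f} LOC")
print(f"Estimated project size: {total:.0f} LOC")   # 2300 + 6800 + 1800 = 10900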

87. How do we convert LOC in to effort?

Ans. The size estimate (in LOC or KLOC) is converted to effort by dividing it by a productivity figure, such as the LOC developed and tested per person-month taken from the organization's historical data, or by feeding it into an estimation model such as COCOMO (explained in the next question), which converts size into effort in person-months.

88. Can you explain COCOMO?

Short for Constructive Cost Model, a method for evaluating and/or estimating the cost of
software development. There are three levels in the COCOMO hierarchy:

Basic COCOMO: computes software development effort and cost as a function of


program size expressed in estimated DSIs. There are three modes within Basic
COCOMO:

o Organic Mode: Development projects typically are uncomplicated and involve small
experienced teams. The planned software is not considered innovative and requires a
relatively small amount of DSIs (typically under 50,000).
o Semidetached Mode: Development projects typically are more complicated than in
Organic Mode and involve teams of people with mixed levels of experience. The
software requires no more than 300,000 DSIs. The project has characteristics of both
projects for Organic Mode and projects for Embedded Mode.

o Embedded Mode: Development projects must fit into a rigid set of requirements
because the software is to be embedded in a strongly joined complex of hardware,
software, regulations and operating procedures.

Intermediate COCOMO: an extension of the Basic model that computes software


development effort by adding a set of "cost drivers," that will determine the effort and
duration of the project, such as assessments of personnel and hardware.

Detailed COCOMO: an extension of the Intermediate model that adds effort multipliers
for each phase of the project to determine the cost driver's impact on each step.
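
As a hedged illustration, Basic COCOMO in its Organic mode is usually quoted with the coefficients below (effort = a * KLOC^b person-months, duration = c * effort^d months); treat the numbers as indicative rather than exact for any particular organization:

# Basic COCOMO, organic mode, with the commonly published coefficients.
def basic_cocomo_organic(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    effort = a * kloc ** b        # person-months
    duration = c * effort ** d    # months
    staff = effort / duration     # average team size
    return effort, duration, staff

effort, duration, staff = basic_cocomo_organic(10.9)   # e.g. a ~10,900 LOC system
print(f"Effort ~ {effort:.1f} person-months, duration ~ {duration:.1f} months, "
      f"average staff ~ {staff:.1f}")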

89. Can you explain Intermediate COCOMO and COCOMO II?

Introduction to COCOMO II Estimates

COCOMO (Constructive Cost Model) is a model that allows software project managers
to estimate project cost and duration. It was developed initially (COCOMO 81) by Barry
Boehm in the early eighties. The COCOMO II model is a COCOMO 81 update to
address software development practices in the 1990's and 2000's. The model is by now
invigorative software engineering artifact that has, from customer perspective, the
following features:

The model is simple and well tested

Provides about 20% cost and 70% time estimate accuracy

In general, COCOMO II estimates project cost, derived directly from person-months of effort, by assuming the cost is basically dependent on the total physical size of all project files, expressed in thousands of source lines of code (KSLOC). The estimation formulas are of the form:

Effort (person-months) = A x Size^E x (product of the effort multipliers)

where Size is measured in KSLOC, A is a calibration constant, E is an exponent derived from the scale factors, and the effort multipliers reflect the cost drivers.

There are similar COCOMO formulas for project duration (expressed in months) and
average size of project team. Interestingly, project duration in COCOMO is
approximately cube root of effort (in person-months).

In practice, COCOMO parameters can be greatly different from their typical values.
COCOMO II provides classification of factors that can have an influence on project cost,
and lets you make better approximation of coefficients and scaling factors for your
particular project.

91. Can you explain in brief Function points?


Ans. This approach computes the total function points (FP) value for the project, by
totaling the number of external user inputs, inquiries, outputs, and master files, and then
applying the following weights: inputs (4), outputs (5), inquiries (4), and master files
(10). Each FP contributor can be adjusted within a range of ±35% for a specific project
complexity.
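
A small sketch of that calculation, using the weights quoted in the answer (the counts themselves are hypothetical):

# Unadjusted function point count with the weights given above.
weights = {"inputs": 4, "outputs": 5, "inquiries": 4, "master_files": 10}
counts  = {"inputs": 24, "outputs": 16, "inquiries": 22, "master_files": 4}

fp = sum(weights[k] * counts[k] for k in weights)
print("Unadjusted FP:", fp)              # 96 + 80 + 88 + 40 = 304

# Each contributor may then be adjusted within +/-35% for project complexity,
# e.g. a 10% uplift for a somewhat complex project:
print("Adjusted FP (+10%):", round(fp * 1.10))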

92. Can you explain the concept Application boundary?

Ans. An application boundary defines the scope of an application. A process can contain
multiple application boundaries. An application running inside one application boundary
cannot directly access the code running inside another application boundary. However,
it can use a proxy to access the code running in other application boundaries.

93. Can you explain the concept of elementary process?

Ans. An Elementary process is the smallest unit of any business activity. It has to have a
meaning or a purpose. An elementary process is complete, when the user comes to
closure on the process and all the business information are in a static and complete
condition.

94. Can you explain the concept of static and dynamic elementary process?

Ans. A dynamic elementary process is one in which data moves from the internal application boundary to the external application boundary, or vice-versa. Example: an input data screen where the user enters data into the application; the data moves from the input screen into the application. A static elementary process is one in which data of the application is maintained either inside the application boundary or in the external application boundary. Example: a customer information screen maintaining customer data.

95. Can you explain concept of FTR, ILF, EIF, EI, EO , EQ and GSC ?

Ans. External Inputs (EI) - is an elementary process in which data crosses the boundary
from outside to inside. This data may come from a data input screen or another
application. The data may be used to maintain one or more internal logical files. The
data can be either control information or business information. If the data is control
information it does not have to update an internal logical file. The graphic represents a
simple EI that updates 2 ILF's (FTR's).

External Outputs (EO) - an elementary process in which derived data passes across the
boundary from inside to outside. Additionally, an EO may update an ILF. The data
creates reports or output files sent to other applications. These reports and files are
created from one or more internal logical files and external interface files. The following graphic represents an EO with 2 FTR's; it contains derived information that has been derived from the ILF's.

External Inquiry (EQ) - an elementary process with both input and output components
that result in data retrieval from one or more internal logical files and external interface
files. The input process does not update any Internal Logical Files, and the output side
does not contain derived data. The graphic below represents an EQ with two ILF's and
no derived data.

Internal Logical Files (ILF’s) - a user identifiable group of logically related data that
resides entirely within the applications boundary and is maintained through external
inputs.

External Interface Files (EIF’s) - a user identifiable group of logically related data that is used for reference purposes only. The data resides entirely outside the application and is maintained by another application. The external interface file is an internal logical file for another application.

File Types Referenced (FTR’s) - the ILF’s or EIF’s that are read or maintained by a transaction (EI, EO or EQ); the FTR count is used when rating the complexity of a transaction.

General System Characteristics (GSC’s) - the 14 general characteristics of the application (such as data communications, performance and end-user efficiency) that are rated to produce the value adjustment factor applied to the unadjusted function point count.

96. How can you estimate number of acceptance test cases in a project?

97. Can you explain the concept of Use Case’s?

Ans. A use case describes what the system must do to provide value to the
stakeholders.

A use case describes the interactions between one or more Actors and the system in
order to provide an observable result of value for the initiating actor.

98. Can you explain the concept of Use case points?

99. What is a use case transaction?

The concept of a (use case) transaction helps to deal with the variation in length and
conciseness typical of use case descriptions. use case specifications can be tersely
written, or be rather verbose/detailed, depending on the use case template used, the
approach adopted, the business context involved, or the personal taste of the
Requirements Specifier. The number of steps in a use case flow, which describes the
interaction between an actor and the system, can also vary widely both across and
within scenarios. You can test for "sameness of size" by detecting and counting the use
case transactions that are involved in your use case specifications. If two use case
specifications have the same number of unique transactions, they have the same size.

100.How do we estimate using Use Case Points?

Ans. The number of use case points in a project is a function of the following:

• the number and complexity of the use cases in the system

• the number and complexity of the actors on the system

• various non-functional requirements (such as portability, performance,


maintainability) that are not written as use cases

• the environment in which the project will be developed (such as the language,
the team’s motivation, and so on)
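
One commonly quoted formulation of this calculation is Karner's use case points method; the weights and the 20 person-hours per UCP productivity figure below are the usual textbook defaults, and the counts are purely illustrative:

# Use case points (Karner's method) with textbook default weights.
use_case_weights = {"simple": 5, "average": 10, "complex": 15}
actor_weights    = {"simple": 1, "average": 2, "complex": 3}

use_cases = {"simple": 4, "average": 6, "complex": 2}   # counts per complexity
actors    = {"simple": 2, "average": 1, "complex": 1}

uucw = sum(use_case_weights[k] * n for k, n in use_cases.items())  # unadjusted use case weight
uaw  = sum(actor_weights[k] * n for k, n in actors.items())        # unadjusted actor weight

tcf, ecf = 1.0, 1.0        # technical and environmental factors, assumed neutral here
ucp = (uucw + uaw) * tcf * ecf
print("Use case points:", ucp)                       # (110 + 7) * 1.0 = 117
print("Rough effort:", ucp * 20, "person-hours")     # assuming 20 hours per UCP
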
101.Can you explain on what basis does TPA actually work?

Ans. Size, test strategy and productivity are the three elements which determine the test effort for black box testing. Based on these three elements, the TPA (Test Point Analysis) estimate is calculated.

102. How did you do estimation for black box testing?

103. How did you estimate white box testing?

Ans. White box testing can be estimated by using function points.

104. Is there a way to estimate acceptance test cases in a system?

Ans Acceptance test cases can be estimated by calculating function points


Function points were defined in 1979 in A New Way of Looking at Tools by Allan
Albrecht at IBM.[2] The functional user requirements of the software are identified and
each one is categorized into one of five types: outputs, inquiries, inputs, internal files,
and external interfaces. Once the function is identified and categorized into a type, it is
then assessed for complexity and assigned a number of function points. Each of these
functional user requirements maps to an end-user business function, such as a data
entry for an Input or a user query for an Inquiry. This distinction is important because it
tends to make the functions measured in function points map easily into user-oriented
requirements, but it also tends to hide internal functions (e.g. algorithms), which also
require resources to implement; however, there is no ISO-recognized FSM method that includes algorithmic complexity in the sizing result. Recently there have been different approaches proposed to deal with this perceived weakness, implemented in several commercial software products.
105. Can you explain Number of defects measure?

Ans: Measure: The verb means "to ascertain the measurements of"
Measurement: The figure, extent, or amount obtained by measuring"
Metric: "A standard of measurement"
Benchmark: "A standard by which others may be measured"
So we collect data (measurements), determine how those will be expressed as a
standard (metric), and compare the measurement to the benchmark to evaluate
progress. For example, we measure number of lines of code written by each
programmer during a week. We measure (count) the number of bugs in that code. We
establish "bugs per thousand lines of code" as the metric. We compare each
programmer's metric against the benchmark of "fewer than 1 defect (bug) per thousand
lines of code".

What To Measure
Measure those activities or results that are important to successfully achieving your
organization's goals. Key Performance Indicators, also known as KPI or Key Success
Indicators (KSI), help an organization define and measure progress toward its goals.

They differ depending on the organization. A business may have as one of its Key
Performance Indicators the percentage of its income that comes from return customers.
A Customer Service department may have as one of its KPIs the percentage of
customer calls answered in the first minute. A Key Performance Indicator for a
development organization might be the number of defects in their code.

You may need to measure several things to be able to calculate the metrics in your
KPIs. To measure progress toward its customer calls KPI, the Customer Service (CS)
department will need to measure (count) how many calls it receives. It must also
measure how long it takes to answer each call. Then the CS Manager can calculate the
percentage of customer calls answered in the first minute and manage toward
improving that KPI.

How To Measure

How you measure is as important as what you measure. In the previous example, we
can measure the number of calls by having each CS representative (CSR) count their
own calls and tell their supervisor at the end of the day. We could have an operator
counting the number of calls transferred to the CS department. The best option,
although the most expensive, would be to purchase a software program that counts the
number of incoming calls, measures how long it takes to answer each, records who
answered the call, and measures how long the call took to complete. These
measurements are current, accurate, complete, and unbiased.

Collecting the measurements in this way enables the manager to calculate the
percentage of customer calls answered in the first minute. In addition, it provides
additional measurements that help him or her manage toward improving the percentage
of calls answered quickly. Knowing the call durations lets the manager calculate if there
is enough staff to reach the goal. Knowing which CSRs answer the most calls identifies
for the manager expertise that can be shared with other CSRs.

106. Can you explain number of production defects measure?

107. Can you explain defect seeding?

Ans. Defect Seeding: to gauge the capability of the test team, one group intentionally inserts defects into the application, and these seeded defects are then to be found by another group.

Example: the test team finds 650 real bugs in the application, and out of 50 seeded defects they find 30. The estimated total number of bugs in the application is then 50 * 650 / 30 ≈ 1083.

Or

In this method, the developer/lead intentionally introduces bugs into the product. The testers do not know in which modules they occur, so regression testing has to be done to identify these seeded bugs as well as the residual (real) bugs. The main intention of this is to find more bugs.

Or

Defect seeding is actually the process of intentionally inserting some code into the program so that the software misbehaves. This practice is carried out to test the team's performance and how much they know about the product. The developers know where they have placed that code, so in this way we can check the testing team's performance and their capabilities.
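
The arithmetic of the seeding estimate above can be written out in a couple of lines (figures taken from the example):

# Defect seeding estimate: scale the real defects found by the fraction
# of seeded defects that were detected.
seeded_total = 50     # defects intentionally inserted
seeded_found = 30     # seeded defects the test team actually found
real_found   = 650    # real (non-seeded) defects the test team found

estimated_total = seeded_total * real_found / seeded_found
print(f"Estimated real defects in the application: {estimated_total:.0f}")   # ~1083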

108. Can you explain DRE?

Ans. Defect Removal Efficiency (DRE) is a quality metric for defects.

DRE = (Defects found by the internal team / Total number of defects found, including those found after release) * 100
109. Can you explain Unit and system test DRE?

Ans. DRE can also be calculated per test phase. Unit test DRE = defects found during unit testing / (defects found during unit testing + defects found in all later phases). Similarly, System test DRE = defects found during system testing / (defects found during system testing + defects found after system testing, including production). A higher value means that the phase removed most of the defects it could have caught.
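
A small sketch of the phase-wise calculation, using hypothetical defect counts per phase:

# Phase-wise DRE: defects removed by a phase divided by the defects it
# removed plus the defects that escaped it and were found later.
found = {"unit": 120, "integration": 60, "system": 40, "production": 10}
phases = list(found)

for i, phase in enumerate(phases[:-1]):          # production is the final net
    escaped = sum(found[p] for p in phases[i + 1:])
    dre = found[phase] / (found[phase] + escaped) * 100
    print(f"{phase} test DRE = {dre:.0f}%")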

110. How do you measure test effectiveness?

Ans In recent times, independent testing teams have become increasingly popular.
Many organizations have either created an independent test department within the
organization or outsourced testing work to other organizations which specialize in
providing test services. In the current paper, a model has been proposed to measure
effectiveness of either kind of independent testing team.

Though there are a number of metrics available for tracking test life cycle and product

quality, most of them do not provide much insight into how well test team is doing and
improving.

The commonly used metrics are:

1. Related to test execution & tracking – Schedule variance, effort variance, etc

2. Related to test coverage – Requirements to test case mapping, Test cases per

KLOC, etc

3. Defects related measurements – Defect arrival rate, Defect closure rate, Defect

leakage, etc

Most of the above mentioned metrics are more focused towards measuring product

quality and whether we are on track for meeting test timelines or not. A good

performance on these metrics may not mean that test team is doing well as the good

performance could be just because of good quality product available for testing.

Similarly, a poor performance on something like schedule could be simply due to poor

quality of product under test even though test team might be doing a great job in

identifying bugs and getting slowed down due to them.

The focus of this paper is solely on those metrics which can measure how effectively
test

team is working on its own.

The other consideration was for the number of metrics to be tracked. With addition of

each additional metrics that we choose to track, we are increasing the overhead for

maintaining them.

Besides, the metrics used should not be too complex to measure for test team as well
as

people evaluating effectiveness of test team.

2 Different Metrics in testing and what they measure

There are a variety of metrics that are used during test phase of project life cycle. Some

of the most common ones have been described and categorized below:

1. Related to test execution & tracking: These metrics measure how well test

execution is going in a particular test phase. Some examples are:


a. Schedule variance: Variance of actual test schedule vs planned test

schedule. This could be due to multiple reasons eg, delayed start of

testing, error in planning test phase, poor/better quality of system under

test resulting in test team logging many more/less bugs than anticipated,

dependencies on other third parties which do not get met etc

b. Effort variance: This measures how planned effort for testing a system

varies from actual effort required. This could happen due to poor

understanding of system under test, different quality of system under test

than expected etc

c. Regression test timeline: This measures how much time does test team

take to do one pass of regression test on the system under test. This

number can be used for planning future releases as well as budgeting for

test team on the basis of product roadmap. This also measures the

improvement in test team over a period of time.

2. Related to test coverage: These metrics measure what kind of test coverage test

team is providing. Some examples are:

a. Requirements to test case mapping: Ideally all requirements should map to

at least one test case. Usually, it is one-to-many mapping between

requirements and test cases. The metric tracked is = Requirements mapped

to test cases/Total requirements. Ideally, this number should be 1.

b. Test cases per KLOC: This metric measures how test suite is growing with

increasing size of system under test. Though code and test cases may not

have 1-1 mapping but by tracking this metric, we can identify when there

is any discrepancy in the test cases identified for any new functionality

that has been added. If we add a lot of code for a new functionality but

number of test cases added is low, then this should be investigated. This

could be expected but there is no harm in investigating if we are getting


unexpected value for this metric.

c. Number of new test cases written/executed: For each test cycle/phase, we

can track number of new test cases being written for new/existing

requirements.

d. Test case efficiency: This metric measures the efficiency of test cases in

identifying bugs in the system. This is equal to bugs found which are

mapped to test cases/ total bugs logged. Ideally this should be 1.

3. Defects related measurements: There are a number of metrics falling under this

category.

a. Defect arrival rate: For a test phase, we can measure the rate at which

defects arrived. Ideally, the defect arrival rate should reduce over the test

phase.

b. Defect closure rate: Rate at which defects are getting closed. A combined

chart of defect arrival and closure gives a good picture of how product is

stabilizing while under testing.

c. Defects reassigned to testers for clarification/rejected bugs: This metric

measures how well test team understands the product as well as their

communication skill.

d. Defect identification efficiency: Ideally, a test team should identify the

bug in the build it is introduced but it may not happen due to a variety of

reasons like testing on the functionality in question was not scheduled to

be tested in that build, impact of changes done was not understood or test

team just missed it. This metric compares when a bug was identified to

when it was introduced.

A note of caution on this metric: This is not an easy metric to measure.

There will always be different opinions about when the bug got introduced

and in some cases it may not be possible go back to older builds and check
this fact and everyone may have to take the word of development team in

this regard. This should be used in places where test process has matured a

lot and there is good deal of trust between Development and Test teams.

e. Defect Leakage: This metric measures the defects leaked to production

environment as a percentage of total defects logged by test team. This

metric measures the efficiency of test team in identifying all issues with

the product in house.

3 Proposed Model

The following two metrics are must to have in evaluating independent test teams.

1. Regression Test timeline

This measures how much time does test team take to do one pass of regression test on

the system under test. This number can be used for planning future releases as well as

budgeting for test team on the basis of product roadmap. This also measures the

improvement in test team over a period of time.

Since regression is something test team would be running very frequently for most

systems, this number can be measured with a very good accuracy. Also, since test

schedules and budget would be calculated using this, it will be in best interests of

both test team as well as management to calculate this correctly. Test team would not

like to fudge any improvement in this metric as they know they would end up burning

their fingers in case their budget is based on an incorrect number. Moreover, this

metric is mostly influenced by how efficiently test team is managing itself and free of

external factors. Management can see any improvement in this metric as improving

efficiency of test team in test execution.

Various ways of improving in this metric:

a. Automation: Most popular way of cutting down on regression timelines.

But it is important to do RoI on automation before taking this up.

b. Faster execution: As test team learns the product better, their speed in
executing test cases pick up and this would result in better test timeline.

c. Optimal number of test cases: After multiple test execution, a test team

would get better at understanding product and its issues. This would help

the team in optimizing the test suite and removing useless test cases.

How to calculate this metric?

A sample table for calculating and maintaining this metric:

Feature # of test cases Time to complete regression

Feature 1 200 20 person-days

Feature 2 … …..

Depending upon the complexity of the product, we may have sub features also and

test cases might be divided into different complexities with different time taken for

execution for each of the complexities.

Finally, depending on circumstances, it might be a good idea to add some buffer for

defect logging, dependencies into this which should be based on experience while

executing test cases for the system. This would be applicable if schedule & budget

decisions are to be made on the basis of this metric. If this metric is being used for

evaluation of test team’s progress only, then this is not required.

Please note that it is not possible to predict any test timeline with 100% accuracy. The

buffer element takes care of some of the unknowns and based on experience it should

be modified but still there would be times when test schedule goes awry.

2. Defect Leakage

This metric is used to measure the defects leaked to production environment as a

percentage of total defects logged by test team. This metric measures the efficiency of

test team in identifying all issues with the product in house.

Total Defects found in production

Defect Leakage = -----------------------------------------------------

Total Defects found (in-house + production)


It would be a good idea to consider only those production bugs for consideration

which can be reproduced in house and were part of test team’s execution plan. Some

products are deployed under diverse conditions in production and some of the issues

found are impossible to be replicated in-house. For such products, this metric can

give skewed results and one must consider these factors before calling any production

defect as leakage.

An excel sheet can be maintained with all production issues in one tab and all inhouse

defects in other tab for calculating this metric. One can select what all

production issues to be marked as leakage. Depending on bug logging date and

production bug logging date, one can create a graph showing trend of defect leakage

over a period of time.

The above mentioned two metrics complement each other very well. The first one

shows how efficient test team is becoming in test execution and second metric

shows that improvement in time is not coming at the cost of quality as seen by

outside people/users.

For most test teams, these two metrics are sufficient to track their progress and how
they

have improved over a period of time. In some cases where test teams are internal to the

organization or outsourced vendors are for long term and have become comfortable
with

client’s environment, some more metrics can be considered.

1. Schedule variance: We can measure schedule variance for different releases and

see how test team is improving in this across releases. For a long standing test

team, it is being assumed that they would have gained knowledge of

organization’s environment and can predict schedule to a high degree of accuracy

depending on various factors in the environment.

2. Test case efficiency: Ideally all defects should be found through documented test

cases. This is especially true for long standing teams as they are expected to
understand the system under test very well

111. Can you explain Defect age and Defect spoilage?

Ans Defect Age is the difference in time between the date a defect is detected and
the current date (if the defect is still open) or the date the defect was fixed. It is a useful
measure of defect effectiveness. Defect Spoilage is a related metric: Spoilage = Sum of (Number of defects * defect age) / Total number of defects, where the defect age is measured in phases.
Or

The defect age is often calculated as a phase age (i.e. for how many phases the defect existed). Because defects are more expensive the later in the life cycle they are found, it is a good idea to also calculate the defect spoilage, which is a quotient:

defect spoilage = sum of (number of defects x phase age) / total number of defects

You can find a lot more numbers in "Systematic software testing "
Both numbers can be very hard to calculate or even finding all the facts.

Or

Defect Age : The time or phase since the defect is open.


Defect Age calculated in time: the number of hours/days the defect has been open. If the defect is fixed, then Defect Age = Date Fixed minus Defect Found Date.

Defect Age calculated in phases: Defect Detected Phase minus Defect Injection Phase.
Let’s say the software life cycle has the following phases:

1. Requirements Development
2. High-Level Design
3. Detail Design
4. Coding
5. Unit Testing
6. Integration Testing
7. System Testing
8. Acceptance Testing
If a defect is identified in ‘System Testing’ and the defect was introduced in
‘Requirements Development’, the Defect Age is 6.

Defect age is used in another metric called defect spoilage to measure the
effectiveness of defect removal activities.

Spoilage = Sum of (Number of Defects x defect age) / Total number of defects.


Low values of defect spoilage mean a more effective defect discovery process.
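
A short sketch of the calculation, using the eight phases listed above and a handful of hypothetical defects:

# Defect age (in phases) and defect spoilage.
phases = ["Requirements Development", "High-Level Design", "Detail Design",
          "Coding", "Unit Testing", "Integration Testing", "System Testing",
          "Acceptance Testing"]
index = {p: i + 1 for i, p in enumerate(phases)}

# (phase where the defect was introduced, phase where it was detected)
defects = [
    ("Requirements Development", "System Testing"),   # age = 7 - 1 = 6
    ("Coding", "Unit Testing"),                        # age = 5 - 4 = 1
    ("Detail Design", "Integration Testing"),          # age = 6 - 3 = 3
]

ages = [index[found] - index[introduced] for introduced, found in defects]
spoilage = sum(ages) / len(defects)
print(f"Defect spoilage = {spoilage:.2f}")   # lower means defects are found earlier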

112. What is a Software process?

Ans A software development process, also known as a software development


lifecycle, is a structure imposed on the development of a software product. Similar
terms include software life cycle and software process. There are several models for
such processes, each describing approaches to a variety of tasks or activities that take
place during the process. Some people consider a lifecycle model a more general term
and a software development process a more specific term. For example, there are
many specific software development processes that 'fit' the spiral lifecycle model.
Software development activities
The activities of the software development process represented in the waterfall model.
There are several other models to represent this process.

Planning

The important task in creating a software product is extracting the requirements or


requirements analysis. Customers typically have an abstract idea of what they want as
an end result, but not what software should do. Incomplete, ambiguous, or even
contradictory requirements are recognized by skilled and experienced software
engineers at this point. Frequently demonstrating live code may help reduce the risk
that the requirements are incorrect.
Once the general requirements are gathered from the client, an analysis of the scope of
the development should be determined and clearly stated. This is often called a scope
document.
Certain functionality may be out of scope of the project as a function of cost or as a
result of unclear requirements at the start of development. If the development is done
externally, this document can be considered a legal document so that if there are ever
disputes, any ambiguity of what was promised to the client can be clarified.

Implementation, testing and documenting

Implementation is the part of the process where software engineers actually program
the code for the project.
Software testing is an integral and important part of the software development process.
This part of the process ensures that defects are recognized as early as possible.
Documenting the internal design of software for the purpose of future maintenance and
enhancement is done throughout development. This may also include the writing of an
API, be it external or internal. It is very important to document everything in the project.

Deployment and maintenance

Deployment starts after the code is appropriately tested, is approved for release and
sold or otherwise distributed into a production environment.
Software Training and Support is important and a lot of developers fail to realize that. It
would not matter how much time and planning a development team puts into creating
software if nobody in an organization ends up using it. People are often resistant to
change and avoid venturing into an unfamiliar area, so as a part of the deployment
phase, it is very important to have training classes for new clients of your software.
Maintaining and enhancing software to cope with newly discovered problems or new
requirements can take far more time than the initial development of the software. It may
be necessary to add code that does not fit the original design to correct an unforeseen
problem or it may be that a customer is requesting more functionality and code can be
added to accommodate their requests. If the labor cost of the maintenance phase
exceeds 25% of the prior-phases' labor cost, then it is likely that the overall quality of at
least one prior phase is poor. In that case, management should consider the
option of rebuilding the system (or portions) before maintenance cost is out of control.
Bug Tracking System tools are often deployed at this stage of the process to allow
development teams to interface with customer/field teams testing the software to
identify any real or perceived issues. These software tools, both open source and
commercially licensed, provide a customizable process to acquire, review,
acknowledge, and respond to reported issues. (software maintenance).

113. What is the different cost element involved in implementing process in an


organization?

Ans

• Employee salary and compensation


• Cost of the tools required to implement process including hardware and software
• Cost to provide training
• Vendors, outsourcing or consultant fees
• Penalties for failure

114. What is a model?

Ans A model can come in many shapes, sizes, and styles. It is important to
emphasize that a model is not the real world but merely a human construct to help us
better understand real world systems. In general all models have an information input,
an information processor, and an output of expected results. Modeling Methodology for
Physics Teachers (1998) provides an outline of generic model structure that
is useful for geoscience instruction. In "Modeling the Environment" Andrew Ford gives a
philosophical discussion of what models are and why they are useful. The first few
paragraphs of Chapter 1 of Ford's book are worth a look.

Key features common to the development of any model are that:


• simplifying assumptions must be made;
• boundary conditions or initial conditions must be identified;
• the range of applicability of the model should be understood
Below we identify 4 types of models for discussion and reference. In practice a well-developed model of a real-world system will likely contain aspects of each individual model type described here.

Conceptual Models are qualitative models that help highlight important connections in
real world systems and processes. They are used as a first step in the development of
more complex models.

Interactive Lecture Demonstrations are physical models of
systems that can be easily observed and manipulated and which have characteristics
similar to key features of more complex systems in the real world. These models can
help bridge the gap between conceptual models and models of more complex real world
systems.

Mathematical and Statistical Models involve solving relevant equation(s) of a system or
characterizing a system based upon its statistical parameters such as mean, mode,
variance or regression coefficients. Mathematical models include Analytical models and
Numerical Models. Statistical models are useful in helping identify patterns and
underlying relationships between data sets.

Teaching with Visualizations By this we mean anything that can help one visualize how
a system works. A visualization model can be a direct link between data and some
graphic or image output or can be linked in series with some other type of model so as to
convert its output into a visually useful format. Examples include 1-, 2-, and 3-D
graphics packages, map overlays, animations, image manipulation and image analysis.

115. What is maturity level?

Ans The Capability Maturity Model (CMM) is a service mark owned by Carnegie
Mellon University (CMU) and refers to a development model elicited from actual data.
The data was collected from organizations that contracted with the U.S. Department of
Defense, who funded the research, and became the foundation from which CMU
created the Software Engineering Institute (SEI). Like any model, it is an abstraction of
an existing system.
When it is applied to an existing organization's software development processes, it
allows an effective approach toward improving them. Eventually it became clear that the
model could be applied to other processes. This gave rise to a more general concept
that is applied to business processes and to developing people.
The CMM was originally intended as a tool to evaluate the ability of government
contractors to perform a contracted software project. It has been used for and may be
suited to that purpose, but critics pointed out that process maturity according to the
CMM was not necessarily mandatory for successful software development. There
were/are real-life examples where the CMM was arguably irrelevant to successful
software development, and these examples include many shrinkwrap companies (also
called commercial-off-the-shelf or "COTS" firms or software package firms). Such firms
would have included, for example, Claris, Apple, Symantec, Microsoft, and Lotus.
Though these companies may have successfully developed their software, they would
not necessarily have considered or defined or managed their processes as the CMM
described as level 3 or above, and so would have fitted level 1 or 2 of the model. This
did not - on the face of it - frustrate the successful development of their software.
Level 1 - Initial (Chaotic)

It is characteristic of processes at this level that they are (typically)
undocumented and in a state of dynamic change, tending to be driven in an ad
hoc, uncontrolled and reactive manner by users or events. This provides a
chaotic or unstable environment for the processes.

Level 2 - Repeatable

It is characteristic of processes at this level that some processes are repeatable,
possibly with consistent results. Process discipline is unlikely to be rigorous, but
where it exists it may help to ensure that existing processes are maintained
during times of stress.

Level 3 - Defined

It is characteristic of processes at this level that there are sets of defined and
documented standard processes established and subject to some degree of
improvement over time. These standard processes are in place (i.e., they are the
AS-IS processes) and used to establish consistency of process performance
across the organization.

Level 4 - Managed

It is characteristic of processes at this level that, using process metrics,
management can effectively control the AS-IS process (e.g., for software
development ). In particular, management can identify ways to adjust and adapt
the process to particular projects without measurable losses of quality or
deviations from specifications. Process Capability is established from this level.

Level 5 - Optimizing

It is a characteristic of processes at this level that the focus is on continually
improving process performance through both incremental and innovative
technological changes/improvements.

At maturity level 5, processes are concerned with addressing statistical common
causes of process variation and changing the process (for example, to shift the mean of
the process performance) to improve process performance. This would be done at the
same time as maintaining the likelihood of achieving the established quantitative
process-improvement objectives.
116 – 118 Questions & answers missing

119. What is the difference between implementation and institutionalization ?

Ans. They are techniques used in CMMI implementation.

Implementation - It is the task performed according to a process. This is the initial stage
when the organization implements any new process.
Institutionalization - It is the task performed according to an organization standard. It is
an ongoing process of implementation.
120. What are the different models in CMMI ?
Ans. CMMI best practices are published in documents called models, each of which
addresses a different area of interest. The current release of CMMI, version 1.3,
provides models for three areas of interest: development, acquisition, and services.
• CMMI for Development (CMMI-DEV), v1.3 was released in November 2010. It
addresses product and service development processes.
• CMMI for Acquisition (CMMI-ACQ), v1.3 was released in November 2010. It
addresses supply chain management, acquisition, and outsourcing processes in
government and industry.
• CMMI for Services (CMMI-SVC), v1.3 was released in November 2010. It
addresses guidance for delivering services within an organization and to external
customers.
121. Can you explain Staged and Continuous model in CMMI ?
Ans. Staged Model - It uses predefined sets of process areas to define an improvement path.
Each level of maturity is further decomposed into a number of process areas that are fixed to
that level of maturity. It has 5 maturity levels - Initial, Managed, Defined, Quantitatively
Managed and Optimizing.
Continuous Model - In this model processes are individually improved along a capability
scale independent of each other. It provides flexibility for organizations to choose which
processes to emphasize for improvement. It has 6 capability levels - Incomplete,
Performed, Managed, Defined, Quantitatively Managed and Optimizing.

122. Can you explain different maturity levels in Staged representation ?


Ans. The maturity level of a process defines the nature and maturity of the processes present in the
organization. These levels help to understand and set a benchmark for the organization.
• Level 1 Initial – Processes are characterized as chaotic and ad hoc; heroic efforts
are required by individuals to successfully complete projects. Few processes are in
place; successes may not be repeatable.

• Level 2 Managed – The organization has installed basic management controls.
Software project tracking, requirements management, realistic planning, and
configuration management processes are in place. At level 2 organizations are
summarized as Disciplined as they gain the ability to successfully repeat planning
and tracking.
• Level 3 Defined – Standard software development and maintenance processes
are integrated throughout the organization; a Software Engineering Process
Group is in place to oversee software processes, and training programs are used
to ensure understanding and compliance. Cost, schedule and functionality are
under control.

• Level 4 Quantitatively Managed – Processes are integrated as a whole, and metrics are
used to track productivity, processes and products. Project performance is
predictable and quality is consistently high.

• Level 5 Optimizing – The focus is on continuous process improvement. The
impact of new processes and technologies can be predicted and effectively
implemented when required. Projects strive to improve the process capability
and process performance.

123. Can you explain capability levels in continuous representation?

Ans. The capability levels of the continuous representation are designed to allow the
user to focus on the specific processes which are considered most important
for the enterprise's immediate business objectives, or those to which the organization
assigns a very high degree of risk.

Capability levels are relevant to an organization's process improvement in specific areas.

Capability levels in Continuous representation:
Level 0 Incomplete: It depicts an incomplete process which does not implement all
Capability level 1 processes and practices.
Level 1 Performed: A process that implements all Capability level 1 processes and
practices. Some work can be done even though major objectives such as performance
are not achieved.
Level 2 Managed: These are processes which are planned, managed, performed,
monitored, and controlled for specific projects to achieve specific goals.
Level 3 Defined: It is a customized set of standard and managed processes for an
organization. The processes are tailored a bit as per the organization's benefit.
Level 4 Quantitatively managed: It is a defined process that is managed and controlled
using statistical and quantitative methods.
Level 5 Optimizing: It is a quantitatively managed and improved process which is based
upon the common roots and causes of process variation. Focus is on improving
performance of the process using incremental and innovative methods.
124.Which model should we use and under what scenarios ?
Ans.
125. How many process areas are present in CMMI and in what classification do they
fall in?
Ans. There are in total 22 key process areas in CMMI. Ratings are awarded for level 2
through level 5.
Maturity Level 2 - Managed
• Requirements Management
• Project Planning
• Process & Product Quality Assurance
• Project Monitoring & Control
• Measurement & Analysis
• Configuration Management
• Supplier Agreement Management
Maturity Level 3 - Defined
• Requirements Development
• Technical Solution
• Product Integration
• Verification
• Validation
• Organizational Process Focus
• Organizational Process Definition
• Organizational Training
• Risk Management
• Decision Analysis & Resolution
• Integrated Project Management
Maturity Level 4 - Quantitatively managed
• Organizational Process Performance
• Quantitative Project Management
Maturity Level 5 - Optimizing
• Causal Analysis and Resolution
• Organizational Innovation & Deployment
126. What is the difference between every level in CMMi ?
Ans. Capability Levels Versus Maturity Levels

The continuous representation consists of capability levels, while the staged
representation consists of maturity levels. The main difference between these two types
of levels is the representation they belong to and how they are applied:

• Capability levels, which belong to a continuous representation, apply to an
organization’s process-improvement achievement in individual process areas. There are
six capability levels, numbered 0 through 5.
• Maturity levels, which belong to a staged representation, apply to an
organization’s overall process-improvement achievement using the model. There
are five maturity levels, numbered 1 through 5. Each maturity level comprises a
set of goals that, when satisfied, improve processes. Maturity levels are
measured by the achievement of the goals that apply to a set of process areas.

127. What different sources are needed to verify authenticity for CMMI implementation ?

Ans. An appraiser can evaluate and verify the authenticity of a CMMI implementation using the
following:
• Conducting formal Interviews with the leads and the team members
• Documents prepared by the team while following the model
• Conducting survey and questionnaires
128. Can you explain SCAMPI process ?
Ans. SCAMPI is an acronym for Standard CMMI Appraisal Method for Process
Improvement.
A SCAMPI assessment must be led by an SEI Authorized SCAMPI Lead Appraiser.
SCAMPI is supported by the SCAMPI Product Suite, which includes the SCAMPI
Method Description, maturity questionnaire, work aids, and templates. Currently,
SCAMPI is the only method that can provide a rating, the only method recognized by
the SEI, and the method of most interest to organizations.
There are 3 SCAMPI methods
• SCAMPI class A Appraisal
• SCAMPI class B Appraisal
• SCAMPI class C Appraisal
129. How is appraisal done in CMMI ?
Ans. The CMMI Appraisal is an examination of one or more processes by a trained
team of professionals using an appraisal reference model as the basis for determining
strengths and weaknesses of an organization.
130. Which appraisal method class is the best ?
Ans.

131. Can you explain the importance of PII in SCAMPI ?


Ans. Practice Implementation Indicators (PIIs) are based on the fundamental assumption
that the performance of an activity or the implementation of a practice will always
result in "footprints" that are attributable to the activity or the practice.
132. Can you explain implementation of CMMI in one of the Key process
areas?

133. Explanation of all process areas with goals and practices?

Generic goals and practices:


Generic goals and practices are a part of every process area.
NOTATIONS:GG --> Generic Goals and GP --> Generic Practice
• GG 1 Achieve Specific Goals
o GP 1.1 Perform Specific Practices
• GG 2 Institutionalise a Managed Process
o GP 2.1 Establish an Organizational Policy
o GP 2.2 Plan the Process
o GP 2.3 Provide Resources
o GP 2.4 Assign Responsibility
o GP 2.5 Train People
o GP 2.6 Manage Configurations
o GP 2.7 Identify and Involve Relevant Stakeholders
o GP 2.8 Monitor and Control the Process
o GP 2.9 Objectively Evaluate Adherence
o GP 2.10 Review Status with Higher Level Management
• GG 3 Institutionalise a Defined Process
o GP 3.1 Establish a Defined Process
o GP 3.2 Collect Improvement Information
• GG 4 Institutionalise a Quantitatively Managed Process
o GP 4.1 Establish Quantitative Objectives for the Process
o GP 4.2 Stabilise Subprocess Performance
• GG 5 Institutionalise an Optimising Process
o GP 5.1 Ensure Continuous Process Improvement
o GP 5.2 Correct Root Causes of Problems
134. Can you explain the process areas?

Process Areas Detail:


The CMMI contains 22 process areas indicating the aspects of product development
that are to be covered by company processes.
Causal Analysis and Resolution (CAR)
• A Support process area at Maturity Level 5
Purpose
The purpose of Causal Analysis and Resolution (CAR) is to identify causes of defects
and other problems and take action to prevent them from occurring in the future.
Specific Practices by Goal
• SG 1 Determine Causes of Defects
o SP 1.1 Select Defect Data for Analysis
o SP 1.2 Analyze Causes
• SG 2 Address Causes of Defects
o SP 2.1 Implement the Action Proposals
o SP 2.2 Evaluate the Effect of Changes
o SP 2.3 Record Data
Configuration Management (CM)
• A Support process area at Maturity Level 2
Purpose
The purpose of Configuration Management (CM) is to establish and maintain the
integrity of work products using configuration identification, configuration control,
configuration status accounting, and configuration audits.
Specific Practices by Goal
• SG 1 Establish Baselines
o SP 1.1 Identify Configuration Items
o SP 1.2 Establish a Configuration Management System
o SP 1.3 Create or Release Baselines
• SG 2 Track and Control Changes
o SP 2.1 Track Change Requests
o SP 2.2 Control Configuration Items
• SG 3 Establish Integrity
o SP 3.1 Establish Configuration Management Records
o SP 3.2 Perform Configuration Audits
Decision Analysis and Resolution (DAR)
• A Support process area at Maturity Level 3
Purpose
The purpose of Decision Analysis and Resolution (DAR) is to analyze possible
decisions using a formal evaluation process that evaluates identified alternatives
against established criteria.
Specific Practices by Goal
• SG 1 Evaluate Alternatives
o SP 1.1 Establish Guidelines for Decision Analysis
o SP 1.2 Establish Evaluation Criteria
o SP 1.3 Identify Alternative Solutions
o SP 1.4 Select Evaluation Methods
o SP 1.5 Evaluate Alternatives
o SP 1.6 Select Solutions
Integrated Project Management +IPPD (IPM)
• A Project Management process area at Maturity Level 3
Purpose
The purpose of Integrated Project Management +IPPD (IPM) is to establish and
manage the project and the involvement of the relevant stakeholders according to an
integrated and defined process that is tailored from the organization's set of standard
processes.
Specific Practices by Goal
• SG 1 Use the Project's Defined Process
o SP 1.1 Establish the Project's Defined Process
o SP 1.2 Use Organizational Process Assets for Planning Project Activities
o SP 1.3 Establish the Project's Work Environment
o SP 1.4 Integrate Plans
o SP 1.5 Manage the Project Using the Integrated Plans
o SP 1.6 Contribute to the Organizational Process Assets
• SG 2 Coordinate and Collaborate with Relevant Stakeholders
o SP 2.1 Manage Stakeholder Involvement
o SP 2.2 Manage Dependencies
o SP 2.3 Resolve Coordination Issues
IPPD Addition:
• SG 3 Apply IPPD Principles
o SP 3.1 Establish the Project's Shared Vision
o SP 3.2 Establish the Integrated Team Structure
o SP 3.3 Allocate Requirements to Integrated Teams
o SP 3.4 Establish Integrated Teams
o SP 3.5 Ensure Collaboration among Interfacing Teams
Measurement and Analysis (MA)
• A Support process area at Maturity Level 2
Purpose
The purpose of Measurement and Analysis (MA) is to develop and sustain a
measurement capability that is used to support management information needs.
Specific Practices by Goal
• SG 1 Align Measurement and Analysis Activities
o SP 1.1 Establish Measurement Objectives
o SP 1.2 Specify Measures
o SP 1.3 Specify Data Collection and Storage Procedures
o SP 1.4 Specify Analysis Procedures
• SG 2 Provide Measurement Results
o SP 2.1 Collect Measurement Data
o SP 2.2 Analyze Measurement Data
o SP 2.3 Store Data and Results
o SP 2.4 Communicate Results
Organizational Innovation and Deployment (OID)
• A Process Management process area at Maturity Level 5
Purpose
The purpose of Organizational Innovation and Deployment (OID) is to select and
deploy incremental and innovative improvements that measurably improve the
organization's processes and technologies. The improvements support the
organization's quality and process-performance objectives as derived from the
organization's business objectives.
Specific Practices by Goal
• SG 1 Select Improvements
o SP 1.1 Collect and Analyze Improvement Proposals
o SP 1.2 Identify and Analyze Innovations
o SP 1.3 Pilot Improvements
o SP 1.4 Select Improvements for Deployment
• SG 2 Deploy Improvements
o SP 2.1 Plan the Deployment areas
o SP 2.2 Manage the Deployment
o SP 2.3 Measure Improvement Effects
Organizational Process Definition +IPPD (OPD)
• A Process Management process area at Maturity Level 3
Purpose
The purpose of Organizational Process Definition +IPPD (OPD) is to establish and
maintain a usable set of organizational process assets.
Specific Practices by Goal
• SG 1 Establish Organizational Process Assets
o SP 1.1 Establish Standard Processes
o SP 1.2 Establish Life-Cycle Model Descriptions
o SP 1.3 Establish Tailoring Criteria and Guidelines
o SP 1.4 Establish the Organization's Measurement Repository
o SP 1.5 Establish the Organization's Process Asset Library
IPPD Addition:
• SG 2 Enable IPPD Management
o SP 2.1 Establish Empowerment Mechanisms
o SP 2.2 Establish Rules and Guidelines for Integrated Teams
o SP 2.3 Balance Team and Home Organization Responsibilities
Organizational Process Focus (OPF)
• A Process Management process area at Maturity Level 3
Purpose
The purpose of Organizational Process Focus (OPF) is to plan and implement
organizational process improvement based on a thorough understanding of the current
strengths and weaknesses of the organization's processes and process assets.
Specific Practices by Goal
• SG 1 Determine Process Improvement Opportunities
o SP 1.1 Establish Organizational Process Needs
o SP 1.2 Appraise the Organization's Processes
o SP 1.3 Identify the Organization's Process Improvements
• SG 2 Plan and Implement Process Improvement Activities
o SP 2.1 Establish Process Action Plans
o SP 2.2 Implement Process Action Plans
• SG 3 Deploy Organizational Process Assets and Incorporate Lessons Learned
o SP 3.1 Deploy Organizational Process Assets
o SP 3.2 Deploy Standard Processes
o SP 3.3 Monitor Implementation
o SP 3.4 Incorporate Process-Related Experiences into the Organizational
Process Assets
Organizational Process Performance (OPP)
• A Process Management process area at Maturity Level 4
Purpose
The purpose of Organizational Process Performance (OPP) is to establish and
maintain a quantitative understanding of the performance of the organization's set of
standard processes in support of quality and process-performance objectives, and to
provide the process performance data, baselines, and models to quantitatively manage
the organization's projects.
Specific Practices by Goal
• SG 1 Establish Performance Baselines and Models
o SP 1.1 Select Processes
o SP 1.2 Establish Process Performance Measures
o SP 1.3 Establish Quality and Process Performance Objectives
o SP 1.4 Establish Process Performance Baselines
o SP 1.5 Establish Process Performance Models
Organizational Training (OT)
• A Process Management process area at Maturity Level 3
Purpose
The purpose of Organizational Training (OT) is to develop the skills and knowledge of
people so they can perform their roles effectively and efficiently.
Specific Practices by Goal
• SG 1 Establish an Organizational Training Capability
o SP 1.1 Establish the Strategic Training Needs
o SP 1.2 Determine Which Training Needs Are the Responsibility of the
Organization
o SP 1.3 Establish an Organizational Training Tactical Plan
o SP 1.4 Establish Training Capability
• SG 2 Provide Necessary Training
o SP 2.1 Deliver Training
o SP 2.2 Establish Training Records
o SP 2.3 Assess Training Effectiveness
Product Integration (PI)
• An Engineering process area at Maturity Level 3
Purpose
The purpose of Product Integration (PI) is to assemble the product from the product
components, ensure that the product, as integrated, functions properly, and deliver the
product.
Specific Practices by Goal
• SG 1 Prepare for Product Integration
o SP 1.1 Determine Integration Sequence
o SP 1.2 Establish the Product Integration Environment
o SP 1.3 Establish Product Integration Procedures and Criteria
• SG 2 Ensure Interface Compatibility
o SP 2.1 Review Interface Descriptions for Completeness
o SP 2.2 Manage Interfaces
• SG 3 Assemble Product Components and Deliver the Product
o SP 3.1 Confirm Readiness of Product Components for Integration
o SP 3.2 Assemble Product Components
o SP 3.3 Evaluate Assembled Product Components
o SP 3.4 Package and Deliver the Product or Product Component
Project Monitoring and Control (PMC)
• A Project Management process area at Maturity Level 2
Purpose
The purpose of Project Monitoring and Control (PMC) is to provide an understanding
of the project's progress so that appropriate corrective actions can be taken when the
project's performance deviates significantly from the plan.
Specific Practices by Goal
• SG 1 Monitor Project Against Plan
o SP 1.1 Monitor Project Planning Parameters
o SP 1.2 Monitor Commitments
o SP 1.3 Monitor Project Risks
o SP 1.4 Monitor Data Management
o SP 1.5 Monitor Stakeholder Involvement
o SP 1.6 Conduct Progress Reviews
o SP 1.7 Conduct Milestone Reviews
• SG 2 Manage Corrective Action to Closure
o SP 2.1 Analyze Issues
o SP 2.2 Take Corrective Action
o SP 2.3 Manage Corrective Action
Project Planning (PP)
• A Project Management process area at Maturity Level 2
Purpose
The purpose of Project Planning (PP) is to establish and maintain plans that define
project activities.
Specific Practices by Goal
• SG 1 Establish Estimates
o SP 1.1 Estimate the Scope of the Project
o SP 1.2 Establish Estimates of Work Product and Task Attributes
o SP 1.3 Define Project Life Cycle
o SP 1.4 Determine Estimates of Effort and Cost
• SG 2 Develop a Project Plan
o SP 2.1 Establish the Budget and Schedule
o SP 2.2 Identify Project Risks
o SP 2.3 Plan for Data Management
o SP 2.4 Plan for Project Resources
o SP 2.5 Plan for Needed Knowledge and Skills
o SP 2.6 Plan Stakeholder Involvement
o SP 2.7 Establish the Project Plan
• SG 3 Obtain Commitment to the Plan
o SP 3.1 Review Plans that Affect the Project
o SP 3.2 Reconcile Work and Resource Levels
o SP 3.3 Obtain Plan Commitment
Process and Product Quality Assurance (PPQA)
• A Support process area at Maturity Level 2
Purpose
The purpose of Process and Product Quality Assurance (PPQA) is to provide staff
and management with objective insight into processes and associated work products.
Specific Practices by Goal
• SG 1 Objectively Evaluate Processes and Work Products
o SP 1.1 Objectively Evaluate Processes
o SP 1.2 Objectively Evaluate Work Products and Services
• SG 2 Provide Objective Insight
o SP 2.1 Communicate and Ensure Resolution of Noncompliance Issues
o SP 2.2 Establish Records
Quantitative Project Management (QPM)
• A Project Management process area at Maturity Level 4
Purpose
The purpose of the Quantitative Project Management (QPM) process area is to
quantitatively manage the project's defined process to achieve the project's established
quality and process-performance objectives.
Specific Practices by Goal
• SG 1 Quantitatively Manage the Project
o SP 1.1 Establish the Project's Objectives
o SP 1.2 Compose the Defined Processes
o SP 1.3 Select the Subprocesses that Will Be Statistically Managed
o SP 1.4 Manage Project Performance
• SG 2 Statistically Manage Subprocess Performance
o SP 2.1 Select Measures and Analytic Techniques
o SP 2.2 Apply Statistical Methods to Understand Variation
o SP 2.3 Monitor Performance of the Selected Subprocesses
o SP 2.4 Record Statistical Management Data
Requirements Development (RD)

• An Engineering process area at Maturity Level 3
Purpose
The purpose of Requirements Development (RD) is to produce and analyze customer,
product, and product-component requirements.
Specific Practices by Goal
• SG 1 Develop Customer Requirements
o SP 1.1 Elicit Needs
o SP 1.2 Develop the Customer Requirements
• SG 2 Develop Product Requirements
o SP 2.1 Establish Product and Product-Component Requirements
o SP 2.2 Allocate Product-Component Requirements
o SP 2.3 Identify Interface Requirements
• SG 3 Analyze and Validate Requirements
o SP 3.1 Establish Operational Concepts and Scenarios
o SP 3.2 Establish a Definition of Required Functionality
o SP 3.3 Analyze Requirements
o SP 3.4 Analyze Requirements to Achieve Balance
o SP 3.5 Validate Requirements
Requirements Management (REQM)
• An Engineering process area at Maturity Level 2
Purpose
The purpose of Requirements Management (REQM) is to manage the requirements of
the project's products and product components and to identify inconsistencies between
those requirements and the project's plans and work products.
Specific Practices by Goal
• SG 1 Manage Requirements
o SP 1.1 Obtain an Understanding of Requirements
o SP 1.2 Obtain Commitment to Requirements
o SP 1.3 Manage Requirements Changes
o SP 1.4 Maintain Bidirectional Traceability of Requirements
o SP 1.5 Identify Inconsistencies between Project Work and Requirements
Risk Management (RSKM)
• A Project Management process area at Maturity Level 3
Purpose
The purpose of Risk Management (RSKM) is to identify potential problems before they
occur so that risk-handling activities can be planned and invoked as needed across the
life of the product or project to mitigate adverse impacts on achieving objectives.
Specific Practices by Goal
• SG 1 Prepare for Risk Management
o SP 1.1 Determine Risk Sources and Categories
o SP 1.2 Define Risk Parameters
o SP 1.3 Establish a Risk Management Strategy
• SG 2 Identify and Analyze Risks
o SP 2.1 Identify Risks
o SP 2.2 Evaluate, Categorize, and Prioritize Risks
• SG 3 Mitigate Risks
o SP 3.1 Develop Risk Mitigation Plans
o SP 3.2 Implement Risk Mitigation Plans
Supplier Agreement Management (SAM)
• A Project Management process area at Maturity Level 2
Purpose
The purpose of Supplier Agreement Management (SAM) is to manage the acquisition
of products from suppliers for which there exists a formal agreement.
Specific Practices by Goal
• SG 1 Establish Supplier Agreements
o SP 1.1 Determine Acquisition Type
o SP 1.2 Select Suppliers
o SP 1.3 Establish Supplier Agreements
• SG 2 Satisfy Supplier Agreements
o SP 2.1 Execute the Supplier Agreement
o SP 2.2 Monitor Selected Supplier Processes
o SP 2.3 Evaluate Selected Supplier Work Products
o SP 2.4 Accept the Acquired Product
o SP 2.5 Transition Products
Technical Solution (TS)
• An Engineering process area at Maturity Level 3
Purpose
The purpose of Technical Solution (TS) is to design, develop, and implement solutions
to requirements. Solutions, designs, and implementations encompass products, product
components, and product-related life-cycle processes either singly or in combination as
appropriate.
Specific Practices by Goal
• SG 1 Select Product-Component Solutions
o SP 1.1 Develop Alternative Solutions and Selection Criteria
o SP 1.2 Select Product Component Solutions
• SG 2 Develop the Design
o SP 2.1 Design the Product or Product Component
o SP 2.2 Establish a Technical Data Package
o SP 2.3 Design Interfaces Using Criteria
o SP 2.4 Perform Make, Buy, or Reuse Analysis
• SG 3 Implement the Product Design
o SP 3.1 Implement the Design
o SP 3.2 Develop Product Support Documentation
Validation (VAL)
• An Engineering process area at Maturity Level 3
Purpose
The purpose of Validation (VAL) is to demonstrate that a product or product component
fulfills its intended use when placed in its intended environment.
Specific Practices by Goal
• SG 1 Prepare for Validation
o SP 1.1 Select Products for Validation
o SP 1.2 Establish the Validation Environment
o SP 1.3 Establish Validation Procedures and Criteria
• SG 2 Validate Product or Product Components
o SP 2.1 Perform Validation
o SP 2.2 Analyze Validation Results.
Verification (VER)
• An Engineering process area at Maturity Level 3
Purpose
The purpose of Verification (VER) is to ensure that selected work products meet their
specified requirements.
Specific Practices by Goal
• SG 1 Prepare for Verification
o SP 1.1 Select Work Products for Verification
o SP 1.2 Establish the Verification Environment
o SP 1.3 Establish Verification Procedures and Criteria
• SG 2 Perform Peer Reviews
o SP 2.1 Prepare for Peer Reviews
o SP 2.2 Conduct Peer Reviews
o SP 2.3 Analyze Peer Review Data
• SG 3 Verify Selected Work Products
o SP 3.1 Perform Verification
o SP 3.2 Analyze Verification Results
SIX SIGMA

135. What is six sigma?

Six Sigma is a business management strategy originally developed by Motorola, USA in
1986.[1][2] As of 2010, it is widely used in many sectors of industry, although its use is not
without controversy.
Six Sigma seeks to improve the quality of process outputs by identifying and removing
the causes of defects (errors) and minimizing variability in manufacturing and business
processes.[3] It uses a set of quality management methods, including statistical methods,
and creates a special infrastructure of people within the organization ("Black Belts",
"Green Belts", etc.) who are experts in these methods.[3] Each Six Sigma project carried
out within an organization follows a defined sequence of steps and has quantified
financial targets (cost reduction or profit increase).[3]
The term Six Sigma originated from terminology associated with manufacturing,
specifically terms associated with statistical modelling of manufacturing processes. The
maturity of a manufacturing process can be described by a sigma rating indicating its
yield, or the percentage of defect-free products it creates. A six sigma process is one in
which 99.99966% of the products manufactured are statistically expected to be free of
defects (3.4 defects per million). Motorola set a goal of "six sigma" for all of its
manufacturing operations, and this goal became a byword for the management and
engineering practices used to achieve it.
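As an illustrative sketch (the process figures below are hypothetical and not part of the
original answer), the defect rate is usually expressed as defects per million opportunities
(DPMO) and compared against the commonly quoted Six Sigma target of 3.4:

# Illustrative sketch (assumed example data): computing DPMO.
def dpmo(defects, units, opportunities_per_unit):
    """Defects Per Million Opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical process: 1,000 units inspected, 5 defect opportunities each,
# 17 defects found.
rate = dpmo(defects=17, units=1_000, opportunities_per_unit=5)
print(f"DPMO = {rate:.1f}")                       # 3400.0
print("Six Sigma target = 3.4 DPMO (99.99966% defect-free)")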

136. Can you explain the different methodology for execution and design process in
SIX sigma?

DMAIC and DMADV are two methodologies for execution and design processes in Six Sigma.
DMAIC is used to improve an existing business process and has five phases:
Define - Define opportunity
Measure - Measure performance
Analyze - Analyze opportunity
Improve - Improve performance
Control - Control performance
DMADV is used for new product or process design development and has five phases:
Define - Define Opportunity
Measure - Measure CTQ (Critical to Quality)
Analyze - Analyze Relationship
Design - Design solution
Verify - Verify functionality

137. What does an executive leader, champions, Master Black belt, green belts and
black belts mean?

Executive leaders - They are the persons who take the leadership of Six Sigma: the CEO,
owner, promoter of Six Sigma throughout the organization.
Champions - They have day-to-day responsibility for the business process being improved.
They make sure the Six Sigma project team has the required resources to execute their
tasks.
Master Black Belts - They address the most complex process improvement projects and
provide guidelines and training to Black Belts and Green Belts.
Green Belts - Green Belts assist Black Belts. They have enough knowledge of Six
Sigma. They apply Six Sigma methodologies at the bottom level to solve problems and
improve processes.
Black Belts - They work as team leads or project managers of the projects chosen for Six
Sigma. Black Belts select projects, train resources and implement them. They find out
the variations and see how to minimize them.

138. What are the different kinds of variations used in six sigma?

Variation defines how much change is happening in the output of a process.

There are four ways of measuring variation (a small sketch follows below):

• Mean - variations are measured and compared by using the mathematical averaging
technique.

• Median - it is the middle value in a range of data: arrange the values in order and take
the value that falls in the middle of the sorted data.

• Mode - it is the most frequently occurring value in a data range.

• Range - variations are measured as the difference between the highest and lowest values in
a particular data range.
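A minimal sketch (hypothetical data set) of the four measures, using Python's statistics
module:

# Minimal sketch with assumed data: mean, median, mode and range.
import statistics

data = [12, 15, 15, 18, 20, 22, 30]

mean_value   = statistics.mean(data)        # arithmetic average
median_value = statistics.median(data)      # middle value of the sorted data
mode_value   = statistics.mode(data)        # most frequently occurring value
range_value  = max(data) - min(data)        # highest minus lowest value

print(mean_value, median_value, mode_value, range_value)
# 18.857142857142858 18 15 18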

139. Can you explain the concept of standard deviation?

The standard deviation is a measure of how spread out your data are. It is useful in
comparing sets of data which may have the same mean but a different range.
Computation of the standard deviation is a bit tedious. The steps (illustrated in the sketch
below) are:

• Compute the mean for the data set.
• Compute the deviation by subtracting the mean from each value.
• Square each individual deviation.
• Add up the squared deviations.
• Divide by one less than the sample size.
• Take the square root.
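A minimal sketch (hypothetical data) that follows the steps above and cross-checks the
result against Python's statistics module:

# Sample standard deviation computed step by step (assumed data).
import math
import statistics

data = [4, 8, 6, 5, 3, 7]

mean = sum(data) / len(data)                  # 1. compute the mean
deviations = [x - mean for x in data]         # 2. deviation from the mean
squared = [d ** 2 for d in deviations]        # 3. square each deviation
sum_sq = sum(squared)                         # 4. add up squared deviations
variance = sum_sq / (len(data) - 1)           # 5. divide by n - 1
std_dev = math.sqrt(variance)                 # 6. take the square root

print(round(std_dev, 4))                      # 1.8708
print(round(statistics.stdev(data), 4))       # same result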

140. Can you explain the concept of fish bone/ Ishikawa diagram?

The fish bone / Ishikawa diagram is named after Kaoru Ishikawa, a quality expert from
Japan. It is a tool used to visualize, identify and classify possible causes of problems in a
process, product or service. It is also known as a cause-and-effect diagram. Using this tool the
root cause of problems can be identified.
Following are the steps:
1. Identify a problem (effect) with a list of potential causes and write down the effect.
2. Identify the major causes of the problem, which become the "big branches".
3. Fill in the "small branches" with subcauses of each major cause until the lowest-level
subcause is identified.
4. Review the completed diagram with the work process to verify that these causes
(factors) do affect the problem being resolved.
5. Work on the most important causes first.
6. Verify the root causes by collecting appropriate data (sampling) to validate
a relationship to the problem.
7. Continue this process to identify all causes and, ultimately, the root cause.

141. What is Pareto principle?

The Pareto principle (also known as the 80-20 rule,[1] the law of the vital few, and the
principle of factor sparsity) states that, for many events, roughly 80% of the effects
come from 20% of the causes.[2][3]
Business management thinker Joseph M. Juran suggested the principle and named it
after Italian economist Vilfredo Pareto, who observed in 1906 that 80% of the land in
Italy was owned by 20% of the population; he developed the principle by observing that
20% of the pea pods in his garden contained 80% of the peas.[3]
It is a common rule of thumb in business; e.g., "80% of your sales come from 20% of
your clients". Mathematically, where something is shared among a sufficiently large set
of participants, there must be a number k between 50 and 100 such that "k% is taken by
(100 - k)% of the participants". The number k may vary from 50 (in the case of equal
distribution, i.e. 100% of the population have equal shares) to nearly 100 (when a tiny
number of participants account for almost all of the resource). There is nothing special
about the number 80% mathematically, but many real systems have k somewhere
around this region of intermediate imbalance in distribution.
The Pareto principle is only tangentially related to Pareto efficiency, which was also
introduced by the same economist. Pareto developed both concepts in the context of
the distribution of income and wealth among the population.
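As a rough illustration (the sales figures are hypothetical and not from the original
answer), the rule can be checked numerically:

# Minimal sketch: how much of the total comes from the top 20% of clients.
clients = {"A": 500, "B": 320, "C": 90, "D": 40, "E": 20,
           "F": 10, "G": 8, "H": 6, "I": 4, "J": 2}

total = sum(clients.values())
top_20_percent = sorted(clients.values(), reverse=True)[: max(1, len(clients) // 5)]
share = sum(top_20_percent) / total * 100

print(f"Top 20% of clients generate {share:.0f}% of sales")   # about 82%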

142. Can you explain QFD?

Quality Function Deployment is a quality tool which builds and delivers a quality product
by focusing the various business functions towards achieving a goal.

1. Derive top-level product requirements or technical characteristics from customer
needs.
2. Develop product concepts to satisfy these requirements.
3. Evaluate product concepts to select the most optimum (Concept Selection Matrix).
4. Partition the system concept or architecture into subsystems or assemblies and flow
down higher-level requirements or technical characteristics to these subsystems or
assemblies.
5. Derive lower-level product requirements (assembly or part characteristics) and
specifications from subsystem/assembly requirements (Assembly/Part Deployment
Matrix).
6. For critical assemblies or parts, flow down lower-level product requirements
(assembly or part characteristics) to process planning.
7. Determine manufacturing process steps to meet these assembly or part
characteristics.
8. Based on these process steps, determine set-up requirements, process controls and
quality controls to assure achievement of these critical assembly or part characteristics.

143. Can you explain FMEA?

FMEA (Failure Modes and Effects Analysis) is a technique to identify potential
problems in a system design or process by examining the effects of lower-level failures.
Based on the results, actions are taken for the problems/failures to reduce
their recurrence and to reduce the risk if they occur again.
Failure modes are any potential or actual problems/defects in a design or process.
Effect analysis is the study of the effects of those failures/problems.
Steps for FMEA:
1. Identify the effect of each failure by failure mode analysis and identify single
failure points that are critical.
2. Rank each failure and its probability of occurrence.
3. Take action.
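A common way to rank failure modes in step 2 is the Risk Priority Number
(RPN = Severity x Occurrence x Detection). The sketch below uses hypothetical ratings
on a 1-10 scale and is only an illustration, not part of the original answer:

# Ranking hypothetical failure modes by Risk Priority Number.
failure_modes = [
    # (name, severity, occurrence, detection)
    ("Wrong tax calculation",   9, 4, 6),
    ("Slow report generation",  4, 7, 3),
    ("Login page typo",         2, 3, 2),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for name, sev, occ, det in ranked:
    print(f"{name}: RPN = {sev * occ * det}")
# Wrong tax calculation: RPN = 216
# Slow report generation: RPN = 84
# Login page typo: RPN = 12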
144. Can you explain X bar charts?

An X-bar chart is a control chart used in statistical process control. Samples (subgroups)
are taken from the process at regular intervals, and the average (X-bar) of each subgroup
is plotted over time against a center line (the grand average) and upper and lower control
limits. If the plotted averages stay within the limits and show no unusual patterns, the
process mean is considered stable; points outside the limits signal that the process may be
drifting out of control. X-bar charts are usually used together with an R (range) chart,
which tracks the variation within each subgroup.
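A minimal sketch of how the center line and control limits might be computed; the
subgroup data are hypothetical and the 3-sigma limits below are a simplified
approximation rather than the tabled SPC constants used in formal practice:

# Simplified X-bar chart limits: grand mean +/- 3 * (overall std / sqrt(subgroup size)).
import math
import statistics

subgroups = [
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.3, 10.1, 9.9, 10.0, 10.2],
    [9.7, 10.0, 10.1, 9.9, 10.0],
]

means = [statistics.mean(g) for g in subgroups]          # points plotted on the chart
grand_mean = statistics.mean(means)                      # center line
all_values = [x for g in subgroups for x in g]
sigma = statistics.stdev(all_values)
n = len(subgroups[0])

ucl = grand_mean + 3 * sigma / math.sqrt(n)              # upper control limit
lcl = grand_mean - 3 * sigma / math.sqrt(n)              # lower control limit
print(f"Center = {grand_mean:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")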

145 Question & answer missing

146. What does Agile mean?

Agile means quick or lively; this quickness and liveliness can be physical or even
mental. A dancer can have very agile leaps and jumps, and a student might be
commended for having a very agile mind. In software development the term refers to
approaches that deliver working software in short iterations and respond quickly to
changing customer requirements.

147. Can you explain agile modeling?

Agile modeling is an approach to the modeling aspects of software development. It is a
practice for modeling and documenting software systems. In one line:

It is a collection of best practices for software modeling in a light-weight manner.

In abstraction we can say it augments other software processes. For instance, let's say
your company is using UML; Agile then applies its approach and practices on UML. For
example, "keep things simple" is the Agile approach. It means that we do not need to use
all diagrams in our project, only those which are needed. If we summarize it in one
line we can say Agile modeling says "Do only what's needed and nothing more than
that".
Figure: - Agile Modeling

148. What are core and supplementary principles in agile modeling?

Agile Modeling (AM) defines a collection of core and supplementary principles that
when applied on a software development project set the stage for a collection of
modeling practices. Some of the principles have been adopted from eXtreme
Programming (XP) and are well documented in Extreme Programming Explained,
which in turn adopted them from common software engineering techniques. For the
most part the principles are presented with a focus on their implications to modeling
efforts and as a result material adopted from XP may be presented in a different light.

Core Principles:
• Assume Simplicity
• Embrace Change
• Enabling the Next Effort is Your Secondary Goal
• Incremental Change
• Maximize Stakeholder ROI
• Model With a Purpose
• Multiple Models
• Quality Work
• Rapid Feedback
• Working Software Is Your Primary Goal
• Travel Light

Supplementary Principles:
• Content is More Important Than Representation
• Open and Honest Communication

149. What is the main principle behind agile documentation?

The main deliverable in Agile is working software and not documentation.
Documentation is a support to get the working software. In the traditional delivery cycle a lot
of documentation was generated in the design and requirement phases, and many of those
documents were created just for the sake of it and were never really used.
Below are some of the key points to make documentation Agile:-

Before creating any document ask the question: do we need it, and if yes, who is the
stakeholder? A document should exist only if needed and not for the sake of existence.
The most important thing is that we need to create documentation that provides enough data
and no more than that. It should be simple and should communicate to stakeholders
what it needs to communicate. For instance, the below figure 'Agile Documentation' shows
two views for a simple class diagram. In the first view we have shown all the properties
for the "Customer" and the "Address" class. Now have a look at the second view where we
have only shown the broader-level view of the classes and the relationship between them.
The second view is enough and not more. If the developer wants to get into details we
can do that during development.
Figure: - Agile documentation
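As a stand-in for the figure, a minimal code sketch of the broader-level view described
above; the class names come from the text, while the fields are hypothetical:

# Broader-level view only: two classes and the relationship between them.
class Address:
    """Detailed fields (street, city, zip, ...) are filled in during development."""

class Customer:
    """A customer has an address; other properties are added during development."""
    def __init__(self, address: Address):
        self.address = address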

Document only for the current need and not for the future. In short, whatever documentation we
require now we should produce, and not something we might need in the future.
Documentation changes its form as it travels through every cycle. For instance in the
requirement phase it’s the requirement document, in design it’s the technical
documentation and so on.
150. What are the different methodologies to implement Agile?

Agile is a thinking approach to software development which promises to remove the
issues we had with the traditional waterfall methodology. In order to implement Agile
practically in projects we have various methodologies. The below figure 'Agile
Methodologies' shows the same in a more detailed manner.
Figure: - Agile Methodologies

151. What is XP?

Extreme Programming is a discipline of software development based on values of
simplicity, communication, feedback, and courage. It works by bringing the whole team
together in the presence of simple practices, with enough feedback to enable the team
to see where they are and to tune the practices to their unique situation.

In Extreme Programming, every contributor to the project is an
integral part of the “Whole Team“. The team forms around a business representative
called “the Customer”, who sits with the team and works with them daily.

152. What are User Stories in XP and how different are they from requirement?

A user story is nothing but an end user's requirement. What differentiates a user story from a
requirement is that user stories are short and sweet. In one sentence, they are just enough and
nothing more than that. A user story ideally should be written on an index card. The below figure
'User Story Index Card' shows the card; it is a 3 x 5 inch (8 x 13 cm) card. This will keep
your stories as small as possible. Requirement documents run into pages. As we are
keeping the stories short they are simple to read and understand. Traditional requirement
documents are verbose and they tend to lose the main requirement of the project.

Note: - When I was working in a multinational company I remember the first 50 pages of
the requirement document having things like history, backtracking, author of the
document, etc. I was completely drained before I even reached the core requirement.
Note: - Theoretically it is good to have cards, but in real scenarios you often will not. We have
seen in actual scenarios project managers keeping stories in a document, with every story
not more than 15 lines.

Figure: - User Story Index Card

153. Who writes User stories?

It is written and owned by the end customer and no one else.

154. When do we say a story is valid?

155. When are test plans written in XP?

The XP development cycle consists of two phases: one is 'Release Planning' and the other is
'Iteration Planning'. In release planning we decide what should be delivered and in
which priority. In iteration planning we break the requirements into tasks and plan how
to deliver those activities decided in release planning. The below figure 'Actual Essence'
shows what these two phases actually deliver.
Figure: - Actual Essence

If you are still having the old SDLC in mind below figure ‘Mapping to Traditional Cycle’
shows how the two phases map to SDLC.

Figure: - Mapping to Traditional Cycle

So let’s explore both these phases in a more detailed manner. Both phases “Release
Planning” and “Iteration Planning” have three common phases “Exploration”,
“Commitment” and “Steering”.

156. Can you explain the XP development life cycle?

1. Exploration Phase
2. Planning Phase
3. Iterations to Release Phase
4. Productionizing Phase
5. Maintenance Phase

Figure 1. The XP project lifecycle.


1. Exploration Phase

The first phase that an XP project experiences is the Exploration phase (Beck, 2000),
encompassing the initial requirements modeling and initial architectural
modeling aspects of the agile software development lifecycle. This phase includes
development of the architectural spike and the development of the initial user
stories. From a requirements point of view Beck suggests that you require enough
material in the user stories to make a good first release, and the developers should be
sufficiently confident that they can't estimate any better without actually implementing
the system. Every project has a scope, something that is typically based on a
collection of initial requirements for your system. Although the XP lifecycle presented
in Figure 1 does not explicitly include a specific scope definition task it implies one with
user stories being an input into release planning. User stories are a primary driver of
the XP methodology – they provide high-level requirements for your system and are the
critical input into your planning process. The implication is that you need a collection of
user stories, anywhere from a handful to several dozen, to get your XP project started.

157. Can you explain how planning game works in Extreme Programming?

Extreme Programming (XP) is a software development methodology which is intended
to improve software quality and responsiveness to changing customer requirements. As
a type of agile software development,[1][2][3] it advocates frequent "releases" in short
development cycles (timeboxing), which is intended to improve productivity and
introduce checkpoints where new customer requirements can be adopted.

Other elements of extreme programming include: programming in pairs or doing
extensive code review, unit testing of all code, avoiding programming of features until
they are actually needed, a flat management structure, simplicity and clarity in code,
expecting changes in the customer's requirements as time passes and the problem is
better understood, and frequent communication with the customer and among
programmers.
The planning game itself is XP's planning practice: during release planning the customer
presents user stories and sets their priorities while the developers estimate them, and
during iteration planning the selected stories are broken into tasks which developers
estimate and sign up for.

158. How do we estimate in Agile?

If you read the Agile cycle carefully (explained in the previous section) you will see Agile
estimation happens at two places.

User Story Level Estimation: - In this level a User story is estimated using Iteration
Team velocity and the output is Ideal Man days or Story points.

Task Level Estimation: - This is a second level of estimation. This estimation is at the
developer level according to the task assigned. This estimation ensures that the User
story estimation is verified.

Estimation happens at two levels: once when we take the requirement and once when we
are very near to execution, that is, at the task level. This looks very logical because
as we come nearer to the task its estimate becomes more and more clear. So task-level
estimation just acts as a cross-verification of the user-story-level estimation.

Figure: - Agile Estimation
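A minimal sketch (hypothetical story points and velocity, not from the original text) of the
arithmetic behind user-story-level estimation:

# Using team velocity to estimate how many iterations a set of stories will take.
story_points = [5, 3, 8, 2, 13, 5, 8]    # user-story level estimates
velocity = 15                            # story points the team completes per iteration

total_points = sum(story_points)
iterations_needed = -(-total_points // velocity)   # ceiling division
print(f"{total_points} points / velocity {velocity} -> about {iterations_needed} iterations")
# 44 points / velocity 15 -> about 3 iterations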

159. On What basis can stories be prioritized?

User stories should normally be prioritized from the business-importance point of view. In
real scenarios this is not the only criterion. Below are some of the factors to be accounted for
when prioritizing user stories:-
Prioritize by business value: - The business user assigns a value according to the
business needs. There are three levels of rating for business value:-
o Most important features: - Without these features the software has no meaning.
o Important features: - These are important-to-have features, but if they do not exist
there are alternatives by which the user can manage.
o Nice-to-have features: - These features are not essential but rather are over-the-top
cream for the end user.
Prioritize by risk: - This factor helps us prioritize by risk from the development angle. A
risk index is assigned from 0 to 2 and is classified in three main categories:-
o Completeness
o Volatility
o Complexity

Below figure “Risk Index” shows the values and the classification accordingly.

Figure: - Risk Index
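A minimal sketch (hypothetical stories and scores) of how business value and the risk
index could be combined to order stories; the exact weighting used here is an assumption
for illustration, not part of the original text:

# Ordering stories by business value (3 = most important) and then by risk (0-6).
stories = [
    # (title, business_value, completeness, volatility, complexity)
    ("Tax calculation",    3, 2, 1, 2),
    ("Export to PDF",      2, 1, 0, 1),
    ("Change theme color", 1, 0, 0, 0),
]

def priority(story):
    title, value, completeness, volatility, complexity = story
    risk = completeness + volatility + complexity   # risk index, 0 (low) to 6 (high)
    return (value, risk)                            # value first, then risk

for story in sorted(stories, key=priority, reverse=True):
    print(story[0])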

160. Can you point out simple differences between Agile and traditional SDLC?

Lengthy requirement documents are now simple and short user stories.
The estimation units man days and man hours are now ideal days and ideal hours
respectively.
In the traditional approach we freeze the requirements and complete the full design and then
start coding. But in Agile we do design task-wise, so just before the developer starts
a task he does the design.
In traditional SDLC we always used to hear the phrase 'after sign-off nothing can be
changed'; in Agile we work for the customer, so we do accept changes.
Unit test plans are written after coding or during coding in traditional SDLC. In Agile we
write unit test plans before writing the code.
Figure: - Agile and Traditional SDLC

161. Can you explain the concept of refactoring?

Refactoring is "the process of changing a software system in such a way that it does not
alter the external behavior of the code yet improves its internal structure," according to
Martin Fowler, the "father" of refactoring. The concept of refactoring covers practically
any revision or cleaning up of source code, but Fowler consolidated many best
practices from across the software development industry into a specific list of
"refactorings" and described methods to implement them in his book, Refactoring:
Improving the Design of Existing Code. While refactoring can be applied to any
programming language, the majority of current refactoring tools have been developed
for the Java language.
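A minimal sketch of a typical refactoring, an "extract method" applied to hypothetical
code; the external behaviour before and after is identical, only the internal structure
improves:

# Before: one function mixing validation and calculation.
def invoice_total_before(items):
    total = 0
    for price, qty in items:
        if price < 0 or qty < 0:
            raise ValueError("negative value")
        total += price * qty
    return total

# After: the validation is extracted into its own well-named function.
def _validate(price, qty):
    if price < 0 or qty < 0:
        raise ValueError("negative value")

def invoice_total(items):
    total = 0
    for price, qty in items:
        _validate(price, qty)
        total += price * qty
    return total

# Same inputs, same result before and after the refactoring.
assert invoice_total([(10, 2), (5, 1)]) == invoice_total_before([(10, 2), (5, 1)]) == 25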

162. What is a feature in Feature Driven Development?

Feature-Driven Development (FDD) is a client-centric, architecture-centric, and
pragmatic software process. The term “client” in FDD is used to represent what Agile
Modeling (AM) refers to as project stakeholders or eXtreme Programming (XP) calls
customers. FDD was first introduced to the world in 1999 via the book Java Modeling
In Color with UML, a combination of the software process followed by Jeff DeLuca’s
company and Peter Coad’s concept of features. FDD was first applied on a 15 month,
50-person project for a large Singapore bank in 1997, which was immediately followed
by a second, 18-month long 250-person project. A more substantial description is
published in the book A Practical Guide to Feature-Driven Development as well as
the Feature Driven Development web site.

163. Can you explain the overall structure of FDD project?

FDD is an iterative methodology to deliver projects. Rather than delivering a project in
a one-go manner we deliver the features within time limits. So let's understand how the
FDD cycle moves. We will have two views: one is the overall flow and one is the detailed
iteration flow.
The below figure 'Structure of FDD project' shows the step-by-step approach for an FDD project.
• Identify the features: - In this phase we identify the features in the project. Keep
one thing in mind features are not simple user point of view requirements; we
should be able to schedule a feature.
• Prioritize the features: - Big-bang thinking is bad. Many project
managers think of delivering everything in the first step itself, but that is practically
difficult. Rather, deliver the most needed functionalities first, then the needed ones and
then the so-called over-the-top cream functionalities. To deliver in a feature-by-feature
manner we need to prioritize the feature list from the user's angle.
• Define iterations and time boxes: - The first thing which must have clicked in your
mind is that we should deliver in groups of features, but that is not the case in FDD. We
deliver according to the size of the iteration. An iteration is based on "timeboxes" so that
we know how long the iteration is. Depending on the timeboxes, whether the features can
be delivered or not is decided.
The below points are looped for every iteration (below sections are covered in
more detail in the coming up section).
• Plan Iteration: - At the start of every iteration we need to plan it out, how we are
going to execute the plan.
• Create release: - We code, test and deliver according to the plan chalked out in
the "Plan Iteration" phase.
If everything is OK we move ahead; if not, we take it up in the next iteration.
Figure: - Structure of a FDD project
164-171 Questions & answers missing
172.Can you explain in detail project life cycle phase in DSDM?

Ans There are in all five phases in DSDM project life cycle:-
1. Feasibility Study: - During this stage whether the project is suitable for DSDM is
examined. For that we need to answer questions like "Can this project fulfill the
business needs?", "Is the project fit for DSDM?" and "What are the prime risks
involved in the project?".
2. Business Study: - Once we have concluded that the project has passed the
feasibility study in this phase we do a business study. Business study involves
meeting with the end customer/user to discuss about a proposed system. In one
sentence it’s a requirement gathering phase. Requirements are then prioritized
and time boxed. So the output of this phase is a prioritized requirement list with
time frames.
3. Functional Model Iteration: - In this phase we develop prototype which is
reviewed by the end user.
4. Design and Build Iteration: - The prototype which was agreed by the user in
the previous stage is designed and built in this stage and given to the end user
for testing.
5. Implementation: - Once the end user has confirmed everything is all right, it is time
to implement and deliver the system to the end user.
Figure: - DSDM Project Life Cycle
173. Can you explain LSD ?
Ans. Lean software development has derived its principles from lean manufacturing.
Below figure ‘Principles of LSD’ shows all the principles.

Figure: - Principles of LSD

Let’s understand in brief about the principles.


Eliminate waste: - Only deliver what is needed to the end user. Anything more than that
is a waste; in short, anything which does not add value is a waste. If we can bypass a
task and still deliver, then it should be bypassed.
Decide as late as possible: - Software systems are complex. The nearer they are to
execution, the more the clarity. So delay decisions so that they can be based on facts
rather than assumptions. For instance, your team decides to use automation testing, but
when you come to execution you learn that changes are delivered every day by the
developers. After execution you conclude that manual testing is the best option. The bad
part is that if you have already bought an automation tool, it is a waste of money.
Deliver as fast as possible: - The faster a product is delivered, the faster you will get user
feedback and the faster you can improve in the next iteration. The concept is not fast
coding, but to try to deliver in small, user-identified chunks for better understanding.
Motivate team: - Team members are the most important aspect for the success of any
project. Motivate them, give them roles, keep your team spirit high and do whatever you
can to make them feel good in the project. A highly motivated team delivers the project
on time.
Integrity: - The software system should be delivered in loosely coupled components. Every
component or module can function individually, and when integrated with the project it
works perfectly well. It should be plug-and-play from the end user's point of view. This
spirit is derived from how actual production systems work: you can assemble a car with
wheels from one organization and seats from another organization.

174.Can you explain ASD?

ASD (Adaptive Software Development) accepts that change is a truth. It also accepts in
principle that mistakes can happen and that it is important to learn from those mistakes in
the future. The below figure 'ASD Cycle' shows the three important phases in ASD.

Figure: - ASD Cycle


Let's understand all the three phases in a detailed manner.
Speculate (nothing but planning): - This is the planning phase of the ASD cycle. The below
figure 'Speculate' shows in detail what happens in this phase.
Define project scope: - This is the first step in the main speculate phase. In this we
set the main goals and the scope of the entire project.
Set time for scope: - Timelines are decided for the scope identified in the previous
step.
Decide the number of iterations: - Depending on the timelines and the scope we
identify how many iterations the project will have.
Break scope into tasks: - In this step we break the scope into tasks. Tasks are
nothing but the development activities which a developer has to do to complete the scope.
Assign tasks to developers: - Assign those tasks to developers.
Now that we know who will do which task, it is time for coding / execution.

Figure: - Speculate
Collaborate (coding / execution): - Execute as per the tasks and test them.
Learn (review and give feedback to planning): - At the end of the iteration see what
lessons we have learnt and apply the same to the next iteration.
