
1.2 INTRODUCTION TO SYSTEM DEVELOPMENT LIFE CYCLE (SDLC)


1.2.1 Initiation Phase
1.2.2 System Concept Development Phase
1.2.3 Planning Phase
1.2.4 Requirements Analysis Phase
1.2.5 Design Phase
1.2.6 Development Phase
1.2.7 Integration and Test Phase
1.2.8 Implementation Phase
1.2.9 Operations and Maintenance Phase
1.2.10 Disposition Phase

1.3 CONTROLS/ASSUMPTIONS

1.4 DOCUMENTATION

1.0 BACKGROUND

The DOJ spends millions of dollars each year on the acquisition, design, development,
implementation, and maintenance of information systems vital to mission programs and
administrative functions. The need for safe, secure, and reliable system solutions is
heightened by the increasing dependence on computer systems and technology to provide
services and develop products, administer daily activities, and perform short- and long-
term management functions. There is also a need to ensure privacy and security when
developing information systems, to establish uniform privacy and protection practices,
and to develop acceptable implementation strategies for these practices.

The DOJ needs a systematic and uniform methodology for information systems
development. Using this SDLC will ensure that systems developed by the Department
meet IT mission objectives; are compliant with the current and planned Information
Technology Architecture (ITA); and are easy to maintain and cost-effective to enhance.
Sound life cycle management practices include planning and evaluation in each phase of
the information system life cycle. The appropriate level of planning and evaluation is
commensurate with the cost of the system, the stability and maturity of the technology
under consideration, how well defined the user requirements are, the level of stability of
program and user requirements, and security considerations.

1.1 PURPOSE, SCOPE, AND APPLICABILITY

1.1.1 Purpose

This SDLC methodology establishes procedures, practices, and guidelines governing the
initiation, concept development, planning, requirements analysis, design, development,
integration and test, implementation, operations and maintenance, and disposition of
information systems (IS) within the DOJ. It should be used in conjunction with existing
policy and guidelines for acquisition and procurement, as these areas are not discussed in
the SDLC.

1.1.2 Scope

This methodology should be used for all DOJ information systems and applications. It is
applicable across all information technology (IT) environments (e.g., mainframe,
client/server) and applies to contractually developed as well as in-house developed applications.
The specific participants in the life cycle process, and the necessary reviews and
approvals, vary from project to project. The guidance provided in this document should
be tailored to the individual project based on cost, complexity, and criticality to the
agency’s mission. See Chapter 13 for Alternate SDLC Work Patterns if a formal SDLC is
not feasible. Similarly, the documents called for in the guidance and shown in Appendix
C should be tailored based on the scope of the effort and the needs of the decision
authorities.

1.1.3 Applicability

This methodology can be applied to all DOJ Offices, Boards, Divisions and Bureaus
(OBDB) that are responsible for information systems development. All Project Managers
and development teams involved in system development projects represent the primary
audience for the DOJ SDLC, version 2.0.

1.2 INTRODUCTION TO SDLC

The SDLC includes ten phases during which defined IT work products are created or
modified. The tenth phase occurs when the system is disposed of and the task performed
is either eliminated or transferred to other systems. The tasks and work products for each
phase are described in subsequent chapters. Not every project will require that the phases
be sequentially executed. However, the phases are interdependent. Depending upon the
size and complexity of the project, phases may be combined or may overlap. See Figure
1-1.
Figure 1-1


The DOJ SDLC encompasses ten phases:

1.2.1 Initiation Phase

The initiation of a system (or project) begins when a business need or opportunity is
identified. A Project Manager should be appointed to manage the project. This business
need is documented in a Concept Proposal. After the Concept Proposal is approved, the
System Concept Development Phase begins.

1.2.2 System Concept Development Phase

Once a business need is approved, the approaches for accomplishing the concept are
reviewed for feasibility and appropriateness. The Systems Boundary Document identifies
the scope of the system and requires Senior Official approval and funding before
beginning the Planning Phase.
1.2.3 Planning Phase

The concept is further developed to describe how the business will operate once the
approved system is implemented, and to assess how the system will impact employee and
customer privacy. To ensure the products and/or services provide the required capability
on-time and within budget, project resources, activities, schedules, tools, and reviews are
defined. Additionally, security certification and accreditation activities begin with the
identification of system security requirements and the completion of a high level
vulnerability assessment.

1.2.4 Requirements Analysis Phase

Functional user requirements are formally defined, delineating the requirements in
terms of data, system performance, security, and maintainability. All requirements
are defined to a level of detail sufficient for systems design to
proceed. All requirements need to be measurable and testable and relate to the business
need or opportunity identified in the Initiation Phase.

1.2.5 Design Phase

The physical characteristics of the system are designed during this phase. The operating
environment is established, major subsystems and their inputs and outputs are defined,
and processes are allocated to resources. Everything requiring user input or approval
must be documented and reviewed by the user. The physical characteristics of the system
are specified and a detailed design is prepared. Subsystems identified during design are
used to create a detailed structure of the system. Each subsystem is partitioned into one or
more design units or modules. Detailed logic specifications are prepared for each
software module.

1.2.6 Development Phase

The detailed specifications produced during the design phase are translated into
hardware, communications, and executable software. Software shall be unit tested,
integrated, and retested in a systematic manner. Hardware is assembled and tested.

1.2.7 Integration and Test Phase

The various components of the system are integrated and systematically tested. The user
tests the system to ensure that the functional requirements, as defined in the functional
requirements document, are satisfied by the developed or modified system. Prior to
installing and operating the system in a production environment, the system must
undergo certification and accreditation activities.

1.2.8 Implementation Phase


The system or system modifications are installed and made operational in a production
environment. The phase is initiated after the system has been tested and accepted by the
user. This phase continues until the system is operating in production in accordance with
the defined user requirements.

1.2.9 Operations and Maintenance Phase

The system operation is ongoing. The system is monitored for continued performance in
accordance with user requirements, and needed system modifications are incorporated.
The operational system is periodically assessed through In-Process Reviews to determine
how the system can be made more efficient and effective. Operations continue as long as
the system can be effectively adapted to respond to an organization’s needs. When
modifications or changes are identified as necessary, the system may reenter the planning
phase.

1.2.10 Disposition Phase

The disposition activities ensure the orderly termination of the system and preserve the
vital information about the system so that some or all of the information may be
reactivated in the future if necessary. Particular emphasis is given to proper preservation
of the data processed by the system, so that the data is effectively migrated to another
system or archived in accordance with applicable records management regulations and
policies, for potential future access.

1.3 CONTROLS/ASSUMPTIONS

The DOJ FY 2002 - FY 2006 IT Strategic Plan defines the strategic vision for using IT to
meet business needs of the Department. The DOJ Technical Reference Model (TRM)
standards guidance, version 2.0, provides standards for all IT systems funded by DOJ. It
applies to both the development of new systems and the enhancements of existing
systems. This document is available on the DOJ Intranet at
http://10.173.2.12/jmd/irm/imss/enterprisearchitecture/enterarchhome.html (available to
DOJ Employees only).

This SDLC calls for a series of comprehensive management controls. These include:

• Life Cycle Management should be used to ensure a structured approach to
information systems development and operation.
• Each system project must have an accountable sponsor.
• A single project manager must be appointed for each system project.
• A comprehensive project management plan is required for each system
project.
• Data Management and security must be emphasized throughout the Life Cycle.
• A system project may not proceed until resource availability is assured.
All DOJ components shall adhere to IRM Order 2880.1A, which provides general policy
on Information Resources Management, including roles and responsibilities for
information collection, resource management, and Privacy Act requirements. DOJ orders
are located at http://10.173.2.12/dojorders/dojorders.html.

1.4 DOCUMENTATION

This life cycle methodology specifies which documentation shall be generated during
each phase.

Some documents remain unchanged throughout the system life cycle while others evolve
continuously during the life cycle. Still others are revised to reflect the results of
analyses performed in later phases. Each document produced is collected and stored in a
project file. Care should be taken, however, when processes are automated. Specifically,
components are encouraged to incorporate a long-term retention and access policy for
electronic processes. Be aware of legal concerns that implicate the effectiveness of, or
impose restrictions on, electronic data or records. Contact your Records Management
Office for specific retention requirements and procedures. Recommended documents and
the phases in which they are produced are shown in Table 1.

Table 1
Recommended Documents by SDLC Phase
(For each document, the markers show its status in each successive life cycle phase,
from Initiation through Disposition.)

Concept Proposal C/F
System Boundary Document C/F * * * * * *
Cost-Benefit Analysis C R R R R F
Feasibility Study C R F
Risk Management Plan C R R R R F * *
Acquisition Plan C R F *
Configuration Management Plan C R R R F * *
Quality Assurance Plan C R R R F * *
Concept of Operations C R R R R F * *
System Security Plan C R R R F * *
Project Management Plan C R R R F *
Verification and Validation Plan C R R R F
System Engineering Management Plan C/F * * * * * * *
Functional Requirements Document C F
Test and Evaluation Master Plan C R R F * *
Interface Control Document C R F * * *
Privacy Act Notice/Privacy Impact Assessment C F
Security Risk Assessment C R F
Conversion Plan C R F *
System Design Document C F *
Implementation Plan C R F
Maintenance Manual C R F * *
Operations Manual (System Administration Manual) C R F * *
Training Plan C R F * *
User Manual C R F * *
Contingency Plan C F * *
Software Development Document C R F *
System Software C F * *
Test Files/Data C F *
Integration Document C F * *
Test Analysis Report C/F
Test Analysis Approval Determination C/F
Test Problem Report C
IT Systems Security Certification & Accreditation C/F
Delivered System C/F *
Change Implementation Notice C C
Version Description Document C/F *
Post-Implementation Review C
In-Process Review Report C
User Satisfaction Report C
Disposition Plan C/F
Post-termination Review Report C/F

KEY: C = Created   R = Revised   F = Finalized   * = Updated if needed



Triage" is a medical term. It refers to dividing wounded or sick people into three
categories: those who will die no matter what you do, those who will recover even if
unaided, and those who will recover only if aided. In a situation where there's too much
to do, you must concentrate on the third group.

Bug Triage Meetings (sometimes called Bug Councils) are project meetings in which
open bugs are divided into categories. The most important distinction is between bugs
that will not be fixed in this release and those that will be.

Just as the medical usage has three categories, software triage also has three: bugs to
fix now, bugs to fix later, and bugs we'll never fix.

Triaging a bug involves the following checks (sketched in code after this list):

Making sure the bug has enough information for the developers and makes sense
Making sure the bug is filed in the correct place
Making sure the bug has sensible "Severity" and "Priority" fields
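
As a rough illustration, a completeness check along these lines might look like the
Python sketch below. The bug fields, valid values, and component names are hypothetical
placeholders, not taken from any particular bug tracker.

    # A minimal sketch of a triage completeness check (Python).
    # The bug fields and valid values are hypothetical placeholders.

    VALID_SEVERITIES = {1, 2, 3, 4}   # e.g., 1 = crash/data loss ... 4 = cosmetic
    VALID_PRIORITIES = {1, 2, 3}      # e.g., 1 = must fix now ... 3 = fix if time
    KNOWN_COMPONENTS = {"ui", "backend", "installer"}

    def triage_checklist(bug: dict) -> list[str]:
        """Return the problems that block triage of this bug, if any."""
        problems = []
        # Does the bug have enough information for the developers?
        for field in ("summary", "steps_to_reproduce", "component"):
            if not bug.get(field):
                problems.append("missing " + field)
        # Is it filed in the correct place?
        if bug.get("component") and bug["component"] not in KNOWN_COMPONENTS:
            problems.append("unknown component; possibly filed in the wrong place")
        # Are the Severity and Priority fields sensible?
        if bug.get("severity") not in VALID_SEVERITIES:
            problems.append("Severity is not set to a valid value")
        if bug.get("priority") not in VALID_PRIORITIES:
            problems.append("Priority is not set to a valid value")
        return problems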

Let us see what Priority and Severity mean:

Priority is Business;
Severity is Technical

In triage meetings, the team assigns the Priority of the fix from the business
perspective, asking "How important is it to the business that we fix this bug?" Most of
the time a high Severity bug also becomes a high Priority bug, but not always: there are
cases where high Severity bugs are given low Priority and low Severity bugs are given
high Priority.

In most of the projects I have worked on, as the schedule draws closer to release, even a
bug whose severity is high from a technical perspective may be given a low Priority
because the functionality it affects is not critical to the business.

Priority and Severity give excellent metrics for judging the overall health of a project.
Severity is customer-focused while priority is business-focused. Assigning Severity to a
bug is straightforward: using some general guidelines about the project, testers assign it
directly. Assigning a Priority is much more of a juggling act. The Severity of the bug is
one factor; other considerations include how much time is left in the schedule, who is
available for the fix, how important it is to the business to fix the bug, the impact of
the bug, its probability of occurrence, and the degree of its side effects.
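
As a sketch of the kind of health metric this suggests, the Python snippet below tallies
open bugs by Severity and by Priority; the bug list and the "critical share" signal are
illustrative assumptions, not a standard measure.

    from collections import Counter

    # Hypothetical open-bug list as (severity, priority) pairs; lower numbers
    # mean more severe / more urgent. The data is illustrative only.
    open_bugs = [(1, 1), (2, 1), (2, 2), (3, 2), (3, 3), (4, 3), (4, 3)]

    by_severity = Counter(sev for sev, _ in open_bugs)
    by_priority = Counter(pri for _, pri in open_bugs)

    print("Open bugs by severity:", dict(by_severity))
    print("Open bugs by priority:", dict(by_priority))

    # One simple health signal: the share of open bugs that are both
    # highest severity and highest priority.
    critical = sum(1 for sev, pri in open_bugs if sev == 1 and pri == 1)
    print("Critical share: {:.0%}".format(critical / len(open_bugs)))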

Read the excellent article Arguing Apples and Oranges, which clearly explains how the
Priority and Severity of a bug are assigned. Some of the points above are taken from
that article.

Many organizations mandate that bugs of a certain severity must be at least a certain
priority. Example: crashes must be P1; data loss must be P1; etc. Even so, a severe bug
that crashes the system only once and is not always reproducible will not be P1, whereas
an error condition that forces every user to re-enter a portion of their input will be P1.
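
A mandate like that is easy to express as a rule table. The Python sketch below is one
hypothetical encoding; the severity-to-minimum-priority mapping is illustrative, not an
industry standard.

    # A sketch of a severity-to-minimum-priority mandate, e.g. "crashes and
    # data loss must be P1". Lower numbers are more urgent. The mapping is
    # organization-specific; this one is illustrative.
    MIN_PRIORITY_FOR_SEVERITY = {
        1: 1,  # crash / data loss          -> must be at least P1
        2: 2,  # major functionality broken -> at least P2
        3: 3,  # minor functionality broken -> at least P3
        4: 3,  # cosmetic / typo            -> at least P3
    }

    def enforce_mandate(severity: int, proposed_priority: int) -> int:
        """Clamp a proposed priority so it never violates the mandate."""
        floor = MIN_PRIORITY_FOR_SEVERITY[severity]
        # A smaller priority number is more urgent, so take the minimum.
        return min(proposed_priority, floor)

    assert enforce_mandate(severity=1, proposed_priority=3) == 1  # crash forced up to P1
    assert enforce_mandate(severity=3, proposed_priority=2) == 2  # already stricter; kept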

Microsoft uses a four-point scale to describe the severity of bugs and a three-point
scale for their priority, as follows:

Severity
---------------
1. Bug causes system crash or data loss.
2. Bug causes major functionality or other severe problems; product crashes in obscure
cases.
3. Bug causes minor functionality problems; may affect "fit and finish".
4. Bug contains typos, unclear wording, or error messages in low-visibility fields.

Priority
---------------
1. Must fix as soon as possible. Bug is blocking further progress in this area.
2. Should fix soon, before product release.
3. Fix if time; somewhat trivial. May be postponed.
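
Encoded as data, those two scales might look like the following Python sketch; the enum
member names are my paraphrases of the descriptions above, not official identifiers.

    from enum import IntEnum

    # The four-point severity and three-point priority scales above, encoded
    # as enums. The member names paraphrase the descriptions; they are not
    # official identifiers.
    class Severity(IntEnum):
        CRASH_OR_DATA_LOSS = 1
        MAJOR_FUNCTIONALITY = 2
        MINOR_FUNCTIONALITY = 3
        COSMETIC = 4

    class Priority(IntEnum):
        FIX_ASAP = 1            # blocking further progress in this area
        FIX_BEFORE_RELEASE = 2
        FIX_IF_TIME = 3         # may be postponed

    # Example: a malfunctioning feature that blocks testing.
    bug = (Severity.MINOR_FUNCTIONALITY, Priority.FIX_ASAP)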

Comments and your experience in assigning Severity and Priority for bugs are welcome.

Priority is how important something is to fix, while Severity is how severe the impact
of the bug is on the system.

Tim is looking at business priority: “How important is it to the business that we fix the
bug?” Jordan is looking at technical severity: “How nasty is the bug from a technical
perspective?” These two questions sometimes arrive at the same answer: a high severity
bug is often also high priority, but not always. Allow me to suggest some definitions.

Severity levels:
* Critical: the software will not run
* High: unexpected fatal errors (includes crashes and data corruption)
* Medium: a feature is malfunctioning
* Low: a cosmetic issue
Now you see why Jordan was arguing that the Print bug was a medium: a feature was
malfunctioning.

Priority levels:
* Now: drop everything and take care of it as soon as you see this (usually for blocking
bugs)
* P1: fix before next build to test
* P2: fix before final release
* P3: we probably won’t get to these, but we want to track them anyway

And now you can see why Tim was so adamant that the issue was a high. From his
perspective, it was a P1 matter.

They’re both right. It’s of medium severity, but P1 to fix.

CMM Levels
The CMM is designed to be an easy-to-understand methodology for ranking a company's IT-
related activities. The CMM has six levels, 0 through 5. The purpose of these levels is to provide a
"measuring stick" for organizations looking to improve their system development processes.

• Level 0 – Not Performed
• Level 1 – Performed Informally
• Level 2 – Planned and Tracked
• Level 3 – Well-Defined
• Level 4 – Quantitatively Controlled
• Level 5 – Continuously Improving

Level 0: Not Performed


Level 0 is common for companies that are just entering the IT field. At this level there are few or
no best practices. Some IT activities are not even performed and all deliverables are done in
“one-off efforts.” Typically new applications are built by a small number of people (possibly even
one person) in a very isolated fashion.

Level 1: Performed Informally


At level 1, consistent planning and tracking of IT activities is missing and deliverables are
accomplished via "heroic" effort. "Heroic" effort means that a team might work long hours in
order to build a particular application; however, there are very few IT standards and reuse/sharing
is minimal. As a result, IT deliverables are adequate; however, the deliverables are not
repeatable or transferable.

As a general rule, the amount of money a company spends on its IT applications does not
determine what CMM level it is at. The closest exception to this rule is the move from Level 0 to
Level 1. Typically, a company can move from level 0 to level 1 as a by-product of elevated IT
spending. Aside from this exception, moving beyond level 1 is not determined by money spent. A
corporation could be spending well over $500 million on application development and be at this
level. Indeed, the vast majority of Fortune 500 companies and large government organizations
are at a CMM level of 1.

Level 2: Planned and Tracked


Level 2 has IT deliverables that are planned and tracked. In addition, there are some defined
best practices within the enterprise (e.g. defined IT standards/documents, program version
control, etc.). Some repeatable processes exist within an IT project team/group, but the success
of this team/group is not transferable across the enterprise.

Level 3: Well-Defined
IT best practices are documented and performed throughout the enterprise. At level 3 IT
deliverables are repeatable AND transferable across the company. This level is a very difficult
jump for most companies. Not surprisingly, this is also the level that provides the greatest cost
savings.

Level 4: Quantitatively Controlled


Companies at level 4 have established measurable process goals for each defined process.
These measurements are collected and analyzed quantitatively. At this level, companies can
begin to predict future IT implementation performance.

Level 5: Continuously Improving


At level 5 enterprises have quantitative (measurement) and qualitative (quality) understanding
of each IT process. It is at this level that a company understands how each IT process is related
to the overall business strategies and goals of the corporation. Every programmer should
understand how each line of SQL will assist the company in reaching their strategic goals.

In my next column I will apply the CMM levels to data warehousing and provide you with a
mechanism to rank your company’s data warehousing efforts.

Level One
Company has no standard process for software development. Nor does it have a
project-tracking system that enables developers to predict costs or finish dates with
any accuracy.

Level Two
Company has installed basic software management processes and controls. But there
is no consistency or coordination among different groups.

Level Three
Company has pulled together a standard set of processes and controls for the entire
organization so that developers can move between projects more easily and
customers can begin to get consistency from different groups.

Level Four
In addition to implementing standard processes, company has installed systems to
measure the quality of those processes across all projects.

Level Five
Company has accomplished all of the above and can now begin to see patterns in
performance over time, so it can tweak its processes in order to improve productivity
and reduce defects in software development across the entire organization.

Application Testing Life Cycle

This life cycle is used for standard applications that are built from various custom technologies
and follow the normal or standard testing approach. The application (custom-built) life cycle and
its phases are depicted below:

Test Requirements
• Requirement Specification documents
• Functional Specification documents
• Design Specification documents (use cases, etc.)
• Use Case documents
• Test Traceability Matrix for identifying test coverage

Test Planning
• Test scope, test environment
• Different test phases and test methodologies
• Manual and automation testing
• Defect management, configuration management, risk management, etc.
• Evaluation and identification of test and defect tracking tools

Test Environment Setup
• Test bed installation and configuration
• Network connectivity
• Installation and configuration of all software/tools
• Coordination with vendors and others

Test Design
• Test Traceability Matrix and test coverage
• Test scenario identification and test case preparation
• Test data and test script preparation
• Test case reviews and approval
• Baselining under configuration management

Test Automation
• Automation requirement identification
• Tool evaluation and identification
• Designing or identifying the framework and scripting
• Script integration, review, and approval
• Baselining under configuration management

Test Execution and Defect Tracking
• Executing test cases
• Running test scripts
• Capturing, reviewing, and analyzing test results
• Raising defects and tracking them to closure

Test Reports and Acceptance
• Test summary reports
• Test metrics and process improvements made
• Build release
• Receiving acceptance
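
Since both the Test Requirements and Test Design phases above call for a Test
Traceability Matrix, here is a minimal Python sketch of one; the requirement and test
case IDs are hypothetical.

    # A minimal sketch of a Test Traceability Matrix: each requirement ID
    # maps to the test cases that cover it. All IDs are hypothetical.
    traceability = {
        "REQ-001": ["TC-101", "TC-102"],
        "REQ-002": ["TC-103"],
        "REQ-003": [],  # not yet covered by any test case
    }

    uncovered = [req for req, cases in traceability.items() if not cases]
    coverage = 1 - len(uncovered) / len(traceability)

    print("Requirement coverage: {:.0%}".format(coverage))
    if uncovered:
        print("Uncovered requirements:", ", ".join(uncovered))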
Product/System Testing

The product or system test is where system requirements are confirmed. For each requirement,
multiple test conditions may be created with corresponding expected results. One key consideration
prior to starting product testing is to create a testing infrastructure. Verify that you have a test PC
or server along with a separate database; high volume will most likely not be needed to perform a
complete system test. Verify that proper test data is in place and user accounts are set up. Creating
reusable test data will increase efficiency on future iterations of test cases. Identify any tools to
assist in testing; these may include automated test scripts/robots, test data management tools,
and/or a test condition tracking system. Be sure to include test conditions for exceptions.

Errors are to be expected, so be sure a proper defect tracking system is in place to trace,
categorize, and assign these defects and issues. It is important to allocate developer time during
testing; not many applications are bug-free at this stage.

Acceptance Testing

Acceptance testing should be performed by the users who will be utilizing the new system.
Hopefully, these are the same users who helped define the requirements, so the test conditions
translated from those requirements should be understandable to them. As with the system test,
consider whether a special environment is necessary for complete testing. Prior to acceptance
test execution, verify the test approach and user involvement with key stakeholders.

Similar to the system test, set success criteria in terms of the percentage of failed versus
successful conditions, along with the number of defects found and their priority. These criteria
should be agreed upon with key stakeholders and determine when the testing phase is complete
and deployment can begin.
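
As a sketch, exit criteria like these can be checked mechanically. The Python function
below is one hypothetical form; the 95% pass-rate threshold and the "no open P1 defects"
rule are placeholder assumptions to be agreed with stakeholders.

    # A sketch of mechanical exit criteria for an acceptance test phase.
    # The thresholds are placeholders to be agreed with key stakeholders.
    def acceptance_complete(passed: int, failed: int, open_p1_defects: int,
                            min_pass_rate: float = 0.95) -> bool:
        total = passed + failed
        if total == 0:
            return False  # nothing has been executed yet
        pass_rate = passed / total
        # Complete only if enough conditions pass and no P1 defects remain open.
        return pass_rate >= min_pass_rate and open_p1_defects == 0

    print(acceptance_complete(passed=190, failed=10, open_p1_defects=0))  # True
    print(acceptance_complete(passed=190, failed=10, open_p1_defects=2))  # False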
