1.0 BACKGROUND
The DOJ spends millions of dollars each year on the acquisition, design, development,
implementation, and maintenance of information systems vital to mission programs and
administrative functions. The need for safe, secure, and reliable system solutions is
heightened by the increasing dependence on computer systems and technology to provide
services and develop products, administer daily activities, and perform short- and long-
term management functions. There is also a need to ensure privacy and security when
developing information systems, to establish uniform privacy and protection practices,
and to develop acceptable implementation strategies for these practices.
The DOJ needs a systematic and uniform methodology for information systems
development. Using this SDLC will ensure that systems developed by the Department
meet IT mission objectives; are compliant with the current and planned Information
Technology Architecture (ITA); and are easy to maintain and cost-effective to enhance.
Sound life cycle management practices include planning and evaluation in each phase of
the information system life cycle. The appropriate level of planning and evaluation is
commensurate with the cost of the system, the stability and maturity of the technology
under consideration, how well defined the user requirements are, the stability of
program and user requirements, and security considerations.
1.1.1 Purpose
This SDLC methodology establishes procedures, practices, and guidelines governing the
initiation, concept development, planning, requirements analysis, design, development,
integration and test, implementation, operations and maintenance, and disposition of
information systems (IS) within the DOJ. It should be used in conjunction with existing
policy and guidelines for acquisition and procurement, as these areas are not discussed in
the SDLC.
1.1.2 Scope
This methodology should be used for all DOJ information systems and applications. It is
applicable across all information technology (IT) environments (e.g., mainframe,
client/server) and applies to contractually developed as well as in-house developed applications.
The specific participants in the life cycle process, and the necessary reviews and
approvals, vary from project to project. The guidance provided in this document should
be tailored to the individual project based on cost, complexity, and criticality to the
agency’s mission. See Chapter 13 for Alternate SDLC Work Patterns if a formal SDLC is
not feasible. Similarly, the documents called for in the guidance and shown in Appendix
C should be tailored based on the scope of the effort and the needs of the decision
authorities.
1.1.3 Applicability
This methodology can be applied to all DOJ Offices, Boards, Divisions and Bureaus
(OBDB) that are responsible for information systems development. All Project Managers
and development teams involved in system development projects represent the primary
audience for the DOJ SDLC, version 2.0.
The SDLC includes ten phases during which defined IT work products are created or
modified. The tenth phase occurs when the system is disposed of and the task performed
is either eliminated or transferred to other systems. The tasks and work products for each
phase are described in subsequent chapters. Not every project will require that the phases
be sequentially executed. However, the phases are interdependent. Depending upon the
size and complexity of the project, phases may be combined or may overlap. See Figure
1-1.
Figure 1-1
1.2.1 Initiation Phase
The initiation of a system (or project) begins when a business need or opportunity is
identified. A Project Manager should be appointed to manage the project. This business
need is documented in a Concept Proposal. After the Concept Proposal is approved, the
System Concept Development Phase begins.
1.2.2 System Concept Development Phase
Once a business need is approved, the approaches for accomplishing the concept are
reviewed for feasibility and appropriateness. The Systems Boundary Document identifies
the scope of the system and requires Senior Official approval and funding before
beginning the Planning Phase.
1.2.3 Planning Phase
The concept is further developed to describe how the business will operate once the
approved system is implemented, and to assess how the system will impact employee and
customer privacy. To ensure the products and/or services provide the required capability
on-time and within budget, project resources, activities, schedules, tools, and reviews are
defined. Additionally, security certification and accreditation activities begin with the
identification of system security requirements and the completion of a high level
vulnerability assessment.
1.2.4 Requirements Analysis Phase
Functional user requirements are formally defined and delineate the requirements in
terms of data, system performance, security, and maintainability requirements for the
system. All requirements are defined to a level of detail sufficient for systems design to
proceed. All requirements need to be measurable and testable and relate to the business
need or opportunity identified in the Initiation Phase.
1.2.5 Design Phase
The physical characteristics of the system are designed during this phase. The operating
environment is established, major subsystems and their inputs and outputs are defined,
and processes are allocated to resources. Everything requiring user input or approval
must be documented and reviewed by the user. The physical characteristics of the system
are specified and a detailed design is prepared. Subsystems identified during design are
used to create a detailed structure of the system. Each subsystem is partitioned into one or
more design units or modules. Detailed logic specifications are prepared for each
software module.
1.2.6 Development Phase
The detailed specifications produced during the design phase are translated into
hardware, communications, and executable software. Software shall be unit tested,
integrated, and retested in a systematic manner. Hardware is assembled and tested.
1.2.7 Integration and Test Phase
The various components of the system are integrated and systematically tested. The user
tests the system to ensure that the functional requirements, as defined in the functional
requirements document, are satisfied by the developed or modified system. Prior to
installing and operating the system in a production environment, the system must
undergo certification and accreditation activities.
1.2.9 Operations and Maintenance Phase
The system operation is ongoing. The system is monitored for continued performance in
accordance with user requirements, and needed system modifications are incorporated.
The operational system is periodically assessed through In-Process Reviews to determine
how the system can be made more efficient and effective. Operations continue as long as
the system can be effectively adapted to respond to an organization’s needs. When
modifications or changes are identified as necessary, the system may reenter the planning
phase.
1.2.10 Disposition Phase
The disposition activities ensure the orderly termination of the system and preserve the
vital information about the system so that some or all of the information may be
reactivated in the future if necessary. Particular emphasis is given to proper preservation
of the data processed by the system, so that the data is effectively migrated to another
system or archived in accordance with applicable records management regulations and
policies, for potential future access.
1.3 CONTROLS/ASSUMPTIONS
The DOJ FY 2002 - FY 2006 IT Strategic Plan defines the strategic vision for using IT to
meet business needs of the Department. The DOJ Technical Reference Model (TRM)
standards guidance, version 2.0, provides standards for all IT systems funded by DOJ. It
applies to both the development of new systems and the enhancements of existing
systems. This document is available on the DOJ Intranet at
http://10.173.2.12/jmd/irm/imss/enterprisearchitecture/enterarchhome.html (available to
DOJ Employees only).
This SDLC calls for a series of comprehensive management controls.
1.4 DOCUMENTATION
This life cycle methodology specifies which documentation shall be generated during
each phase.
Some documents remain unchanged throughout the system life cycle, while others evolve
continuously during the life cycle. Still others are revised to reflect the results of
analyses performed in later phases. Each document produced is collected and stored in a
project file. Care should be taken, however, when processes are
automated. Specifically, components are encouraged to incorporate a long-term retention
and access policy for electronic processes. Be aware of legal concerns that implicate
effectiveness of or impose restrictions on electronic data or records. Contact your
Records Management Office for specific retention requirements and procedures.
Recommended documents and their project phase are shown in Table 1.
Table 1
Planning Document
Acquisition Plan C R F *
Configuration Management Plan C R R R F * *
Quality Assurance Plan C R R R F * *
Concept of Operations C R R R R F * *
System Security Plan C R R R F * *
Project Management Plan C R R R F *
Verification and Validation Plan C R R R F
System Engineering Management Plan C/F * * * * * * *
Contingency Plan C F * *
Software Development Document C R F *
System Software C F * *
Test Files/Data C F *
Integration Document C F * *
Bug Triage Meetings (sometimes called Bug Councils) are project meetings in which
open bugs are divided into categories. The most important distinction is between bugs
that will not be fixed in this release and those that will be.
As with medical triage, software triage uses three categories: bugs to fix now, bugs to
fix later, and bugs we'll never fix. Triage also involves:
* Making sure the bug has enough information for the developers and makes sense
* Making sure the bug is filed in the correct place
* Making sure the bug has sensible "Severity" and "Priority" fields
Priority is Business;
Severity is Technical
In triage, the team assigns the Priority of the fix from the business perspective, asking
"How important is it to the business that we fix this bug?" Most of the time a high-
Severity bug becomes a high-Priority bug, but not always: there are cases where high-
Severity bugs get low Priority and low-Severity bugs get high Priority.
In most of the projects I have worked on, as the schedule draws closer to release, even a
bug whose Severity is high from a technical perspective may be given low Priority,
because the functionality the bug affects is not critical to the business.
Priority and Severity provide excellent metrics for gauging the overall health of a
project. Severity is customer-focused, while Priority is business-focused. Assigning
Severity to a bug is straightforward: using some general guidelines about the project,
testers assign it directly. Assigning Priority is much more of a juggling act. The bug's
Severity is one factor; other considerations include how much time is left in the
schedule, who is available for the fix, how important the fix is to the business, the
impact of the bug, its probability of occurrence, and the degree of its side effects.
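The juggling act above can be sketched as a toy scoring function. Every weight, threshold, and factor name here is an illustrative assumption, not a standard:

```python
# Toy priority score combining the considerations listed above.
# All weights and thresholds are illustrative assumptions.
def priority_score(severity, days_to_release, business_impact,
                   probability, side_effects):
    """severity: 1 (worst) to 4; the other factors are 0..1 triage ratings."""
    severity_factor = (5 - severity) / 4  # 1.0 for Sev-1, 0.25 for Sev-4
    score = (0.4 * severity_factor + 0.3 * business_impact
             + 0.2 * probability + 0.1 * side_effects)
    # Close to release, priority hinges on business impact: even a
    # technically severe bug is demoted if the feature is not critical.
    if days_to_release < 14 and business_impact < 0.5:
        score *= 0.5
    return score

# The same Sev-2 bug scores lower near release when its feature
# is not critical to the business:
near = priority_score(2, days_to_release=7, business_impact=0.2,
                      probability=0.5, side_effects=0.1)
far = priority_score(2, days_to_release=60, business_impact=0.2,
                     probability=0.5, side_effects=0.1)
print(near < far)  # True
```

In a real triage meeting the "score" is a judgment call, not a formula; the sketch only makes the trade-offs explicit.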
Many organizations mandate that bugs of a certain severity must be at least a certain
priority. Example: crashes must be P1; data loss must be P1; and so on. Even so, a
severe bug that crashed the system only once and is not reliably reproducible may not be
P1, whereas an error condition that forces every user to re-enter a portion of their input
will be P1.
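A minimal sketch of such a mandate, assuming hypothetical category names and the common convention that P1 is the most urgent priority:

```python
# Hypothetical severity-to-minimum-priority mandate; category names and
# floors are illustrative, not from any particular organization.
SEVERITY_FLOOR = {
    "crash": 1,      # crashes must be at least P1
    "data_loss": 1,  # data loss must be at least P1
    "major": 2,
    "minor": 3,
}

def enforce_priority(category: str, proposed_priority: int) -> int:
    """Escalate a proposed priority if the mandate requires it.

    Lower numbers are more urgent, so "at least P1" means priority <= 1.
    """
    floor = SEVERITY_FLOOR.get(category, 3)
    return min(proposed_priority, floor)

print(enforce_priority("data_loss", 3))  # a P3 data-loss bug becomes P1: 1
print(enforce_priority("minor", 2))      # unaffected: 2
```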
Microsoft uses a four-point scale to describe the Severity of bugs and a three-point
scale for the Priority of the bug. They are as follows:
Severity
---------------
1. Bug causes system crash or data loss.
2. Bug causes major functionality or other severe problems; product crashes in obscure
cases.
3. Bug causes minor functionality problems; may affect "fit and finish".
4. Bug contains typos, unclear wording or error messages in low visibility fields.
Priority
---------------
1. Must fix as soon as possible. Bug is blocking further progress in this area.
2. Should fix soon, before product release.
3. Fix if time; somewhat trivial. May be postponed.
Comments on your experience assigning Severity and Priority to bugs are welcome.
Tim is looking at business priority: “How important is it to the business that we fix the
bug?” Jordan is looking at technical severity: “How nasty is the bug from a technical
perspective?” These two questions sometimes arrive at the same answer: a high severity
bug is often also high priority, but not always. Allow me to suggest some definitions.
Severity levels:
* Critical: the software will not run
* High: unexpected fatal errors (includes crashes and data corruption)
* Medium: a feature is malfunctioning
* Low: a cosmetic issue
Now you see why Jordan was arguing that the Print bug was a medium: a feature was
malfunctioning.
Priority levels:
* Now: drop everything and take care of it as soon as you see this (usually for blocking
bugs)
* P1: fix before next build to test
* P2: fix before final release
* P3: we probably won’t get to these, but we want to track them anyway
And now you can see why Tim was so adamant that the issue was high priority. From his
perspective, it was a P1 matter.
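The suggested definitions can be captured as plain enumerations. This is a toy model; the numeric ranks (lower = more urgent or more severe) are an assumption of the sketch:

```python
from enum import IntEnum

# Toy model of the severity and priority levels suggested above;
# numeric ranks are an assumption (lower = more urgent/severe).
class Severity(IntEnum):
    CRITICAL = 0  # the software will not run
    HIGH = 1      # unexpected fatal errors (crashes, data corruption)
    MEDIUM = 2    # a feature is malfunctioning
    LOW = 3       # a cosmetic issue

class Priority(IntEnum):
    NOW = 0  # drop everything (usually blocking bugs)
    P1 = 1   # fix before the next build to test
    P2 = 2   # fix before final release
    P3 = 3   # probably won't get to, but tracked anyway

# The Print bug from the story: Jordan's technical call vs Tim's business call.
print_bug = {"severity": Severity.MEDIUM, "priority": Priority.P1}
print(print_bug["severity"].name, print_bug["priority"].name)  # MEDIUM P1
```

Keeping the two axes as separate types makes it harder to accidentally treat one as the other in a tracking tool.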
CMM Levels
The CMM is designed to be an easy-to-understand methodology for ranking a company's IT-
related activities. The CMM has six levels, 0 through 5. The purpose of these levels is to provide
a "measuring stick" for organizations looking to improve their system development processes.
As a general rule, the amount of money a company spends on its IT applications does not
determine its CMM level. The closest exception to this rule is the move from Level 0 to
Level 1: typically, a company can move from Level 0 to Level 1 as a by-product of elevated IT
spending. Beyond that exception, moving past Level 1 is not a matter of money spent. A
corporation could be spending well over $500 million on application development and still be at
Level 1. Indeed, the vast majority of Fortune 500 companies and large government organizations
are at CMM Level 1.
Level 3: Well-Defined
IT best practices are documented and performed throughout the enterprise. At Level 3, IT
deliverables are repeatable AND transferable across the company. This level is a very difficult
jump for most companies. Not surprisingly, this is also the level that provides the greatest cost
savings.
In my next column I will apply the CMM levels to data warehousing and provide you with a
mechanism to rank your company’s data warehousing efforts.
Level One
Company has no standard process for software development. Nor does it have a
project-tracking system that enables developers to predict costs or finish dates with
any accuracy.
Level Two
Company has installed basic software management processes and controls. But there
is no consistency or coordination among different groups.
Level Three
Company has pulled together a standard set of processes and controls for the entire
organization so that developers can move between projects more easily and
customers can begin to get consistency from different groups.
Level Four
In addition to implementing standard processes, company has installed systems to
measure the quality of those processes across all projects.
Level Five
Company has accomplished all of the above and can now begin to see patterns in
performance over time, so it can tweak its processes in order to improve productivity
and reduce defects in software development across the entire organization.
Application Testing life cycle
This life cycle is used for standard applications that are built from various custom technologies
and follow the normal or standard testing approach. The application or custom-built lifecycle and
its phases are depicted below:
The product or system test is where system requirements are confirmed. For each requirement,
multiple test conditions may be created with corresponding expected results. One key consideration
prior to starting product testing is to create a testing infrastructure. Verify you have a test PC or
server along with a separate database. High volume will most likely not be needed to perform a
complete system test. Verify proper test data is in place and user accounts are set up. Creating
reusable test data will increase efficiency on future iterations of test cases. Identify any tools to
assist in testing; these may include automated test scripts/robots, test data management tools,
and/or a test condition tracking system. Be sure to include test conditions for exceptions.
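The requirement-to-test-condition tracking described above can be sketched as a simple structure. All requirement IDs, conditions, and records here are invented for illustration:

```python
# Hypothetical test-condition tracker: each requirement maps to one or more
# test conditions, each with an expected result and a pass/fail status.
conditions = [
    {"req": "REQ-001", "condition": "login with valid credentials",
     "expected": "user reaches home page", "status": "pass"},
    {"req": "REQ-001", "condition": "login with invalid password",
     "expected": "error message, no session created", "status": "fail"},
    {"req": "REQ-002", "condition": "export report to CSV",
     "expected": "file matches on-screen data", "status": "pass"},
]

def open_defects(conditions):
    """Conditions that failed and therefore need a defect filed."""
    return [c for c in conditions if c["status"] == "fail"]

for c in open_defects(conditions):
    print(c["req"], "-", c["condition"])
```

Note the exception-path condition (invalid password) sits alongside the happy path, per the guidance above.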
Errors are to be expected, so be sure a proper defect tracking system is in place to trace,
categorize, and assign these defects and issues. It is important to allocate developer time during
testing; few applications are bug-free at this stage.
Acceptance Testing
Acceptance testing should be performed by the users who will be utilizing the new system.
Ideally, these are the same users who helped define the requirements, so the test conditions
translated from those requirements should be readily understandable to them. As with system
test, consider whether a special environment is necessary for complete testing. Prior to
acceptance test execution, verify the test approach and user involvement with key stakeholders.
Similar to system test, set success criteria in terms of the percentage of failed versus successful
conditions, along with the number of defects found and their priority. These criteria should be
agreed upon with key stakeholders and used to determine whether the testing phase is complete
and deployment can begin.
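As an illustrative sketch, exit criteria like these can be checked mechanically. The 95% pass-rate threshold and the "no open P1 defects" rule below are assumptions standing in for whatever the stakeholders actually agree:

```python
# Toy exit-criteria check for a test phase. The thresholds are
# illustrative; real values come from agreement with stakeholders.
def phase_complete(passed: int, failed: int, open_p1_defects: int,
                   min_pass_rate: float = 0.95) -> bool:
    total = passed + failed
    if total == 0:
        return False  # nothing executed yet
    pass_rate = passed / total
    return pass_rate >= min_pass_rate and open_p1_defects == 0

print(phase_complete(passed=190, failed=10, open_p1_defects=0))  # True
print(phase_complete(passed=190, failed=10, open_p1_defects=2))  # False
```

Writing the criteria as a function forces them to be stated precisely before testing starts, rather than argued about at the end.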