Chapter 1
Abstract
The Insurance Management System is an intranet-based private web
application, designed for a group of insurance agents to make their work of
processing insurance policies and claims easier and more flexible. The
web-based application is used within their organization, with distributed
access, to check the status of customers who have taken new policies, to
track those policies properly, and to issue reminders about policy premium
payments and claim conditions. The system is developed to speed up the
business process so that customer satisfaction rates increase and the flow
of customers into this domain remains smooth and steady. The major problem
with the manual process is keeping track of the existing insurance policy
holders and reminding them at regular intervals about their policy premium
payments. Even rendering ordinary services takes a great deal of time,
spent simply searching through the registers of the existing customer base.
If data storage is automated and follow-up searches are made faster, there
is a strong chance of attracting additional customers, which in turn can
increase the organization's profits.
[Figure: the manual process. The agent physically visits the office and
waits for a response; regularly checks her ledgers to cross-check the
customers due for premium payment that month; prepares a list of all such
customers; and writes out the reminder details by hand.]
With the new system the following activities gain momentum.
Agent view
2. The Agent can at any time view the required information, whether on
policies or customers, at the click of a mouse and within an instant.
8. Above all, the overall system can at any time provide consistent and
reliable information, authenticated by its operations.
Chapter 3
Feasibility Report
Technical Descriptions
Database: The total number of databases identified to build the system is
13. The major parts of the databases are categorized as administration
components and customer-based components. The administration components are
useful in managing the actual master data that is necessary to maintain the
consistency of the system. These databases are used purely for internal
organizational needs and necessities.
GUI
For flexibility of use, the interface has been developed with a graphical
concept in mind, accessed through a browser interface. The GUIs at the top
level have been categorized as
The Agent and policyholder user interfaces help the respective actors
transact the actual information as per their necessities, specific to the
required services. The GUIs restrict ordinary users from manipulating the
system's data improperly, which could make the existing system
non-operational. Information specific to their personal standards and
strategies can be changed through proper privileges.
Number of Modules
HARDWARE REQUIREMENTS
Hard Disk : 20 GB
Financial Feasibility
The overall documentation for this project uses modeling standards
recognized at the software-industry level.
MS SQL Server 2000 is one of many database services that plug into a
client/server model. It works efficiently to manage resources such as
database information among the multiple clients requesting and sending data.
SQL is an interactive language used to query the database and access data
in the database. SQL has the following features:
1. It is a unified language.
3. It is a non-procedural language.
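As a brief illustration of querying SQL Server from application code, here
is a minimal ADO.NET sketch; the connection string and the PolicyMaster
table with its PolicyNo and HolderName columns are assumptions for the
example, not the project's actual schema.

    using System;
    using System.Data.SqlClient;

    class SqlQueryDemo
    {
        static void Main()
        {
            // Hypothetical connection string; adjust server and database names.
            string connStr = "Server=(local);Database=Insurance;Integrated Security=SSPI;";
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                conn.Open();
                // Non-procedural: the statement says WHAT rows are wanted,
                // not HOW to fetch them.
                SqlCommand cmd = new SqlCommand(
                    "SELECT PolicyNo, HolderName FROM PolicyMaster", conn);
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}", reader["PolicyNo"], reader["HolderName"]);
                }
            }
        }
    }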
Introduction to MS SQL Server 2000
Introduction
SQL Server™ 7.0 is a scalable, reliable, and easy-to-use product that will
provide a solid foundation for application design for the next 20 years.
Scalability: The new disk format and storage subsystem provide storage that
is scalable from very small to very large databases. Specific changes
include:
• Simplified mapping of database objects to files eases management and
enables tuning flexibility. Database objects can be mapped to specific
disks for load balancing.
Data Type Sizes: The maximum size of character and binary data types is
dramatically increased.
Dynamic Row-Level Locking: Full row-level locking is implemented for both
data rows and index entries. Dynamic locking automatically chooses the
optimal level of lock (row, page, multiple page, table) for all database
operations. This feature provides improved concurrency with no tuning. The
database also supports the use of "hints" to force a particular level of
locking.
Large Memory Support: SQL Server 7.0 Enterprise Edition will support memory
addressing greater than 4 GB, in conjunction with Windows NT Server 5.0,
Alpha processor-based systems, and other techniques.
Overview
The original code was inherited from Sybase and designed for eight-megabyte
Unix systems in 1983. The new formats improve manageability and scalability
and allow the server to scale easily from low-end to high-end systems, with
better performance along the way.
Benefits
There are many benefits of the new on-disk layout, including:
• Improved scalability and integration with Windows NT Server
• Better performance with larger I/Os
• Stable record locators allow more indexes
• More indexes speed decision support queries
• Simpler data structures provide better quality
• Greater extensibility, so that subsequent releases will have a cleaner
development process and new features are faster to implement
Storage Engine Subsystems
Most relational database products are divided into relational engine and
storage engine components. This document focuses on the storage engine,
which has a variety of subsystems:
• Mechanisms that store data in files and find pages, files, and extents.
• Record management for accessing the records on pages.
• Access methods using b-trees that are used to quickly find records
using record identifiers.
• Concurrency control for locking, used to implement the physical lock
manager and locking protocols for page- or record-level locking.
• I/O buffer management.
• Logging and recovery.
• Utilities for backup and restore, consistency checking, and bulk data
loading.
Databases, Files, and Filegroups
Overview
SQL Server 7.0 is much more integrated with Windows NT Server than any of
its predecessors. Databases are now stored directly in Windows NT Server
files. SQL Server is being stretched towards both the high and low end.
Files
SQL Server 7.0 creates a database using a set of operating system files,
with every file belonging to exactly one database. Multiple databases can
no longer share the same file. There are several important benefits to this
simplification. Files can now grow and shrink, and space management is
greatly simplified. All data and objects in the database, such as tables,
stored procedures, triggers, and views, are stored only within these
operating system files:
Primary data file: This file is the starting point of the database. Every
database has only one primary data file, and all system tables are always
stored in the primary data file.
Secondary data files: These files are optional and can hold all data and
objects that are not in the primary data file. Some databases may not have
any secondary data files, while others have multiple secondary data files.
Log files: These files hold all of the transaction log information used to
recover the database. Every database has at least one log file.
When a database is created, all the files that comprise the database are
zeroed out (filled with zeros) to overwrite any existing data left on the disk
by previously deleted files. This improves the performance of day-to-day
operations.
Filegroups
A database now consists of one or more data files and one or more log files.
The data files can be grouped together into user-defined filegroups. Tables
and indexes can then be mapped to different filegroups to control data
placement on physical disks. Filegroups are a convenient unit of
administration, greatly improving flexibility. SQL Server 7.0 will allow you to
back up a different portion of the database each night on a rotating schedule
by choosing which filegroups to back up. Filegroups work well for
sophisticated users who know where they want to place indexes and tables.
SQL Server 7.0 can work quite effectively without filegroups.
Log files are never a part of a filegroup. Log space is managed separately
from data space.
Using Files and Filegroups
Using files and filegroups improves database performance by allowing a
database to be created across multiple disks, multiple disk controllers, or
redundant array of inexpensive disks (RAID) systems. For example, if your
computer has four disks, you can create a database that comprises three
data files and one log file, with one file on each disk. As data is accessed,
four read/write heads can simultaneously access the data in parallel, which
speeds up database operations.Additionally, files and filegroups allow better
data placement because a table can be created in a specific filegroup. This
improves performance because all I/O for a specific table can be directed at a
specific disk. For example, a heavily used table can be placed on one file in
one filegroup and located on one disk. The other less heavily accessed tables
in the database can be placed on other files in another filegroup, located on a
second disk.
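A minimal sketch of such a layout follows, executed through ADO.NET; the
database name, logical file names, and disk paths are hypothetical.

    using System.Data.SqlClient;

    class CreateDatabaseDemo
    {
        static void Main()
        {
            // Connect to master to create the database (paths and names are hypothetical).
            string connStr = "Server=(local);Database=master;Integrated Security=SSPI;";
            string ddl =
                "CREATE DATABASE Insurance " +
                "ON PRIMARY (NAME = Ins_Data1, FILENAME = 'c:\\data\\Ins_Data1.mdf', SIZE = 50MB), " +
                "(NAME = Ins_Data2, FILENAME = 'd:\\data\\Ins_Data2.ndf', SIZE = 50MB), " +
                "(NAME = Ins_Data3, FILENAME = 'e:\\data\\Ins_Data3.ndf', SIZE = 50MB) " +
                "LOG ON (NAME = Ins_Log, FILENAME = 'f:\\log\\Ins_Log.ldf', SIZE = 25MB)";
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                conn.Open();
                // Three data files and one log file, one per disk, allow parallel I/O.
                new SqlCommand(ddl, conn).ExecuteNonQuery();
            }
        }
    }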
Space Management
There are many improvements in the allocations of space and the
management of space within files. The data structures that keep track of
page-to-object relationships were redesigned. Instead of linked lists of
pages, bitmaps are used because they are cleaner and simpler and facilitate
parallel scans. Now each file is more autonomous; it has more data about
itself, within itself. This works well for copying or mailing database files.
SQL Server now has a much more efficient system for tracking table space.
The changes enable
• Growing and shrinking files
• Better support for large I/O
• Row space management within a table
• Less expensive extent allocations
SQL Server is very effective at quickly allocating pages to objects and reusing
space freed by deleted rows. These operations are internal to the system and
use data structures not visible to users, yet are occasionally referenced in
SQL Server messages.
File Shrink
The server checks the space usage in each database periodically. If a
database is found to have a lot of empty space, the size of the files in the
database will be reduced. Both data and log files can be shrunk. This activity
occurs in the background and does not affect any user activity within the
database. You can also use the SQL Server Enterprise Manager to shrink
files individually or as a group, or use the DBCC commands SHRINKDATABASE
or SHRINKFILE.
SQL Server shrinks files by moving rows from pages at the end of the file to
pages allocated earlier in the file. In an index, nodes are moved from the end
of the file to pages at the beginning of the file. In both cases pages are freed
at the end of files and then returned to the file system. Databases can only
be shrunk to the point that no free space is remaining; there is no data
compression.
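The following sketch issues both DBCC commands from ADO.NET; the database
and logical file names are the hypothetical ones used above.

    using System.Data.SqlClient;

    class ShrinkDemo
    {
        static void Main()
        {
            string connStr = "Server=(local);Database=Insurance;Integrated Security=SSPI;";
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                conn.Open();
                // Shrink the whole database, targeting about 10 percent free space.
                new SqlCommand("DBCC SHRINKDATABASE (Insurance, 10)", conn).ExecuteNonQuery();
                // Or shrink a single file (by logical name) toward a 40 MB target size.
                new SqlCommand("DBCC SHRINKFILE (Ins_Data1, 40)", conn).ExecuteNonQuery();
            }
        }
    }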
File Grow
Automated file growth greatly reduces the need for database management
and eliminates many problems that occur when logs or databases run out of
space. When creating a database, an initial size for the file must be given.
SQL Server creates the data files based on the size provided by the
database creator, and as data is added to the database these files fill. By
default, data files are allowed to grow as much as necessary until disk
space is exhausted. Alternatively, data files can be configured to grow
automatically, but only to a predefined maximum size. This prevents disk
drives from running out of space.
Allowing files to grow automatically can cause fragmentation of those files if
a large number of files share the same disk. Therefore, it is recommended
that files or file groups be created on as many different local physical disks as
available. Place objects that compete heavily for space in different file
groups.
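A sketch of configuring automatic growth with a ceiling, again with
hypothetical names:

    using System.Data.SqlClient;

    class AutoGrowDemo
    {
        static void Main()
        {
            string connStr = "Server=(local);Database=master;Integrated Security=SSPI;";
            // Grow Ins_Data1 in 10 MB increments, but never beyond 200 MB.
            string ddl = "ALTER DATABASE Insurance MODIFY FILE " +
                         "(NAME = Ins_Data1, FILEGROWTH = 10MB, MAXSIZE = 200MB)";
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                conn.Open();
                new SqlCommand(ddl, conn).ExecuteNonQuery();
            }
        }
    }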
Physical Database Architecture
Microsoft SQL Server version 7.0 introduces significant improvements in the
way data is stored physically. These changes are largely transparent to
general users, but do affect the setup and administration of SQL Server
databases.
Pages and Extents
The fundamental unit of data storage in SQL Server is the page. In SQL
Server version 7.0, the size of a page is 8 KB, increased from 2 KB. The start
of each page is a 96-byte header used to store system information, such as
the type of page, the amount of free space on the page, and the object ID of
the object owning the page.
There are seven types of pages in the data files of a SQL Server 7.0
database.
Data: Data rows with all data except text, ntext, and image.
Torn page detection helps ensure database consistency. In SQL Server 7.0,
pages are 8 KB, while Windows NT does I/O in 512-byte segments. This
discrepancy makes it possible for a page to be partially written. This could
happen if there is a power failure or other problem between the time when
the first 512-byte segment is written and the completion of the 8 KB of I/O.
There are several ways to deal with this. One way is to use battery-backed
cached I/O devices that guarantee all-or-nothing I/O. If you have one of
these systems, torn page detection is unnecessary.
In SQL Server 7.0, you can enable torn page detection for a particular
database by turning on a database option.
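In SQL Server 7.0 this option is set through the sp_dboption system stored
procedure; a sketch with the hypothetical database name used earlier:

    using System.Data.SqlClient;

    class TornPageDemo
    {
        static void Main()
        {
            string connStr = "Server=(local);Database=master;Integrated Security=SSPI;";
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                conn.Open();
                // Turn on torn page detection for the Insurance database.
                new SqlCommand(
                    "EXEC sp_dboption 'Insurance', 'torn page detection', 'true'",
                    conn).ExecuteNonQuery();
            }
        }
    }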
Locking Enhancements
Row-Level Locking
SQL Server 6.5 introduced a limited version of row locking on inserts. SQL
Server 7.0 now supports full row-level locking for both data rows and index
entries. Transactions can update individual records without locking entire
pages. Many OLTP applications can experience increased concurrency,
especially when applications append rows to tables and indexes.
Dynamic Locking
SQL Server 7.0 has a superior locking mechanism that is unique in the
database industry. At run time, the storage engine dynamically cooperates
with the query processor to choose the lowest-cost locking strategy, based
on the characteristics of the schema and query.
Lock Modes
SQL Server locks resources using different lock modes that determine how
the resources can be accessed by concurrent transactions.
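As a sketch of the "hints" mentioned earlier, the batch below forces
row-level update locks during a read-then-update sequence; the table and
column names are the hypothetical ones used earlier, and the hint choice is
illustrative.

    using System.Data.SqlClient;

    class LockHintDemo
    {
        static void Main()
        {
            string connStr = "Server=(local);Database=Insurance;Integrated Security=SSPI;";
            string batch =
                "BEGIN TRANSACTION " +
                // ROWLOCK and UPDLOCK override dynamic locking and take a
                // row-level update lock on the selected row.
                "SELECT Premium FROM PolicyMaster WITH (ROWLOCK, UPDLOCK) WHERE PolicyNo = 1001 " +
                "UPDATE PolicyMaster SET Premium = Premium + 100 WHERE PolicyNo = 1001 " +
                "COMMIT TRANSACTION";
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                conn.Open();
                new SqlCommand(batch, conn).ExecuteNonQuery();
            }
        }
    }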
SQL Server 7.0 tables use one of two methods to organize their data pages:
• Clustered tables are tables that have a clustered index. The data rows
are stored in order based on the clustered index key. The data pages
are linked in a doubly linked list. The index is implemented as a b-tree
index structure that supports fast retrieval of the rows based on their
clustered index key values.
• Heaps are tables that have no clustered index. There is no particular
order to the sequence of the data pages and the data pages are not
linked in a linked list.
Table Indexes
A SQL Server index is a structure associated with a table that speeds
retrieval of the rows in the table. An index contains keys built from one
or more columns in the table. These keys are stored in a structure that
allows SQL Server to find the row or rows associated with the key values
quickly and efficiently. This structure is called a b-tree. The two types
of SQL Server indexes are clustered and nonclustered indexes.
Clustered Indexes
A clustered index is one in which the order of the values in the index is the
same as the order of the data stored in the table.
The clustered index contains a hierarchical tree. When searching for data
based on a clustered index value, SQL Server quickly isolates the page with
the specified value and then searches the page for the record or records with
the specified value. The lowest level, or leaf node, of the index tree is the
page that contains the data.
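A sketch of both organizations using the hypothetical PolicyMaster table:
created bare, the table is a heap; adding a clustered index orders its data
pages by PolicyNo.

    using System.Data.SqlClient;

    class ClusteredIndexDemo
    {
        static void Main()
        {
            string connStr = "Server=(local);Database=Insurance;Integrated Security=SSPI;";
            string ddl =
                // Without a clustered index the table is a heap...
                "CREATE TABLE PolicyMaster (PolicyNo int NOT NULL, HolderName varchar(50), Premium money) " +
                // ...and the clustered index reorganizes it as a b-tree keyed on PolicyNo.
                "CREATE CLUSTERED INDEX CIX_PolicyNo ON PolicyMaster (PolicyNo)";
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                conn.Open();
                new SqlCommand(ddl, conn).ExecuteNonQuery();
            }
        }
    }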
Nonclustered Indexes
Unicode Data
The new data types that support Unicode are ntext, nchar, and nvarchar.
They are the same as text, char, and varchar, except for the wider range of
characters supported and the increased storage space used.
Improved Data Storage
Data storage flexibility is greatly improved with the expansion of the
maximum limits for char, varchar, binary, and varbinary data types to 8,000
bytes, increased from 255 bytes. It is no longer necessary to use text and
image data types for data storage for anything but very large data values.
The Transact-SQL string functions also support these very long char and
varchar values, and the SUBSTRING function can be used to process text and
image columns. The handling of nulls and empty strings has been improved. A
new uniqueidentifier data type is provided for storing a globally unique
identifier (GUID).
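A short sketch exercising these types; the table and column names are
hypothetical.

    using System.Data.SqlClient;

    class DataTypeDemo
    {
        static void Main()
        {
            string connStr = "Server=(local);Database=Insurance;Integrated Security=SSPI;";
            string ddl =
                "CREATE TABLE CustomerNote (" +
                "NoteId uniqueidentifier DEFAULT NEWID(), " +  // GUID column
                "HolderName nvarchar(100), " +                 // Unicode string
                "Remarks varchar(8000))";                      // up to 8,000 bytes, formerly 255
            using (SqlConnection conn = new SqlConnection(connStr))
            {
                conn.Open();
                new SqlCommand(ddl, conn).ExecuteNonQuery();
            }
        }
    }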
Normalization
Second normal form: Involves taking out data that is only dependent on part
of the key. For example, in an order-details table keyed on (OrderID,
ProductID), a CustomerName column depends only on OrderID and should be
moved to a separate table.
The .NET Framework has two main components: the common language
runtime and the .NET Framework class library. The common language
runtime is the foundation of the .NET Framework. You can think of the
runtime as an agent that manages code at execution time, providing core
services such as memory management, thread management, and remoting,
while also enforcing strict type safety and other forms of code accuracy that
ensure security and robustness. In fact, the concept of code management is
a fundamental principle of the runtime. Code that targets the runtime is
known as managed code, while code that does not target the runtime is
known as unmanaged code. The class library, the other main component of
the .NET Framework, is a comprehensive, object-oriented collection of
reusable types that you can use to develop applications ranging from
traditional command-line or graphical user interface (GUI) applications to
applications based on the latest innovations provided by ASP.NET, such as
Web Forms and XML Web services.
The .NET Framework can be hosted by unmanaged components that load the
common language runtime into their processes and initiate the execution of
managed code, thereby creating a software environment that can exploit
both managed and unmanaged features. The .NET Framework not only
provides several runtime hosts, but also supports the development of third-
party runtime hosts.
The runtime enforces code access security. For example, users can trust that
an executable embedded in a Web page can play an animation on screen or
sing a song, but cannot access their personal data, file system, or network.
The security features of the runtime thus enable legitimate Internet-deployed
software to be exceptionally feature rich.
While the runtime is designed for the software of the future, it also supports
software of today and yesterday. Interoperability between managed and
unmanaged code enables developers to continue to use necessary COM
components and DLLs.
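A minimal sketch of managed-to-unmanaged interoperability, calling the
Win32 MessageBox function from user32.dll through platform invoke:

    using System;
    using System.Runtime.InteropServices;

    class InteropDemo
    {
        // Declare the unmanaged Win32 entry point so managed code can call it.
        [DllImport("user32.dll", CharSet = CharSet.Auto)]
        static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

        static void Main()
        {
            MessageBox(IntPtr.Zero, "Hello from managed code", "Interop", 0);
        }
    }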
Common Language Specification
Defines rules that languages must follow, which helps ensure that objects
written in different languages can interact with each other.
Common Type System
Describes concepts and defines terms relating to the common type system.
Type Definitions
Type Members
Value Types
Classes
Delegates
Arrays
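A compact C# sketch touching each of these categories (the names are
illustrative only):

    using System;

    struct Money { public decimal Amount; }   // a value type
    class Policy { public string Number; }    // a class (reference type)
    delegate void Notify(string message);     // a delegate type

    class TypeSystemDemo
    {
        static void Print(string s) { Console.WriteLine(s); }

        static void Main()
        {
            Money m;
            m.Amount = 500.0m;                  // value types hold their data directly
            Policy p = new Policy();
            p.Number = "P-1001";
            Notify notify = new Notify(Print);  // delegates are type-safe function references
            int[] dues = { 100, 200, 300 };     // arrays derive from System.Array
            notify("Policy " + p.Number + " premium " + m.Amount + ", " + dues.Length + " dues");
        }
    }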
This section describes the common language runtime's built-in support for
language interoperability and explains the role that the CLS plays in enabling
guaranteed cross-language interoperability. CLS features and rules are
identified and CLS compliance is discussed.
In This Section
Language Interoperability
Describes built-in support for cross-language interoperability and introduces
the Common Language Specification.
• Console applications.
• Scripted or hosted applications.
• ASP.NET applications.
• Windows services.
For example, the Windows Forms classes are a comprehensive set of
reusable types that vastly simplify Windows GUI development. If you write
an ASP.NET Web Form application, you can use the Web Forms classes.
Client Application Development
Client applications are the closest to a traditional style of application in
Windows-based programming. These are the types of applications that
display windows or forms on the desktop, enabling a user to perform a task.
Client applications include applications such as word processors and
spreadsheets, as well as custom business applications such as data-entry
tools, reporting tools, and so on. Client applications usually employ windows,
menus, buttons, and other GUI elements, and they likely access local
resources such as the file system and peripherals such as printers.
The Windows Forms classes contained in the .NET Framework are designed
to be used for GUI development. You can easily create command windows,
buttons, menus, toolbars, and other screen elements with the flexibility
necessary to accommodate shifting business needs.
For example, the .NET Framework provides simple properties to adjust visual
attributes associated with forms. In some cases the underlying operating
system does not support changing these attributes directly, and in these
cases the .NET Framework automatically recreates the forms. This is one of
many ways in which the .NET Framework integrates the developer interface,
making coding simpler and more consistent.
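A minimal Windows Forms client along these lines; the form caption and
button text are illustrative.

    using System;
    using System.Windows.Forms;

    class ClientDemo
    {
        static void OnShowPolicies(object sender, EventArgs e)
        {
            MessageBox.Show("The policy list would be loaded here.");
        }

        [STAThread]
        static void Main()
        {
            Form form = new Form();
            form.Text = "Insurance Client";         // a simple property sets a visual attribute
            Button button = new Button();
            button.Text = "Show Policies";
            button.Click += new EventHandler(OnShowPolicies);
            form.Controls.Add(button);
            Application.Run(form);                  // start the message loop
        }
    }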
It forms a type boundary. Every type's identity includes the name of the
assembly in which it resides. A type called MyType loaded in the scope of one
assembly is not the same as a type called MyType loaded in the scope of
another assembly.
There are several ways to create assemblies. You can use development tools,
such as Visual Studio .NET, that you have used in the past to create .dll or
.exe files. You can use tools provided in the .NET Framework SDK to create
assemblies with modules created in other development environments. You
can also use common language runtime APIs, such as Reflection.Emit, to
create dynamic assemblies.
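As a sketch of the Reflection.Emit route, the following defines a transient
in-memory assembly and a public type inside it; all names are illustrative.

    using System;
    using System.Reflection;
    using System.Reflection.Emit;

    class DynamicAssemblyDemo
    {
        static void Main()
        {
            // Define a dynamic assembly that exists only for this run.
            AssemblyName name = new AssemblyName();
            name.Name = "DynamicDemo";
            AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(
                name, AssemblyBuilderAccess.Run);
            ModuleBuilder module = assembly.DefineDynamicModule("MainModule");
            TypeBuilder builder = module.DefineType("GeneratedType", TypeAttributes.Public);
            Type generated = builder.CreateType();
            // The type's identity includes its assembly, as described above.
            Console.WriteLine("{0} lives in {1}", generated.FullName, generated.Assembly.FullName);
        }
    }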
The following illustration shows a basic network schema with managed code
running in different server environments. Servers such as IIS and SQL Server
can perform standard operations while your application logic executes
through the managed code.
Server-side managed code
ASP.NET is the hosting environment that enables developers to use the .NET
Framework to target Web-based applications. However, ASP.NET is more
than just a runtime host; it is a complete architecture for developing Web
sites and Internet-distributed objects using managed code. Both Web Forms
and XML Web services use IIS and ASP.NET as the publishing mechanism for
applications, and both have a collection of supporting classes in the .NET
Framework.
If you have used earlier versions of ASP technology, you will immediately
notice the improvements that ASP.NET and Web Forms offer. For example,
you can develop Web Forms pages in any language that supports the .NET
Framework. In addition, your code no longer needs to share the same file
with your HTTP text (although it can continue to do so if you prefer). Web
Forms pages execute in native machine language because, like any other
managed application, they take full advantage of the runtime. In contrast,
unmanaged ASP pages are always scripted and interpreted. ASP.NET pages
are faster, more functional, and easier to develop than unmanaged ASP
pages because they interact with the runtime like any managed application.
The .NET Framework also provides a collection of classes and tools to aid in
development and consumption of XML Web services applications. XML Web
services are built on standards such as SOAP (a remote procedure-call
protocol), XML (an extensible data format), and WSDL (the Web Services
Description Language). The .NET Framework is built on these standards to
promote interoperability with non-Microsoft solutions.
For example, the Web Services Description Language tool included with
the .NET Framework SDK can query an XML Web service published on the
Web, parse its WSDL description, and produce C# or Visual Basic source
code that your application can use to become a client of the XML Web
service. The source code can create classes derived from classes in the class
library that handle all the underlying communication using SOAP and XML
parsing. Although you can use the class library to consume XML Web services
directly, the Web Services Description Language tool and the other tools
contained in the SDK facilitate your development efforts with the .NET
Framework.
If you develop and publish your own XML Web service, the .NET Framework
provides a set of classes that conform to all the underlying communication
standards, such as SOAP, WSDL, and XML. Using those classes enables you
to focus on the logic of your service, without concerning yourself with the
communications infrastructure required by distributed software development.
Finally, like Web Forms pages in the managed environment, your XML Web
service will run with the speed of native machine language using the scalable
communication of IIS.
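A minimal sketch of publishing such a service with the .NET classes; the
service name, namespace URI, and method are hypothetical, and the class
would be exposed through an .asmx endpoint under IIS.

    using System.Web.Services;

    // Compiled into an ASP.NET application and exposed through an .asmx endpoint.
    [WebService(Namespace = "http://example.org/insurance/")]
    public class PolicyService : WebService
    {
        // Each [WebMethod] is callable over SOAP and described by auto-generated WSDL.
        [WebMethod]
        public string GetPolicyStatus(int policyNo)
        {
            return "Policy " + policyNo + " is active.";   // placeholder logic
        }
    }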
This section describes the programming essentials you need to build .NET
applications, from creating assemblies from your code to securing your
application. Many of the fundamentals covered in this section are used to
create any application using the .NET Framework. This section provides
conceptual information about key programming concepts, as well as code
samples and detailed explanations.
Accessing Data with ADO.NET
Describes the ADO.NET architecture and how to use the ADO.NET classes to
manage application data and interact with data sources including Microsoft
SQL Server, OLE DB data sources, and XML.
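A short sketch of the disconnected ADO.NET pattern with a DataSet; the
connection string and table names are the hypothetical ones used earlier.

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class AdoNetDemo
    {
        static void Main()
        {
            string connStr = "Server=(local);Database=Insurance;Integrated Security=SSPI;";
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT PolicyNo, Premium FROM PolicyMaster", connStr);
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Policies");        // fill a disconnected, in-memory table
            foreach (DataRow row in ds.Tables["Policies"].Rows)
                Console.WriteLine("{0}: {1}", row["PolicyNo"], row["Premium"]);
        }
    }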
Shows how to use Internet access classes to implement both Web- and
Internet-based applications.
Developing Components
Explains the extensive support the .NET Framework provides for developing
international applications.
Explains the .NET Framework SDK mechanism called the Code Document
Object Model (CodeDOM) that enables the output of source code in multiple
programming languages.
Product Perspective:
The major functions that the product executes are divided into two
categories:
1. Administrative Functions.
2. User Interface Functions.
Administrative Functions:
The general functions taken care of at the user level are: the customer can
view their respective policy information at any time; the customers can
also view their premium payment standards and the claims duration strategy.
The system also helps the Agents to view the standardized information of
the customers with whom they have established business policies.
Project Plan
The total project consists of 7 modules, which are as follows:
The description for all the above modules has been provided in the
previous section.
Number of Databases:
10. Claim status code: This database specifically states the different
statuses that a policy can have while it is under operation.
[Diagram: system modules and their reports. The Agent Information module
produces reports on agent information; the Customer Information module,
reports on customer information; the Policies Information module, reports
on policies information; and the Policy Payment module, reports on policy
payments information. All feed into Insurance Management.]
[Diagram: 2nd-level DFD. Process 1.1 inserts a new insurance company after
verifying the company data; process 1.2 checks for the agent and verifies
the agent data. The Insurance Companies Master and Agent Master are the
data stores.]
DFD For New Policy Entry
[Diagram: the new policy entry is checked against the Company Master
(process 2.1, check for the company), verified (process 2.2), and inserted
into the Policies Master (process 2.3).]
DFD For New Customer Policy Entry
[Diagram: the new customer policy entry is checked against the Policies
Master (check for policy) and inserted into the Customer Policy Master,
with the Policy Payments Master referenced.]
DFD For New Policy Claim Entry
[Diagram: the new policy claim entry is checked against the claim status
codes (check for the claim status code) and inserted into the Policy Claim
Master.]
3rd Level DFD
[Diagram: process 1.2 validates Compid() against the Insurance Companies
Master and commits the new agent to the Agent Master.]
DFD For New Policy Entry
[Diagram: processes 2.1 through 2.4 validate Reftypeid() and Policyid()
against the Company Master and Commit() the new policy to the Policies
Master.]
DFD For New Customer Policy Entry
[Diagram: a request for a new customer policy flows through processes 3.1,
3.2, and 3.3, which validate CustPolicyid(), Custid(), and Policyid()
against the Customer Policy Master and Policy Payments Master, then
Commit().]
DFD For New Policy Payment
[Diagram: a request for a new policy payment flows through processes 4.1
and 4.2, which validate PolPayid() and CustPolid(), then Commit() to the
Policy Payments Master.]
DFD For New Policy Claim Entry
[Diagram: a request for a new claim validates PolClaimid(), PolicyNO(), and
Claimstatusid(), then commits the entry to the Claim Master.]
ER-Diagrams
Use cases model the system from the end user's point of view, with the
following objectives:
Use Cases
The actors who have been recognized within the system are
1. Agent
2. Policy holder
3. System administrator
Agent
[Use case diagram: the Agent performs login and accesses companies'
information, policies information, customer policies, and policy types
information.]
Policy Holder
This actor is the one who insures himself or any item with a specific
policy. He makes policy premium payments and policy claims from time to
time.
[Use case diagram: the Policy Holder performs login and accesses policies
information, policies registration, nominees' information, policy payments,
and policy claims.]
System Administrator
He is the actor who takes care of the actual data administration of the
system. It is his sole responsibility to check the consistency and
reliability of the information.
[Use case diagram: the System Administrator performs login and handles
companies registration, agent registration, customers registration, policy
registration, and security information registration.]
Elaborated Diagrams
[Elaborated use case diagrams, with <<Uses>> relationships:
• Login <<uses>> authenticate login name <<uses>> authenticate password
<<uses>> enable privileged access.
• Raise request for company information <<uses>> authenticate the given
parameter (Query Analyzer).
• Raise request for existing or new schedules <<uses>> check for any
specific schedules allocated to the agent <<uses>> handle the schedules.
• Raise request for policy claims <<uses>> accept the policy number
<<uses>> check for due payments <<uses>> select premium payment period.
• Raise request for new company information <<uses>> enter the required
data along with standards <<uses>> check the authenticity of the
information <<uses>> store.
• Raise request for new customer registration <<uses>> enter the required
data as per the standards <<uses>> check the authenticity of the
information <<uses>> store.
• Raise request for new Agent registration <<uses>> enter the required data
as per the standards <<uses>> check for due payments <<uses>> store.
• Raise request for new user login creation <<uses>> enter the required
data as per the standards <<uses>> check the authenticity of the
information <<uses>> store.
The System Administrator is the actor driving these flows.]
Sequence Diagrams
Agent Login
[Sequence diagram: the Agent enters credentials on the login screen, which
validates them against the Agent Login Master and Agent Master.]
Policyholder Login
[Sequence diagram: the policyholder enters credentials on the login screen,
which validates them against the Policyholder Master, Customer Policy
Master, and Login Master.]
Administrator Login
[Sequence diagram: the administrator enters credentials on the login
screen, which validates them against the Administrator Login Master,
Employee Master, and Administrator Master.]
Customer Policy Enquiry
[Sequence diagram: the customer enters the customer number on the customer
policy screen; the system runs Validate Customer ID() against the Customer
Master, runs Accept policy number(), and authenticates the policy number
against the Customer Policy Master.]
Psychology of Testing
The aim of testing is often taken to be demonstrating that a program works
by showing that it has no errors. However, the basic purpose of the testing
phase is to detect the errors that may be present in the program. Hence one
should not start testing with the intent of showing that a program works;
the intent should rather be to show that a program doesn't work. Testing is
thus the process of executing a program with the intent of finding errors.
Testing Objectives
The main objective of testing is to uncover a host of errors,
systematically and with minimum effort and time. Stating formally, we
can say,
• Testing is a process of executing a program with the intent of finding an
error.
• A successful test is one that uncovers an as-yet-undiscovered error.
• A good test case is one that has a high probability of finding an error,
if it exists.
If testing is conducted successfully and no errors are found, it means one
of two things:
• The tests are inadequate to detect the errors that may be present, or
• The software more or less conforms to the quality and reliability
standards.
Levels of Testing
In order to uncover the errors present in different phases we have the
concept of levels of testing. The basic levels of testing are as shown
below…
[Figure: levels of testing. Acceptance Testing validates client needs;
System Testing validates requirements; Unit Testing validates the code.]
System Testing
The philosophy behind testing is to find errors. Test cases are devised
with this in mind. A strategy employed for system testing is code
testing.
Code Testing:
This strategy examines the logic of the program. To follow this method
we developed some test data that resulted in executing every
instruction in the program and module i.e. every path is tested.
Systems are not designed as entire systems, nor are they tested as single
systems. To ensure that the coding is correct, two types of testing are
performed on all systems.
Types Of Testing
Unit Testing
Link Testing
Unit Testing
Unit testing focuses verification effort on the smallest unit of software
i.e. the module. Using the detailed design and the process
specifications testing is done to uncover errors within the boundary of
the module. All modules must be successful in the unit test before
integration testing begins.
Link Testing
Link testing does not test the software itself but rather the integration
of each module into the system. The primary concern is the compatibility of
the individual modules. The programmer tests cases where modules are
designed with different parameters: length, type, etc.
Integration Testing
After the unit testing we have to perform integration testing. The goal
here is to see if the modules can be integrated properly, the emphasis
being on testing the interfaces between modules. This testing activity can
be considered as testing the design, and hence the emphasis is on testing
module interactions.
In this project, integrating all the modules forms the main system. When
integrating the modules, I have checked whether the integration affects the
working of any of the services by giving different combinations of inputs
with which the services ran perfectly before integration.
System Testing
Here the entire software system is tested. The reference document for this
process is the requirements document, and the goal is to see if the
software meets its requirements.
Here the entire Insurance Management System has been tested against the
requirements of the project, and it is checked whether all the requirements
of the project have been satisfied or not.
Acceptance Testing
Acceptance Test is performed with realistic data of the client to demonstrate
that the software is working satisfactorily. Testing here is focused on
external behavior of the system; the internal logic of program is not
emphasized.
The entire project has been developed and deployed as per the requirements
stated by the user, and it is found to be bug-free as per the testing
standards.
The system at present does not take care of money payment methods. As
initiated in the first phase, the handling of credit card transactions is
left to a future enhancement of the application.