
Index

Sr. No. Topic


1 Introduction & Objectives of the Project.
 Introduction & Objectives.
 Definition

2 Feasibility Study
 Technical Feasibility
 Economic Feasibility
 Operational Feasibility
 Legal Feasibility
 Financial Feasibility

3 Software Engineering Paradigm Applied


 Introduction to Software
 Software Development Life Cycle
 Meaning of Spiral Model
 Spiral Model
 Analysis
 Design
 Testing
 Implementation

4 Technology & Operating System


 .NET Framework
 Framework Architecture
 Common Language Specification
 Common Language Runtime
 Introduction to ASP.NET
 Introduction to ASP.NET (C#)
 Introduction to SQL Server
 Data Access Layer

5 Software And Hardware Requirement Specifications


 Software Requirement
 Hardware Requirement

6 Analysis
 Data Flow Diagrams
 E-R Diagrams
 Schema Diagram
 Data Dictionary (Tables)
 Number of Modules
 Screen Shots
7 Coding

8 Coding Efficiency

9 Coding Optimization

10 Validation Checks

11 Testing
 System Testing
 Integration Testing
 Unit Testing
 White Box Testing
 Black Box Testing
 Acceptance Testing

12 Implementation & Maintenance


 System Security
 Security Measures
 Cost Estimation

13 Chart And Project Schedule

14 Future Scope of the Project

15 Bibliography

16 Synopsis
1. Introduction & Objectives
Introduction

Objectives

2. Feasibility Study

A feasibility study is a process to check the possibilities of system development. It is a
method to check the various requirements and the availability of financial and
technical resources.

Before starting the process, various parameters must be checked:

 Is the estimated finance available?
 Is the manpower to operate the system available?
 Is that manpower trained?

All the above conditions must be satisfied before the project starts, which is why an
in-depth feasibility analysis is carried out.

There are five different ways feasibility can be tested:

1) Technical Feasibility
2) Economic Feasibility
3) Operational Feasibility
4) Financial Feasibility
5) Legal Feasibility
Technical Feasibility:

It is basically used to check whether the existing computers, hardware, and software
are sufficient or whether additional equipment is required. The minimum system
requirement is low enough to be affordable by any user who has a computer. All the
user requires is a compatible browser with the .NET Framework installed, so our
system is fully technically feasible.

Economic Feasibility:

In economic feasibility, an analysis of the cost of the system is carried out. The
system should only be developed if it is going to give returns. In the current manual
system, the user can get prices only by purchasing newspapers. In addition, if he/she
wants to see the archives of a particular equity, he has to refer to all the old
newspapers, and for research reports he has to buy yet another magazine. Instead of
buying a number of magazines, the user can simply go online and, with a single click,
get whatever information he wants. So our project of online share news passes the test
of economic feasibility.
Operational Feasibility:

Once the system is designed, there must be trained and expert operators. If they
are not trained, they should be given training according to the needs of the system.

From the user's perspective, our system is fully operationally feasible, as it
requires only basic knowledge of computers. Operators only need to add the daily
prices of various equities, and there are enough validations in place that the operator
does not require any special technical knowledge. So our system also passes the test of
operational feasibility.

Legal Feasibility

Legal feasibility determines whether the proposed system conflicts with legal
requirements; e.g. a data processing system must comply with the local Data Protection Acts.

Financial feasibility

In the case of a new project, financial viability can be judged on the following
parameters:

 Total estimated cost of the project
 Financing of the project in terms of its capital structure, debt-equity ratio, and
promoter's share of the total cost
 Existing investment by the promoter in any other business
 Projected cash flow and profitability

Final Conclusion of the Feasibility Study


Finally, from the whole study it can be concluded that the system is technically
feasible. The initial cost is high, but the economic feasibility study shows that, with an
improved level of service, customers may be attracted towards Star Placement Services,
and that ultimately is our aim. The other feasibility aspects are satisfied after
considering a certain risk factor, which is always present in any proposed system
project.
After completing the feasibility study, I documented the whole study and presented
the report to the Chief Manager of Starnet Services.
We discussed the dates for starting the real specification of the system, the design
schedule, and further details. We also discussed, roughly, the model of the actual
software system and how the work could proceed.
3. Software Engineering Paradigm Applied

Introduction to Software
 “What exactly is meant by software?” I was asked by one of the officials of
Starnet Services in a meeting.
 Let’s first define the term software.
 Computer software is the product that software engineers design and build.
 It encompasses programs that execute within a computer of any size and
architecture; documents in both hard-copy and virtual forms; and data that
combine numbers and text but also include representations of pictorial,
video, and audio information.
 Software engineers build it, and virtually everyone in the industrialized
world uses it either directly or indirectly, because it affects nearly every
aspect of our lives and has become pervasive in our commerce, our culture,
and our everyday activities.
 We build computer software like we build any successful product: by
applying a process that leads to a high-quality result that meets the needs of
the people who will use the product.
 We apply a software engineering approach.
 From the point of view of a software engineer, the work product is the
programs, documents, and data that are computer software.
 But from the user’s point of view, the work product is the resultant
information that somehow makes the user’s world better.
 Software is both a product and a vehicle for delivering a product.

Software Applications
 System Software

System software is a collection of programs written to service other
programs, e.g. COMPILERS, EDITORS, FILE MANAGEMENT UTILITIES, OS
COMPONENTS, DRIVERS, etc.

 Real-Time software

Software that monitors/analyzes/controls real-world events as they
occur is called real-time software. Elements of real-time software include a
data-gathering component that collects and formats information from an external
environment, an analysis component that transforms information as required
by the application, a control/output component that responds to the external
environment, and a monitoring component that coordinates all the other
components so that real-time response can be maintained.

 Business Software

Business information processing is the largest single software application
area, consisting of discrete “systems” (e.g. PAYROLL, ACCOUNTS
RECEIVABLE/PAYABLE, INVENTORY, SMBS).

 Engineering and Scientific Software


Engineering and scientific software has been characterized by “number
crunching” algorithms. Applications range from astronomy to volcanology,
from automotive stress analysis to space shuttle orbital dynamics, and from
molecular biology to automated manufacturing.

 Embedded software

Intelligent products have become commonplace in nearly every consumer
and industrial market. Embedded software resides in read-only memory and
is used to control products and systems for the consumer and industrial
markets, e.g. the keypad control for a microwave oven; such software
performs very limited and often esoteric functions.

 Personal computer software

The personal computer software market has burgeoned over the past two
decades. Word processing, spreadsheets, computer graphics, multimedia,
entertainment, personal and business financial applications, external
network access, and database access are only a few of the hundreds of applications.
 Web-based software

The web pages retrieved by a browser are software that incorporates
executable instructions (e.g. CGI, HTML, Perl, Java, ASP) and data (e.g.
hypertext and a variety of visual and audio formats).

 Artificial intelligence software

Artificial intelligence (AI) software makes use of nonnumeric algorithms
to solve complex problems that are not amenable to computation or
straightforward analysis. Expert systems, pattern recognition (image and
voice), artificial neural networks, theorem proving, and game playing are
representative applications within this category.

 This proposed project falls into the category of BUSINESS APPLICATION
SOFTWARE.
Software Development Life Cycle

The systems development life cycle (SDLC) is a conceptual model used in project
management that describes the stages involved in an information system development
project, from an initial feasibility study through maintenance of the completed
application.

Various SDLC methodologies have been developed to guide the processes involved,
including the waterfall model (which was the original SDLC method); rapid application
development (RAD); joint application development (JAD); the fountain model; the spiral
model; build and fix; and synchronize-and-stabilize. Frequently, several models are
combined into some sort of hybrid methodology. Documentation is crucial regardless of
the type of model chosen or devised for any application, and is usually done in parallel
with the development process. Some methods work better for specific types of projects,
but in the final analysis, the most important factor for the success of a project may be
how closely the particular plan was followed.
Spiral Model

The spiral model, also known as the spiral lifecycle model, is a systems
development lifecycle (SDLC) model used in information technology (IT). This model of
development combines the features of the prototyping model and the waterfall model.
The spiral model is favored for large, expensive, and complicated projects.

Meaning of the Spiral Model

A software life-cycle model which supposes incremental development, using the
waterfall model for each step, with the aim of managing risk. In the spiral model,
developers define and implement features in order of decreasing priority.
The steps in the spiral model can be generalized as follows:
 The new system requirements are defined in as much detail as possible. This
usually involves interviewing a number of users representing all the external or internal
users and other aspects of the existing system.

[Figure: Spiral Model]

 A preliminary design is created for the new system.

 A first prototype of the new system is constructed from the preliminary design. This is
usually a scaled-down system, and represents an approximation of the characteristics of
the final product.

 A second prototype is evolved by a fourfold procedure: (1) evaluating the first
prototype in terms of its strengths, weaknesses, and risks; (2) defining the
requirements of the second prototype; (3) planning and designing the second
prototype; (4) constructing and testing the second prototype.

 At the customer's option, the entire project can be aborted if the risk is deemed too
great. Risk factors might involve development cost overruns, operating-cost
miscalculation, or any other factor that could, in the customer's judgment, result in a
less-than-satisfactory final product.

System Analysis

The goal of system analysis is to determine where the problem is, in an attempt to
fix the system. This step involves breaking the system down into different pieces to
analyze the situation, analyzing project goals, breaking down what needs to be created,
and attempting to engage users so that definite requirements can be defined.

Requirements analysis sometimes requires individuals or teams from both the client
and the service provider side to get detailed and accurate requirements; often there has to
be a lot of communication back and forth to understand these requirements. Requirement
gathering is the most crucial aspect, as communication gaps often arise in this
phase, and this leads to validation errors and bugs in the software program.

Design
In systems design the design functions and operations are described in detail,
including screen layouts, business rules, process diagrams and other documentation.
The output of this stage will describe the new system as a collection of modules or
subsystems.

The design stage takes as its initial input the requirements identified in the
approved requirements document. For each requirement, a set of one or more design
elements will be produced as a result of interviews, workshops, and/or prototype
efforts.

Design elements describe the desired software features in detail, and generally
include functional hierarchy diagrams, screen layout diagrams, tables of business rules,
business process diagrams, pseudo code, and a complete entity-relationship diagram
with a full data dictionary. These design elements are intended to describe the software
in sufficient detail that skilled programmers may develop the software with minimal
additional input.

Testing

The code is tested at various levels in software testing. Unit, system, and user
acceptance testing are often performed. This is a grey area, as many different opinions
exist as to what the stages of testing are and how much iteration, if any, occurs. Iteration
is not generally part of the waterfall model, but some usually occurs at this stage. In the
testing phase, the whole system is tested piece by piece.

Following are the types of testing:

1. White Box Testing

2. Black Box Testing

White Box Testing

White-box testing is a method of testing software that tests the internal structures
or workings of an application, as opposed to its functionality (i.e. black-box testing). In
white-box testing an internal perspective of the system, as well as programming skills,
are required and used to design test cases. The tester chooses inputs to exercise paths
through the code and determines the appropriate outputs. This is analogous to testing
nodes in a circuit, e.g. in-circuit testing (ICT).

While white-box testing can be applied at the unit, integration and system levels of the
software testing process, it is usually done at the unit level. It can test paths within a
unit, paths between units during integration, and between subsystems during a system
level test. Though this method of test design can uncover many errors or problems, it
might not detect unimplemented parts of the specification or missing requirements.

Black-Box testing

Black-box testing is a method of testing that tests the functionality of an
application as opposed to its internal structures or workings (see white-box testing
above). Specific knowledge of the application's code/internal structure, and
programming knowledge in general, is not required. Test cases are built around
specifications and requirements, i.e., what the application is supposed to do. It uses
external descriptions of the software, including specifications, requirements, and
designs, to derive test cases. These tests can be functional or non-functional, though
usually functional. The test designer selects valid and invalid inputs and determines the
correct output. There is no knowledge of the test object's internal structure.

Implementation

In this phase the designs are translated into code. Computer programs are
written using a conventional programming language or an application generator.
Programming tools like compilers, interpreters, and debuggers are used to generate the
code. Different high-level programming languages like C, C++, Pascal, and Java are used
for coding. The right programming language is chosen with respect to the type of
application.
4. Technology and Operating System
The .Net Framework

A framework is commonly thought of as a set of class libraries that aid in the
development of applications, but the .NET Framework is more than just a set of classes.
The .NET Framework is targeted by compilers producing a wide variety of applications,
including everything from small components that run on handheld devices to large
Microsoft ASP.NET applications that span web farms, where multiple web servers act
together to improve the performance and fault tolerance of a web site. The .NET
framework is responsible for providing a basic platform that these applications can
share. This basic platform includes a runtime, a set of services that oversee the
execution of applications. A key responsibility of the runtime is to manage execution so
that software written in different programming languages uses classes and other types safely.

Microsoft .Net Framework Architecture

Microsoft's .NET Framework is comprised of two main components: the Common
Language Runtime (CLR) and the .NET Framework class libraries. The CLR is the real
foundation of the .NET Framework. It is the execution engine for all .NET applications.
Every target computer requires the CLR to successfully run a .NET application that uses
the .NET Framework.

The main features of CLR include:


 Automatic Memory Management
 Thread Management
 Code Compilation & Execution
 Code Verification
 High level of security
 Remoting
 Structured Exception Handling
 Interoperability between Managed and Unmanaged code.
 Integration with Microsoft Office System

All .NET applications are compiled into Microsoft Intermediate Language (MSIL)
code. When executed on the CLR, MSIL is converted into native machine code specific to
the operating platform. This conversion is done by a just-in-time (JIT) compiler. The code
executed by the CLR is called Managed Code. This code is type-safe and thoroughly
checked by the CLR before being deployed. The .NET runtime also provides a facility to
incorporate existing COM components and DLLs into a .NET application. Code that is
not controlled by the CLR is called Unmanaged Code.
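
As a small illustration of the managed/unmanaged boundary, the sketch below calls an unmanaged Win32 function from C# through platform invoke; the attribute and function are standard, but the example itself is mine and not part of the original project.

using System;
using System.Runtime.InteropServices;

class InteropDemo
{
    // MessageBox lives in user32.dll, i.e. in unmanaged code that the CLR
    // does not control. DllImport asks the runtime to marshal the call
    // across the managed/unmanaged boundary.
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        // Managed code invoking the unmanaged function.
        MessageBox(IntPtr.Zero, "Hello from managed code", "Interop demo", 0);
    }
}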

The .NET Framework further comprises the Common Type System (CTS) and the
Common Language Specification (CLS). The CTS defines the common data types used by
.NET programming languages; it tells you how to represent characters and numbers in a
program. The CLS represents the guidelines defined for the .NET Framework. These
specifications are normally used by compiler developers and are available for all
languages which target the .NET Framework.

[Figure: .NET architecture]

Common Language Specification

To fully interact with other objects regardless of the language they were
implemented in, objects must expose to callers only those features that are common to
all the languages they must interoperate with. For this reason, the Common Language
Specification (CLS), which is a set of basic language features needed by many
applications, has been defined. The CLS rules define a subset of the Common Type
System; that is, all the rules that apply to the common type system apply to the CLS,
except where stricter rules are defined in the CLS. The CLS helps enhance and ensure
language interoperability by defining a set of features that developers can rely on to be
available in a wide variety of languages. The CLS also establishes requirements for CLS
compliance; these help you determine whether your managed code conforms to the CLS
and to what extent a given tool supports the development of managed code that uses
CLS features.

If your component uses only CLS features in the API that it exposes to other code
(including derived classes), the component is guaranteed to be accessible from any
programming language that supports the CLS. Components that adhere to the CLS rules
and use only the features included in the CLS are said to be CLS-compliant components.

The CLS was designed to be large enough to include the language constructs that
are commonly needed by developers, yet small enough that most languages are able to
support it. In addition, any language construct that makes it impossible to rapidly
verify the type safety of code was excluded from the CLS, so that all CLS-compliant
languages can produce verifiable code if they choose to do so.
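
For instance, a hypothetical C# component can be checked against the CLS rules with the standard CLSCompliant attribute; the class below is only an illustration, not code from this project.

using System;

// Asking the compiler to verify that every public member of this assembly
// stays within the CLS subset.
[assembly: CLSCompliant(true)]

public class Calculator
{
    // Fine: int (System.Int32) is part of the CLS, so any CLS language
    // (C#, VB.NET, ...) can call this method.
    public int Add(int a, int b)
    {
        return a + b;
    }

    // Uncommenting this would draw compiler warning CS3002: unsigned
    // integers are outside the CLS subset, so the member would not be
    // usable from every .NET language.
    // public uint AddUnsigned(uint a, uint b) { return a + b; }
}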

Common Language Runtime

The Common Language Runtime (CLR) is the virtual machine component of
Microsoft's .NET initiative. It is Microsoft's implementation of the Common Language
Infrastructure (CLI) standard, which defines an execution environment for program
code. The CLR runs a form of byte code called Microsoft Intermediate Language
(MSIL), Microsoft's implementation of the Common Intermediate Language.
Developers using the CLR write code in a high level language such as C#. At
compile-time, a .NET compiler converts such code into MSIL (Microsoft Intermediate
Language) code. At runtime, the CLR's just-in-time compiler (JIT compiler) converts the
MSIL code into code native to the operating system. Alternatively, the MSIL code can be
compiled to native code in a separate step prior to runtime. This speeds up all later runs
of the software as the MSIL-to-native compilation is no longer necessary.
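
The sketch below, which is illustrative rather than taken from the project, makes the JIT step visible by forcing the CLR to compile a method's MSIL to native code before its first call, using the standard RuntimeHelpers.PrepareMethod API.

using System;
using System.Reflection;
using System.Runtime.CompilerServices;

class JitDemo
{
    // The C# compiler turns this method into MSIL at compile time.
    static int Add(int a, int b)
    {
        return a + b;
    }

    static void Main()
    {
        // Ask the CLR's JIT compiler to translate Add's MSIL into native
        // machine code now, rather than lazily on the first call (the same
        // idea as pre-compiling MSIL in a separate step before runtime).
        MethodInfo add = typeof(JitDemo).GetMethod(
            "Add", BindingFlags.NonPublic | BindingFlags.Static);
        RuntimeHelpers.PrepareMethod(add.MethodHandle);

        Console.WriteLine(Add(2, 3)); // executes the JIT-compiled native code
    }
}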

Although some other implementations of the Common Language Infrastructure
run on non-Windows operating systems, the CLR runs on Microsoft Windows operating
systems.

The virtual machine aspect of the CLR allows programmers to ignore many
details of the specific CPU that will execute the program. The CLR also provides other
important services, including the following:

• Memory management
• Thread management
• Exception handling
• Garbage collection
• Security

Introduction to ASP.NET

C# is a powerful but simple .NET language aimed primarily at developers creating
web applications for the Microsoft .NET platform. It inherits many of the best features
of C++, but with some of the inconsistencies and anachronisms removed, resulting in a
cleaner and more logical language. C# also contains a variety of useful innovations that
accelerate application development, especially when used in conjunction with
Microsoft Visual Studio .NET.

The Common Language Runtime provides the services that are needed for
executing any application that’s developed with one of the .NET languages. This is
possible because all of the .NET languages compile to a common Intermediate Language.
The CLR also provides the common type system that defines the data types that are
used by all the .NET languages. That way, you can use the same data types regardless of
which .NET language you’re using to develop your application.
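
A one-line consequence of the common type system is that the C# keyword int is merely an alias for the shared CTS type System.Int32, as this small illustrative snippet shows.

using System;

class CtsDemo
{
    static void Main()
    {
        int keywordAlias = 42;       // C# keyword...
        System.Int32 ctsType = 42;   // ...and the CTS type it maps to

        // Both variables have exactly the same runtime type, which is why
        // a VB.NET Integer and a C# int can be exchanged freely.
        Console.WriteLine(keywordAlias.GetType() == ctsType.GetType()); // True
    }
}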
 Microsoft ASP.NET (C#)

Microsoft ASP.NET (C#) is one of the most well-known environments for front-end
programming. It provides a ‘Rapid Application Development’ environment to the
developers. It provides support for the ODBC (Open Database Connectivity) and RDO
data access methods, which can be used as powerful development tools. It also
supports the ActiveX Data Objects (ADO) access method, which is useful when creating a
web page and writing DHTML applications. Its tools let any programmer build whatever
attractive screens he imagines. It is among the most widely used and most flexible
environments. One can also set the desired properties of the various controls, such as the
textboxes and labels used on the screens, and there is a facility to create menus.

Microsoft ASP.NET (C#) is based on Visual Studio .NET, which comes in several varieties.

Microsoft, realizing that ASP possessed some significant shortcomings,
developed ASP.NET. ASP.NET is a set of components that provide developers with a
framework with which to implement complex functionality. Two of the major
improvements of ASP.NET over traditional ASP are scalability and availability. ASP.NET
is scalable in that it provides state services that can be utilized to manage session
variables across multiple web servers in a server farm. Additionally, ASP.NET possesses
a high-performance process model that can detect application failures and recover from
them. We use the fundamentals of programming with C# using Visual Studio .NET and
the .NET Framework.

The project is the starting point for authoring applications, components, and
services in Visual Studio .NET 2008. It acts as a container that manages your source code,
data connections, and references. A project is organized as part of a solution, which can
contain multiple projects that are independent of each other. A C# project file has the
.csproj extension, whereas a solution file has the .sln extension.

In order to write code against an external component, your project must first
contain a reference to it. A reference can be made to the following types of component.
(1) .NET class libraries or assemblies
(2) COM components
(3) Other class libraries of projects in the same solution
(4) XML web services

Features of ASP.NET:
(1) Component Infrastructure.
(2) Language Integration.
(3) Internet Interoperation.
(4) Simple Development.
(5) Simple Deployment.
(6) Reliability.
(7) Security
Introduction to Microsoft SQL Server

Microsoft SQL Server enhances the performance, reliability, and scalability
provided by earlier releases of SQL Server by making the processes of developing
applications, managing systems, and replicating data easier than ever.

All data processing involves the operations of storing and retrieving data. A
database, such as Microsoft SQL Server, is designed as the central repository for
all the data of an organization. The crucial nature of data to any organization underlines
the importance of the method used to store it and enable its later retrieval.

Microsoft SQL Server uses features similar to those found in other databases and
some features that are unique. Most of these additional features are made possible by
SQL Server’s tight integration with the Windows NT operating system. SQL Server
contains the data storage options and the capability to store and process the same
volume of data as a mainframe or minicomputer.

Like most mainframe or minicomputer databases, SQL Server is a database that
has seen an evolution from the introduction of databases in the mid-1960s until today.
Microsoft’s SQL Server is founded on the mature and powerful relational model,
currently the preferred model for data storage and retrieval.

Unlike mainframe and minicomputer databases, a server database is accessed by
users, called clients, from other computer systems rather than from input/output
devices such as terminals. Mechanisms must be in place for SQL Server to solve the
problems that arise from the access of data by perhaps hundreds of computer
systems, each of which can process portions of the database independently from the
data on the server. Within the framework of a client/server database, a server database
also requires integration with the communication components of the server in order to
enable connections with client systems.
SQL Server also contains many of the front-end tools of PC databases that
traditionally haven’t been available as part of either mainframe or minicomputer
databases. In addition to using a dialect of Structured Query Language (SQL), GUI
applications can be used for the storage, retrieval, and administration of the database.

Data Access Layer:

When working with data, one option is to embed the data-specific logic directly into
the presentation layer. This may take the form of writing ADO.NET code in the ASP.NET
page's code portion, or of using the SqlDataSource control from the markup portion. In
either case, the data access logic (creating a connection to the database; issuing SELECT,
INSERT, UPDATE, and DELETE commands; and so on) should be located in the DAL. The
presentation layer should not contain any references to such data access code, but should
instead make calls into the DAL for any and all data requests. I have created a data access
layer for the Fill() and Get() methods. Get is done in two ways.

 GetStory(), which returns information about a success story, i.e. users who met
through this site.

 GetMessage(), which returns information about a message for a particular type
of membership.

These methods, when invoked, will connect to the database, issue the appropriate
query, and return the results. These methods could simply return a DataSet or
DataReader populated by the database query, but ideally these results should be returned
using strongly-typed objects.
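
As an illustration, a hand-written equivalent of such a DAL method might look like the sketch below; the connection string, table name, and column names are placeholders I have assumed, since the project's real schema lives in its typed DataSet.

using System.Data;
using System.Data.SqlClient;

public class MessageDataAccess
{
    // Placeholder connection string; the real one points at the project's
    // SQL Server 2005 database.
    private const string ConnStr =
        "Data Source=.;Initial Catalog=ProjectDb;Integrated Security=True";

    // Counterpart of the GetMessage() method described above: connect,
    // issue the query, and hand a filled DataTable back to the caller.
    public DataTable GetMessage(string membershipType)
    {
        using (SqlConnection conn = new SqlConnection(ConnStr))
        using (SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT * FROM Message WHERE MembershipType = @type", conn))
        {
            adapter.SelectCommand.Parameters.AddWithValue("@type", membershipType);
            DataTable table = new DataTable("Message");
            adapter.Fill(table); // the Fill() pattern mentioned above
            return table;
        }
    }
}

The presentation layer can then bind the returned DataTable to a grid or repeater without ever touching ADO.NET itself.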

A strongly-typed DataTable will have each of its columns implemented as properties,
resulting in code that looks like: dataTable.Rows[index].ColumnName.

The figure illustrates the workflow between the different layers of an application that
uses Typed DataSets.
To retrieve the data to populate the DataTable, I used a TableAdapter class,
which functions as my Data Access Layer. For our story DataTable, the TableAdapter
contains the methods, Getstory(), Getstorybyid(memberid), and so on, that I can
invoke from the presentation layer. The DataTable’s role is to serve as the strongly-
typed object used to pass data between the layers.

I have a Typed DataSet with a single DataTable (message) and strongly-typed
DataAdapter classes (FmsgTableAdapter, PmsgTableAdapter) with a GetMessage() method.

In my application I have used a pattern for inserting, updating, and deleting data.
This pattern involves creating methods that, when invoked, issue an INSERT, UPDATE,
or DELETE command to the database that operates on a single database record. Such
methods are typically passed a series of scalar values (integers, strings, Booleans,
DateTimes, and so on) that correspond to the values to insert, update, or delete.
The pattern uses the TableAdapter’s InsertCommand, UpdateCommand, and
DeleteCommand properties to issue the INSERT, UPDATE, and DELETE commands to the
database.
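
A minimal sketch of that insert pattern, again with assumed table and column names, is shown below; the TableAdapter's generated InsertCommand does essentially the same work.

using System.Data.SqlClient;

public class MessageCommands
{
    // Same placeholder connection string as in the earlier DAL sketch.
    private const string ConnStr =
        "Data Source=.;Initial Catalog=ProjectDb;Integrated Security=True";

    // Scalar values in, one INSERT against a single record out.
    public int InsertMessage(string membershipType, string body)
    {
        using (SqlConnection conn = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Message (MembershipType, Body) VALUES (@type, @body)",
            conn))
        {
            cmd.Parameters.AddWithValue("@type", membershipType);
            cmd.Parameters.AddWithValue("@body", body);
            conn.Open();
            return cmd.ExecuteNonQuery(); // rows affected: 1 on success
        }
    }
}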
[Figure: each insert, update, and delete request is sent to the database immediately]
5. Software and Hardware
Requirement Specifications

System Implementation

The system was initially implemented on only one computer, on a trial basis. First,
dummy data was fed in and testing was done. All the validations and constraints in the
system were checked and tested against the dummy data so that the system will not give
any errors in future. It satisfies the needs of the users.

After the successful & smooth running, the system is ready for the final
installation or implementation on other computers.

The system was implemented in parallel with the old system, to test whether the
system is able to perform the required tasks with the required accuracy. After about 15
days, the new system was completely in use.

 Hardware and software requirements

Hardware

 Pentium 2.90 GHz or higher microprocessor
 320 GB or more disk space
 4 GB RAM
 DVD drive
 Mouse
 Keyboard
 Printer

Software
 Microsoft Word (MS Agent), MS Visio, .NET Framework, MS SQL Server Express
Edition.

Windows platform

Any Windows operating system

Details of Hardware and Software used

Details of Hardware Used

 Pentium 2.90 GHz
 320 GB hard disk
 4 GB RAM

Software Used

 Application package used: Microsoft ASP.NET (C#)
 Database package: Microsoft SQL Server 2005
 Other tools: Microsoft Visio (UML modeling)
 MS Word

Windows Platform

 Operating System: Windows 2000 / NT / Windows XP


6. Analysis
Data Flow Diagram

After the conclusion of the interviews with officials and the observations from the
Preliminary Investigation, the Feasibility Study and the Software Requirement
Specifications were signed off. I then had to derive the Functional Specifications using
Data Flow Diagram techniques in order to start designing the system.

 What is a Data Flow Diagram?
 Data flow diagrams illustrate how data is processed by a system in terms of
inputs and outputs.

 Data Flow Diagram Notations
 You can use two different types of notation on your data flow diagrams:
Yourdon & Coad or Gane & Sarson.
 Process: A process transforms incoming data flow into outgoing data flow.

[Symbol: Yourdon & Coad process notation]

[Symbol: Gane & Sarson process notation]


Data Flow Diagram Layers

 Draw data flow diagrams in several nested layers. A single process node on a
high-level diagram can be expanded to show a more detailed data flow
diagram. Draw the context diagram first.

 The nesting of data flow layers

 Context Diagrams: A context diagram is a top-level (also known as Level 0)
data flow diagram. It only contains one process node (process 0) that
generalizes the DFD.
Data Flow Diagram
 Data Model(Schema Diagram)
 Data Dictionary
 No. of modules
 Process Logic
Screen Shot
7. Coding
8. Code Efficiency

 The efficiency of code depends mainly on how intelligently the coding is done.
There is no specific technique by which anyone can say that this code is
efficient and that one is bad; it all depends on how efficiently the
programmer uses his intellect. The other most important thing is the way
one handles the language used to develop the code, and for that
one has to have proper knowledge of the language.

 But still there are some common techniques and structures; if anyone
follows them, then his/her code can become quite efficient.

 E.g. variable naming conventions, properly scoped variables, and the use of
control structures and looping structures in as easy and simple a way as
possible (see the sketch after this list).

 Write code in a proper order and sequence; the order and sequence again
depend on the programmer and the situation.
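
As a hedged example of those conventions (the method and names are invented for illustration), compare the descriptive naming, tight variable scope, and simple loop below with a version full of single-letter, class-level variables.

using System.Collections.Generic;

public class PriceCalculator
{
    // Descriptive name, parameters instead of globals, and the tightest
    // possible scope for the accumulator variable.
    public decimal CalculateTotalPrice(IList<decimal> itemPrices)
    {
        decimal total = 0m; // scoped to the method, not the class
        foreach (decimal price in itemPrices) // simplest loop that does the job
        {
            total += price;
        }
        return total;
    }
}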
9. Optimization of code

If coding is done efficiently, then the code should also be used optimally, i.e. the best
use should be made of the code.

 What is the reason behind this optimization?
 Optimization means making the best or most effective use of something.
 E.g. an efficient function may be available in a given language or tool, but it
is the optimum use of that function, tool, or language that makes the whole
program simpler, more effective, and more user-friendly.
 Now, how to optimize? Again it mostly depends on the programmer and on
how intelligently he does all those things, but there are still some basic rules
for making our code optimal.
 The first thing is to develop code which is general, i.e. code that is not
purposefully developed from only one angle, i.e. only for the current system.
The programmer must be aware of all the general usages of that code; at least
he/she has to look at most of the probable events, conditions, or
specifications that can occur. The main usages of any function can also be
clarified prior to writing it; only then can one develop the general-purpose
code which can be called optimized code. So, ultimately, code must be
reusable.

 Second is modularization, i.e. the most important thing for optimization. If the
total code is distributed into proper modules prior to the start of the actual
coding, then it is a better way of coding. A general module, i.e. a standard
module, is used.
 The third thing is capabilities: the utilities and facilities which are provided by
the language, tool, or environment in which the programmer is developing the
code must be properly known.

 An ActiveX control named EdgeCtl.ocx is used.
 The most important property of an OCX is its reusability.
 MDI is used, so it automatically optimizes the code.
 The optimization goal is achieved by combining the OCX and MDI.
10. Validation check

1. All text fields that take integers as input will be validated so that only digits
are allowed (see the sketch after this list).

2. All text fields that take alphanumeric input will be validated so that only
alphabetic characters are taken as input.

3. All fields that are mapped to a primary key will be validated so that NULL is
never stored in those required fields.

4. The maximum length of every text field is set according to the mapped database
field, so that the characters entered do not exceed the maximum length.

5. Before storing the data, all optional fields that are left empty are stored as NULL.

6. All date field values are stored in the “dd-MMM-yyyy” format and will be
consistent throughout the system.

7. All database fields that hold a single value as a flag will contain a digit.

8. Primary keys are IDENTITY columns, which makes them auto-increment value
fields.

9. Data stored in reference tables/columns is validated through a visual
graphical component like a combo box, list view, or tree view, which keeps the
foreign key value consistent and guaranteed to be present in the parent table.
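
A small illustrative helper covering checks 1, 4, and 6 might look as follows; the method names are assumptions of mine, since the project implements these rules through its form-level validators.

using System;
using System.Text.RegularExpressions;

public static class InputValidation
{
    // Check 1: integer fields must contain digits only.
    public static bool IsDigitsOnly(string input)
    {
        return input != null && Regex.IsMatch(input, @"^\d+$");
    }

    // Check 4: enforce the maximum length of the mapped database column.
    public static bool FitsColumn(string input, int maxLength)
    {
        return input != null && input.Length <= maxLength;
    }

    // Check 6: store every date in the consistent "dd-MMM-yyyy" format,
    // e.g. new DateTime(2014, 4, 5) becomes "05-Apr-2014".
    public static string FormatDate(DateTime value)
    {
        return value.ToString("dd-MMM-yyyy");
    }
}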
11. Testing

To examine critically is called testing. Whatever we have developed, is it working
properly, how correctly has the development been done, and what are the errors? To
answer these types of questions, testing is required.
First of all, the project is debugged using the traditional breakpoint facility.
Debugging means the process of isolating and correcting the causes of known errors.
Various testing methods are then used to test the system.

System Testing

 A system is tested for online responses, volume of transactions, stress,
recovery from failure, and usability. System testing involves two kinds of
activities: integration testing and acceptance testing.

Integration Testing

 Bottom-up integration is the traditional strategy used to integrate the
components of a software system into a functioning whole. Bottom-up
integration consists of unit testing, followed by subsystem testing, followed
by testing of the entire system. Unit testing has the goal of discovering errors
in the individual modules of the system.
Unit Testing

 A program unit is usually small enough that the programmers who developed
it can test it in great detail, and certainly in greater detail than will be possible
when the unit is integrated into an evolving software product.
 There are four categories of tests a programmer will typically perform on a
program unit:

1. Functional Tests: specify operating conditions, input values, and expected
results. For example, for a function Numeric() written to check whether data is
numeric or not, a null argument can be passed as one of the test inputs.

2. Performance Tests: should be designed to verify response time, execution
time, throughput, primary and secondary memory utilization, and traffic rates
on data channels and communication links. Checking that a query takes 5 seconds
to display results is a test of response time. Execution time is the time taken by
the CPU to execute a program. Throughput is the rate at which data gets
transferred from one data source to a destination. Primary and secondary
memory utilization needs to be optimized. Traffic rates on data channels and
communication-link testing are applicable for networks.

3. Stress Tests: are designed to overload a system in various ways. The
purpose of such tests is to determine the limitations of the system. For example,
during multiple query execution the available memory can be reduced to see
whether the program is able to handle the situation.

4. Structural Tests: are concerned with examining the internal processing logic
of a software system. For example, if a function is responsible for tax
calculation, the verification of its logic is a structural test.
 To test the code there are two very popular testing methods, the white-box and
black-box methods described earlier; a small functional-test sketch follows below.
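
The sketch below shows what a hand-rolled functional test for the Numeric() example above could look like, including the null-argument case; the helper itself is assumed, and a real project might use a test framework such as NUnit instead.

using System;

public static class NumericTests
{
    // Assumed implementation of the Numeric() helper discussed above.
    static bool Numeric(string value)
    {
        double ignored;
        return value != null && double.TryParse(value, out ignored);
    }

    public static void Main()
    {
        Check(Numeric("123"), "digits are numeric");
        Check(!Numeric("abc"), "letters are not numeric");
        Check(!Numeric(null), "a null argument is handled, not crashed on");
    }

    static void Check(bool condition, string description)
    {
        Console.WriteLine((condition ? "PASS: " : "FAIL: ") + description);
    }
}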

Database Testing

A modern Web application does much more than present static content objects. In many
application domains, Web applications interface with sophisticated database
management systems and build dynamic content objects that are created in real time
using the data acquired from a database.

Database testing for Web applications is complicated by a variety of factors:

1) The original client-side request for information is rarely presented in the form that
can be input to a database management system.

2) The database may be remote to the server that houses the Web application.

3) Raw data acquired from the database must be transmitted to the Web application
server and properly formatted for subsequent transmittal to the client.

4) The dynamic content objects must be transmitted to the client in a form that can be
displayed to the end user.
[Figure: Layers of interaction: client layer (user interface); server layer (WebApp, data
transformation, data management); database layer (data access); database]

In the figure, testing should ensure that:

1. Valid information is passed between the client and server from the interface layer.

2. The Web application processes scripts correctly and properly extracts or formats
user data.

3. User data is passed correctly to a server-side data transformation function that
formats the appropriate queries.

4. Queries are passed to a data management layer that communicates with the
database access routines.

Interface Testing

The interface design model is reviewed to ensure that the generic quality criteria
established for all user interfaces have been achieved and that application-specific
interface design issues have been properly addressed.

Interface testing strategy

The overall strategy for interface testing is to (1) uncover errors related to specific
interface mechanisms and (2) uncover errors in the way the interface implements the
semantics of navigation, Web application functionality, or content display. To
accomplish this strategy, a number of objectives must be achieved:

Interface features are tested to ensure that design rules, aesthetics, and related visual
content are available for the user without error. Individual interface mechanisms are
tested in a manner that is analogous to unit testing; for example, tests are designed to
exercise all forms, client-side scripting, and dynamic HTML. Each interface mechanism is
tested within the context of a use-case or NSU for a specific user category, and the
interface is tested within a variety of environments to ensure that it will be compatible.

Compatibility Testing
Web applications must operate within environments that differ from one
another. Different computers, display devices, operating systems, browsers, and
network connection speeds can have a significant effect on Web application operation.
Different browsers sometimes produce slightly different results, regardless of the
degree of HTML standardization within the Web application.

The Web engineering team derives a series of compatibility validation tests,
derived from the existing interface tests, navigation tests, performance tests, and
security tests.

12. Implementation & Maintenance


 After testing, the system will be implemented at the actual site.

 Therefore, the implementation team should be provided with a well-defined set of
software requirements, an architectural design specification, and a detailed
design description.

 After that user training schedule will be arranged.

 The whole system itself contains a HELP MENU and HELP TOPICS, so no major
problems should be encountered.

 After three or four months, the first actual feedback will be taken.

 From that feedback, other necessary tips and points will be discussed.

 Maintenance is free for one year from the system implementation year; after
that it depends on the management to continue or discontinue it. The
maintenance working schedule will be discussed after the three-month evaluation.
System Security Measures

 At the back end, very powerful security is provided by SQL Server 2005.

 Without a proper username and password, no one can enter the database.

 Even when the username and password are correct, that user can perform only
those operations which have been granted by the administrator.

 On the front-end side, security is provided by a unique username and password,
known only to the user and the administrator; no one else knows it.

 So anybody who does not know the password and username cannot use the system.

 Account creation for a new user is done by the administrator.

 Star Placement Services does not want very high security in this version, so
high-level security is not implemented; that will be implemented in the next
version. A sketch of the username/password check is given after this list.
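
A hedged sketch of that username/password check follows; the table and column names are assumptions, and the parameterized query is what keeps the login screen safe from SQL injection.

using System.Data.SqlClient;

public class LoginCheck
{
    // Same placeholder connection string as in the earlier DAL sketches.
    private const string ConnStr =
        "Data Source=.;Initial Catalog=ProjectDb;Integrated Security=True";

    // Returns true only when the username/password pair exists, i.e. when
    // the administrator has created the account.
    public bool IsValidUser(string userName, string password)
    {
        using (SqlConnection conn = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT COUNT(*) FROM Users WHERE UserName = @user AND Password = @pass",
            conn))
        {
            cmd.Parameters.AddWithValue("@user", userName);
            cmd.Parameters.AddWithValue("@pass", password);
            conn.Open();
            return (int)cmd.ExecuteScalar() > 0;
        }
    }
}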
Cost Estimation

Here, I have roughly rounded off the cost estimation:

 This is just a rough estimate, so in some cases it may be more or less than the
actual cost.

 In total there are 25 forms in the software, so designing and coding cost around
Rs. 15,000, plus reports at Rs. 15,000, plus the database at Rs. 15,000, for a total
of Rs. 45,000.

 The costs of extra reports, utilities, original software, and hardware are not
estimated or included in this cost estimation.


13. Project Schedule

No   Project Goals                  Starting Date   Ending Date    Days
1    Analysis                       01-APR-2014     20-MAY-2014    50
2    Feasibility Study              21-MAY-2014     15-MAY-2014    25
3    Soft. Eng. Para.               16-MAY-2014     31-MAY-2014    15
4    Requirement Spec               01-AUG-2014     03-AUG-2014    2
5    Design                         04-AUG-2014     04-AUG-2014    30
6    Coding                         06-AUG-2014     08-SEPT-2014   32
7    Validation Checks              09-SEPT-2014    19-SEPT-2014   10
8    Testing                        20-SEPT-2014    27-SEPT-2014   7
9    Implementation & Maintenance   28-SEPT-2014    30-SEPT-2014   2
10   Documentation                  (parallel work done with the whole schedule)   2
Chart

Pert Chart

[Pie chart: Analysis 27%, Coding 21%, Design 16%, Feasibility 13%, Soft. Par. 8%,
Testing 8%, Impl. 6%, Val. 2%, Req. 1%]
14. Scope of Future Application
15. Bibliography
Books

 ASP.NET (Black Book)
 Professional ASP.NET (Wrox Publications)
 C# by Vijay Mukhi
 ASP.NET: The Complete Reference
 Software Engineering Concepts by Roger S. Pressman
 UML in a Nutshell by Alhir
 Fundamentals of Software Engineering by Rajib Mall
 SQL Server 2005 (Wrox Publications)

Web Sites

 www.gujaratimatrimonial.com
 www.shubhlagnam.com
 www.codeproject.com
 www.ranasamaj.com
 www.jeevansathi.com
 www.shadi.com
 www.google.co.in