
Result management system

The Result Management System (RMS) enables an organization to analyse, edit and publish assessment results in a formal and secure way. The RMS enables an organization to:
- Manage assessment results in a professional and secure way and convert them into meaningful data for the organization (e.g. grades, certificates and other measures of achievement)
- Analyse the database related to the RMS
- Make results visible to students and calculate results automatically
- Analyse the calculated results and produce statistical data so that correct information is obtained
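The automatic result calculation mentioned above can be sketched in a few lines. This is an illustrative sketch only: the function names, the grade bands and the subjects are hypothetical, not part of the actual RMS.

```python
# Hypothetical sketch of automatic result calculation; the grade band
# boundaries below are illustrative, not the RMS's actual scale.
def grade(percentage: float) -> str:
    """Map a percentage score to a letter grade."""
    bands = [(90, "A"), (75, "B"), (60, "C"), (45, "D")]
    for cutoff, letter in bands:
        if percentage >= cutoff:
            return letter
    return "F"

def compute_result(marks: dict[str, float], max_marks: float = 100.0) -> dict:
    """Total a student's subject marks and attach an overall grade."""
    total = sum(marks.values())
    percentage = 100.0 * total / (max_marks * len(marks))
    return {"total": total, "percentage": round(percentage, 2),
            "grade": grade(percentage)}

print(compute_result({"Maths": 82, "Physics": 74, "English": 91}))
```

Calculating the grade in code rather than by hand is exactly what removes the manual-calculation errors discussed in the next section.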

Problem with current system


Problems with the existing system:
- A lot of paperwork is involved in recording the results database.
- Students have to find their result by physically inspecting a printed list.
- Results are calculated manually, which increases the chance of errors.
- Even after a result is calculated and seen by the students, the follow-up process is long and difficult.

Project monitoring system


The waterfall model: a sequence of activities executed from top to bottom, where each activity is validated/tested before proceeding to the next step. Activities: system feasibility, system analysis, system design, program design, coding, testing, installation, operation and support, maintenance, retirement.

Strengths of the waterfall model


- Good utilization of staff
- Good project control
- Easy to use
- Easy to understand
- Provides structure to inexperienced staff
- Milestones are well understood
- Sets requirement stability
- Good for management control
- Works well when quality is more important than cost and schedule

Phases of waterfall model


Definition/study analysis: during this phase research is conducted, which includes brainstorming about the software: what it is going to be and what purpose it is going to fulfil.

Basic design: if the first phase is completed successfully and a well-thought-out plan for the software development has been made, the next step is to formulate the basic design of the software on paper.

Technical design/detailed design: after the basic design is approved, a more elaborate design can be planned. Here the functions of each of the programs are decided and the engineering units are defined, e.g. modules and programs.

Construction/implementation: in this phase the source code of the programs is written.

Testing: in this phase the whole design and its construction are put under test to check functionality. If there are any errors, they will surface at this point of the process.

Integration: in this phase the company puts the system into use, after it has been tested successfully.

Feasibility study
A feasibility study is an evaluation or analysis of the potential impact of a proposed project or program, conducted to assist decision-makers in determining whether or not to implement it. It is based on extensive research on both current practice and the proposed project/program and its impact on the organization's operation. The study contains extensive data on financial and operational impact and includes the advantages and disadvantages of both the current situation and the proposed plan, so that decision-makers can make the decision that is in the organization's best interest. The research, conducted in a non-biased manner, provides the data upon which to base that decision.

Focus of a feasibility study: Is the product concept viable? Will it be possible to develop a product that matches the project's vision statement? What are the current estimated cost and schedule for the project? Is the business model for the software justified when the current cost and schedule estimates are considered? Have the major risks to the project been identified, and can they be surmounted? Is the software development plan complete and adequate to support further development work? The work done during the first 10-20 percent of the project should answer these questions well enough to give the client or top management the information needed to decide whether to fund the rest of the project.

Breaking a software project into a feasibility study phase helps a software organization in at least three ways:

Some people view any cancelled project as a failure, but a project cancelled at the 10-20 percent point, rather than near completion, should not be considered one. Cancelling a project that would ultimately go nowhere when it is only 10-20 percent complete, instead of 80-90 percent complete, is always a good decision. A feasibility study can be conducted with marginal funds; after the final decision about the project, a more accurate and realistic estimate can be prepared, which helps reduce cost variation. Requiring the project team to complete 10-20 percent of the project before requesting funding for the rest forces a focus on the upstream activities that are critical to a project's success. Otherwise these activities are often ignored, and the damaging consequences of such neglect do not become apparent until late in the project.

Top-down design principle


The essential idea of top-down design is that the specification is viewed as describing a black box for the program. The designer decides how the internals of the black box are constructed from smaller black boxes, and specifies those inner black boxes; this process is then repeated for the inner boxes, and so on, until the black boxes can be coded directly. A top-down design approach starts by identifying the major modules of the system, decomposing them into their lower-level modules and iterating until the desired level of detail is achieved. This is stepwise refinement: starting from an abstract design, in each step the design is refined to a more concrete level, until a level is reached where no more refinement is needed and the design can be implemented directly. Most design methodology is based on this approach, and it is suitable when the specifications are clear and development starts from scratch. A drawback is that if coding of a part starts soon after its design, nothing can be tested until all the subordinate modules are coded.
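Stepwise refinement can be illustrated with a small sketch. All names here are hypothetical: the top-level routine is written first against the interfaces of the inner black boxes, and each inner box is then refined in the same way.

```python
# Illustrative sketch of top-down stepwise refinement: the top-level
# routine is written first against the *interfaces* of lower-level
# modules; each inner black box is then refined in turn.

def publish_results(raw_scores):          # the outermost black box
    validated = validate_scores(raw_scores)
    graded = assign_grades(validated)
    return format_report(graded)

# First refinement: the inner boxes, each simple enough to code directly.
def validate_scores(scores):
    return [s for s in scores if 0 <= s <= 100]

def assign_grades(scores):
    return [(s, "PASS" if s >= 40 else "FAIL") for s in scores]

def format_report(graded):
    return "\n".join(f"{score:3d}: {verdict}" for score, verdict in graded)

print(publish_results([55, 101, 38, 72]))
```

Note how the top routine could be reviewed (and even tested with stubs) before any of the inner boxes existed, which is the point of the approach.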

Structured design
Structured Design (SD) is concerned with the development of modules and the synthesis of these modules in a so-called "module hierarchy". In order to design an optimal module structure and interfaces, two principles are crucial: cohesion, which is "concerned with the grouping of functionally related processes into a particular module", and coupling, which relates to "the flow of information, or parameters, passed between modules. Optimal coupling reduces the interfaces of modules, and the resulting complexity of the software". Page-Jones (1980) proposed his own approach, which consists of three main objects: structure charts, module specifications and a data dictionary. The structure chart aims to show "the module hierarchy or calling sequence relationship of modules". There is a module specification for each module shown on the structure chart; module specifications can be composed of pseudo-code or a program design language. The data dictionary is like that of structured analysis. At this stage in the software development lifecycle, after analysis and design have been performed, it is possible to automatically generate data type declarations and procedure or subroutine templates.
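Cohesion and coupling can be made concrete with a small sketch. The functions below are hypothetical examples, not part of any described system: each groups one related task (high cohesion), and data flows between them only through parameters and return values (low coupling), rather than through shared global state.

```python
# Hypothetical sketch of cohesive modules with low coupling.

# High cohesion: this unit does one thing - parse a line of marks.
def parse_marks(line: str) -> list[int]:
    return [int(tok) for tok in line.split(",")]

# High cohesion: this unit does one thing - summarise a list of marks.
def summarise(marks: list[int]) -> dict:
    return {"count": len(marks), "mean": sum(marks) / len(marks)}

# Low coupling: the interface between the two units is a single
# explicit parameter, not a shared global or a hidden file.
print(summarise(parse_marks("60,70,80")))
```

Keeping the interface this narrow is what "optimal coupling reduces the interfaces of modules" means in practice.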

System design
Systems design is the process of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. One could see it as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering.[1][2] If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development,"[3] then design is the act of taking the marketing information and creating the design of the product to be manufactured. Systems design is therefore the process of defining and developing systems to satisfy specified requirements of the user. Until the 1990s systems design had a crucial and respected role in the data processing industry. In the 1990s standardization of hardware and software resulted in the ability to build modular systems. The increasing importance of software running on generic platforms has enhanced the discipline of software engineering.

System testing and implementation


In computer science, an implementation is a realization of a technical specification or algorithm as a program, software component, or other computer system through programming and deployment. Many implementations may exist for a given specification or standard. For example, web browsers contain implementations of World Wide Web Consortium-recommended specifications, and software development tools contain implementations of programming languages.

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black-box testing, and as such should require no knowledge of the inner design of the code or logic.[1] As a rule, system testing takes as its input all of the "integrated" software components that have successfully passed integration testing, together with the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and within the system as a whole.

Types of testing
- Functional testing
- Structural testing

Why testing?
Although software testing is itself an expensive activity, launching software without testing may lead to costs potentially much higher than those of testing, especially in systems where human safety is involved; no one would think of allowing autopilot software into service without the most rigorous testing. In so-called life-critical systems, economics must not be a prime consideration when deciding whether the product should be released to the customer. In most systems, however, it is the cost factor which plays a major role; it is both the driving force and the limiting factor. In the software life-cycle, the earlier errors are discovered and removed, the lower the cost of their removal. The most damaging errors are those which are not discovered during the testing process and therefore remain when the system goes live. In commercial systems it is often difficult to estimate the cost of errors.

Testing methodology
There are different types of software testing methodologies used in the field of software testing and quality assurance. In the following section, we'll take a look at various software testing techniques and methodologies that are in practice today.

Software testing is an integral part of the software development life cycle (SDLC). Testing a piece of code effectively and efficiently is equally important, if not more so, than writing it. So what is software testing? Well, for those who are new to software testing and quality assurance, here are a few useful facts. Software testing is nothing but subjecting a piece of code to both controlled and uncontrolled operating conditions, in an attempt to observe the output and examine whether it is in accordance with certain pre-specified conditions. Different sets of test cases and testing strategies are prepared, all of which are aimed at achieving one common goal: removing bugs and errors from the code and making the software error-free and capable of providing accurate and optimal outputs. There are different types of software testing techniques and methodologies.

Software Testing Methodologies


These are some of the commonly used test methodologies:

- Waterfall model
- V model
- Spiral model
- RUP
- Agile model
- RAD

Let us have a look at each of these methodologies one by one.

Waterfall Model
The waterfall model adopts a 'top down' approach regardless of whether it is being used for software development or testing. The basic steps involved in this software testing methodology are as follows:
1. Requirement analysis
2. Test case design
3. Test case implementation
4. Testing, debugging and validating the code or product
5. Deployment and maintenance

Spiral Model

As the name implies, spiral model follows an approach in which there are a number of cycles (or spirals) of all the sequential steps of the waterfall model. Once the initial cycle gets completed, a thorough analysis and review of the achieved product or output is performed. If it is not as per the specified requirements or expected standards, a second cycle follows, and so on. This methodology follows an iterative approach and is generally suited for large projects having complex and constantly changing requirements.

Rational Unified Process (RUP)


The RUP methodology is similar to the spiral model in the sense that the entire testing procedure is broken up into multiple cycles or processes. Each cycle consists of four phases, namely inception, elaboration, construction and transition. At the end of each cycle the product/output is reviewed, and a further cycle (made up of the same four phases) follows if necessary. Today, you will find certain organizations and companies adopting a slightly modified version of the RUP, which goes by the name Enterprise Unified Process (EUP).

Agile Model

This methodology follows neither a purely sequential approach nor a purely iterative approach. It is a selective mix of both approaches, in addition to quite a few newer development methods. Fast and incremental development is one of the key principles of this methodology. The focus is on obtaining quick, practical and visible outputs rather than merely following theoretical processes. Continuous customer interaction and participation is an integral part of the entire development process.

Rapid Application Development (RAD)

The name says it all. In this case, the methodology adopts a rapid development approach using the principle of component-based construction. After understanding the different requirements of the given project, a rapid prototype is prepared and then compared with the expected set of output conditions and standards. The necessary changes and modifications are made after joint discussions with the customer or development team (in the context of software testing). Though this approach has its share of advantages, it can be unsuitable if the project is large, complex and extremely dynamic in nature, with requirements changing constantly.

Unit testing
In computer programming, unit testing is a method by which individual units of source code (sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures) are tested to determine whether they are fit for use.[1] Intuitively, one can view a unit as the smallest testable part of an application. In procedural programming a unit could be an entire module, but is more commonly an individual function or procedure. In object-oriented programming a unit is often an entire interface, such as a class, but could be an individual method. Unit tests are created by programmers or occasionally by white-box testers during the development process. Ideally, each test case is independent from the others: substitutes such as method stubs, mock objects, fakes and test harnesses can be used to assist testing a module in isolation. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended.
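A minimal unit test can be sketched with Python's standard `unittest` module. The `grade()` function under test is hypothetical; the point is that the smallest testable unit is exercised in isolation, with one assertion per case.

```python
# Minimal unit-testing sketch using the standard unittest module.
import unittest

def grade(percentage):
    """Hypothetical unit under test: pass mark assumed to be 40."""
    return "PASS" if percentage >= 40 else "FAIL"

class GradeTest(unittest.TestCase):
    def test_pass_boundary(self):
        self.assertEqual(grade(40), "PASS")   # exactly on the boundary

    def test_fail(self):
        self.assertEqual(grade(39), "FAIL")   # just below the boundary

# Run the suite programmatically so the file can simply be executed.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(GradeTest)
runner_result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", runner_result.wasSuccessful())
```

Because each test method builds its own inputs and asserts its own output, the cases stay independent of one another, as the text recommends.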

Benefits
- Finds problems early
- Facilitates change
- Simplifies integration
- Provides documentation

Integration testing
Integration testing (sometimes called integration and testing, abbreviated "I&T") is the phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before validation testing. Integration testing takes as its input modules that have been unit tested, groups them into larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.

Purpose
The purpose of integration testing is to verify functional, performance, and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using Black box testing, success and error cases being simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested and individual subsystems are exercised through their input interface. Test cases are constructed to test that all components within assemblages interact correctly, for example across procedure calls or process activations, and this is done after testing individual modules, i.e. unit testing. The overall idea is a "building block" approach, in which verified assemblages are added to a verified base which is then used to support the integration testing of further assemblages. Some different types of integration testing are big bang, top-down, and bottom-up.
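The "building block" idea can be sketched as follows. The two units below (hypothetical names) are assumed to have passed unit testing individually; the integration test exercises the assemblage through its external interface to check that data flows correctly across the call between them.

```python
# Hypothetical integration-testing sketch: two already-unit-tested
# units are combined and the assemblage is exercised as a whole.

def parse_marks(line):          # unit 1, assumed unit-tested
    return [int(tok) for tok in line.split(",")]

def compute_total(marks):       # unit 2, assumed unit-tested
    return sum(marks)

def load_total(line):           # the integrated path under test
    return compute_total(parse_marks(line))

# Integration test case: checks the *interaction*, i.e. that the
# output of parse_marks is a valid input for compute_total.
assert load_total("10,20,30") == 60
print("integration test passed")
```

A unit test would have checked `parse_marks` and `compute_total` separately; only the integration test catches a mismatch in the interface between them.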

Top-down and Bottom-up


Bottom-up testing is an approach to integration testing where the lowest-level components are tested first and then used to facilitate the testing of higher-level components; the process is repeated until the component at the top of the hierarchy is tested. All the bottom or low-level modules, procedures or functions are integrated and then tested; after the lower-level integrated modules have been tested, the next level of modules is formed and used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. It also helps to determine the levels of software developed and makes it easier to report testing progress as a percentage.

Top-down testing is an approach to integration testing where the top integrated modules are tested first and each branch of the module is tested step by step until the end of the related module.

Sandwich testing is an approach that combines top-down testing with bottom-up testing.

The main advantage of the bottom-up approach is that bugs are more easily found. With top-down, it is easier to find a missing branch link.
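In top-down testing, lower-level modules that are not yet integrated are replaced by stubs. The sketch below is illustrative (all names hypothetical): the top module is verified against a stub that returns canned data instead of querying a real database.

```python
# Sketch of top-down integration testing with a stub standing in for
# a lower-level module that has not been integrated yet.

def fetch_marks_stub(student_id):
    # Stub: canned data instead of a real database query.
    return {"S1": [50, 60], "S2": [90, 80]}[student_id]

def result_summary(student_id, fetch_marks=fetch_marks_stub):
    """Top-level module under test: averages a student's marks."""
    marks = fetch_marks(student_id)
    return sum(marks) / len(marks)

# The top module's logic is verified before the real fetch_marks exists.
assert result_summary("S2") == 85.0
print("top module verified against stub")
```

When the real data-access module is ready, it is passed in place of the stub and the same test is repeated against the growing assemblage.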

Black-box testing
Black-box testing is a method of software testing that tests the functionality of an application as opposed to its internal structures or workings (see white-box testing). Specific knowledge of the application's code or internal structure, and programming knowledge in general, is not required. The tester is only aware of what the software is supposed to do, not how: when a certain input is entered, a certain output is produced, without the tester being aware of how that output was produced.[1] Test cases are built around specifications and requirements, i.e., what the application is supposed to do. The method uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid inputs and determines the correct output; there is no knowledge of the test object's internal structure. This method of testing can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing.

Test design techniques


Typical black-box test design techniques include:
- Decision table testing
- All-pairs testing
- State transition tables
- Equivalence partitioning
- Boundary value analysis
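Two of these techniques can be illustrated together. Suppose the specification (hypothetical, for illustration) says a mark is valid when it lies in the range 0..100; the tester derives cases from that specification alone, without looking at the implementation.

```python
# Illustrative black-box test design for a hypothetical is_valid_mark()
# whose specification says: valid iff 0 <= mark <= 100.
def is_valid_mark(mark):
    return 0 <= mark <= 100

# Equivalence partitioning: one representative input per partition.
assert is_valid_mark(-5) is False    # partition: below the range
assert is_valid_mark(50) is True     # partition: inside the range
assert is_valid_mark(150) is False   # partition: above the range

# Boundary value analysis: inputs at and just around each boundary.
for mark, expected in [(-1, False), (0, True), (1, True),
                       (99, True), (100, True), (101, False)]:
    assert is_valid_mark(mark) is expected
print("all black-box cases passed")
```

Note that the six boundary cases would catch an off-by-one error (such as `<` instead of `<=`) that the three partition representatives alone could miss.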

Debugging
Debugging is a methodical process of finding and reducing the number of bugs, or defects, in a computer program or a piece of electronic hardware, thus making it behave as expected. Debugging tends to be harder when various subsystems are tightly coupled, as changes in one may cause bugs to emerge in another. Many books have been written about debugging, as it involves numerous aspects, including interactive debugging, control flow, integration testing, log files, monitoring (application, system), memory dumps, profiling, statistical process control, and special design tactics to improve detection while simplifying changes.

Techniques
Print (or tracing) debugging is the act of watching (live or recorded) trace statements, or print statements, that indicate the flow of execution of a process. This is sometimes called printf debugging, after the printf statement in C.

Remote debugging is the process of debugging a program running on a system different from the debugger. To start remote debugging, the debugger connects to a remote system over a network; once connected, it can control the execution of the program on the remote system and retrieve information about its state.

Post-mortem debugging is debugging of the program after it has already crashed. Related techniques often include various tracing techniques and analysis of the memory dump (or core dump) of the crashed process. The dump of the process could be obtained automatically by the system (for example, when the process has terminated due to an unhandled exception), by a programmer-inserted instruction, or manually by the interactive user.

Delta debugging is a technique for automating test case simplification.

The Saff squeeze is a technique for isolating a failure within a test by progressively inlining parts of the failing test.
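The first technique above is the simplest to demonstrate. In this sketch (the buggy-looking function and its trace messages are hypothetical), trace statements expose the flow of execution and the intermediate state of a computation:

```python
# Minimal print-debugging sketch: trace statements show the flow of
# execution and the running state, the way printf debugging does in C.
def average(marks):
    total = 0
    for m in marks:
        total += m
        print(f"trace: added {m}, running total = {total}")
    return total / len(marks)

print("result:", average([70, 80, 90]))
```

Reading the trace output makes it obvious at which iteration an unexpected value first appears, which is exactly the information a debugger's step-through would give.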

SDLC

The build-and-fix approach to the SDLC is ad hoc and not well defined. It comprises two phases, namely build and fix: the first phase, build, means writing the code; the second phase, fix, means finding and correcting errors in the code.

Advantages: it is suitable for small projects, and it is less time-consuming.

Disadvantages: it is not usable for large projects. As the requirements are not defined up front, the product is full of errors, and reworking the product results in increased cost.

[Diagram: Build -> Fix]

E-R diagram

DFD

System analysis
It is the most important phase of the system development cycle. It is defined as a process of examining a situation with the intent of improving it through better procedures and methods. Analysis involves a deep study of the task. System analysis is the process of gathering and interpreting facts, diagnosing the problems, defining the goals, identifying the design constraints and using the facts to improve the system. A wrongly understood system could lead to false decisions. Systems are analyzed in terms of their objectives and the input, process and output required to achieve the goals. The goal of the analysis stage is to build an understanding of the scenario involved and to create a description of just what is desired and what will eventually be built. To understand the problem we must have input in order to get output; this can take the form of interviews, specifications regarding the level of performance, and sample data. In order to have a structured approach to the analysis stage, I had to choose a methodology to follow so as to have a complete analysis stage. The analysis stage is probably the most important, as a mistake or missed requirement from this phase may cost much more time and money to fix later than if it had been caught during analysis; therefore it is imperative that the best possible job is made of it. The aim is to identify the boundaries of the system and its sub-systems, and the interfaces between sub-systems and systems.

Screen shots

Scope of the project


- Fully integrated view of item and test statistics enables a complete view of how your test is performing
- User-friendly interface with colour-coded icons to flag questions that demonstrate poor or borderline performance
- Get a real-time preview of how proposed changes will impact overall item statistics and test reliability
- Make the change once and have it globally applied to all participant results
- Published results saved in a simple, flat format easily imported into third-party reporting tools
- Maintain changes within an audit trail to aid assessment defensibility
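One of the item statistics the list above refers to can be sketched simply. This is an illustrative example only: the function names and the flagging thresholds are hypothetical, not the system's actual values. The difficulty index of a question is the proportion of participants who answered it correctly.

```python
# Hypothetical sketch of one item statistic: the difficulty index,
# i.e. the proportion of participants answering a question correctly.
def difficulty_index(responses):
    """responses: list of booleans, True = answered correctly."""
    return sum(responses) / len(responses)

# Flag items with poor or borderline performance; the 0.3 and 0.9
# cut-offs here are assumptions for illustration.
def flag(p, low=0.3, high=0.9):
    return "poor" if p < low else "borderline" if p > high else "ok"

p = difficulty_index([True, True, False, True])   # 3 of 4 correct
print(p, flag(p))
```

Statistics like this, recomputed after each proposed change, are what make the real-time preview of test reliability possible.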

References
The references for the project RESULT MANAGEMENT SYSTEM have been taken from the following books and websites.
Web references:
- http://en.wikipedia.org/
Book references:
- Software Engineering by K.K. Aggarwal
- Software Engineering by Pankaj Jalote

Processing techniques
The processing options available to the designer are:
- Batch processing
- Real-time processing
- On-line processing
- A combination of the above

You are already aware of these techniques. It is quite interesting to note, however, that a combination of these is often found to be ideal in traditional data processing applications: this increases the throughput of the system and also brings down the response time of on-line activities. In most business applications, 24-hour-old data is acceptable, and hence it is possible to update voluminous data after office hours in batch mode.
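The after-hours batch update described above can be sketched as follows. The data structures and names are illustrative only: transactions queued during the day are applied to the master data in one off-line pass.

```python
# Sketch of batch-mode processing: the day's transactions are queued
# and applied to the master data in a single after-hours pass.
master = {"S1": 10, "S2": 20}                        # master totals
batch_queue = [("S1", +5), ("S2", -3), ("S1", +2)]   # day's transactions

def run_batch(master, queue):
    """Apply all queued transactions in one off-line pass."""
    for student, delta in queue:
        master[student] = master.get(student, 0) + delta
    queue.clear()          # the queue starts empty for the next day
    return master

print(run_batch(master, batch_queue))
```

An on-line system would instead apply each transaction immediately; the batch design trades freshness of the data for higher throughput, as the text notes.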

Design methodologies
The scope of the system design is guided by the framework for the new system developed during analysis. A more clearly defined, logical method for developing systems that meet user requirements has led to new techniques and methodologies that fundamentally attempt to do the following:
- Improve the productivity of analysts and programmers
- Improve documentation and subsequent maintenance and enhancements
- Cut down drastically on cost overruns and delays
- Improve communication among the user, analyst, designer, and programmer
- Standardize the approach to analysis and design
- Simplify design by segmentation
