
SUBJECT CODE: M.Sc. 12/MCA 12 SUBJECT TITLE: Software Engineering

Question 1. What do you understand by software design? Explain various techniques used for verification of detailed design.

Software design is a process of problem solving and planning for a software solution. After the purpose and specifications of the software are determined, software developers design, or employ designers to develop, a plan for a solution. It includes low-level component and algorithm implementation issues as well as the architectural view.

Design concepts

The design concepts provide the software designer with a foundation from which more sophisticated methods can be applied. A set of fundamental design concepts has evolved:

1. Abstraction - Abstraction is the process or result of generalization by reducing the information content of a concept or an observable phenomenon, typically in order to retain only the information which is relevant for a particular purpose.

2. Refinement - Refinement is the process of elaboration. A hierarchy is developed by decomposing a macroscopic statement of function in a stepwise fashion until programming language statements are reached. In each step, one or several instructions of a given program are decomposed into more detailed instructions. Abstraction and refinement are complementary concepts.

3. Modularity - Software architecture is divided into components called modules.

4. Software Architecture - This refers to the overall structure of the software and the ways in which that structure provides conceptual integrity for a system. A good software architecture will yield a good return on investment with respect to the desired outcome of the project, e.g. in terms of performance, quality, schedule and cost.

5. Control Hierarchy - A program structure that represents the organization of program components and implies a hierarchy of control.

6. Structural Partitioning - The program structure can be divided both horizontally and vertically. Horizontal partitions define separate branches of the modular hierarchy for each major program function. Vertical partitioning suggests that control and work should be distributed top-down in the program structure.

7. Data Structure - A representation of the logical relationships among individual elements of data.

8. Software Procedure - This focuses on the processing of each module individually.

9. Information Hiding - Modules should be specified and designed so that information contained within a module is inaccessible to other modules that have no need for such information (a minimal code sketch follows this list).
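To make information hiding concrete, here is a minimal C sketch (hypothetical names, not from the original text): clients see only an opaque handle and a few functions, while the internal representation stays inaccessible to other modules.

    /* stack.h -- the public interface: an opaque type and operations. */
    typedef struct Stack Stack;             /* internal layout is hidden */
    Stack *stack_new(void);
    void   stack_push(Stack *s, int value);
    int    stack_pop(Stack *s);
    void   stack_free(Stack *s);

    /* stack.c -- the private implementation; this information is
     * invisible to modules that have no need for it. */
    #include <stdlib.h>
    struct Stack {
        int items[64];                      /* fixed capacity keeps the sketch short */
        int top;
    };
    Stack *stack_new(void)             { return calloc(1, sizeof(Stack)); }
    void   stack_push(Stack *s, int v) { s->items[s->top++] = v; }
    int    stack_pop(Stack *s)         { return s->items[--s->top]; }
    void   stack_free(Stack *s)        { free(s); }

The representation (a fixed array and a counter) can later change without touching any client code, which is exactly the benefit information hiding aims for.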

Verification of the detailed design is commonly supported by design and code metrics; two widely used ones are described below.

1) Cyclomatic complexity

Cyclomatic complexity is a measure of source code complexity that has been correlated with the number of coding errors in several studies. It is calculated by producing a control flow graph of the code and then counting:

E = the number of edges in the graph
N = the number of nodes in the graph
P = the number of connected components (P = 1 for a single routine)

Then cyclomatic complexity M = E - N + 2P.

The metric tries to capture the number of paths through the code, and thus the number of required test cases. It is widely used, but has been criticized for not capturing the additional complexity implied in nested control structures.

Example: Consider the following C function, which implements Euclid's algorithm for finding greatest common divisors:

    int euclid(int m, int n) {
        int r;
        if (n > m) {       /* swap so that m >= n */
            r = m;
            m = n;
            n = r;
        }
        r = m % n;
        while (r != 0) {   /* repeat until the remainder is zero */
            m = n;
            n = r;
            r = m % n;
        }
        return n;
    }

The nodes are numbered 0 through 13.

The control flow graph is shown in the figure: each node is numbered 0 through 13, and the edges, drawn as lines connecting the nodes, are numbered 0 through 14. Node 1 represents the decision of the "if" statement, with the true outcome at node 2 and the false outcome at node 5. The decision of the "while" loop is represented by node 7, and the upward flow of control to the next iteration is shown by the dashed line from node 10 to node 7.

2) Cohesion metrics

Cohesion metrics measure how well the methods of a class are related to each other. A cohesive class performs one function; a non-cohesive class performs two or more unrelated functions and may need to be restructured into two or more smaller classes. The assumption behind the following cohesion metrics is that methods are related if they work on the same class-level variables, and unrelated if they work on different variables altogether. In a cohesive class, methods work with the same set of variables; in a non-cohesive class, some methods work on different data.

The idea of the cohesive class

A cohesive class performs one function. Lack of cohesion means that a class performs more than one function. This is not desirable. If a class performs several unrelated functions, it should be split up.

High cohesion is desirable since it promotes encapsulation. As a drawback, a highly cohesive class has high coupling between the methods of the class, which in turn indicates high testing effort for that class. Low cohesion indicates inappropriate design and high complexity. It has also been found to indicate a high likelihood of errors. The class should probably be split into two or more smaller classes.

Project Analyzer supports several ways to analyze cohesion:


- LCOM metrics: Lack of Cohesion of Methods. This group of metrics aims to detect problem classes; a high LCOM value means low cohesion.
- TCC and LCC metrics: Tight and Loose Class Cohesion. This group of metrics aims to tell the difference between good and bad cohesion; with these metrics, large values are good and low values are bad.
- Cohesion diagrams visualize class cohesion.
- The Non-cohesive classes report suggests which classes should be split and how.

LCOM: Lack of Cohesion of Methods

There are several LCOM (lack of cohesion of methods) metrics. Project Analyzer provides 4 variants: LCOM1, LCOM2, LCOM3 and LCOM4. We recommend the use of LCOM4 for Visual Basic systems; the other variants may be of scientific interest.

LCOM4 (Hitz & Montazeri): recommended metric

LCOM4 is the lack of cohesion metric we recommend for Visual Basic programs. LCOM4 measures the number of "connected components" in a class. A connected component is a set of related methods (and class-level variables). There should be only one such component in each class. If there are 2 or more components, the class should be split into that many smaller classes.

Which methods are related? Methods a and b are related if:

1. they both access the same class-level variable, or
2. a calls b, or b calls a.

After determining the related methods, we draw a graph linking the related methods to each other. LCOM4 equals the number of connected groups of methods.

LCOM4 = 1 indicates a cohesive class, which is the "good" case. LCOM4 >= 2 indicates a problem: the class should be split into that many smaller classes. LCOM4 = 0 happens when there are no methods in a class; this is also a "bad" class.

The example on the left shows a class consisting of methods A through E and variables x and y. A calls B and B accesses x. Both C and D access y. D calls E, but E doesn't access any variables. This class consists of 2 unrelated components (LCOM4=2). You could split it as {A, B, x} and {C, D, E, y}.

In the example on the right, we made C access x to increase cohesion. Now the class consists of a single component (LCOM4=1). It is a cohesive class.
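To make the computation concrete, here is a minimal C sketch (C being the language of the earlier example; the union-find approach and all names are illustrative, not any tool's actual implementation). It encodes the relations from the first example above and counts the connected components:

    #include <stdio.h>

    #define NMETHODS 5                /* methods A, B, C, D, E */
    static int parent[NMETHODS];      /* union-find forest over the methods */

    static int find(int i) {          /* root of i's component */
        while (parent[i] != i) {
            parent[i] = parent[parent[i]];   /* path halving */
            i = parent[i];
        }
        return i;
    }

    static void relate(int a, int b) { parent[find(a)] = find(b); }

    int main(void) {
        enum { A, B, C, D, E };
        for (int i = 0; i < NMETHODS; i++) parent[i] = i;

        relate(A, B);   /* A calls B */
        relate(C, D);   /* C and D access the same variable y */
        relate(D, E);   /* D calls E */
        /* (B accesses x alone, so x connects nothing extra.) */

        int lcom4 = 0;  /* LCOM4 = number of distinct component roots */
        for (int i = 0; i < NMETHODS; i++)
            if (find(i) == i) lcom4++;

        printf("LCOM4 = %d\n", lcom4);   /* prints 2: {A,B} and {C,D,E} */
        return 0;
    }

Adding one more relation for the second example, relate(B, C) (because C now also accesses x, which B accesses), merges everything into a single component and the program prints LCOM4 = 1.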

It is to be noted that UserControls, as well as VB.NET forms and web pages, frequently report high LCOM4 values. Even if the value exceeds 1, it does not often make sense to split the control, form or web page, as doing so would affect the user interface of your program. The explanation with UserControls is that they store information in the underlying UserControl object. The explanation with VB.NET is the form-designer-generated code that you cannot modify.

Implementation details for LCOM4. We use the same definition for a method as with the WMC metric. This means that property accessors are considered regular methods, but inherited methods are not taken into account. Both Shared and non-Shared variables and methods are considered. We ignore empty procedures, though: empty procedures tend to increase LCOM4 because they do not access any variables or other procedures, so a cohesive class with empty procedures would otherwise report a high LCOM4. Sometimes empty procedures are required (for classic VB Implements, for example); this is why we simply drop empty procedures from LCOM4. We also ignore constructors and destructors (Sub New, Finalize, Class_Initialize, Class_Terminate). Constructors and destructors frequently set and clear all variables in the class, making all methods connected through these variables, which increases cohesion artificially.

Suggested use. Use the Non-cohesive classes report and Cohesion diagrams to determine how the classes could be split. It is good to remove dead code before searching for non-cohesive classes: dead procedures can increase LCOM4, as the dead parts can be disconnected from the other parts of the class.

Question 2. What is the importance of testing with respect to a software product? Discuss various levels of testing.

Importance of testing: Software testing is aimed at making sure the software product meets its predefined goals. For example, a software application designed to view pictures should do tasks like opening a picture file and showing the picture properly. It should be able to load the file from secondary storage, display the full image and show an error message when the user loads a non-picture file. The user wants to see a high-quality image, and the software should do just that. Software testing can either be done manually or be automated.

To Improve Quality. Computers and software are heavily used in critical fields like medical diagnosis, airplanes and air traffic control, space shuttle missions and stock market reporting. The presence of bugs in a software application can cause irreparable losses. Quality of software is of utmost importance, and making sure the software meets quality standards is the job of the software test engineer.

For Verification and Validation. Verification and validation of a software product is the process of determining whether the system meets its predefined goals and the output is correct. Planning for this phase of testing starts early in the software development life cycle. Verification and validation can be performed by the same organization that developed the product, but they are more effective when performed by an independent testing agency.

For Reliability Estimation. From the user's point of view, reliability means how dependable the software product is. In medical diagnosis, an incorrect suggestion to the doctor can result in the loss of lives. Critical software products are therefore thoroughly checked for all aspects of their functionality.

Prove Usability and Operability. One very important aim of software testing is to prove the software is both usable and operable. Usability testing is where the software is released to a select group of users and their work with the product is observed. All aspects of a user's interaction with the software, like ease of use and where users face problems, are recorded and analyzed.

Prevent Defect Migration. The majority of errors are usually introduced in the requirements gathering phase. If errors are detected early, they can be prevented from migrating to subsequent development phases. Early detection and debugging of errors leads to huge savings in software development costs.

Various levels of software testing

Unit testing. Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors. These tests are usually written by developers as they work on code (white-box style) to ensure that the specific function works as expected; one function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software; rather, it is used to assure that the building blocks of the software work independently of each other. Unit testing is also called component testing. A minimal sketch of such a test appears after this list.

Integration testing. Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered the better practice, since it allows interface issues to be localised and fixed more quickly. Integration testing works to expose defects in the interfaces and interactions between integrated components (modules). Progressively larger groups of tested software components, corresponding to elements of the architectural design, are integrated and tested until the software works as a system.

System testing. System testing tests a completely integrated system to verify that it meets its requirements.

System integration testing. System integration testing verifies that a system is integrated with any external or third-party systems defined in the system requirements.
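As an illustration of the white-box, developer-written style of unit test described above, here is a minimal sketch in C (assuming the euclid() function from Question 1, repeated so the example is self-contained; the test cases chosen are illustrative):

    #include <assert.h>

    int euclid(int m, int n) {        /* greatest common divisor */
        int r;
        if (n > m) { r = m; m = n; n = r; }
        r = m % n;
        while (r != 0) { m = n; n = r; r = m % n; }
        return n;
    }

    int main(void) {
        assert(euclid(12, 8) == 4);   /* ordinary case */
        assert(euclid(8, 12) == 4);   /* swapped arguments exercise the "if" branch */
        assert(euclid(7, 13) == 1);   /* coprime inputs */
        assert(euclid(9, 9) == 9);    /* equal inputs: a corner case */
        return 0;                     /* reaching here means every check passed */
    }

Note how one function gets several tests, each targeting a different branch or corner case, exactly as described above.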

Question 3. What is meant by software reliability? Give the names of various reliability models and explain the Musa Markov model in detail.

Software reliability is a key part of software quality. The study of software reliability can be categorized into three parts: modeling, measurement and improvement.

Software reliability modeling has matured to the point that meaningful results can be obtained by applying suitable models to the problem. Many models exist, but no single model can capture all of the necessary software characteristics. Assumptions and abstractions must be made to simplify the problem, and there is no single model that is universal to all situations.

Software reliability measurement is still naive. Measurement is far from commonplace in software, compared with other engineering fields. "How good is the software, quantitatively?" As simple as the question is, there is still no good answer. Software reliability cannot be directly measured, so other related factors are measured to estimate software reliability and to compare it among products. The development process, and the faults and failures found, are all factors related to software reliability.

Software reliability improvement is hard. The difficulty of the problem stems from an insufficient understanding of software reliability and, in general, of the characteristics of software. There is as yet no good way to conquer the complexity problem of software: complete testing of even a moderately complex software module is infeasible, and a defect-free software product cannot be assured. Realistic constraints of time and budget severely limit the effort put into software reliability improvement.
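As a simple worked illustration of estimating reliability from measured factors (a textbook-style example, not part of the original answer): if failures are assumed to occur at a constant failure intensity λ, estimated from test data as the number of observed failures divided by the total execution time, then the probability of failure-free operation over an execution time t is R(t) = e^(-λt). For instance, 5 failures observed over 1000 CPU-hours give λ = 0.005 failures/hour, so R(10) = e^(-0.05) ≈ 0.95, i.e. roughly a 95% chance of surviving a 10-hour run without failure.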
