
A Methodology Based on PBL to Train Software Test Engineers

Caliane de Oliveira Figuerdo1, Simone C. dos Santos1, Gustavo H. S. Alexandre2, Paulo Henrique Monteiro Borba1
1 UFPE, Recife, Brazil; 2 C.E.S.A.R, Recife, Brazil
cof@cin.ufpe.br, scs@cin.ufpe.br, gugahenrique@gmail.com, phmb@cin.ufpe.br

Abstract
The continuous and growing presence of software products and services consumed daily by society demands a level of quality that depends not only on the technology itself, but also on the development process and the professionals involved. Focusing on the professionals responsible for quality assurance, such as the Test Engineer, their skills and competences need to be developed on the basis of a critical and detailed view of the problem: the Test Engineer must be an "explorer" of the solution, uncovering hidden bugs and removing defects from applications. In this context, this paper proposes a teaching methodology for training quality assurance professionals that uses problem-based learning on real problems to develop the necessary skills. To demonstrate the applicability of the methodology, an empirical study was carried out, with positive results in teaching the discipline of exploratory testing.

1. Introduction
Currently, there is a growing demand for trained professionals in software testing, driven by the demand for high-quality products, usually associated with the elimination of defects, and for quality software processes, related to preserving the consistency of the requirements defined by the customer throughout the stages of software development. From an academic standpoint, software testing disciplines in undergraduate Computing courses generally follow the traditional model of knowledge formation based only on cognitive orientation, with theory and practice delivered by a teacher who acts as the main, active agent of interaction. This teacher-centered training, however, turns the student into a passive agent. Additionally, the practical activities are normally defined from scenarios and situations designed by the teacher, almost always far removed from the complexity of real-world problems. Finally, in this training process, student assessment also ends up focusing solely on cognitive aspects, verifying the basic concepts and fundamentals of software testing without considering the ability to apply these concepts in practice, or the process of creating solutions to problems. As an alternative to traditional teaching methods, Problem-Based Learning (PBL) [1] has been applied in different areas, from the medical field, where it originated, to engineering and technology [2]. Tynjälä [3] defines PBL as a strategy in which students work in teams to solve problems, encouraging the development of skills and attitudes such as teamwork, self-initiative, cooperation, ethics and respect for other viewpoints. In this model, the student's role in the learning process changes from passive recipient to active agent, responsible for his or her own learning. Furthermore, this methodology prescribes an environment where students are immersed in practice, interacting with colleagues and peers, inserted into a functional learning process for solving real problems, better aligned to the needs of the real world.

Finally, the adoption of PBL is often confused with practical experiments in which students are left to fend for themselves, with little interaction with teachers and little support from study material. An effective PBL methodology, however, is strongly process-oriented: the approach must be planned to ensure that theory and practice walk together, accompanied by instruments that can assess its effectiveness [4]. In this context, this article describes a teaching methodology based on PBL to train professionals in software testing, here called "software test engineers." The adoption of PBL is accomplished through the creation of software factories, guided by planning, execution, control and improvement, aligned to real projects and to multidirectional interactions between the students and teachers involved in all software testing activities. An experiment conducted in a Training Program in Software Testing is discussed in order to validate the methodology, presenting results that indicate the methodology's contribution to building more effective learning processes through monitoring and process validation.

2. The Methodology based on PBL


PBL is defined as a student-centered educational strategy that helps develop reasoning and communication. The student is constantly encouraged to learn and to take part in the construction of the learning process [5, 6, 7]. PBL is essentially process-oriented: the practical immersion to which students are subjected requires teaching planning that involves defining the structure of the practical environment, the educational goals and objectives, the roles of the human capital involved (students and teachers) and the assessment of results. With this emphasis on process definition, the methodology proposed in this paper was developed in accordance with the PDCA cycle (Plan, Do, Check, Act). Conceived by Shewhart and popularized by Deming, the PDCA cycle defines four basic steps for problem solving and continuous improvement, as illustrated in Figure 1: Plan, which determines the objectives of the training and the methods to achieve them; Do, which implements the training plan and carries out the practical work; Check, which verifies the effects of the training; and Act, which carries out corrective actions and evolves the process from its results. The next sections describe the main elements and activities of each phase.
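As a minimal illustration of how the cycle drives successive iterations of the training process, the sketch below loops over the four phases; the phase names and activities follow this section, while the code structure itself is an editorial assumption, not part of the paper:

```python
# Illustrative sketch only: the PDCA cycle as a loop over four phases.
# Phase names and activities follow Section 2 of the paper; everything
# else is an assumption for illustration.
PDCA_PHASES = {
    "Plan":  "define objectives, learning modules, roles and environment",
    "Do":    "form test teams and execute real testing projects",
    "Check": "apply diagnostic, formative and summative assessments",
    "Act":   "evolve the learning modules from the assessment results",
}

def run_training_cycles(iterations: int) -> None:
    """Iterate the four phases, one pass per continuous-improvement cycle."""
    for cycle in range(1, iterations + 1):
        for phase, activities in PDCA_PHASES.items():
            print(f"Cycle {cycle} | {phase}: {activities}")

if __name__ == "__main__":
    run_training_cycles(2)
```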

Figure 1. PDCA Cycle

2.1. Plan

At the planning stage, the goals and a plan of action to achieve them are defined, based on information such as the students' profiles, the number of students and the course objectives. The activities in this phase are: Educational objectives: define, structure and organize the teaching/learning process that forms software test engineers in the concepts, techniques and tools needed for high productivity through the execution of real projects. Content: in this activity, learning modules are defined as guides and support for the human capital involved in the training. A Learning Module (LM) is a set of practical activities, trainings, techniques and tools aimed at the theoretical and practical learning of a specific content. The preparation of an LM consists of: (1) defining a test strategy from the educational objectives of the module; (2) defining a test process (activities, artifacts, roles and responsibilities) aligned to the module's test strategy; (3) choosing the techniques and tools necessary to carry out the activities planned for the module; and (4) listing the trainings previously identified as necessary for understanding and practically applying the content covered in the module. Figure 2 illustrates an example of a learning module that addresses a specific software testing content. Being an independent unit, a module can be applied to any test project that uses the testing strategy defined in it.
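To make this structure concrete, the sketch below models an LM as a small data type whose fields mirror the four preparation steps above; the class and its field names are editorial assumptions, not artifacts of the paper, and the example instance is based on the exploratory testing module described later in Section 4:

```python
# Editorial sketch: a Learning Module as a data type whose fields mirror
# the four preparation steps described above. Names are assumptions.
from dataclasses import dataclass, field

@dataclass
class LearningModule:
    name: str
    test_strategy: str               # step (1): strategy from educational objectives
    test_process: list[str]          # step (2): activities, artifacts, roles
    techniques_and_tools: list[str]  # step (3): what is needed to run the activities
    trainings: list[str] = field(default_factory=list)  # step (4): prerequisite trainings

# Example instance, based on the exploratory testing module of Section 4:
lm = LearningModule(
    name="Exploratory Testing",
    test_strategy="session-based exploratory testing plus black-box techniques",
    test_process=["Planning", "Design", "Session", "Brief Meeting", "Bug Tracking"],
    techniques_and_tools=["Session Based & Charter", "TestLink", "Mantis", "dotProject"],
    trainings=["dotProject", "Exploratory Testing", "Black-Box Modeling Techniques"],
)
```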

Figure 2. Learning Module

Human capital: the execution of the projects involves five key players: the client, responsible for demanding the testing services and validating the results; the student, who makes up the testing team and is responsible for executing the projects and tests and for achieving the results agreed with the client; the tutor, responsible for giving trainings and preparing teaching material, acting as an evaluator and facilitator of student learning; the consultant, a software testing expert who helps plan the activities, techniques and trainings necessary for carrying out the project activities; and the manager, responsible for managing the software factory, monitoring and controlling the projects. Environment and infrastructure: this step includes the purchase and installation of the infrastructure and tools needed to create an environment similar to a software factory focused on software testing. Also in this activity, real projects are selected to be executed in the software factories.

2.2. Do

This step implements the activities foreseen in the planning stage; its input objects are therefore the output objects of the previous step (Plan). In this scenario, students are immersed in the practice of real projects, working on various software testing activities and interacting continuously with clients, teachers and fellow students. Problems arise during the execution of the activities of a project, and students are encouraged to seek new knowledge to solve them.

According to Santos et al. [4], putting the problem before the lesson motivates students, and the project becomes the driving force of learning. The implementation of the action plan defined in the planning consists of the following activities: Defining the test teams: students are divided into small, heterogeneous teams. Following the methodology for dividing students into teams described in Santos et al. [4], it is important to consider the affinities, skills and abilities of each student in order to avoid the concentration and/or lack of skills in the same team. For each new project, a role is assigned to each student of the test team, such as test manager, test analyst or test executor; despite this assignment, every member gets involved in all stages of the testing process. Execution of real projects: this activity consists of the selection and execution of real software testing projects with a focus on functional learning from real problems, involving the participation of real customers.

2.3. Check

This step verifies whether the actions performed in the previous phase led to the achievement of the goals established in the planning phase, in other words, whether the methods used were effective in obtaining the expected results. The check consists of assessing what has been learned and of monitoring, analyzing and guiding students in their work. In [8], the author describes the teaching/learning process as a cyclical process that begins with the definition of goals and continues with the choice of methods and evaluation criteria. In this circular process, the three evaluation methods defined by Bloom [9] and cited in Alexandre [8] are performed: diagnostic, used to determine whether the student has the prerequisites necessary for the acquisition of new knowledge; formative, held in order to verify whether the student is achieving the goals established during the course; and summative, used to classify the student, performed at the end of a course, school year or teaching unit. In the context of software testing training via learning modules and the execution of real projects, this process is divided into well-defined steps. Step 1 (Preparation) defines the criteria and methods of assessment for each learning module. Step 2 (Diagnosis) consists of developing and applying assessments that identify the students' current level of knowledge of the content of the module to be developed, and that detect the prerequisites for the acquisition of new knowledge. Step 3 (Monitoring) is dedicated to the elaboration and application of formative assessments. Continuous assessments are made in order to identify learning gaps in the following aspects: content, an evaluation based on the content taught in the trainings, observing the theoretical knowledge acquired; process, an evaluation based on monitoring the process activities and the meeting of deadlines, observing oral language, posture, effort and improvement; and output, an evaluation of the artifacts produced, observing characteristics such as standardization, organization, completeness, correctness, creativity, appropriateness and timeliness. Finally, in Step 4 (Classification), summative assessments are developed and applied to verify the learning outcomes achieved by the students, according to the attainment levels set. The calculation of a student's grade combines the formative assessments (content, process and deliveries) with the summative assessment.
The final grade is calculated as:

Final grade = (0.6 x formative assessment) + (0.4 x summative assessment)

According to [8], this evaluation methodology is not only concerned with passing or failing the student; it is concerned, above all, with monitoring the student's behavior throughout the evaluations, providing resources that enable the student to deepen his or her knowledge and to improve on the weaknesses identified by the evaluation.
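A minimal sketch of this grade computation follows. The 0.6/0.4 weights come from the paper; taking the formative component as the plain average of the content, process and output assessments is an editorial assumption the text suggests but does not state:

```python
# Sketch of the final-grade formula. The 0.6/0.4 weights are from the
# paper; averaging the three formative aspects is an assumption.
def final_grade(content: float, process: float, output: float,
                summative: float) -> float:
    formative = (content + process + output) / 3.0
    return 0.6 * formative + 0.4 * summative

# Example: strong formative scores, weaker summative exam.
print(final_grade(content=8.5, process=9.0, output=8.0, summative=6.5))  # 7.7
```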

2.4. Act

This phase takes action and makes the necessary corrections, focusing on the guides and supports of teaching and learning (the learning modules), in order to avoid repeating problems. To support this, each learning module has an associated version, context and maturity level, and may evolve along these characteristics and with the projects to which it is related, as illustrated in Figure 4.

Figure 4. Evolution of a learning module

The points represent learning modules. The T axis indicates the evolution of a module over time (e.g., versions 1.0, 2.0, associated with the improvement or modification of content); the C axis indicates the evolution of the module in relation to context (for example, the type of application, which can be desktop, web or mobile); and the M axis indicates the evolution of the module in relation to the maturity level of the organization (e.g., basic, intermediate or advanced, depending on the tools and techniques involved). The squares shown in red indicate the learning modules already executed. By tracking the specific version, context and maturity, the modules allow the monitoring of students' learning development during the execution of project activities, enabling corrective actions or improvements found to be necessary in an earlier execution of a learning module.
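As an illustration of this three-dimensional versioning scheme, the sketch below positions a module along the T, C and M axes described above; the type itself and its encoding of the axes are editorial assumptions, while the two example instances reflect the modules reported in Sections 4.1 and 4.4:

```python
# Editorial sketch: a learning module positioned along the three
# evolution axes described above (time/version, context, maturity).
from dataclasses import dataclass

@dataclass(frozen=True)
class ModuleRevision:
    name: str
    version: str   # T axis: e.g. "1.0", "1.1", "2.0"
    context: str   # C axis: "desktop", "web" or "mobile"
    maturity: int  # M axis: 1 = basic, 2 = intermediate, 3 = advanced

# The two modules reported in the empirical study:
functional_bb = ModuleRevision("Functional Testing, Black Box, via GUI", "1.1", "desktop", 1)
exploratory = ModuleRevision("Exploratory Testing", "1.0", "desktop", 2)
```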

3. Key Points about Software Testing


Software testing, as part of Software Engineering, is responsible for verifying that the software correctly addresses the needs identified for the customer, detecting defects and collecting information about them so that each defect can be corrected as soon as possible. Fixing a defect that has spread to the production environment can be up to a hundred times more expensive than correcting the same defect in the initial phases of the project [10]. Software testing reduces the rework, risks and costs associated with project development, contributing to the improvement of software quality. In any software development, testing should be performed following a previously defined testing process. Defining a testing process means defining the test levels, test types, techniques and criteria that drive the creation of test cases and test items, aligned to the software development process. In addition, the more specialized and independent the test team, the better the quality of the system and the lower the total cost [11].

The software testing process basically involves four steps: (1) test planning, aimed at drafting and reviewing the test strategy and test plan; (2) test case design, the preparation and review of test cases and test scripts; (3) execution of the planned tests, following the test cases; and (4) evaluation of the test results, which completes the process and delivers the tested system [10, 11]. An important point in test planning is defining the testing strategy. According to Kaner et al. [11], a test strategy is the set of ideas that guide test design throughout the project. A testing strategy defines the techniques, the test levels and the test types to be performed. A test technique concerns how to test. It may be static (not involving test execution), such as reviews and static analysis, or dynamic (involving execution). Examples of dynamic techniques are black-box testing, based on specifications (or behavior); white-box testing, based on how the program was built; and exploratory testing, based on the skills, insights and experience of the tester, without the need for test scripts. Test stages, or levels, concern when to test, that is, at which stage of the development process a given test should be performed. The most commonly used levels are unit tests, applied to the code by programmers; system tests, which run the system as a whole to validate the execution of its functions; and acceptance tests, performed before the deployment of the software. Finally, test types concern what has to be tested. Each test type focuses on a particular goal: a function to be tested, a non-functional quality characteristic such as reliability or usability, the software structure, or change-related behavior (regression testing). Despite the enormous importance of having a process based on a well-defined testing strategy, it is essential to develop people's skills so that they know how to perform their activities. In short, testing is an activity that requires a professional with a distinctive profile, different from that of the software developer.
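To ground the black-box techniques mentioned above (and taught in the module of Section 4), here is a minimal sketch of equivalence-class and boundary-value test design for a hypothetical function that validates an age field; the function under test and the 0 to 130 valid range are illustrative assumptions, not taken from the paper:

```python
# Illustrative black-box test design: the function under test and the
# 0..130 valid range are hypothetical, chosen only to demonstrate
# equivalence classes and boundary-value analysis.
import unittest

def is_valid_age(age: int) -> bool:
    """Hypothetical system under test: accepts ages from 0 to 130."""
    return 0 <= age <= 130

class AgeFieldBlackBoxTests(unittest.TestCase):
    def test_valid_equivalence_class(self):
        # One representative from the valid partition [0, 130].
        self.assertTrue(is_valid_age(42))

    def test_invalid_equivalence_classes(self):
        # Representatives from the two invalid partitions.
        self.assertFalse(is_valid_age(-5))
        self.assertFalse(is_valid_age(200))

    def test_boundary_values(self):
        # Boundary analysis: exercise both edges of the valid range.
        self.assertTrue(is_valid_age(0))
        self.assertTrue(is_valid_age(130))
        self.assertFalse(is_valid_age(-1))
        self.assertFalse(is_valid_age(131))

if __name__ == "__main__":
    unittest.main()
```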

4. Empirical Study
The methodology proposed in this paper was implemented on a pilot software testing project, titled "Project TaRGeT" (Test and Requirements Generation Tool), in the context of the Exploratory Testing Learning Module, defined as part of a Training Program in Software Testing. This program was implemented by the Software Productivity Laboratory (LabPS, http://labps.cin.ufpe.br) of the National Institute of Science and Technology for Software Engineering (INES, http://www.ines.org.br) at the Informatics Center of the Federal University of Pernambuco (CIn-UFPE, http://www.cin.ufpe.br), in collaboration with the innovation institute CESAR (Recife Center for Advanced Studies and Systems, http://www.cesar.org.br), members of other INES projects, and software industry partners. The program is designed to train undergraduate students in computer science and related areas, and aims to give students the opportunity to experience, in the practice of real projects, the key concepts and processes of software testing, fostering the mastery and dissemination of technical knowledge in the area.

4.1. Plan

The TaRGeT project was classified in the desktop application context, at version 1.0 and maturity level 2 (intermediate). Planning involved the definition of learning objectives and program modules, the capture of projects, clients and volunteers to deliver the trainings, the preparation of the testing environment, student selection and the execution of the pilot project.

The content and strategy of the module were arranged as follows. Educational objectives: teach the concepts, processes and techniques of exploratory testing, together with black-box testing techniques, and also develop oral and written communication skills and teamwork. Tools and techniques: this module focused on learning and applying exploratory testing and black-box testing techniques (Session Based & Charter, and black-box modeling techniques) [12]. TestLink (test management and execution, http://www.teamst.org) and Mantis (incident management, http://www.mantisbt.org) were chosen as the supporting test tools, and dotProject as the tool to manage the test project. Testing process and strategy: derived from the module's testing strategy, the process defines the roles (test manager and tester), the phases (Planning, Design, Session, Brief Meeting and Bug Tracking) and the artifacts related to the session-based exploratory testing activity. Planned content: the project includes the following trainings: dotProject (project management tool); Exploratory Testing (main concepts); Exploratory Testing Session Based & Charter; and Black-Box Modeling Techniques (equivalence classes, boundary analysis, decision tables, use cases).

4.2. Do

4.2.1. Human Capital: In the TaRGeT project, students were organized into two teams of five members, one allocated in the morning (Scan) and the other in the afternoon (Câmbio). In the training program, these teams are heterogeneous, composed of students from different institutions and at different stages of their degrees. Each student took a role on the team while actively participating in all activities and phases of the defined testing process. In the case of the TaRGeT project, because it used exploratory testing, there is no distinction between designer and tester; therefore, each team consisted of four testers and a test manager. LabPS employees and partners were assigned the roles of tutors and consultants; these roles were filled by people with both academic and practical testing experience, often enrolled in graduate programs. A dedicated professional took the role of manager, leading the application of the methodology and the process of assessing the students.

4.2.2. TaRGeT Project Execution: This project was conducted from May 2010 to September 2010. Initially, the Câmbio and Scan teams were introduced to the exploratory testing learning module and oriented about the module's evaluation process. The task given to the students was to test the TaRGeT tool in an exploratory way, following the strategy, process and techniques defined for the module, while observing deadlines and the artifacts to be delivered. Both teams were tasked with testing the same tool, without dividing its functionality between them. The challenge was to learn how to test in an exploratory way that was nonetheless controlled, efficient and creative: exploratory testing, because it stimulates creativity and investigation, challenges the tester to design better and more effective tests. All students had already used TestLink, Mantis and even the TaRGeT tool itself, for generating test cases automatically in a previous project; even so, they had difficulty understanding some features of the tool and analyzing the complexity and criticality level of each feature when scoping and preparing the test plan. Another sticking point for both teams was understanding the concepts and techniques of exploratory testing.
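As a hedged illustration of the session-based approach used in this module, the sketch below models a test session charter and its debrief record; the field names are editorial assumptions loosely based on common Session-Based Test Management practice, not artifacts taken from the TaRGeT project:

```python
# Editorial sketch of session-based exploratory testing records; fields
# are assumptions based on common SBTM practice, not project artifacts.
from dataclasses import dataclass, field

@dataclass
class SessionCharter:
    mission: str                # what to explore and why
    areas: list[str]            # features of the system under test
    duration_minutes: int = 90  # time-boxed session

@dataclass
class SessionDebrief:
    charter: SessionCharter
    bugs_found: list[str] = field(default_factory=list)
    notes: str = ""             # discussed in the "Brief Meeting" phase

charter = SessionCharter(
    mission="Explore test-case generation from use-case documents",
    areas=["document import", "test suite generation"],
)
debrief = SessionDebrief(charter, bugs_found=["crash on empty use-case file"])
```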

Since this type of difficulty had already been foreseen at the time of the module's definition, learning resources directed at solving these problems were put into practice, such as the client's participation in defining the scope, the delivery of trainings, and pointers to books and references by leading names in the field of exploratory testing, all of which helped considerably in carrying out the project activities. The students' difficulties were identified in daily meetings, with discussions about the mistakes made and the identification of new trainings to guide the students in their activities. During the execution of the LM, the students identified other problems as they arose and, with them, the need to seek new solutions and new trainings. One such problem was the lack of standardization, and the students' difficulty of expression, when reporting the bugs found. A new training was therefore added and administered to the students: entitled "How to Report Bugs Effectively", it covered guidelines and the definition of a standardized template for reporting bugs. At the end of the module, the teams presented the project results to the client and to everyone involved in the process. There were delays in some activities because of the difficulties described above and because of the evaluation process, since it was the first time the assessments were applied in the training program. According to the metrics collected on the number of bugs found, and the observations reported by the client and tutors, the results were very satisfactory. It is noteworthy that, although the two teams tested the same tool, the bugs found by team Scan differed from the bugs found by team Câmbio. Compared to the bugs found in an earlier execution of scripted test cases on this same tool, these bugs were more critical and more numerous. This reinforces the value of the exploratory testing strategy in expanding the tester's imagination, helping to identify gaps that are usually ignored or difficult to find. Beyond these results, it was observed during the presentation that the students worked well as a team, demonstrating the acquisition of theoretical and practical knowledge of the testing strategy adopted in the module, and motivation to continue in the testing training program.

4.3. Check

During the execution of the TaRGeT project it was possible to apply the evaluation process defined in the methodology, which enabled a more concrete evaluation of the results obtained against the learning objectives proposed for the module. At the beginning of the LM a diagnostic evaluation was administered, followed by the formative evaluations of content, process and delivery. For each formative assessment conducted, the students received feedback and had the opportunity to improve their learning in the practice of the project activities. The data obtained from the formative evaluations of content gave rise to the graph in Figure 7, where one can observe that, except for the assessments of the dotProject and Black-Box Testing Techniques trainings, the teams improved their grades throughout the process, converging to the same level of learning in the last formative content evaluation. Moreover, these grades are above 7.0, the minimum average required by the program. Figure 7 also shows a drop in the grades of both teams in the evaluation of the Black-Box Testing Techniques training, in which both had similar performance.
This drop can be explained by the complexity of the content being measured and by the fact that it was the hardest part of the module to evaluate; this content is usually consolidated through practical activities.

Figure 7. Formative assessments of content

The graph in Figure 8 shows the evolution of the individual students in relation to the formative and summative assessments of the exploratory testing module.

Figure 8. Formative and summative assessments

The chart shows the results of the formative assessments, represented by the assessments of content, process and delivery, with the average of these evaluations shown in red. The blue color indicates the summative evaluation, which is purely classificatory and was held at the end of the module. Of the 10 students initially allocated to the TaRGeT project, only 6 completed the module. During its execution, four students left the training program: one upon completing an undergraduate degree, another because of moving to a different city, and the other two after being selected for internship programs. The chart shows that the formative assessment grades of all students were above 8.0, whereas in the summative assessment 66% of these students scored below the average of their formative assessments.

4.4. Act

The Exploratory Testing LM was the second module delivered in the Training Program in Software Testing. The first module, entitled "LM Functional Testing, Black Box, via GUI," addressed the basics of software testing and was conducted in two versions, 1.0 and 1.1, at maturity level 1 (basic) and in the desktop context. Version 1.1 improved the module in terms of process and trainings, addressing the difficulties reported by students when testing the FLiP tool. What distinguishes the module discussed here from the "LM Functional Testing, Black Box, via GUI" is the use of a testing approach based on the tester's experience (exploratory testing) and of the Session Based & Charter technique, in which there are no test scripts and test execution takes place within a time box composed of setup, execution and bug investigation. The Exploratory Testing LM was therefore defined as version 1.0, because it was applied for the first time in the Training Program in Software Testing, and at maturity level 2 (intermediate), since its concepts and testing techniques are more evolved than those of the previous module. Moreover, with this learning module it was possible to run the evaluation process proposed in the methodology.

The results do not indicate a need to evolve this module in version or maturity level, but rather in context, towards other types of applications, such as web applications.

5. Conclusions
The aim of this study was to propose a methodology for training in software testing that uses a problem-based learning (PBL) teaching approach to teach software testing in an environment similar to a software factory, involving the practice of real projects and trainings targeted at the content covered in the projects. The objective was achieved, with positive results both in relation to the execution of the learning module and in relation to the test results obtained during the execution of the TaRGeT project. The methodology enabled engagement with concrete and challenging situations, allowing students to experience real software testing situations while exercising technical and non-technical skills through the allocation of roles, decision making, teamwork and the oral presentation of project results. Additionally, the methodology provided instruments that enabled managers to plan and oversee the training program in a much easier and more controlled way. As next steps, LabPS plans new training programs in the area of software testing and, in the near future, in other disciplines of the development process, such as engineering and project management. Although not yet proven, it is believed that the methodology proposed in this paper can be applied to new disciplines with minor adjustments. This work was partially supported by the National Institute of Science and Technology for Software Engineering (INES), funded by CNPq and FACEPE, grants 573964/2008-4 and APQ-1037-1.03/08.

6. References
[1] J. R. Savery and T. M. Duffy, "Problem Based Learning: An instructional model and its constructivist framework", Educational Technology, 1995.
[2] M. Peterson, "Skills to Enhance Problem-based Learning", Med Educ Online [serial online], 2(3), 1997.
[3] P. Tynjälä, "Towards expert knowledge? A comparison between a constructivist and a traditional learning environment in the university", Int. J. Educ. Res., v. 31, pp. 357-442, 1999.
[4] S. C. dos Santos, C. M. Batista, A. P. Cavalcanti, J. O. Albuquerque and S. R. L. Meira, "Applying PBL in Software Engineering Education", CSEET 2009, Hyderabad, India, 2009.
[5] R. Delisle, How to Use Problem-Based Learning in the Classroom, ASCD, Alexandria, Virginia, USA, 1997.
[6] D. Boud and G. Feletti, The Challenge of Problem-Based Learning, Kogan Page, London, 1998.
[7] B. J. Duch et al., The Power of Problem-Based Learning: A Practical "How To" for Teaching Undergraduate Courses in Any Discipline, Stylus Publishing, Sterling, Virginia, 2001.
[8] G. H. S. Alexandre, "Smart Education: a web tool for assessment and monitoring of learning", Master's Thesis, CESAR, Recife, 2008.
[9] B. S. Bloom et al., Handbook of Formative and Summative Assessment of School Learning, Pioneira, São Paulo, 1st edition, 1983.
[10] G. J. Myers, The Art of Software Testing, John Wiley & Sons, New Jersey, USA, 1979.
[11] C. Kaner, J. Bach and B. Pettichord, Lessons Learned in Software Testing: A Context-Driven Approach, John Wiley & Sons, New York, 2002.
[12] D. Graham et al., Foundations of Software Testing: ISTQB Certification, Thomson Learning, London, 2007.
