
DISCUSSIONS AND CLOSURES

Discussion of "Impact of Change Orders on Labor Efficiency for Electrical Construction" by Awad S. Hanna, Jeffrey S. Russell, Erik V. Nordheim, and Matthew J. Bruggink
July/August 1999, Vol. 125, No. 4, pp. 224–232.

John J. Farbarik, P.E.1


1 PhD, Construction Management and Claims Consultant, 14837 Olympic View Lp. Rd. NW, Silverdale, WA 98383. E-mail: farbarik@pacific.telebyte.com

The objective of this study was to quantify the macroeffects that change orders have on labor efficiency in the electrical construction industry. The results of this rather subjective study mistakenly indicate that all labor inefficiency is essentially the result of the number of change-order labor hours and the opinion of the contractor as to whether the project was impacted or not impacted by change orders. The bases for the statistical analyses and for the recommendations by the authors are subjective evaluations by contractors as to whether individual projects were impacted or unimpacted by change orders. The evaluations, and therefore the resulting data, were biased. This biased data then formed the basis for a statistical analysis of both the subjective and the actual cost data. The discusser considers the unqualified use of such subjective data unacceptable.

The authors state on page 225, "The Ibbs and Allen (1995) study failed to support the concept that changes implemented late in a project are implemented less efficiently than [sic] changes that occur early in a project." The authors later state, on page 227, that several researchers in addition to the writers have indicated that changes issued later in the projects tend to have a more negative impact than changes issued when the project is 50% complete (Ibbs and Allen 1995). Did the Ibbs and Allen study support that concept or did it not? The authors should be more consistent. The authors also state, "Based on interviews and published papers, many industry professionals believe that changes implemented late in a project cause a greater loss of labor efficiency." The fact is that Ibbs and Allen were not able to prove this; and the discusser would assume that, since no pertinent references were provided, no one else has been able to prove it either. The fact that the authors discussed the subject with unnamed industry professionals has little meaning in an archival journal.

The authors state that "...research data show a high correlation between projects that are impacted by changes and schedule compression." No such correlation is presented in this article. The authors present, in Fig. 3, a series of bar charts and statistical data under the title "Hypothesis Testing for Factors Affecting Loss of Efficiency." They go on to state, "Fig. 3 illustrates the possible impact that change orders have in terms of causing other productivity-related problems." Fig. 3, unfortunately, shows no such relationship between change orders and other productivity-related problems. Fig. 3 shows that, in the contractors' opinion, there is a strong relationship between Change Order Hours and Impacted Projects, and between an Increase in Original Schedule and Impacted Projects.

It fails to show any relationship between Change Order Hours and an Increase in Original Schedule, or between Change Order Hours and any other productivity-related problem. It is noted again for emphasis that the term "impacted" is assigned based on the subjective evaluation of the contractors surveyed. The authors also state, "The survey showed that on impacted projects there are direct relationships between change orders and schedule compression, trade stacking, and sequence of work (Tables 1 and 2)." Table 1 (Impacts on Labor Efficiency for Impacted Projects) and Table 2 (Impacts on Labor Efficiency for Unimpacted Projects) are tables presenting the contractors' opinions as to the relative effect of various impacts on job efficiency. They show, for example, that in the opinion of those contractors questioned, change orders, schedule compression, and sequencing of work have the greatest effect on efficiency on an impacted project, and that change orders, weather conditions, and trade stacking have the greatest impact on an unimpacted project. There is no data presented that shows any relationship, direct or otherwise, between change orders and schedule compression, trade stacking, or sequence of work, or any other category of impact.

The authors made a fundamental error in their discussion of Fig. 3 and Tables 1 and 2 by stating, in effect, that a strong relationship between each of two independent variables and a dependent variable implies a strong relationship between the two independent variables. This is an error both in statistical data interpretation and in basic logic ("All cows have four legs. All horses have four legs. All cows are horses."); a short simulation following this discussion illustrates the point.

With reference to Tables 1 and 2: electrical contractors by their very nature must rely on other contractors to maintain the progress of the work. It is these other contractors, the prime contractors, who set and revise construction schedules, often on a daily basis. These schedules, often even in their original formulation, provide for subcontractor trade stacking and schedule compression. It is not the subcontractor's fault; it is that of the state in which they live and work. And neither is it always caused by change orders. Another problem facing subcontractors, and especially electrical subcontractors, is that, as is generally known, their profit margins are extremely low. They often bid jobs with very low combined profit and contingency margins. As a result, every schedule revision (whether effected by owner change orders, by schedule revisions by the prime contractor, or by the availability of labor) has an impact on the subcontractor's profit margin, his delta. In the discusser's experience, subcontractors almost always place blame for this negative impact on the owner and on the changes issued by the owner. They seldom blame themselves, and they have nothing to gain by blaming the prime contractor.
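To illustrate the logical point above, the following short simulation (hypothetical data, not drawn from the study under discussion) constructs two variables that are each strongly correlated with a dependent variable yet are entirely uncorrelated with each other:

import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two independent factors, e.g., change-order hours and weather delays.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# An outcome driven by both factors, e.g., loss of labor efficiency.
y = x1 + x2 + rng.normal(scale=0.5, size=n)

# Each factor correlates strongly with the outcome (roughly 0.65)...
print(np.corrcoef(x1, y)[0, 1])
print(np.corrcoef(x2, y)[0, 1])
# ...yet the two factors are uncorrelated with each other (roughly 0.0).
print(np.corrcoef(x1, x2)[0, 1])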


The discusser is very impressed with the various statistical analyses and the resulting predictive model presented by the authors. However, the discusser fails to understand how such a model might be considered valid when 5 of its 11 terms are based on the contractor's subjective evaluation as to whether the project has or has not been impacted by change orders, when 4 of the 11 terms are based on the estimated change-order hours, and when there is no consideration of the multitude of other factors contributing to cost/labor overruns. A survey of opinions, whether by contractors, owners, or consultants, is an opinion survey, nothing more and nothing less, and is therefore biased. It must be considered biased regardless of the brilliance of the related statistical analyses. The conclusion must be drawn that, regardless of the mathematical validation and robustness of the model presented in this study, it is biased and should never be utilized by any manager attempting to make an objective evaluation of the effect of change orders on labor efficiency.

Closure to "Impact of Change Orders on Labor Efficiency for Electrical Construction" by Awad S. Hanna, Jeffrey S. Russell, Erik V. Nordheim, and Matthew J. Bruggink
July/August 1999, Vol. 125, No. 4, pp. 224–232.

Awad S. Hanna1 and Jeffrey S. Russell, P.E.2


1 Professor, Univ. of Wisconsin-Madison, Dept. of Civil and Environmental Engineering, 2314 Engineering Hall, 1415 Engineering Dr., Madison, WI 53706.
2 Professor, Univ. of Wisconsin-Madison, Dept. of Civil and Environmental Engineering, 2304 Engineering Hall, 1415 Engineering Dr., Madison, WI 53706.

The initial impression of the writers, in regard to the discusser's comments, is that the issues addressed were misconstrued and, contrary to the notion of the discusser, are addressed in the article itself. The writers feel other concerns that were raised to be immaterial to the subject. For example, the discusser's main concern was that 5 of the 11 terms used in the model are subjective. However, on pages 229 and 230, Eq. (1) and Eq. (2) present the writers' model. The model consists of four factors, not five, and all of these factors are quantitative. These factors are percent of change (quantitative), size of all changes in work hours (quantitative), and the number of years of experience of the contractor's project manager (quantitative). A quantitative variable is called such not out of opinion, but rather out of pure definition.

The writers believe that the discusser may have incorrectly confused the impact regression model with the subjective evaluation presented in Table 1 (page 228). Indisputably, a subjective evaluation of research based on unbiased data is expected in any article of this nature. Another issue addressed by the discusser was his reference to the writers' definition of impact as subjective. The writers clearly defined the term "impacted" on pages 226 and 227 by stating, "...a project is considered impacted by change orders when the actual and planned cumulative work hours (or S curves) vary substantially." To reinforce this point, the writers presented a sample of an impacted project from the database, shown in Fig. 2 (page 226). In addition, the writers presented, on page 227, five other characteristics of impacted projects. To avoid the issue of subjectivity, the writers stated in the last paragraph of page 226, "it should be noted that an impacted project should fit at least the first criterion on the following list...," etc. Where the discusser's reference to the writers' definition of "impacted" comes from is unclear. The writers agree that using biased data as a basis for statistical analysis is unacceptable; that is why great lengths were taken to ensure objective analysis. To explicate this point of subjectivity further, because it seems to be a major concern to the discusser, it would be beneficial to compare the writers' definition with other highly respected researchers' works. In Leonard's research, projects were defined as impacted because these projects were taken from a firm that specialized in the preparation and evaluation of claims (Leonard et al. 1991). The objectivity of his data, therefore, is implied, and no further explanation of the definition is included. Thomas presented a regression function with an indicator variable similar to the writers' study. The variable "change indicator" was represented with a 1 for change work and a 0 for work other than change (Thomas and Napolitan 1995); a sketch of how such an indicator enters a regression follows below. Ibbs presented a correlation between change and loss of productivity (Ibbs and Allen 1995). He presented no characteristics of projects impacted by changes. The writers' definition and method are consistent with all three researchers'. Indeed, the writers' explanation of both what an impacted project is and the methodology used is clearer than that of any other researcher found while investigating the matter. The writers' effort to provide clarity of the methodology seems to be perceived as bias by the discusser. However, clarity assures objectivity by diminishing the possibility of random assignments.
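To illustrate how such an indicator variable enters a regression, the following minimal sketch (hypothetical numbers, in the spirit of Thomas and Napolitan's 0/1 change indicator, not a reproduction of the writers' model) fits a productivity equation with a change dummy:

import numpy as np

# Hypothetical observations: 1 = change-order work, 0 = base-contract work.
change = np.array([0, 0, 0, 0, 1, 1, 1, 1])
hours = np.array([10.0, 12.0, 9.0, 11.0, 10.5, 13.0, 9.5, 12.5])
productivity = np.array([1.00, 0.98, 1.03, 1.01, 0.80, 0.78, 0.75, 0.82])

# Least-squares fit of: productivity = b0 + b1*hours + b2*change.
X = np.column_stack([np.ones_like(hours), hours, change])
b, *_ = np.linalg.lstsq(X, productivity, rcond=None)

# b[2] estimates the average productivity shift attached to change work.
print(b)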

Fig. 1. Variables for impacted projects and correlation coefficients (first column of numbers: correlation to change orders)

The root causes of change-order impact that the writers identified are also under attack by the discusser. He states that, based on his experience, the negative impact felt by subcontractors, such as electrical contractors, resulted from their tendency to bid low. Based on the data collected by the first writer for over 200 projects over five years, the root cause of change orders and their impact is the tendency for some owners to save design fees. As a result, they end up with incomplete designs and design details, and with poor ratings of all project documents. The discusser is certainly qualified in this area to make observations based on his experience, but it would have been helpful to know where his rebuttals were coming from (i.e., what research, what article, etc.). The writers did not base their research on their experience alone, but on 61 participants who are also qualified to make subjective observations in this area. The discusser expressed that his experience is in conflict with the writers' findings, and the writers would have liked to know why. Indeed, the writers would have liked to know why new findings, which happen to be different from the discusser's experience, are automatically biased.

The discusser targeted other issues in the writers' article; one was the issue addressed as timing of change. The discusser feels that there were some discrepancies in the writers' research. There were no inconsistencies in the writers' paper; it was simply a matter of principles of technical writing. The principle of technical writing dictates that we use the reference that we reviewed and not the source reference. For example, when Ibbs and Allen (1995) refer to others, we refer to Ibbs and Allen and not to the others. For more reference on timing of change, Leonard et al. (1991) stated, "Timing of issuance and processing time are important factors influencing change order.... Such factors were found to be significant in 65% and 45% of the cases examined, respectively...." This is consistent with what the writers' paper said.

The conclusions of the writers' paper were also under discussion. The discusser seems to have misunderstood the conclusions of the paper. Specifically, Table 1 and Table 2 show an average response rating on the impact of schedule compression of 3.21 for impacted projects and 1.22 for unimpacted projects. Clearly, 3.21 is higher than 1.22, and the difference is statistically significant using ANOVA with 90% confidence. The writers used methods of statistics that are consistent with basic rules of mathematics to make their findings. The approach used by the writers to measure the strength of the relationship between two variables, x and y, is the coefficient of linear correlation, r. Given n pairs of observations (x_i, y_i), the sample correlation coefficient r was computed as

r = \frac{S_{xy}}{\sqrt{S_{xx} S_{yy}}}

where

S_{xy} = \sum xy - \frac{(\sum x)(\sum y)}{n}, \qquad S_{xx} = \sum x^{2} - \frac{(\sum x)^{2}}{n}, \qquad S_{yy} = \sum y^{2} - \frac{(\sum y)^{2}}{n}

Specifically, from these formulas we got the results shown in Fig. 1 regarding correlation (correlation data are also found in Tables 1 and 2, page 228 of the paper). These formulas, which are basic statistical correlation functions, formed the foundation of the writers' conclusions of correlation.
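A minimal sketch of this computation (with hypothetical data; the function simply mirrors the sum-of-squares form above) is:

import numpy as np

def pearson_r(x, y):
    # Sample correlation in the S_xy / sqrt(S_xx * S_yy) form.
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    s_xy = (x * y).sum() - x.sum() * y.sum() / n
    s_xx = (x ** 2).sum() - x.sum() ** 2 / n
    s_yy = (y ** 2).sum() - y.sum() ** 2 / n
    return s_xy / np.sqrt(s_xx * s_yy)

# Hypothetical pairs, e.g., change-order hours vs. percent efficiency loss.
x = [120, 340, 560, 800, 950]
y = [2.0, 6.0, 11.0, 15.0, 21.0]
print(pearson_r(x, y))  # identical to np.corrcoef(x, y)[0, 1]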

In this study, research, statistical analysis, and years of data collected from relevant sources were used. The model presented in the paper was tested via new data and via data from the existing literature. The model is appropriate considering the limitations presented on page 230. The discusser's conclusions were inconsistent with the ones actually stated in the article and are, frankly, the only inconsistencies found at all in regard to the writers' research.

References
Ibbs, C. W., and Allen, W. E. (1995). "Quantitative impacts of project change." Source Document 108, Construction Industry Inst., Univ. of Texas at Austin, Austin, Tex.
Leonard, C., Moselhi, O., and Fazio, P. (1991). "Impact of change orders on construction productivity." Can. J. Civ. Eng., 18, 484–492.
Thomas, H. R., and Napolitan, C. L. (1995). "Quantitative effects of construction changes on labor productivity." J. Constr. Eng. Manage., 121(3), 290–296.

Discussion of "Customer Satisfaction in Home Building" by Željko M. Torbica and Robert C. Stroh
January/February 2001, Vol. 127, No. 1, pp. 82–86.

Howard Bashford1; Anil Sawhney2; and Ken Walsh3


1 Associate Professor, Del E. Webb School of Construction, Arizona State Univ., P.O. Box 870204, Tempe, AZ 85287-0204.
2 Associate Professor, Del E. Webb School of Construction, Arizona State Univ., P.O. Box 870204, Tempe, AZ 85287-0204.
3 Associate Professor, Del E. Webb School of Construction, Arizona State Univ., P.O. Box 870204, Tempe, AZ 85287-0204.


The authors are to be congratulated for an excellent article on a very timely subject. JCEM is to be commended for publishing an article concerning the residential construction sector of the construction industry, certainly an area where additional research and development is needed considering the size of the residential construction industry. The authors have developed their investigation based upon an instrument for measuring home-buyer satisfaction, HOMBSAT. The instrument was developed through an empirical process and tested for reliability and validity. The three dimensions of HOMBSAT are House Design, House Quality, and Builder Service Performance. The development of this instrument was based upon the authors' stated conclusion that there are no commonly accepted methods of measuring customer satisfaction in the construction industry. The discussers wish to point out that there are commonly accepted methods of measuring customer satisfaction in the construction industry, especially the residential sector of the construction industry. At least one generalized method for measuring customer satisfaction has been reported in the literature, that being quality function deployment (QFD) (Babra, unpublished, 1998; Oswald 1993). However, extending far beyond these quality models is the J.D. Power and Associates New-Home Builder Customer Satisfaction Study. J.D. Power and Associates conducted its first new-home-buyer customer satisfaction survey in 1999. The survey covered only the Phoenix, Arizona, market. In 2000, the study was expanded to six major U.S. markets: Chicago, Dallas/Fort Worth, Houston, Las Vegas, Phoenix, and Washington, D.C.


Table 1. Comparison of Results from Torbica and Stroh and from J.D. Power and Associates

  Torbica and Stroh                             J.D. Power and Associates
  Quality indicator   Relative importance       Quality indicator                           Significance of   Significance of
  in HOMBSAT          of predictor(a)                                                       indicator (%)     group (%)
  ---------------------------------------------------------------------------------------------------------------------------
  Customer service    0.648                     Builder's customer service representative        24                48
                                                Builder's sales staff                            17
                                                Builder's design center                           7
  House design        0.277                     Physical design elements                          8                 8
  House quality       Not defined               Quality of workmanship/materials                 26                38
                                                Price/value                                      12
  Other elements                                Recreational facilities                           3
  not considered                                Location                                          3

(a) Reported as the regression coefficient in the regression equation.

The 2000 survey was based upon responses from approximately 25,500 buyers of newly constructed homes in these six major markets (J.D. Power 2000). Of particular interest in this case is the comparison of the factors that J.D. Power and Associates found to drive overall builder satisfaction with those found by Torbica and Stroh. Torbica and Stroh found the factors (referred to in the paper as dimensions) identified as House Design, House Quality, and Builder Service Performance to be the key indicators of customer satisfaction. Exactly what the customer is satisfied with, whether it is the home they have purchased or the homebuilder, is not clearly defined. However, it is clear that many items, including the home-buying experience, the home, and the homebuilder, are intertwined. The authors also calculated the relative importance of each of the indicators by comparing the regression analysis coefficients for each of the indicators. They concluded that service, with a regression coefficient of 0.648, was about twice as significant as home design, with a regression coefficient of 0.277, in determining customer satisfaction. The authors also calculated the overall performance of the homebuilders included in the survey, finding that home-buyers were least satisfied with service, more satisfied with house quality, and most satisfied with house design. J.D. Power and Associates does not report on their method of analyzing the results of their questionnaire, nor do they report what questions they ask. One of the discussers acquired a new home in 1999 and hence received one of the J.D. Power questionnaires. The questionnaire consists of 90 questions related to the overall home-buying experience. The J.D. Power and Associates report on their 2000 survey indicated eight factors that drive overall builder satisfaction, those being (listed from most significant to least significant) the quality of the workmanship/materials, the builder's customer service representative, the builder's sales staff, the perceived ratio between price paid and value received, the physical design elements of the home, the builder's design center, community recreational facilities, and location. The report does not give any other details. The discussers have prepared Table 1 as a summary comparison of the results of the authors' work and the J.D. Power and Associates 2000 New-Home-Builder Customer Satisfaction Survey. The discussers have grouped the J.D. Power and Associates factors to match similar dimensions of the authors' study.

As shown in Table 1, the two methods did produce substantially different results. The authors found customer service to be about twice as significant as design, whereas J.D. Power and Associates found customer service to be six times as significant as design. The authors did not draw a conclusion about the significance of the quality of the home, whereas J.D. Power and Associates found quality of workmanship and materials to be nearly of equal importance to customer service, and almost five times as important as design (the arithmetic behind these ratios is sketched below). The discussers do not feel that these differences significantly affect the points raised or the conclusions of the authors' article. Indeed, these differences reinforce the conclusions of the paper. In particular, both studies show that customer service is the most important factor in achieving home-buyer satisfaction. Both studies also reinforce the fact that homebuilders must influence multiple factors in their product delivery process if they are to achieve customer satisfaction.
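The ratios quoted above follow directly from the numbers summarized in Table 1; as a sketch of the arithmetic (using only values reported in the two studies):

# Torbica and Stroh: relative importance via regression coefficients.
service_ts, design_ts = 0.648, 0.277
print(service_ts / design_ts)   # about 2.3: service roughly twice design

# J.D. Power and Associates: grouped significance percentages.
service_jd, design_jd, quality_jd = 48, 8, 38
print(service_jd / design_jd)   # 6.0: service six times design
print(quality_jd / design_jd)   # 4.75: quality almost five times design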

References
Oswald, T. H., and Burati, J. L. (1993). "Adaptation of quality function deployment to engineering and construction project development." Source Document No. 97, The Construction Industry Institute, Clemson Univ., Clemson, S.C.
J. D. Power and Associates. (2000). New-home builder customer satisfaction study, Agoura Hills, Calif. Report accessible at www.jdpa.com.

Closure to "Customer Satisfaction in Home Building" by Željko M. Torbica and Robert C. Stroh
January/February 2001, Vol. 127, No. 1, pp. 82–86.

Željko M. Torbica1 and Robert C. Stroh2


1 Assistant Professor, Dept. of Construction Management, Florida International Univ., 10555 W. Flagler St., Miami, FL 33174. E-mail: torbica@eng.fiu.edu
2 Director, Shimberg Center for Affordable Housing, Univ. of Florida, M. E. Rinker, Sr., School of Building Construction, P.O. Box 115703, Gainesville, FL 32611-5703. E-mail: stroh@ufl.edu

The writers would like to thank the discussers for the kind remarks regarding the quality of their article. This is in line with the overwhelmingly positive responses and comments that have been addressed directly to the writers since the article appeared in JCEM.


It is encouraging that such an important issue, namely customer satisfaction, has finally begun to gain momentum, catching the attention of both construction practitioners and academicians. The writers would like to comment on two points that were made in the discussion, one as to the relevance of quality function deployment (QFD) to our study methodology, and the other regarding the comparison of our study's findings to those of the J.D. Power and Associates (JDPA) study.

There is a key difference between the methodology that was used in our study and the QFD methodology. The HOMBSAT instrument is concerned with measuring customer satisfaction, while the QFD methodology focuses on how to identify and prioritize customers' expectations, needs, and requirements, and subsequently how to transform them into the delivered product/service (Akao 1990; Bicknell and Bicknell 1995; Hauser and Clausing 1988). In that sense, QFD takes place early in the product/service development process, whereas the measurement of customer satisfaction takes place after the customer has started to experience the given product/service.

The second comment addresses the attempt made by the discussers to directly compare findings of our study with those of the J.D. Power and Associates 2000 New-Home Builder Customer Satisfaction Study. Very little is known about the methodology used in the J.D. Power and Associates study. For example, we don't know how the percent contributions of individual factors were calculated. The most important factor in the JDPA 2000 study was Quality of Workmanship/Materials, with a 26% contribution to overall satisfaction, whereas in the 2001 study the very same factor ended up as the fourth most important, with only a 14% contribution. In addition, we can only speculate about whether the percent contributions as reported in the JDPA study are the same thing as the relative importance reported in our study. As a result, it makes no sense to expect that the ratio of relative importance of different factors from one study would replicate ratios found in another study. Ignoring that fact, and trying to directly compare ratios, would only result in substantially different results.

In conclusion, the data collected by surveys and other empirical designs are of little use unless their reliability and validity can be determined. Reliability focuses on the extent to which empirical indicators provide consistent results across repeated measurements (Carmines and Zeller 1979), and validity indicates the degree to which an instrument measures what it purports to measure (Bohrnstedt 1970). The HOMBSAT instrument passed rigorous tests demonstrating both high reliability and validity (see Torbica and Stroh 2000). This is not to imply that our study's results are considered to be more accurate or more appropriate than those of the JDPA study. Strictly speaking, neither study proves anything; however, both studies suggest that service appears to be the most influential factor in shaping home-buyer satisfaction.
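As an illustration of what an internal-consistency check involves, the following sketch computes Cronbach's alpha on hypothetical ratings; this is one common reliability statistic, and the tests actually applied to HOMBSAT are those reported in Torbica and Stroh (2000), not necessarily this one:

import numpy as np

def cronbach_alpha(ratings):
    # ratings: respondents (rows) by survey items (columns).
    ratings = np.asarray(ratings, float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 satisfaction ratings from five respondents on three items.
ratings = [[4, 5, 4],
           [3, 3, 4],
           [5, 5, 5],
           [2, 3, 2],
           [4, 4, 5]]
print(cronbach_alpha(ratings))  # closer to 1.0 means more consistent items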

Bibliography
J. D. Power and Associates. (2001). New-home builder customer satisfaction study, Agoura Hills, Calif. Report accessible at www.jdpa.com.

References
Akao, Y. (1990). Quality function deployment: Integrating customer requirements into product design, Productivity Press, Cambridge, Mass.
Bicknell, B. A., and Bicknell, K. D. (1995). The road map to repeatable success: Using QFD to implement change, CRC Press, Boca Raton, Fla.
Bohrnstedt, G. W. (1970). "Reliability and validity assessment in attitude measurement." Attitude measurement, G. F. Summers, ed., Rand McNally, Chicago.
Carmines, E. G., and Zeller, R. A. (1979). Reliability and validity assessment, Sage, Beverly Hills, Calif.
Hauser, J. R., and Clausing, D. (1988). "The house of quality." Harvard Bus. Rev., 66(3), 63–73.
Torbica, Ž. M., and Stroh, R. C. (2000). "HOMBSAT: An instrument for measuring home-buyer satisfaction." Qual. Manage. J., 7(4), 32–44.

