
CONTENTS

LIST OF CONTRIBUTORS ix

EDITORIAL BOARD xi

STATEMENT OF PURPOSE AND REVIEW PROCEDURES xiii

EDITORIAL POLICY AND MANUSCRIPT FORM GUIDELINES xv

INTRODUCTION
Marc J. Epstein and John Y. Lee xvii

NEW DIRECTIONS IN MANAGEMENT ACCOUNTING RESEARCH:
INSIGHTS FROM PRACTICE
Frank H. Selto and Sally K. Widener 1

THE PROFIT IMPACT OF VALUE CHAIN RECONFIGURATION:
BLENDING STRATEGIC COST MANAGEMENT (SCM) AND
ACTION-PROFIT-LINKAGE (APL) PERSPECTIVES
John K. Shank, William C. Lawler and Lawrence P. Carr 37

THE MEASUREMENT GAP IN PAYING FOR PERFORMANCE:
ACTUAL AND PREFERRED MEASURES
Jeffrey F. Shields and Lourdes Ferreira White 59

AN EMPIRICAL EXAMINATION OF COST ACCOUNTING PRACTICES
USED IN ADVANCED MANUFACTURING ENVIRONMENTS
Rosemary R. Fullerton and Cheryl S. McWatters 85

THE INTERACTION EFFECTS OF LEAN PRODUCTION MANUFACTURING
PRACTICES, COMPENSATION, AND INFORMATION SYSTEMS ON
PRODUCTION COSTS: A RECURSIVE PARTITIONING MODEL
Hian Chye Koh, Khim Ling Sim and Larry N. Killough 115

COMPENSATION STRATEGY AND ORGANIZATIONAL PERFORMANCE:
EVIDENCE FROM THE BANKING INDUSTRY IN AN EMERGING ECONOMY
C. Janie Chang, Chin S. Ou and Anne Wu 137

ACCOUNTING FOR COST INTERACTIONS IN DESIGNING PRODUCTS
Mohamed E. Bayou and Alan Reinstein 151

RELATIONSHIP QUALITY: A CRITICAL LINK IN MANAGEMENT
ACCOUNTING PERFORMANCE MEASUREMENT SYSTEMS
Jane Cote and Claire Latham 171

MEASURING AND ACCOUNTING FOR MARKET PRICE RISK TRADEOFFS
AS REAL OPTIONS IN STOCK FOR STOCK EXCHANGES
Hemantha S. B. Herath and John S. Jahera Jr. 191

CONNECTING CONCEPTS OF BUSINESS STRATEGY AND COMPETITIVE
ADVANTAGE TO ACTIVITY-BASED MACHINE COST ALLOCATIONS
Richard J. Palmer and Henry H. Davis 219

CHOICE OF INVENTORY METHOD AND THE SELF-SELECTION BIAS
Pervaiz Alam and Eng Seng Loh 237

CORPORATE ACQUISITION DECISIONS UNDER DIFFERENT
STRATEGIC MOTIVATIONS
Kwang-Hyun Chung 265

THE BALANCED SCORECARD: ADOPTION AND APPLICATION
Jeltje van der Meer-Kooistra and Ed G. J. Vosselman 287
LIST OF CONTRIBUTORS

Pervaiz Alam – College of Business Administration, Kent State University, Ohio, USA
Mohamed E. Bayou – School of Management, University of Michigan, Dearborn, Michigan, USA
Lawrence P. Carr – F. W. Olin Graduate School of Management, Babson College, Massachusetts, USA
C. Janie Chang – Department of Accounting and Finance, San Jose State University, California, USA
Kwang-Hyun Chung – Lubin School of Business, Pace University, New York, USA
Jane Cote – School of Accounting, Information Systems and Business Law, Washington State University, Washington, USA
Henry H. Davis – Lumpkin College of Business and Applied Sciences, Eastern Illinois University, Illinois, USA
Rosemary R. Fullerton – School of Accountancy, Utah State University, Utah, USA
Hemantha S. B. Herath – Department of Accounting and Finance, Brock University, Canada
John S. Jahera Jr. – College of Business, Auburn University, Alabama, USA
Larry N. Killough – Virginia Polytechnic Institute and State University, USA
Hian Chye Koh – Nanyang Technological University, Singapore
Claire Latham – School of Accounting, Information Systems and Business Law, Washington State University, Washington, USA
William C. Lawler – F. W. Olin Graduate School of Management, Babson College, Massachusetts, USA
Eng Seng Loh – Business Intelligence Group, Caterpillar Inc., USA
Cheryl S. McWatters – University of Alberta, Canada
Jeltje van der Meer-Kooistra – University of Groningen, The Netherlands
Chin S. Ou – Department of Accounting, National Chung Cheng University, Taiwan
Richard J. Palmer – Lumpkin College of Business and Applied Sciences, Eastern Illinois University, Illinois, USA
Alan Reinstein – School of Business, Wayne State University, Michigan, USA
Frank H. Selto – University of Colorado at Boulder, Colorado, USA and University of Melbourne, Australia
John K. Shank – F. W. Olin Graduate School of Management, Babson College, Massachusetts, USA
Jeffrey F. Shields – School of Business, University of Southern Maine, Maine, USA
Khim Ling Sim – School of Business, Western New England College, Massachusetts, USA
Ed G. J. Vosselman – Erasmus University Rotterdam, The Netherlands
Lourdes Ferreira White – Merrick School of Business, University of Baltimore, Maryland, USA
Sally K. Widener – Rice University, Texas, USA
Anne Wu – School of Accounting, National Chengchi University, Taiwan
EDITORIAL BOARD

Thomas L. Albright – Culverhouse School of Accountancy, University of Alabama
Jacob G. Birnberg – University of Pittsburgh
Germain B. Boer – Vanderbilt University
William J. Bruns, Jr. – Harvard University
Peter Chalos – University of Illinois, Chicago
Chee W. Chow – San Diego State University
Donald K. Clancy – Texas Tech University
Robin Cooper – Emory University
Srikant M. Datar – Harvard University
Nabil S. Elias – University of North Carolina, Charlotte
K. J. Euske – Naval Postgraduate School
Eric G. Flamholtz – University of California, Los Angeles
George J. Foster – Stanford University
James M. Fremgen – Naval Postgraduate School
Eli M. Goldratt – Avraham Y. Goldratt Institute
John Innes – University of Dundee
Fred H. Jacobs – Michigan State University
H. Thomas Johnson – Portland State University
Larry N. Killough – Virginia Polytechnic Institute
Thomas P. Klammer – University of North Texas
C. J. McNair – Babson College
James M. Reeve – University of Tennessee, Knoxville
Jonathan B. Schiff – Fairleigh Dickinson University
John K. Shank – Dartmouth College
Barry H. Spicer – University of Auckland
George J. Staubus – University of California, Berkeley
Wilfred C. Uecker – Rice University
Lourdes White – University of Baltimore
S. Mark Young – University of Southern California
STATEMENT OF PURPOSE AND REVIEW
PROCEDURES

Advances in Management Accounting (AIMA) is a professional journal whose
purpose is to meet the information needs of both practitioners and academicians.
We plan to publish thoughtful, well-developed articles on a variety of current topics
in management accounting, broadly defined.
Advances in Management Accounting is to be an annual publication of quality
applied research in management accounting. The series will examine areas of
management accounting, including performance evaluation systems, accounting
for product costs, behavioral impacts on management accounting, and innovations
in management accounting. Management accounting includes all systems designed
to provide information for management decision making. Research methods will
include survey research, field tests, corporate case studies, and modeling. Some
speculative articles and survey pieces will be included where appropriate.
AIMA welcomes all comments and encourages articles from both practitioners and
academicians.
Review Procedures
AIMA intends to provide authors with timely reviews clearly indicating the
acceptance status of their manuscripts. The results of initial reviews normally
will be reported to authors within eight weeks from the date the manuscript is
received. Once a manuscript is tentatively accepted, the prospects for publication
are excellent. The author(s) will be expected to work with the corresponding
Editor, who will act as a liaison between the author(s) and the reviewers to resolve
areas of concern. To ensure publication, it is the author’s responsibility to make
necessary revisions in a timely and satisfactory manner.

EDITORIAL POLICY AND MANUSCRIPT
FORM GUIDELINES

1. Manuscripts should be typewritten and double-spaced on 8 1/2″ by 11″ white
paper. Only one side of the paper should be used. Margins should be set to
facilitate editing and duplication except as noted:
(a) Tables, figures, and exhibits should appear on a separate page. Each should
be numbered and have a title.
(b) Footnotes should be presented by citing the author’s name and the year of
publication in the body of the text; for example, Ferreira (1998), Cooper
and Kaplan (1998).
2. Manuscripts should include a cover page that indicates the author’s name and
affiliation.
3. Manuscripts should include on a separate lead page an abstract not exceeding
200 words. The author’s name and affiliation should not appear on the abstract.
4. Topical headings and subheadings should be used. Main headings in the
manuscript should be centered; secondary headings should be flush with the
left-hand margin. (As a guide to usage and style, refer to William Strunk,
Jr., and E. B. White, The Elements of Style.)
5. Manuscripts must include a list of references which contain only those works
actually cited. (As a helpful guide in preparing a list of references, refer to Kate
L. Turabian, A Manual for Writers of Term Papers, Theses, and Dissertations.)
6. In order to be assured of anonymous review, authors should not identify them-
selves directly or indirectly. Reference to unpublished working papers and
dissertations should be avoided. If necessary, authors may indicate that the
reference is being withheld for the reason cited above.
7. Manuscripts currently under review by other publications should not be sub-
mitted. Complete reports of research presented at a national or regional confer-
ence of a professional association and “State of the Art” papers are acceptable.
8. Four copies of each manuscript should be submitted to John Y. Lee at the
address below under Guideline 11.
9. A submission fee of $25.00, made payable to Advances in Management Ac-
counting, should be included with all submissions.
10. For additional information regarding the type of manuscripts that are desired,
see “AIMA Statement of Purpose.”

11. Inquiries concerning Advances in Management Accounting may be directed
to either one of the two editors:

Marc J. Epstein
Jones Graduate School of Administration
Rice University
Houston, Texas 77251–1892

John Y. Lee
Lubin School of Business
Pace University
Pleasantville, NY 10570–2799
NEW DIRECTIONS IN
MANAGEMENT ACCOUNTING
RESEARCH: INSIGHTS
FROM PRACTICE

Frank H. Selto and Sally K. Widener

ABSTRACT
Although the “new economy” once again resembles the old economy, the
drivers of success for many firms continue to be intangible or service-related
assets. These changes in the economic basis of business are leading to
changes in practice which are creating exciting new opportunities for
research. Management accounting still is concerned with internal uses of
and demands for operating and performance information by organizations,
their managers, and their employees. However, current demand for internal
information and analysis most likely reflects current decision making needs,
which have changed rapidly to meet economic and environmental conditions.
Many management accounting research articles reflect traditional research
topics that might not conform to current practice concerns. Some accounting
academics may desire to pursue research topics that reflect current prob-
lems of practice to inform, influence, or understand practice or influence
accounting education.
This study analyzes attributes of nearly 2,000 research and professional
articles published during the years 1996–2000 and finds numerous, relatively
unexamined research questions that can expand the scope of current man-
agement accounting research. Analyses of theories, methods, and sources
of data used by published management accounting research also describe
publication opportunities in major research journals.

DATA AVAILABILITY
Raw data are readily available online, and coded data are available upon request
from the authors.

INTRODUCTION AND MOTIVATION


While some aspects of the “new economy” reflected an unrealistic bubble,
many firms continue to be driven by intangible assets, the highly competitive
global economy, and increasing technological change to forge changes in what
accountants have thought of as their “traditional” accounting responsibilities. In
many cases, accountants and financial staff are leading the way in changing their
internal roles. Accountants find themselves managing new business practices,
such as outsourcing, focusing more on cost control and process re-engineering,
and expanding their involvement with strategic planning and implementation.
The expansion of accountants’ duties beyond traditional budgeting and reporting
is occurring rapidly and is creating numerous opportunities for academic
management accountants to conduct innovative research.
According to a recent IMA study of practicing “management accountants”
(IMA, 2000),1 apparently no management accountants are left in practice.
Professionals in practice overwhelmingly have favored job titles such as financial
analyst, business advisor, and consultant over “cost accountant” or “management
accountant.” Perhaps this is not a purely cosmetic change. The IMA study also
shows that current job titles reflect broader duties than traditionally executed
by accountants. Instead of viewing this change as the end of management
accounting, a more optimistic viewpoint is to see this as an opportunity to broaden
management accounting, both in education and in research. This opens doors for
exciting new research opportunities.
Some accounting researchers conduct research that is explicitly oriented to or
has application to practice. Others might seek to do so. Several related motivations
or objectives for practice-oriented research include desires to: (1) gain increased
understanding of why organizations use certain techniques and practices; (2) gain
increased understanding of how and which techniques used in practice impact
organizational performance; (3) inform practitioners; (4) increase the applicability
of accounting textbooks, coursework, and programs; (5) satisfy personal taste;
and (6) increase consulting opportunities. While researchers pursuing any of these
might find this study interesting and helpful, this study is explicitly motivated by
the first four objectives.
One desirable outcome of practice-oriented research may be a positive impact
on accounting enrollments. Many university accounting programs in the U.S.
have been in decline, perhaps because of: (1) increased education requirements
for accounting certification in many states; (2) relatively greater employment
opportunities and salaries in other business fields, such as finance; (3) competitive
educational efforts by industrial and professional firms; (4) focused financial
support of only select universities by employers of accounting graduates; and
(5) perceived greater job-relevance of other courses. Many of the factors that
can contribute to enrollments in accounting are beyond the control of accounting
academics. Because research surely informs teaching, accounting faculty might
help increase accounting enrollments by managing what is researched.2

Research Objectives

Management accounting research, researchers, and education (and perhaps other
accounting sub-fields by analogy) might benefit from identifying interesting, less
researched topics that reflect issues of current practice. More influence on external
constituents might lead to greater prestige, esteem, and resources for researchers,
and, perhaps, improvements in practice (e.g. Anderson, 1983). The objective
of this study is to use observed divergences between management accounting
research topics and issues of practice to identify interesting, practice-oriented
research questions.
The study assesses and interprets correspondence (or lack thereof) between
published research topics and topics of the practice literature. High correspon-
dence can be misleading because it might represent good synergy, coincidence,
or little interest. Low correspondence might present opportunities for interesting
new research. Thus, this study examines both types of topics as potential sources
of interesting research questions. Finally, the study then addresses the equally
important issue of matching these research questions with theory, data, and
research methods. Without these matches, management accounting research will
have difficulty moving beyond pure description or endless theory building. It also
might be possible to increase the probability of publication of these new questions
by assessing journals’ past publication histories.
This study is unlike recent, more focused reviews of management accounting
research, which include Covaleski and Dirsmith (1996) – organization and
sociology-based research; Elnathan et al. (1996) – benchmarking research;
Shields (1997) – research by North Americans; Demski and Sappington (1999)
– empirical agency theory research; Ittner and Larcker (1998) – performance
measurement research; and Ittner and Larcker (2001) – value-based management
research. The present study is in the spirit of Atkinson et al. (1997), which
seeks to encourage broader investigations of management accounting research
topics. The present study extends Atkinson et al. by documenting and identifying
practice-oriented, innovative research questions in major topic areas based on
observed divergences between practice and research.

RESEARCH DESIGN AND METHOD


The study’s research design is to first compare topic coverage of research and
professional publications. Differences between research and practice topics
are indications of correspondence between the domains of inquiry. The study
measures correspondence by levels and changes in relative topic coverage.
The study further analyzes research articles’ use of theory, sources of data,
and methods of analysis, which are sorted by topic and publication outlet. The
remainder of this section describes the study’s research domain, sampling plan,
data collection, and data analysis.

Research Domain

The study’s research domain is limited to published articles that address
conventional management accounting topics (i.e. as reflected in management
accounting textbooks) plus several that additionally are salient in the professional
financial and accounting literature (described in the next section). Both published
research and practice topics are assumed to be reasonable proxies of issues and
questions of interest to researchers and practitioners. Several problems arise in
the use of these proxies. (1) It is well known that time between completion and
publication of articles differs between the research and practice literatures. This
study examines various time lags between research and practice topics to account
for the publication lag. (2) Not all research efforts or practice issues appear in
the published literature. This study assumes that unpublished research articles
do not meet academic quality standards, although some researchers might harbor
other explanations. This study also compares the practice literature to the IMA’s
study of practice to confirm conformance between the practice literature and
issues expressed by surveyed practitioners (see Note 2 and the later discussion of
aggregate results).
The study considers an article to be of direct interest to “management
accountants” if it addresses one or more of the following topics:
• Accounting software.
• Budgeting.
• Business process improvement.
• Cash management.
• Compensation plans.
• Cost accounting.
• Cost management.
• Effects of financial reporting on internal systems.
• Effects of information technology on internal systems.
• Improving profits.
• Internal control.
• Management accounting practices.
• Management control.
• Outsourcing.
• Performance measurement.
• Research methods.
• Shareholder value.

Sampling

The study analyzes articles that appeared in print during the years 1996–2000. This
five-year period witnessed dramatic changes in technology, business conditions,
and the responsibilities of financial and accounting professionals. There is no
reason to believe that future years will be any less volatile. The study further defines
the domain of management accounting research as articles fitting the above topics
that were published in the following English-language research journals:
• Academy of Management Journal (AMJ)
• Academy of Management Review (AMR)
• Accounting and Finance (A&F)
• Accounting, Organizations and Society (AOS)
• Advances in Management Accounting (AIMA)
• Contemporary Accounting Research (CAR)
• Journal of Accounting, Auditing, and Finance (JAAF)
• Journal of Accounting and Economics (JAE)
• Journal of Accounting Research (JAR)
• Journal of Management Accounting Research (JMAR)
• Management Accounting Research (MAR)
• Review of Accounting Studies (RAS)
• Strategic Management Journal (SMJ)
• The Accounting Review (TAR)

We assume that the research literature in other languages either covers similar
topics or is not related to the practice literature aimed at English-speaking
professionals.3
Similarly, the study defines the domain of management accounting practice to
be articles fitting the topical boundaries that were published in English-language
professional magazines and journals aimed at financial managers, executives,
and consultants. We, therefore, assume that articles published in the professional
literature accurately reflect issues of importance to professionals themselves. The
professional literature sources include:
• Strategic Finance (SF)
• Management Accounting (MA-U.S. and U.K.)
• Journal of Accountancy (JOA)
• Financial Executive (FE)
• Sloan Management Review (SMR)
• Harvard Business Review (HBR)
• Business Finance (BF)

Data Collection

The study uses the online, electronic contents of the abstracts of management
accounting articles from research and practice journals published during the
years 1996–2000 as its source of data. The study includes the entire contents of
explicitly named management accounting journals (e.g. Advances in Management
Accounting, Strategic Finance) and selected articles from other journals and
magazines if articles matched the topic domain. The database of management
accounting articles consists of information on:
• 373 research articles;
• 1,622 professional or practice articles.

Data Analysis

Qualitative Method
The study uses a qualitative method to label, categorize, and relate the management
accounting literature data (e.g. Miles & Huberman, 1994). The study uses Atlas.ti
software (www.atlasti.de), which is designed for coding and discovering relations
among qualitative data.4 The study began with predetermined codes based on the
researchers’ expectations of topics, methods, and theories. As normally happens
in this type of qualitative study, the database contains unanticipated qualitative
data that required creation of additional codes. This necessary blend of coding,
analysis, and interpretation means that the coding task usually cannot be outsourced
to disinterested parties. Thus, this method is unlike content analysis, which counts
pre-defined words, terms, or phrases.

Table 1. Article Database Codes.


ARTICLE
article-ABSTRACT
article-AUTHOR(S)
article-DESCRIPTORS
article-journal-A&F
article-journal-AIMA
article-journal-AMJ
article-journal-AOS
article-journal-CAR
article-journal-JAAF
article-journal-JAE
article-journal-JAR
article-journal-JMAR
article-journal-MAR
article-journal-RAS
article-journal-SMJ
article-journal-TAR
article-TITLE
article-VOLUME & PAGES
article-YEAR
article-year-1996
article-year-1997
article-year-1998
article-year-1999
article-year-2000
budgeting-soc psych
budgeting-agency
budgeting-contingency
budgeting-econ-other
budgeting-jdm
budgeting-org change
budgeting-soc justice
budgeting-systems
GEOGRAPHY
geography-ANZ
geography-ASIA
geography-EUROPE
geography-INTERNATIONAL
geography-LATIN AMERICA
geography-NORTH AMERICA
METHOD
method-ANALYSIS
method-analysis-analytical
method-analysis-qualitative
method-analysis-statistical
method-ARCHIVAL
method-EXPERIMENT
method-FIELD/CASE STUDY
method-LOGICAL ARGUMENT
method-SURVEY
THEORY
theory-AGENCY
theory-CONTINGENCY
theory-CRITICAL
theory-ECONOMIC CLASSIC
theory-INDIVID/TEAM JDM
theory-ORGANIZATION CHANGE
theory-POSITIVE ACCOUNTING
theory-SOCIAL JUSTICE/POWER/INFLUENCE
theory-SOCIAL/PSYCH
theory-SYSTEMS
theory-TRANSACTION COST
TOPIC
topic-BUDGETING
topic-budgeting-activity based
topic-budgeting-capital budgeting
topic-budgeting-general
topic-budgeting-participation
topic-budgeting-planning&forecasting
topic-budgeting-slack
topic-budgeting-variances
topic-BUSINESS INTELLIGENCE
topic-BUSINESS PROCESSES
topic-business processes-credit management
topic-business processes-fixed assets
topic-business processes-inventory management
topic-business processes-procurement
topic-business processes-production management
topic-business processes-reengineering
topic-business processes-travel expenditures
topic-CASH MANAGEMENT
topic-cash management-borrowings
topic-cash management-collections
topic-cash management-credit policies
topic-cash management-electronic banking
topic-cash management-electronic exchange
topic-cash management-foreign currency
topic-cash management-investing
topic-cash management-payments
topic-COMPENSATION
topic-compensation-accounting measures
topic-compensation-design/implementation
topic-compensation-executive
topic-compensation-pay for performance
topic-compensation-stock options
topic-CONTROL
topic-control-alliances/suppliers/supply chain
topic-control-complementarity/interdependency
topic-control-cost of capital
topic-control-customers/customer profitability
topic-control-environmental
topic-control-information/information technology
topic-control-intangibles
topic-control-international/culture
topic-control-JIT/flexibility/time
topic-control-org change
topic-control-quality
topic-control-R&D/new product develop
topic-control-risk
topic-control-smart cards/purchasing cards
topic-control-strategy
topic-control-structure
topic-control-system
topic-COST ACCOUNTING
topic-cost accounting-environmental
topic-cost accounting-general
topic-cost accounting-standards
topic-cost accounting-throughput
topic-COST MANAGEMENT
topic-cost management-ABC
topic-cost management-ABM
topic-cost management-benchmarking
topic-cost management-cost efficiency/reduction
topic-cost management-cost negotiation
topic-cost management-costing
topic-cost management-process mapping
topic-cost management-quality/productivity/tqm
topic-cost management-shared services
topic-cost management-strategy
topic-cost management-target costing
topic-cost management-theory of constraints/capacity
topic-ELECTRONIC
topic-electronic-business
topic-electronic-commerce
topic-electronic-intranet
topic-electronic-processing
topic-electronic-web sites
topic-electronic-xml/xbrl
topic-EXPERT SYSTEMS
topic-FINANCIAL ACCOUNTING
topic-financial reporting-accounting standards/SEC
topic-financial reporting-depreciation
topic-financial reporting-drill downs
topic-financial reporting-e reporting
topic-financial reporting-environmental
topic-financial reporting-general
topic-financial reporting-international
topic-financial reporting-open books
topic-financial reporting-realtime accounting
topic-INTERNAL CONTROL
topic-internal control-controls
topic-internal control-corporate sentencing guidelines
topic-internal control-data security/computer fraud
topic-internal control-ethics
topic-internal control-fraud awareness/detection
topic-internal control-internal audit
topic-internal control-operational audits
topic-MANAGEMENT ACCOUNTING-practices
topic-OTHER
topic-OUTSOURCING DECISION
topic-PERFORMANCE MEASUREMENT
topic-performance measurement-balanced scorecard
topic-performance measurement-business process
topic-performance measurement-EVA/RI
topic-performance measurement-evaluation/appraisal
topic-performance measurement-group
topic-performance measurement-incentives
topic-performance measurement-individ
topic-performance measurement-manipulation
topic-performance measurement-nonfinancial
topic-performance measurement-productivity
topic-performance measurement-strategic
topic-performance measurement-system
topic-PRICING
topic-PROFITABILITY
topic-PROJECT MANAGEMENT
topic-RESEARCH METHODS
topic-SHAREHOLDER VALUE
topic-SOCIAL RESPONSIBILITY
topic-SOFTWARE
topic-software-ABC/product costing
topic-software-accounting technology (general)
topic-software-budgeting
topic-software-costing
topic-software-credit analysis
topic-software-data conversion
topic-software-database
topic-software-decision support
topic-software-document management
topic-software-erp
topic-software-fixed assets
topic-software-graphical accounting
topic-software-groupware
topic-software-human resources/payroll
topic-software-internet
topic-software-mindmaps
topic-software-modules
topic-software-operating system
topic-software-project accounting
topic-software-purchasing
topic-software-reporting
topic-software-sales/C/M
topic-software-selection/accounting platforms/implementation
topic-software-spreadsheets
topic-software-t&e
topic-software-warehousing/datamarts/intelligent agents
topic-software-workflow
topic-software-year2000 compliant
topic-TRANSFER PRICING
topic-VALUATION
topic-VALUE BASED MANAGEMENT
topic-VALUE CHAIN

Table 1 contains the complete list of research-literature codes used in this
study. The practice literature codes are identical except for journal codes. Codes
shown in capital letters (e.g. ARTICLE) are major codes, or “supercodes,” that
contain related minor or subcodes (e.g. article-ABSTRACT). An “other” code
collects topics that apparently are of minor interest at this time. Figure 1 displays
sample information related to one of the data records. The left-hand panel shows
a typical article’s data, while the right-hand panel contains the codes applied by
the researchers to the data. An article may cover several topics and use several
methods and theories; thus the numbers of topic, method, and theory codes
exceed the number of articles in the sample.
The software’s query features allow nearly unlimited search and discovery of re-
lations among coded data. These queries form the analyses that follow in this study.
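To make the coding scheme concrete, the following is a minimal, hypothetical sketch (in Python) of how a single coded record of this kind might be represented; the article details, field names, and code values shown are illustrative assumptions, not an actual record from the study's database.

# Hypothetical coded record; codes follow the Table 1 vocabulary, but the
# article itself and the field names are invented for illustration only.
coded_article = {
    "title": "Budget participation and slack: an example",   # article-TITLE
    "journal": "JMAR",                                        # article-journal-JMAR
    "year": 1998,                                             # article-year-1998
    "geography": "geography-NORTH AMERICA",
    "topics": ["topic-budgeting-participation",
               "topic-budgeting-slack"],                      # several topics allowed
    "theories": ["theory-AGENCY"],
    "methods": ["method-EXPERIMENT",
                "method-analysis-statistical"],               # several methods allowed
}

# Because one article can carry several topic, method, and theory codes,
# frequency tallies count codes rather than articles.
topic_codes_contributed = len(coded_article["topics"])        # 2 in this example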

Fig. 1. Example of Coded Article Data.

Measures of Correspondence
The study measures correspondence between research and practice to capture
different dynamics of information exchange between the realms of inquiry. The
study defines differences in changes and levels of topic frequency as measures of
correspondence. Research and practice topic frequencies are scaled by the total
number of research or practice topics to control for the relative sizes of the two
outlets. The study examines contemporaneous and lagged differences, as the data
permit, for evidence of topic correspondence. Furthermore, the study investigates
whether research topic frequency leads or lags practice.
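A minimal sketch of these correspondence measures, in Python, assuming per-year topic counts have already been tabulated from the coded database; the counts, topic labels, and variable names below are invented placeholders rather than the study's actual data.

import numpy as np
from scipy.stats import pearsonr

# Rows = topics, columns = years 1996-2000 (illustrative raw code counts).
research_counts = np.array([[8, 7, 9, 6, 5],       # budgeting
                            [12, 14, 13, 15, 16],  # management control
                            [10, 9, 11, 10, 12],   # cost management
                            [14, 15, 16, 17, 18],  # performance measurement
                            [1, 1, 2, 1, 1]],      # software
                           dtype=float)
practice_counts = np.array([[5, 4, 6, 5, 4],
                            [30, 28, 32, 35, 33],
                            [40, 38, 42, 41, 44],
                            [20, 22, 21, 24, 23],
                            [60, 65, 70, 72, 75]], dtype=float)

# Scale each year's counts by that literature's total, so the relative sizes
# of the research and practice outlets do not drive the comparison.
research_freq = research_counts / research_counts.sum(axis=0)
practice_freq = practice_counts / practice_counts.sum(axis=0)

# Correspondence of levels (contemporaneous) and of year-to-year changes.
r_levels, p_levels = pearsonr(research_freq.ravel(), practice_freq.ravel())
r_changes, p_changes = pearsonr(np.diff(research_freq, axis=1).ravel(),
                                np.diff(practice_freq, axis=1).ravel())

# Lagged level correspondence: practice in year t vs. research in year t+1.
r_lag, p_lag = pearsonr(research_freq[:, 1:].ravel(),
                        practice_freq[:, :-1].ravel())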

Validity Issues
One researcher coded all of the practice article abstracts in the database and a
5% random sample of the research abstracts. Another researcher coded all of
the research abstracts and a 5% random sample of the practice abstracts. Inter-
rater reliability of the overlapped coding was 95%, measured by the proportion of
coding agreements divided by the sum of agreements plus disagreements from the
5% random samples of articles in the research and practice databases.5 Because the
measured inter-rater reliability is well within the norms for this type of qualitative
research (i.e. greater than 80%) and because hypothesis testing or model building
is not the primary objective of the study, the researchers did not revise the database
to achieve consensus coding.
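As a small illustration of the reliability measure just described, the following Python sketch computes agreement over the code sets two coders assign to the same article; the coder names and codes are hypothetical, and in the study the proportion is computed over all coding decisions in the 5% overlap samples rather than for a single article.

def interrater_reliability(codes_a, codes_b):
    # Proportion of coding agreements: codes applied by both raters, divided
    # by agreements plus disagreements (codes applied by only one rater).
    agreements = len(codes_a & codes_b)
    disagreements = len(codes_a ^ codes_b)
    return agreements / (agreements + disagreements)

# Hypothetical double-coded article abstract:
coder_1 = {"topic-budgeting-slack", "theory-AGENCY", "method-EXPERIMENT"}
coder_2 = {"topic-budgeting-slack", "theory-AGENCY", "method-SURVEY"}
print(interrater_reliability(coder_1, coder_2))   # 0.5 for this toy example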

Aggregate Analysis
Figure 2 shows the most aggregated level of analysis used in this study, which
reflects the levels of research and practice frequencies of major topics. The three
most frequent practice topics in Fig. 2 are: (1) software; (2) management control;
and (3) cost management.6

Fig. 2. Major Topic Frequencies, 1996–2000.
The Institute of Management Accountants (IMA) analyzed the practice of man-
agement accounting (1997, 2000) in part by asking respondents to identify critical
work activities that are currently important and that are expected to increase in
the future. The IMA reports that 21% of respondents identified computer systems
and operations as one of the five most critical current work activities and 51%
believe that this work activity will increase in importance in the future. Eighteen
percent of respondents in the IMA practice analysis state that control of customer
and product profitability is one of the most critical work activities; however, 59%
of respondents believe that this is one of the work activities that will increase

in importance in the future. The topic code “management control” includes
sub-topics related to control of customers, customer profitability, quality, and
new products. Finally, the IMA practice analysis found that 25% of respondents
stated that “financial and economic analysis” was one of the most critical current
work activities. Forty-two percent believed it would be more important in the
future. The topic code “cost management” includes cost reduction, efficiency,
activity-based management, and activity-based costing.
The aggregate results of applying this study’s coding scheme to the practice
literature are consistent with those of the IMA’s practice analysis. The similarity
of aggregate results from this study and the IMA’s survey of practice support the
validity of this study’s coding scheme.

ANALYSIS OF TOPIC FREQUENCY CHANGES

The qualitative software enables several types of “drill-down” analyses at major
topic and subtopic levels. These analyses support the statistical and graphical
analyses that follow. The basic analysis in Fig. 2 guides all subsequent analyses.
Relatively large differences in overall topic frequency are evident in this graph
(e.g. budgeting, management control, performance measurement, and software),
but more detailed analyses are used to identify less researched questions.
Associated changes in topics can be evidence of information exchange between
research and practice. If researchers and practitioners are communicating about
topics of mutual interest, one expects changes in topic frequency (contempo-
raneous or lagged) to be closely associated over time. Creating tables of topic
frequencies for each year (by disaggregating the data underlying Fig. 2) supports
an investigation of contemporaneous and lagged topic changes. The study finds
no significant correlations (α = 0.10) between changes in research and practice
topic-frequencies that are either contemporaneous or lagged (plus or minus one
year). Analysis of topic levels finds numerous opportunities for communication
and exchange of findings between research and practice.

ANALYSIS OF TOPIC FREQUENCY LEVELS


Contemporaneous Frequency Levels

Analysis of contemporaneous levels shows some evidence of topic correspon-
dence. For example, a glance at Fig. 2 shows visual correspondence. The
contemporaneous overall correlation coefficient, which equals 0.45, is highly
significant (p < 0.0001). We obtain similar overall results for individual years
(0.3 < R < 0.6). Note that these annual correlations do not reflect a monotonic

Lagged Frequency Levels

Analysis of lagged topic frequency levels also shows similar correspondence.


Examining whether practice leads research by 1 year yields an overall correlation
coefficient (rounded) of 0.4 (p < 0.0001). Annual correlation coefficients range
between 0.3 and 0.5 for each lagged year. These are also highly significant and
reflect a “U” shaped pattern over time. Testing if research leads practice by 1
year generates an overall correlation coefficient of 0.5 and 0.3 < R < 0.6 for
each lagged year (all highly significant). Furthermore, coefficients of research
leading practice increase monotonically, suggesting increasing correspondence
over time.
Thus, this study finds mixed evidence of correspondence between research
and practice: Analysis of lagged topic frequency levels suggests increasing
correspondence, but changes in topic frequency show no evidence. This suggests
that evidence of correspondence may reflect coincidence rather than active or
causal exchange of information between researchers and professionals. To resolve
this ambiguity we look more closely at topic levels.

ANALYSIS OF CORRESPONDENCE
OF TOPIC LEVELS
One can observe many instances in Fig. 2 where topic frequency differences are
less than 5%, which indicate high correspondence between research and practice.
Most of these topics apparently are of relatively minor interest to both researchers
and professionals (i.e. total frequency of either practice or research is less than
5%). While these low frequency topics may represent emerging areas for both
realms, we focus here on topics that also have at least 5%7 of the total article
coverage in either practice or research. The only major topic meeting these criteria
is “cost management.”

Cost Management

Topics coded as cost management comprise approximately 14% of all practice
topics and 13% of research topics, leaving only a 1% difference. Is this high
correspondence the result of coincidence or cross-fertilization? To answer that
question, one can drill down into the database to contrast cost-management
subtopics. The result of this analysis is shown in Fig. 3.

Fig. 3. Cost Management Sub-Topics, 1996–2000.
A close look at Fig. 3 indicates that general cost-management correspondence is
questionable. Benchmarking is the only subtopic with appreciable topic frequency
and relatively high correspondence, comprising roughly 13% of practice and 9%
of research subtopics. Examination of benchmarking-research articles shows they
are evenly split between prescription and statistical analyses of the properties of
benchmarks. There are, however, no research studies of the impacts of benchmark-
ing. Practice articles are either prescriptions or self-reports of implementation or
reports of organizational improvements attributed to benchmarking.

Benchmarking Questions
Several benchmarking research questions seem obvious, including:

What are the costs and benefits of benchmarking at the process, service, or firm levels? One
should be able to measure costs of benchmarking activities, but, as is usually the case, benefits
may be more elusive. Attributing improvements in processes to benchmarking may be more
feasible than attempting to explain business unit or firm-level financial performance.
What are the attributes of successful or unsuccessful design and implementation of
benchmarking? Addressing this question perhaps should follow the first unless one wants to
proxy costs and benefits with user satisfaction measures.
Given that apparent correspondence at the cost-management subtopic level yielded
new research questions, examination of other cost management subtopics also
might bear fruit. For example, only 3% of research studies exist in the area of
activity-based management (ABM) – a difference of nearly 8% – which seems
surprising given this topic’s high profile over the past decade, and no research on
shared services – a difference of 6%. Several interesting research questions for
these topics include:

Activity-Based Management Questions


Several ABM research questions from practice are:
Does ABM lead to observable improvements in processes, products, services, and financial
performance? Self-reports indicate that ABM delivers improvements, but one suspects that
these self-reports are censored and most reports of failures are either not written or published.
What are determinants of successful ABM efforts? Determinants may include communica-
tion, team structure, management style, and management support and involvement.
How can organizations successfully move from ABM pilot projects to wider deployment?
Most ABM self-reports reflect results of limited pilot projects. Does implementation of ABM
spread? How?

Shared Services Questions


The topic of shared services refers to centers that provide business services, such
as finance, human resources and legal. This service center would contract with
business units, much as an outsourced-service provider would. Questions include:
What are the efficiencies of locating business services in shared service centers? Cost savings
are part of the equation, but effects on usage and quality of service also are important. This
leads to related considerations of transfer pricing and performance evaluation.
Is ABM necessary to justify shared or outsourced services? Is ABM the tool to use to
identify opportunities, communicate rationales, and ease transition to shared service centers?
What are the organizational impediments and arguments for shared services vs. distributed
or outsourced services? Alternative organizational structures and contracting may have inertia
and power considerations as well as economic.

Relatively more research than practice exists in several cost-management areas,
including activity-based costing (ABC) – a difference of 29% – and strategy –
a difference of 5%. However, perhaps surprisingly, numerous practice-oriented
questions remain relatively un-researched.

Activity-Based Costing Questions


Practice-oriented research questions in this area include:
What is the optimal complexity of ABC systems? Costs and benefits of complexity include
design and maintenance costs, cognitive complexity, and value of finer information. Standard
costing systems are notoriously expensive to maintain; are ABC systems even more so? Is it
possible to prescribe optimal complexity or describe the complexity of apparently successful
ABC systems? Are different levels of complexity appropriate for different purposes (e.g. product
costing vs. strategic decision making)?
Are objectivity and precision of measurement incompatible with efficient ABC systems?
What are the information quality tradeoffs? This has added importance for ABC systems that
are intended to serve multiple purposes, including reporting, costing, decision making, and
performance measurement.

Strategy Questions
Most research studies use measures of strategy as independent variables to
explain performance or other organizational outcomes. Practical concerns related
to strategy include:
What are appropriate ratios or indicators to measure whether an organization is meeting its
strategic goals, which may be heavily marketing and customer oriented? Are these indicators
financial, non-financial, or qualitative? Numerous practice articles argue that strategic manage-
ment is possible only with the “right indicators.” But what are they? How are they used? With
what impact?
Is the balanced scorecard an appropriate tool for performance evaluation, as well as for
strategic planning and communication? The BSC is offered as a superior strategic planning
and communication tool. Many organizations are inclined to also use the BSC (or similar,
complex performance measurement models) as the basis for performance evaluations. What
complications does this extension add? With what effects?
Are scenarios from financial planning models effective tools for strategic management?
Financial modeling is an important part of financial and cost management. Do strategic planners
need or use scenarios from these models? Why or why not? With what effects?

Relatively less research than practice exists in the area of cost reduction/efficiency
– a difference of 22%. Practice coverage of this topic is fairly uniform over the
5-year study period. Although research coverage peaked in 1998, some coverage
continued into 2000.

Cost Reduction/Efficiency Questions


Perhaps accounting researchers consider issues of cost reduction and efficiency
to be too basic for research inquiry. Practice is concerned with interesting
developments that could benefit from a research perspective. Sample questions
from practice include:
What is the efficiency of spending to increase customer satisfaction? Though a number of
researchers have addressed this issue (e.g. tests of statistical relations between customer sat-
isfaction and profitability), it is of continuing interest to practice, particularly at the level of
developing guidelines for efficient management of customer relations.
What are the effects of IT and organizational changes on efficiency? There appears to
be no research that addresses this type of question. Relevant contexts abound, including
telecommuting, outsourcing, and internal support services.

What are the effects of IT on total costs and productivity? The information systems literature
commonly focuses on measuring IT-user satisfaction while larger issues of efficiency remain
under-researched. Application contexts include finance, human resources, procurement,
payables, travel, payroll, customer service.

Perhaps surprisingly, observed major-topic correspondence yields much evidence
of low correspondence and many interesting, less researched questions. Even
when researchers and professionals address similar topics, they focus on different
questions. An examination of topics with more obvious low correspondence
yields even more research opportunities.

ANALYSIS OF LOW CORRESPONDENCE
OF TOPIC LEVELS
Low correspondence is defined as topic-frequency differences in excess of 5%.
These differences include topic areas where research exceeds practice: budgeting
(6.5% difference), management control (14% difference), and performance
measurement (16% difference).
The data also reveal major topic areas where practice exceeds research,
including business processes (5% difference), internal control (6% difference),
electronic business (7% difference), and software (19% difference). Perhaps these
are areas of particularly abundant new research opportunities.8 The study presents
analysis of low correspondence where budgeting research exceeds practice
and where practice writings on electronic business issues exceed research. The
appendix contains similarly identified research questions from each of the other
topic areas.

Budgeting (Research > Practice)

Budgeting is a venerable management accounting research topic, and one might
think that there are few un-researched questions remaining. If so, have researchers
not communicated results to practice, or are they pursuing less practice-oriented
topics? As before, one can drill down into the data to the budgeting sub-topic
level to assess budgeting correspondence. Figure 4 presents topic frequencies of
budgeting sub-topic coverage.

Fig. 4. Budgeting Sub-Topics, 1996–2000.
The data contain no practice publications in topic areas of budget slack
(Difference = 10%) and budget variances (Difference = 16%). More research
than practice exists in the areas of capital budgeting (Difference = 11%) and
participative budgeting (Difference = 21%). These topics appear to be of little
current, practical interest, but they continue to attract research efforts, perhaps
because of tradition and the interesting theoretical issues they present. It also is
possible that researchers’ long concern with budgetary slack still leads practice.
For example, excess budget slack conceivably might be included with other dys-
functional actions designed to manipulate reported performance and targeted for
elimination by financial reforms. Conversely, the data contain no research in topic
areas of activity-based budgeting (Difference = 10%) and planning & forecasting
(Difference = 65%). The latter area, planning and forecasting, has a large topic
difference and has grown in practice coverage each year of the study period.

Planning and Forecasting Questions


Just a few questions from practice include:

What are the determinants of effective planning and forecasting? Effective planning and
forecasting can be defined as: (1) accurate, timely, and flexible problem identification; (2)
communication; and (3) leading to desired performance. Researchers may find environmental,
organizational, human capital, and technological antecedents of effective planning and
forecasting methods and practices. Whether these are situational or general conditions would
be of considerable interest.

What exogenous factors affect sales and cost forecasting? This includes consideration of
the related question, What is a parsimonious model? Nearly every management accounting
text states that sales forecasting is a difficult task. Likewise, cost forecasting can be difficult
because of the irrelevancy or incompleteness of historical data. Yet both types of forecasting
are critical to building useful financial models and making informed business decisions.
What are effects of merging BSC or ABC with planning & forecasting? ABC and the
balanced scorecard represent current recommendations for cost and performance measurement.
However, the research literature has not extensively considered the uses or impacts of these
tools, which may be particularly valuable for planning and forecasting.
What are the roles of IT & decision-support systems in improving planning & forecasting?
Most large organizations use sophisticated database systems, and accessing and using
information can be facilitated by intelligent interfaces and decision support systems. Yet we
know little about the theoretical and observed impacts of these tools in general and almost
nothing about their effects on planning and forecasting.

Electronic Business (Practice > Research)

Topics coded as electronic business comprise approximately 7% of all practice
topics, yet there is no management accounting research in this area. To determine
if perhaps researchers are investigating electronic business issues and publishing
in journals outside of mainstream management accounting journals, we also
reviewed Information Systems research journals (MISQ, JMIS, JIS) and found
no evidence that research in this area is being conducted in these journals. Given
the increasing emphasis on the role that technology plays in business in today’s
competitive, global and fast-changing world, a 7% difference in this topic with
no research seems surprising. Surely, electronic business is a research topic
guaranteed to generate practical interest.
An in-depth look at Fig. 5 shows that there are several main categories of
sub-topics within electronic business. Approximately 31% of electronic business
topics are articles of a general nature, 25% are related directly to issues on
electronic commerce, 22% are concerned with the internet and websites, 16% are
about processing transactions electronically, and the remaining 6% focus on the
use of XML (extensible markup language).

Electronic Business (General) Questions


General electronic business issues center on the reengineering of business pro-
cesses and business models to take advantage of electronic means of transacting
business and creating efficiencies and enhanced performance for the firm. This
generates opportunities for research questions related to the successful start-up of
e-ventures, the changes in underlying business models, and the use of technology
to reduce costs.

Fig. 5. Electronic Business Sub-Topics, 1996–2000.

What are appropriate management controls, internal controls, and performance measures
for E-business ventures? This includes the related question, Do they differ from conventional
business? Doing business in the “New Economy” has impacted the underlying business model
of most firms, thus impacting the design of the firm’s management control system, internal
control environment, and performance measurement system.
What technologies drive enhanced productivity and efficiencies in the firm? Firms
must be able to perform cost/benefit analysis weighing the potential benefits to be gained
from employing new technologies against the cost of implementing that technology and
reengineering the business process. Two related questions are What is the optimal capital
budgeting model for electronic business? and Which business processes lend themselves to a
reengineering process that would result in increased efficiencies and reduced costs?

Electronic Commerce Questions


Practitioners appear to be primarily concerned with the management of costs in
the electronic commerce space and the proper tracking and measurement of per-
formance. These concerns lead to several promising research questions:
What is the optimal amount for web-retailers to spend on customer acquisition costs? The
prevailing business model among e-tailers was to increase traffic on the website and worry
about revenues later. But what is too much to pay to acquire a new customer? How do e-tailers
know how much to spend on customer acquisition costs?
What performance metrics do firms need to track to effectively manage electronic customers
and the integration of e-commerce with their current business model? How does a firm evaluate
investments in e-commerce? What metrics are appropriate for measuring the performance of
e-commerce initiatives? Not too long ago, metrics focused on traffic; now firms are more focused
on the generation of revenue. Identifying the appropriate drivers, outcome measures, and the
timing and pattern of associations between the two are interesting areas for potential research.

How does electronic data interchange affect the management control system? Electronic
commerce is changing traditional business practices in areas such as increased use of bar
coding of transactions and inventory, and the use of electronic procurement. How do these
new business practices impact the design of the MCS?

Internet and Website Questions


The internet facilitates the timeliness, exchange, and availability of infor-
mation. Practice is particularly concerned with the impact the internet has on the
reporting and use of financial information.
What are the characteristics of an effective website? Firms are implementing intranets
and websites for communication of information within the firm. This question includes a
related question: What characteristics of websites facilitate effective exchange of accounting
information between the firm and its investors? Or between users and/or business units within
the firm?
What is the impact of displaying financial information on a web site on the firm’s business
risk? Firms now distribute financial accounting information on company websites. How
does this practice impact the firm’s internal control environment? How does it impact a
company’s risk of litigation? Are there controls that the firm can implement to reduce the
associated risk?

Electronic Processing
The processing of accounting transactions can be a tedious and time-consuming
task. Electronic processing of transactions can create efficiencies within organi-
zations. Questions of interest are primarily related to how electronic processing
can improve firm performance and efficiencies.
How should accounting workflows and transaction processing be reengineered to take
advantage of electronic processing? Firms need to know how to integrate an environment that
traditionally generates lots of paper and incorporates many formal controls with an electronic
processing environment that may not generate any paper and dispenses with some of the
traditional controls.
What is the impact of electronic processing on the firm’s control environment? With the
potential for increased efficiencies arising from the reduction of traditional paper documents,
there may not be a paper trail left to substantiate and document transactions. What is the
impact on internal control? Is a paperless environment cost effective?

Extensible Markup Language


Extensible markup language (XML) (also, extensible business reporting language,
XBRL, and extensible financial reporting markup language, XFRML) is fast
becoming the language of accounting. XML is used for a multitude of purposes
including reporting accounting information to investors via the firm’s website,
uploading SEC filings, and so forth. This leads to the question:
How do accountants successfully use extensible markup language (XML) to facilitate the
exchange and communication of accounting information?
OPPORTUNITIES FOR PUBLICATION
This section of the study addresses designing research for publishability. Because
the major portion of the study has focused on identifying new research questions,
it seems only prudent to anticipate the opportunities to publish this novel research.
It is one thing to recommend that researchers take risks and tackle new research
questions, but it might be quite another to get these efforts published in quality re-
search journals. The analysis that follows finds that some research journals, which
published management accounting articles during the period of study, have special-
ized but others have been more general. Certainly, publication history might be an
imperfect predictor of future publication opportunities, but a Bayesian might con-
dition estimates of publication probability with priors based on history. Prudence
(or a strategic approach to conducting research) also suggests that researchers
conceive and design their efforts to meet target journals’ revealed preferences.
The study next analyzes the management accounting research database for
topic coverage by major journal. The study also analyzes each journal’s past
publication practices regarding underlying theory, sources of data, and methods
of analysis. This analysis is not intended to be a cookbook, but rather it is intended
as realistic guidance based on historical evidence. Figure 6 displays coverage of
major management accounting topics by journal. Figure 7 shows theories used
in management accounting articles by journal. Similarly, Fig. 8 shows methods of
analysis by journal. Finally, Fig. 9 shows sources of data by journal.

Fig. 6. Management Accounting Topics by Journal, 1996–2000.



Fig. 7. Theories Used in Management Accounting Articles by Journal, 1996–2000.

Fig. 8. Methods of Analysis Used in Management Accounting Articles by Journal, 1996–2000.

Fig. 9. Sources of Data in Management Accounting Articles by Journal, 1996–2000.

Topic Coverage

Figure 6 shows some evidence of journal specialization by topic, although all the
research journals have published at least some articles addressing these topics.
For example, management control topics have appeared most often in AOS
and MAR, both U.K.-based journals. Performance measurement issues have
appeared most often in the North American journals, JAR, CAR, TAR, and JAE,
and these journals also publish management control articles. This concentration
may reflect editorial policies or the result of years of topic migration. Apart
from this concentration in performance measurement and management control,
all the surveyed journals appear open to publishing a variety of management
accounting topics.

Theories

Figure 7 shows some strong evidence of theory specialization by journals. For
example, virtually the only theories used in management accounting articles
published in JAR, etc. are economic in nature (agency or microeconomic
theories). This is true also for management accounting articles published in
predominantly management journals, SMJ, etc. Nearly the only outlets for papers
using contingency theory are the U.K. journals, AOS and MAR. These journals
plus AIMA and JMAR appear to be the broadest in using alternative theories.

Methods of Analysis

As shown in Fig. 8, articles in JAR, etc. tend to use either analytical or statistical
methods, but almost never use qualitative analyses. On the other hand, management
accounting articles in other journals rarely use analytical methods, though they
often use statistical methods. For example, articles in AIMA, AOS, and JMAR
most often use statistical methods. Qualitative analysis appears mostly in the U.K.
journals, AOS and MAR, followed by AIMA and JMAR.

Sources of Data

Figure 9 shows specialization by journals in their uses of alternative data sources.
JAR, etc. articles predominantly use archival data, though data from laboratory
experiments also appear in JAR, etc. Field study and survey data appear most
often in AOS and MAR, the U.K. journals. AIMA appears to be the most balanced
in its data sources. JMAR, though a small player, publishes papers with a wide
range of data, as does MAR and, to a lesser degree, AOS.

Conclusions about Publication Opportunities

Authors want to place their work in the most prestigious journals (a designation
that varies across individuals and universities) and also want to receive competent
reviews of their work. Thus it seems sensible (or perhaps explicitly strategic) to
design research for publishability in desired outlets. As a practical matter, this
strategic design perspective may lead researchers, who themselves specialize
in theories and methods, to design practice-oriented management accounting
research for specific journals. Historical evidence indicates that all surveyed
journals may be open to new topics. Although several journals seem open to
alternative theories and methods (AIMA, JMAR, MAR, AOS), the major North
American journals have been more specialized. This may reflect normative values
and practical difficulty of building and maintaining competent editorial and
review boards. Thus, if one wants to pursue a new topic in research aimed at JAR,
etc., for example, one perhaps should use a theory, source of data, and method
that these journals have customarily published.

CONCLUSION
There is no shortage of interesting, potentially influential management accounting
research questions. From an analysis of published research and practice articles,
this study has identified many more than could be reported here. Even where
research and practice topics appear to correspond, considerable divergence in
questions exists. Identified research questions offer opportunities for ALL per-
suasions of accounting researchers. Synergies between management accounting
and accounting information systems seem particularly obvious and should not be
ignored. Furthermore, research methods mastered by financial accountants and
auditors can be applied to management accounting research questions.
Even with efforts to design practice-oriented management accounting research
for publishability, challenges to broader participation and publication might
remain. Some of the challenges to publishing this type of management accounting
research might include lack of institutional knowledge of authors, reviewers, and
editors. To be credible, authors must gain relevant knowledge to complement
their research method skills. For example, research on management control of
information technology and strategic planning should be preceded by knowledge
of the three domains, in theory and practice. Furthermore, editors and reviewers
who want to support publication of practice-oriented research should be both
knowledgeable of practice and open minded, particularly with regard to less ob-
jective sources of data. However, it does not seem necessary or desirable to lower
the bar on theory or methods of analysis to promote more innovative research. In
summary, we hope that this paper encourages management accounting researchers
to take on the challenges of investigating interesting, innovative questions oriented
to today’s business world and practice of management accounting.

NOTES

1. See http://www.imanet.org/content/Publications and Research/IMAstudies/moreless.pdf.
2. Although the data are available, we have resisted the temptation to classify the practice
orientation of management accounting researchers or educational institutions.
3. Some management accounting researchers are placing work in other management and
operations journals, such as Management Science. Omitting these articles could be a source
of sampling bias if this is a growing trend.
4. Malina and Selto (2001) describe this qualitative method in more detail.
5. Ninety-seven article abstracts (containing 126 supercodes) were dual coded by both
researchers. Five articles contained multiple codes, of which one supercode in each article
was not in agreement between the researchers.
6. The term “software” reflects selection, implementation, and management of software
systems and the hardware to run them. “Cost management” refers to activities to create
more value at lower cost and is distinguished from cost accounting, which measures costs.
7. The 5% cutoffs are arbitrary but retain the great majority of research and practice
articles for study. Without some cutoff, the research would resemble an annotated bibliog-
raphy of 2,000 articles. We do run the risk of ignoring particularly interesting but relatively
unreported topics.
8. In concept, one might prefer to separate practice descriptions of emerging problems
from advocacy for preferred solutions. Some also might argue that research naturally in-
vestigates different topics than practice. This study regards all differences as opportunities
for interesting research.

ACKNOWLEDGMENTS
We acknowledge and thank Shannon Anderson, Phil Shane, Naomi Soderstrom
and participants at the 2003 Advances in Management Accounting Conference,
2002 MAS mid-year meeting, a University of Colorado at Boulder workshop and
the AAANZ-2001 conference for their comments and suggestions for this paper.

REFERENCES
Anderson, P. F. (1983). Marketing, scientific progress, and scientific method. Journal of Marketing,
47(4, Fall), 18–31.
Atkinson, A. A., Balakrishnan, R., Booth, P., Cote, J., Groot, T., Malmi, T., Roberts, H., Uliana, E., &
Wu, A. (1997). New directions in management accounting research. Journal of Management
Accounting Research, 79–108.
Demski, J. S., & Sappington, D. E. M. (1999). Summarization with errors: A perspective on empirical
investigations of agency relationships. Management Accounting Research, 10(1), 21–37.
Elnathan, D., Lin, T., & Young, S. M. (1996). Benchmarking and management accounting: A
framework for research. Journal of Management Accounting Research, 37–54.
Institute of Management Accountants (2000). Counting more, counting less.
http://www.imanet.org/content/publications and research/IMAstudies/moreless.pdf.
Ittner, C. D., & Larcker, D. F. (1998). Innovations in performance measurement: Trends and research
implications. Journal of Management Accounting Research, 205–238.
Ittner, C. D., & Larcker, D. F. (2001). Assessing empirical research in managerial accounting: A
value-based management perspective. Journal of Accounting and Economics, 32(1–3), 349–410.
Malina, M. A., & Selto, F. H. (2001). Communicating and controlling strategy: An empirical study of
the effectiveness of the balanced scorecard. Journal of Management Accounting Research, 13,
47–90.
Miles, M., & Huberman, A. (1994). Qualitative data analysis: An expanded sourcebook. Thousand
Oaks, CA: Sage.
Shields, M. D. (1997). Research in management accounting by North Americans in the 1990s. Journal
of Management Accounting Research, 3–62.

APPENDIX: PRACTICE-ORIENTED
RESEARCH QUESTIONS

Major Topic / Sub-Topic / Selected Research Questions

Budgeting
  Activity based
    - What are effects of merging ABC with planning & forecasting?
  Capital
    - What is the optimal capital budgeting model for electronic business?
  Planning & forecasting
    - What are the determinants of effective planning and forecasting?
    - What are effects of merging the BSC with planning & forecasting?
    - What exogenous factors affect sales and cost forecasting? What is a parsimonious model?
    - What are the roles of IT & decision-support systems in improving planning & forecasting?
  Business processes
    - Which activities in the finance function can be eliminated leading to reduced costs while maintaining high levels of support and integrity in the accounting information?
    - Under what business conditions (e.g. size, industry, strategy, organization type, etc.) can the “lean support model” be effectively implemented in the finance function?
    - Is the reduction in finance costs as a percent of sales “real” or have the costs and related work simply been shifted to other areas of the organization?
    - There is anecdotal evidence supporting initiatives (e.g. corporate purchasing cards, single consolidated credit card system, online marketplace) designed to streamline the procurement system in order to increase efficiency and reduce costs. Which is most effective? What are the determinants of effective procurement initiatives?
    - How does technology play a role in successfully reengineering business processes such as the travel and expense process?

Management control
  Alliances
    - What is the nature of optimal business partner or exclusive supply contracts?
    - What are the determinants of successful or failed alliances?
    - What are appropriate controls for outsourced services or manufacturing?
    - How do controls differ between using sole sources and a portfolio of suppliers?
  Customers/Information
    - What are the costs, benefits, and risks of sharing information with customers?
  Intangibles
    - What is the influence of corporate culture on control, frequency and success of alliances?
  Smartcards
    - How does decentralizing purchasing authority via smartcards affect the control environment?
  Structure
    - What is the control tradeoff between keiretsu organizations and independence?

Cost accounting
  Environmental
    - How can firms measure current and future environmental costs and liabilities of products, processes, and projects?
    - Can ABC and ABM reduce environmental costs and liabilities?
  Standards
    - How do cost accounting standards affect entry, exit, and profitability of affected firms?
    - Do cost accounting standards result in improved contracting and performance?

Cost management
  ABC
    - What is the optimal complexity of ABC systems?
    - Are objectivity and precision of measurement incompatible with efficient ABC systems? What are the information quality tradeoffs?
  ABM
    - Does ABM lead to observable improvements in processes, products, services, and financial performance?
    - What are determinants of successful ABM efforts?
    - How can organizations successfully move from ABM pilot projects to wider deployment?
  Benchmarking
    - What are the costs and benefits of benchmarking at the process, service, or firm levels?
    - What are the attributes of successful or unsuccessful design and implementation of benchmarking?
  Cost reduction/efficiency
    - What is the efficiency of spending to increase customer satisfaction?
    - What are the effects of IT and organizational changes on efficiency?
    - What are the effects of IT on total costs and productivity?
  Shared services
    - What are the efficiencies of locating business services in shared service centers?
    - Is ABM necessary to justify shared or outsourced services?
    - What are the organizational impediments and arguments for shared services vs. distributed or outsourced services?
  Strategy
    - Is the balanced scorecard (BSC) an appropriate tool for performance evaluation, as well as for strategic planning and communication?
    - Are scenarios from financial planning models effective tools for strategic management?

Electronic business
  Commerce
    - What are appropriate management controls, internal controls, and performance measures for E-business ventures? Do they differ from conventional business?
    - What is the optimal capital budgeting model for electronic business?
  Internet/WWW
    - How do on-line auctions affect transaction costs?
    - What are the impacts of intranet exchange of accounting information?
    - What determines effective intranet exchange of accounting information?
  Processing
    - What is the impact of electronic processing on the firm’s control environment?
  XML
    - How do accountants successfully use extensible markup language (XML) to facilitate the exchange and communication of accounting information?

Internal control
  Controls
    - Is the implementation of controls a cost savings or a cost expenditure (e.g. what are the costs and benefits of certain control procedures? Which decision models can facilitate this decision?)
  Data security/computer fraud
    - How do small firms, perhaps without necessary resources to support data security, implement a strong control environment?
  Ethics
    - What are the characteristics of an effective ethical environment (e.g. training programs, codes of conduct, ethical standards, tone at the top, etc.)?
  Fraud awareness/detection
    - Does implementing and adhering to the Internal Control Framework reduce the prevalence of fraud and positively impact firm performance?

Performance measurement
  BSC
    - What are the observable impacts of BSC implementation and use?
    - What are the effects of alternative means of BSC implementation?
  Systems
    - How does the design and structure of incentive compensation systems relate to incidences of fraud?
    - Can action-profit-linkage chains be designed to capture effects of investments in intangibles such as training, information technology and employee satisfaction?
  Strategic
    - What are the appropriate ratios or indicators to measure whether an organization is meeting its strategic goals, which may be heavily marketing and customer oriented? Are these indicators financial, non-financial, or qualitative?

Software
  Accounting
    - What are the costs and benefits of maintaining dual costing systems to satisfy demands for information related to both strategic cost management and operational improvements?
    - What steps can be taken to ensure that a software conversion is performed competently and efficiently?
  ERP
    - What are the benefits and costs of implementing an enterprise resource planning (ERP) system?
    - What are the attributes of efficient data mining?
  Human resources
    - What are the effects of HR software on improvements in hiring, training, retention and evaluation (e.g. linked to BSC performance models)?
  Selection
    - What characteristics of accounting software packages affect the efficiency of the organization?
THE PROFIT IMPACT OF VALUE CHAIN
RECONFIGURATION: BLENDING
STRATEGIC COST MANAGEMENT
(SCM) AND ACTION-PROFIT-LINKAGE
(APL) PERSPECTIVES

John K. Shank, William C. Lawler and Lawrence P. Carr

ABSTRACT
An important management topic across a wide spectrum of firms is recon-
figuring the value delivery system – defining the boundaries of the firm.
Profit impact should be the way any value chain configuration is evaluated.
The managerial accounting literature refers to this topic as “make versus
buy” and typically addresses financial impact without much attention to
strategic issues. The strategic management literature refers to the topic
as “level of vertical integration” and typically sees financial impact in
broad “transaction cost economics” terms. Neither approach treats fully the
linkages all along the causal chain from strategic actions to resulting profit
impact. In this paper we propose a theoretical approach to explicitly link
supply chain reconfiguration actions to their profit implications. We use the
introduction by Levi Strauss of Personal Pair™ jeans to illustrate the theory,
evaluating the management choices by comparing profitability for one pair
of jeans sold through three alternative value delivery systems. Our intent is
to propose a theoretical extension to the make/buy literature which bridges
the strategic management literature and the cost management literature,
using A-P-L and SCM, and to illustrate one application of the theory.

THE PROBLEM SETTING


More explicitly, managing and even re-engineering relationships all along the
supply chain has become a central element of strategy for many organizations
in a wide range of industrial contexts (Beamish & Killing, 1997; Gulati, 1995;
Mowery, 1988; Nohria & Eccles, 1992). Several rationales for more formal
“partnering” have been offered:
 achieving production efficiency (Womack & Roos, 1990);
 sharing R&D risks (Westney, 1988);
 gaining access to new markets and skills (Kogut, 1988);
 reducing the time to market in the development of new products (Clark &
Fujimoto, 1991);
 searching for new technological opportunities (Hagedoorn, 1993).

It has been reported that firms which have restructured their value delivery
system have experienced lower overhead costs, enhanced responsiveness and
flexibility, and greater efficiency of operations (Lorenzoni & Baden Fuller, 1995).
Furthermore, from the single-firm perspective, alliances and partnerships have
created new strategic options, induced new rules of the game, and enabled new
complementary resource combinations (Kogut, 1991).
From the initial emphasis on joint ventures, a wider spectrum of forms of network
alliances has emerged. The basic idea in “outsourcing strategies” is to transform the
firm’s value chain to reduce the assets required and the number of traditional func-
tional activities performed inside the organization, resulting in a much different
configuration of the corporate boundaries. This can challenge the firm to carefully
reconsider its core capabilities (Prahalad & Hamel, 1994). Alliances and partner-
ships create leverage in strategic maneuvering, shaping the so-called “intelligent
firm” (Quinn, 1992) or the “strategic center” (Lorenzoni & Baden Fuller, 1995).
In “networked organization” design, the key choice is which activities to per-
form internally and which to entrust to the network (Khanna, 1998). The choice
can free resources from traditional supply chains to focus on core competencies
that foster the firm’s competitive advantage. One example is Dell’s de-emphasis
of manufacturing in favor of web-enhanced direct distribution in business PCs in
the 1990s. Williamson (1975) frames this choice as how much “market” and how
much “hierarchy” and sees it as a cost-benefit trade-off. If transaction costs in the
marketplace exceed the benefits of the outsourcing, the vertical integration option
(hierarchy) is preferred.
Accounting information should play a fundamental role in evaluating the
placement of the boundaries of the firm. Indeed, cost estimation for the resources
employed and for the related benefits should shape how managers make decisions
about the rational level of vertical integration. But management accounting and
strategic management studies to date do not provide full financial analysis for this
trade-off. Typically, “make/buy” decisions are framed in managerial accounting
from a short-run, differential cost perspective. This approach typically ignores
long-run costs and asset investments related to the management activities involved
and also ignores linkages with customers and suppliers. In contrast, the strategic
management literature frames such decisions in terms of transaction costs which
emerge in using markets instead of hierarchies. The former approach includes
very little strategy, while the latter includes very little accounting. We will argue
here that they are not alternative viewpoints but rather potentially reinforcing
partial lenses. We support the view that the two approaches can and must be
joined (Johnson, 1992; Johnson & Kaplan, 1987).
The strategic cost management (SCM) framework uses cost information to
develop and implement strategies to acquire or sustain competitive advantage
(Shank & Govindarajan, 1993). Strategic cost management is a comprehensive
cost analysis framework which explicitly considers the firm’s competitive
positioning in light of the value creation process all along the value chain (Shank,
1989; Shank & Govindarajan, 1992). A useful extension to this approach is to
model the profit implications of causal links all along the supply chain with the
A-P-L framework proposed by Epstein et al. (2000).

THEORETICAL BACKGROUND
The Transaction Cost Approach

Transaction cost economics (TCE) defines the rational boundaries of the firm in
terms of the trade-off between the costs of internally producing resources and the
costs associated with acquiring resources in an external exchange. Williamson
(1975) developed this approach, drawing upon the institutional economics studies
of Coase (1937). A “transaction” is defined as the exchange of a good or a service
between two legally separated entities. Williamson (1985) holds that such ex-
changes can increase costs, relative to internal production, because of delays, lack
of communication, transactor conflicts, malfunctions or other maladjustments.
With the purpose of avoiding such transaction costs, firms set up internal gov-
ernance structures with planning, coordination and control functions. However,
these structures, themselves, also cause resource consumption and thus costs.
Transaction costs derive from managing supplier and buyer activities both ex
ante in searching, learning, and negotiating (or safeguarding) agreements, and ex
post in organizing, managing and monitoring the resulting relationship. The cost
of transacting may become higher than the production cost savings because of
increases in opportunistic behavior, bounded rationality, uncertainty, transactor
conflicts or asset specificity. Firms then tend to prefer the integrated organization
over the market transaction.
The vertically integrated organization may provide numerous benefits:
controllability of the actors, better attenuation of conflicts, and more effective
communications (Williamson, 1986). But, a great many people believe today
that markets provide better incentives and limit bureaucratic distortions more
efficiently than vertical integration. The market may also better aggregate several
demands, yielding economies of scale or scope (Williamson, 1985). Vertical
integration, as a generalization, is in decline. For further theoretical literature on
vertical integration see D’Aveni and Ravenscraft (1994), D’Aveni and Ilinitch
(1992), Harrigan (1983), Hennart (1988), or Quinn et al. (1990).

The Management Accounting Approach

The management accounting approach to evaluating outsourcing is basically to
compare the short-run differential costs and benefits of the different make or buy
options (Atkinson et al., 1997; Horngren et al., 1997; Shillinglaw, 1982). This
approach, in principle, takes into consideration all the “economic” aspects of the
decision. In practice, however, the analysis tends to be rather narrow for at least
two reasons. First is the typical assumption that the relevant time frame is the short
run where management support costs and asset structures are considered fixed
and thus irrelevant for the choice (Horngren et al., 1997). The second limitation is
that accounting systems typically do not provide explicit cost information related
to all of the activities that would emerge or disappear with the decision (Anthony
& Welsch, 1977). Recently, management accounting has become more cognizant of
these “management systems” costs through activity based costing (Atkinson et al.,
1997; Horngren et al., 1996; Kaplan & Cooper, 1998; Shank & Govindarajan,
1993). Activity based costing (ABC) focuses cost analysis on all the activities
required to produce a product or support a customer.
In principle, to emphasize transaction costs issues, ABC could be linked to
a full value chain framework. This would facilitate a broader conception of
the costs and benefits associated with make or buy decisions. In particular, it
could allow managers to better estimate the “cost of ownership” (Carr & Ittner,
1992; Kaplan & Atkinson, 1989; Kaplan & Cooper, 1998). Cost of ownership
would include not only purchase price, but also those costs related to purchasing
activities (ordering, receiving, incoming inspection), holding activities (storage,
cost of capital, obsolescence), poor quality issues (rejections, re-receiving, scrap,
rework, repackaging), or delivery failures (expediting, premium transportation,
lost sales owing to late deliveries). Cost of ownership is obviously dramatically
higher than purchase price alone.
For example, Carr and Ittner (1992) note that Texas Instruments increased its
estimate of the cost of ownership of an integrated circuit from $2.50 to $4.76
when considering poor system quality. In another survey, Ask and Laseter (1998)
found that in selected commodities such as office supplies, fabrication equipment
and copy machines, total cost of ownership was, respectively, 50, 100, and 200%
higher than purchase price alone. Clearly, ABC is a necessary augmentation to
the traditional management accounting conception of the make/buy decision, but
the result is still internally focused on the firm rather than the full supply chain.
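The cost-of-ownership idea can be made concrete with a short calculation. The sketch below is ours, not the authors’: it treats total cost of ownership as purchase price plus ABC-style activity cost pools. The individual pool amounts are hypothetical placeholders, chosen only so that the total reproduces the $2.50-to-$4.76 Texas Instruments figure cited above.

```python
# Illustrative sketch (not from the paper): a simple total-cost-of-ownership (TCO)
# calculation in the spirit of Carr and Ittner (1992). All cost-pool figures below
# are hypothetical placeholders chosen only to show the mechanics.

def total_cost_of_ownership(purchase_price, activity_costs):
    """Purchase price plus the per-unit cost of ownership activities."""
    return purchase_price + sum(activity_costs.values())

purchase_price = 2.50  # e.g. an integrated circuit, as in the TI example
activity_costs = {     # hypothetical per-unit allocations from an ABC analysis
    "purchasing (ordering, receiving, inspection)": 0.60,
    "holding (storage, capital, obsolescence)":     0.55,
    "poor quality (rejects, rework, scrap)":        0.75,
    "delivery failures (expediting, lost sales)":   0.36,
}

tco = total_cost_of_ownership(purchase_price, activity_costs)
print(f"Purchase price: ${purchase_price:.2f}")
print(f"Total cost of ownership: ${tco:.2f}")            # $4.76 with these placeholders
print(f"Premium over purchase price: {tco / purchase_price - 1:.0%}")
```

With these placeholder pools, ownership activities add roughly 90% to the purchase price, in line with the ranges Ask and Laseter (1998) report.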

The Strategic Cost Management Perspective

Strategic Cost Management is the view that cost analysis and cost management
must be tackled broadly with explicit focus on the firm’s strategic positioning in
terms of the overall value supply chain of which it is a part.

Strategic Positioning
For sustained profitability, any firm must be explicit about how it will compete.
Competitive advantage in the marketplace (Porter, 1985) ultimately derives from
providing better customer value for equivalent cost (differentiation) or equivalent
customer value for lower cost (low cost). Occasionally, in a few market niches,
a company may achieve both cost leadership and superior value simultaneously,
for a while. Examples include IBM in PCs in 1986 or Intel in integrated circuits
in 1992. In general, shareholder value derives more clearly from differentiation,
since the benefits of low cost are ultimately passed more often to customers than
to shareholders.

Value Chain Analysis


The Value Chain framework sees any business as a linked and interdependent pro-
gression of value-creating activities, from basic raw material acquisition through to
end-use customers. Each link in the chain is strategically relevant. Where along the
chain is value generated and where destroyed? More broadly, each firm is part of an
industry which also is a linked system of multiple chains with comparable issues
about value creation and destruction at each stage (Shank et al., 1998). In carefully
analyzing its internal value chain and the industry chain of which it is a part, a firm
might discover that economic profits are earned in the downstream activities, such
as distribution or customer service or financing, but not upstream in basic manufac-
turing. For example, Shank and Govindarajan (1992) show that in the consumer liq-
uid packaging industry, much higher returns to investment are earned downstream
at the filling plant than upstream in package manufacturing. In such a case, any
incremental resource allocation upstream would require very strong justification.
Many businesses today are showing that value is moving downstream in the
chain (Slywotzky, 1996). General Electric and Coca Cola have experienced the
benefits of moving downstream (Slywotzky & Morrison, 1997). In the U.S. auto
industry, a very high percentage of overall profit is in after-market services such
as leasing, insurance, rentals, and repairs. New car sales and auto manufacturing
show low profit (Gadiesh & Gilbert, 1998). At the deepest level, value chain
analysis allows managers to better understand their activities in relation to
their core competencies and to customer value. Many firms have discovered
that streamlining the chain can reduce costs and enhance the value provided to
customers (Hergert & Morris, 1989; Normann & Ramirez, 1994; Porter, 1985;
Rackham et al., 1996; Womack & Jones, 1996).

The Action-Profit-Linkage (APL) Model

Epstein et al. (2000) argue that evaluating the impact of any strategic initiative
requires assessing the profit implications all along the linked set of causal steps
that make up the initiative. Too often, they say, the linkages from a decision to the
related action variables to the related intervening system variables to profit are not
clearly identified and quantified. They propose and illustrate a theoretical model
(APL) to make such linkages explicit. The APL model is intended to promote an
integrative and systemic approach to evaluating strategic choices and to propose
a new performance metric – full supply chain profitability – for use in monitoring
strategy implementation.
We believe that APL is a very appealing extension of the SCM framework for
evaluating strategic choices and we incorporate it here.

Lean Thinking

The work by Womack and Jones (1996) on supply chain reconfiguration provides
a context to apply SCM and APL to the decision by Levi Strauss to introduce
Personal Pair™ jeans. Although Levi’s has long been a dominant brand in apparel,
Levi Strauss is primarily a manufacturing company, selling its products to whole-
salers or retailers rather than end-use customers. As the apparel business continues
to evolve, firms all along the industry value chain are continually presented with
opportunities to create new ways to compete. This requires the firm to carefully
position itself within the industry structure, avoiding or mitigating the power of
competitors. Successful firms have the ability to differentiate an idea from an
opportunity and can quickly marshal physical resources, money, and people to take
advantage of windows of business opportunity. Competitive advantage is always a
dynamic concept, continually shifting as firms either reposition within industries
or position in such a manner that existing industry boundaries are redrawn
(D’Aveni, 1993).
Womack and Jones have studied this process since the mid-1980s, starting with
the auto industry. They later expanded their research base in an attempt to identify
“best-in-class,” across industries (Womack & Jones, 1996). They termed their point
of view “lean thinking.” In their cross-industry study, they demonstrate that many
companies have been able to create substantial shareholder wealth by challenging
the way they implement their strategies through a five-step process. First, identify
the value criterion from the customer viewpoint at a disaggregated level – a specific
product to a specific customer at a specific price at a specific place and time.
Second, map the value chain’s three elements: the physical stream which originates
with the first entity that supplies any raw input to the system and ends with a satis-
fied customer, regardless of legal boundaries; the information stream that enables
the physical stream; and the problem solving/decision stream which develops
the logic for the physical stream. Third, focus on continuous flow and minimize
disruptions such as those in a typical “push-based, batch-and-wait” system. This is
accomplished by the fourth step – creating “pull,” such that the customer initiates
the value stream. And fifth, strive for continuous improvement (Kaizen) by
creating a “virtuous circle” where transparency allows all members to continually
improve the system.
In their study of world-class lean organizations, the authors cite results such
as 200% labor efficiency increases, 90% reductions in throughput time, 90%
reductions in inventory investment, 50% reductions in customer errors and 50%
reductions in time-to-market with wider product variety and modest capital
investment. An APL model is necessary to tie down the profit implications of
improvements in these leading performance indicators.
In the next section of the paper, we present the Levi’s Personal Pair™ business
initiative as one example of a management innovation demonstrating attention to
all five of these steps. In Section IV, we present a full value chain profitability
impact assessment for the example using APL.

LEVI’S “PERSONAL PAIR”™ JEANS

In 1995, women’s jeans was a $2 billion fashion category in the U.S. and growing
fast. Levi’s was the market leader with more than 50% share of market, but their
traditional dominant position was under heavy attack. Standard Levi’s women’s
jeans, which were sold in only 51 size combinations (waist and inseam) had
been the industry leading product for decades, but “fashion” was now much more
important in the category. Market research showed that only 24% of women were
“fully satisfied” with their purchase of standard Levi’s at a list price of about
$50 per pair.
“Fashion” in jeans meant more styles, more colors, and better fit. All of these
combined to create a level of product line complexity that was a nightmare for
manufacturing-oriented, push-based companies like Strauss which depend on
independent retailers to reach consumers. Recognizing a need for better first-hand
market information, in the early 1990s Strauss opened a few retail outlets, Original
Levi’s stores. By 1994, Strauss operated 19 retail outlets across the country (2,000
to 3,000 square foot mall stores) to put them in closer touch with the ultimate
customers. But this channel was still a tiny part of their overall $6 billion sales
which were still primarily to distributors and independent retailers.
Strauss was as aggressive as most apparel manufacturers and retailers in
investing in process improvements and information technology to improve
manufacturing and delivery cycle times and (pull-based) responsiveness to actual
buying patterns. But the overall supply chain from product design to retail sales
was still complex, expensive and slow. In spite of substantial improvements in
recent years, including extensive use of Electronic Data Interchange (“EDI”),
there was still an eight-month lag, on average, between receiving cotton fabric
and selling the final pair of Levi’s jeans (see Fig. 1). The industry average lag was
still well over twelve months in 1995.
Custom Clothing Technology Corp. (CCTC), a small Newton, MA-based
software firm, offered Levi’s a very innovative business proposal in 1994 based on
an alternative value chain concept. CCTC specialized in client/server applications
linking point-of-sale custom fitting software directly with single-ply fabric cutting
software for apparel factories. CCTC suggested a joint venture to introduce
women’s Personal Pair™ kiosks in 4 of the Original Levi’s stores. The management
of CCTC had solid technology backgrounds but little retail experience. They were,
however, convinced of the attractiveness of their new process, which operates
as follows:

(1) The Personal Pair™ kiosk is a separate booth in the retail store equipped with
a touch screen PC.

Fig. 1. The Conventional Supply Chain for Levi’s Jeans.

(2) A specially-trained sales clerk uses a tape to take three measurements from
the customer (waist, hips and rise) and record them on the touch screen. There
are 4,224 possible combinations of these three measurements. Inseam length
is not yet considered.
(3) The computer flashes a code corresponding to one of 400 prototype pairs
which are stocked at the kiosk. The sales clerk retrieves the prototype pair for
the customer to try on.
(4) With one or two tries, the customer is wearing the best available prototype.
Then the sales clerk uses the tape again to finalize the exact measurements
for the customer (4,224 possible combinations) and to note the inseam length
desired.
(5) The sales clerk enters the 4 final measurements on the touch screen and records
the order. The system was available only for the Levi’s 512 style, but 5 color
choices were offered in both tapered and boot-cut legs.
(6) The customer pays for the jeans and pays a $5 Fed Ex delivery charge (per
pair). Delivery is promised in not more than three weeks.
(7) Each completed customer order is transmitted by modem from the kiosk
to CCTC where it is logged and retransmitted daily to a Levi’s factory in
Tennessee.
(8) At the factory, each pair is individually cut, hand sewn, inspected and packed
for shipment. Each garment includes a sewn-in bar code unique to the customer
for easy re-ordering at the store where the bar code is on file in the kiosk.
(9) There is a money-back guarantee of full satisfaction on every order.

Using Lean Principles to Analyze the CCTC Offer

As is immediately obvious in Fig. 1, the Original Levi’s store system is the an-
tithesis of lean! Due to uncertainty in demand forecasting and inconsistent supply
chain lead times, large investments in inventories are necessary (raw, WIP and
finished). This, in turn, necessitates investments in logistics support assets such as
warehouses, IT systems, and vehicles.
For the business that CCTC targeted, the five elements of lean thinking can be
summarized as follows:
(1) Value. Although Levi Strauss was a very profitable and very large firm, only
24% of women were satisfied with the fit of their new jeans. This “opportunity”
in women’s jeans that Levi’s was missing illustrates the need to apply the lean
thinking methodology on a disaggregated level.
(2) Value stream. It is unclear how concerned Levi’s was about the cumbersome
value stream for this particular product. Their approach was typical in the
industry, and Levi’s corporate ROE averaged a very robust 38% for the three-
year period, 1993 to 1995.
(3) Continuous flow. Given the eight month denim-to-sale cycle, with frequent
inventory “stops,” there is clearly very little “flow.”
(4) Pull. Likewise, this is the classic push system. The customer initiates noth-
ing. All production and distribution activity is driven by sales forecasts and
production lot-sizing.
(5) Kaizen. Again, it seems that Levi’s satisfaction with a high overall ROE may
have led them to miss the lack of transparency in the women’s jeans chain
which blocks the opportunity for continuous improvement.

PERSONAL PAIR™ IMPACT ON WEALTH CREATION

Although Levi’s does not publish financial results for women’s jeans sold through
the Original Levi’s channel, we were still able to analyze this channel with some
degree of confidence using field site visits, industry averages, benchmark company
comparisons, and interviews with industry participants.

Fig. 2. The Levi Strauss Aggregate Financial Footprint (1993–1995).



Publicly available information allows us to construct the overall financial
footprint for Levi Strauss shown in Fig. 2. One key element of our analysis
here is converting this aggregate level financial information for Levi Strauss to a
disaggregated unit – one pair of Personal Pair™ jeans. This is the basis for building
plausible profitability estimates for one pair of jeans sold in different distribution
channels.

The Wholesale Distribution Channel

A breakdown of the profitability impact along each stage of the chain for the
normal wholesale channel, using the APL model, is shown in Fig. 3. As noted
earlier, the retail list price for a pair of jeans is approximately $50. Assuming a
typical retail gross margin of 30%, the Levi’s wholesale price is close to $35. In
addition, historically, approximately 1/3 of Levi’s jeans are sold at markdowns
averaging approximately 30% off list. This equates to average price allowances of

Fig. 3. An APL Formulation of the Profitability of Women’s Jeans to Levi Strauss as a


Wholesale Supplier.
The Profit Impact of Value Chain Reconfiguration 49

about $5 per pair (1/3 × 30% × $50). About 60% of this, or $3 per pair, is made
good by Levi’s in some type of co-op agreement. The result is a net sales price for
Levi’s of $32 ($35 – $3).
The footprint gross margin in Fig. 2 for Levi’s as the manufacturer is about
40%. This implies that cost of goods sold for one pair of jeans is about $19
(60% × $32). From research, we know that denim costs about $5 per pair and
conversion another $5. This leaves approximately $9 for distribution logistics.
Overall S,G&A is 25% of sales per Fig. 2, which would be $8 per pair, based
on net sales of $32. We estimate S,G&A to be moderately higher ($9 per pair)
for women’s jeans because of the more complex supply chain for a fashion item.
Pre-tax profit per pair is thus $4 ($32 – $19 – $9). Note that a significant part of
cost is directly due to the “push” system in place ($3 in markdowns, the additional
$1 in S,G&A, and $9 in distribution costs).
The investment per pair can also be estimated from the financial footprint. Using
the average inventory turnover of 4.73, we estimate the inventory investment to
be about $4 ($19/4.73). Accounts payable (27 days) for this channel rounds to
$1, yielding a net inventory requirement of $3 for every pair sold. The collection
period for women’s jeans should not be that much different from the overall
Levi’s collection period of 51 days, which translates to $4 in accounts receivable
for each pair. In a like manner, the 5.33 fixed asset turn gives us a total of $6
per pair ($32/5.33) in property assets. Our field research indicates that this plant
investment for the normal channel is mostly in the factory, rather than distribution.
In total, we estimate that, for this channel, every pair sold requires capital of
approximately $13 ($4 – $1 + $4 + $6). With the above pre-tax operating profit
of $4, this is an overall very healthy ROIC of about 31%. The figure is marginally
less than the corporate average because of extra downstream costs for a women’s
fashion item.
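
The wholesale-channel arithmetic above can be collected into one short calculation. The sketch below only restates the estimates already given in the text and introduces no new data; rounding to whole dollars follows the authors’ convention.

```python
# A minimal sketch reproducing the per-pair arithmetic for the wholesale channel
# described above (Fig. 3). Inputs are the estimates given in the text.

list_price = 50.0
wholesale_price = list_price * (1 - 0.30)                # ~$35 after a 30% retail margin
markdown_allowance = (1 / 3) * 0.30 * list_price * 0.60  # ~$3 made good by Levi's
net_sales = round(wholesale_price - markdown_allowance)  # ~$32

cogs = round(0.60 * net_sales)       # 40% footprint gross margin -> ~$19
denim, conversion = 5, 5
distribution = cogs - denim - conversion                 # ~$9
sga = round(0.25 * net_sales) + 1    # ~$8 plus $1 extra for a fashion item -> $9
pretax_profit = net_sales - cogs - sga                   # ~$4

inventory = round(cogs / 4.73)       # inventory turnover of 4.73 -> ~$4
payables = 1                         # 27 days of payables, rounded
receivables = round(net_sales * 51 / 365)                # 51-day collection period -> ~$4
fixed_assets = round(net_sales / 5.33)                   # fixed asset turnover of 5.33 -> ~$6
invested_capital = inventory - payables + receivables + fixed_assets  # ~$13

print(f"Pre-tax profit per pair: ${pretax_profit}")
print(f"Invested capital per pair: ${invested_capital}")
print(f"ROIC: {pretax_profit / invested_capital:.0%}")   # roughly 31%
```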

The Own Store Channel

We next make the adjustments necessary to convert this analysis to one pair of
women’s jeans sold through Original Levi’s stores. The profitability impact at
each stage along this chain is shown in the APL model in Fig. 4. Basically, the
financial consequences of adding a retail outlet can be derived from databases for
retail clothing companies. The only difficult element to estimate is the in-store
investment. A visit to a local Levi’s outlet revealed the following information:
 Building investment – 3,000 square feet leased for about $240,000 per year. The
lease rate of $80 per foot per year is typical for high-end malls. We capitalized
the lease cost at 10% to estimate an investment of approximately $2,400,000 in
the building space.
 Volume – The average Original Levi’s store has 1,000 SKUs and 20,000 pairs of
pants on hand, on average. This inventory turns approximately 6 times per year,
yielding an annual store volume of approximately 120,000 pairs.
 Store investment per pair sold – $2,400,000/120,000 pairs = $20 per pair. This
calculation assumes Levi’s to be the implicit owner of this space. Whether Levi
Strauss should actually own or rent retail space is beyond the scope of this
analysis.

Fig. 4. An APL Formulation of the Profitability of Women’s Jeans Sold Through the
Original Levi’s Channel.

Comparing the normal wholesale channel with the owned retail channel, prof-
itability (ROIC) for women’s jeans falls by about 50%, from 31 to 16%. Levi
Strauss is paying a high price to gain customer intimacy in this segment!
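
The store-investment figure follows directly from the lease and volume estimates just given. A minimal sketch, using only the numbers quoted above (the 16% channel ROIC is the authors’ reported result from Fig. 4, not derived here):

```python
# A minimal sketch of the own-store adjustment described above: the capitalized
# lease spread over annual store volume gives the store investment per pair.

annual_lease = 3_000 * 80                     # 3,000 sq ft at $80 per sq ft per year = $240,000
store_investment = annual_lease / 0.10        # lease capitalized at 10% -> $2,400,000
annual_volume = 20_000 * 6                    # 20,000 pairs on hand turning 6 times per year
store_investment_per_pair = store_investment / annual_volume   # $20 per pair

print(f"Store investment per pair sold: ${store_investment_per_pair:.0f}")
print("Reported channel ROIC: about 16%, versus about 31% in the wholesale channel")
```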

The Personal Pair™ Channel

Our next step is to estimate how Personal Pair™ changes the profitability analysis.
This requires that we first understand the CCTC value chain, its impact on the
aggregate financial footprint, and its impact on each pair of jeans. Based on the
CCTC proposal outlined earlier, a reasonable estimate of the new value chain is
as shown below.

As is obvious in comparing the original Levi’s and Personal Pair™ value chains,
the CCTC system is indeed much more “lean.” It adds “fit” value for the customer.
It has a well-defined value stream, including not only the physical and information
flows, but also the decision-making. The flow is interrupted only at the transporta-
tion nodes and is initiated by “pull” from a customer order. Although perfection
can never be achieved, the areas for kaizen seem obvious given the transparency
and simplicity of the system. All five lean thinking criteria have been markedly
improved. But, how is profitability affected?
Specific financial information with respect to this system is difficult to estimate
because of the short history for CCTC and the vastly different structure of the
chain. However, our research enabled us to make the estimates summarized in
Fig. 5. Again, the APL framework is used to show profit impact all along the causal
chains. If, indeed, CCTC can deliver what it has promised, the results are dramatic.
More customer satisfaction implies an opportunity for higher selling prices.
Levi’s priced each Personal Pair™ $15 higher initially, with very little customer
resistance. One year later, the premium was cut to $10 based on the estimated
price elasticity. Custom fit also eliminates mark-downs, driving up the net price.
Distribution costs are transferred to Fed Ex for which the customer pays separately.
Operating costs per pair in the store are cut by half, assuming half the orders are
repeat business with zero store contact. Selling costs for the first pair would increase
given the time spent on measuring and fitting each customer. But, this happens only
once (until body dimensions change). The inventory and retail store investment
decrease substantially with only an offset of CCTC investment in computers
and software.

Fig. 5. An APL Formulation of the Profitability of Personal Pair™ Women’s Jeans. Note:
The normal $8 for Strauss plus the normal $10 for the store (increase in personal selling
offset by decrease in space costs), divided by 2 for 50% repeat orders by mail, plus $3
for CCTC ($8 + $10/2 + $3 = $16).
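
The selling-price and operating-cost adjustments can be written out directly from the note to Fig. 5 above. A minimal sketch using only the figures quoted in the text and in that note; the full per-pair income statement in Fig. 5 is not reproduced here.

```python
# A minimal sketch of the Personal Pair(TM) per-pair selling price and the
# S,G&A/in-store cost implied by the note to Fig. 5. All inputs are the
# authors' estimates.

list_price = 50 + 10   # the $10 premium that remained after the initial $15 premium was cut
markdowns = 0          # custom fit eliminates markdowns
net_price = list_price - markdowns                      # $60
# FedEx delivery is paid separately by the customer, so it does not enter Levi's costs.

strauss_sga = 8        # normal corporate S,G&A per pair
store_cost = 10 / 2    # normal $10 in-store cost, halved for 50% repeat orders by mail
cctc_cost = 3          # added CCTC infrastructure cost per pair
operating_cost = strauss_sga + store_cost + cctc_cost   # $16, as in the note to Fig. 5

print(f"Net selling price per pair: ${net_price}")
print(f"S,G&A and in-store cost per pair: ${operating_cost:.0f}")
```
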
Overall, the CCTC opportunity is very enticing with very high ROIC. Given
that this model did not have an established record in retail merchandising, a test
phase was probably the wise choice for Levi’s.

ASSESSING THIS CHANGE IN THE SUPPLY CHAIN

The Personal Pair™ kiosks were very popular almost immediately. The experiment
was extended to seven stores by the Summer of 1995. About one half the sales
were repeat orders which greatly simplify the point of sale process. In October of
1996, Heidi LeBaron-Leupp, marketing director for the Personal Pair™ program,
declared it a “phenomenal success.” For the styles affected, unit sales were up 49%.
Two years into the program (Fall, 1994 to Fall, 1996), the company’s experience
was that “Personal Pair” resulted in no change in raw material or conversion cost
(per pair), but virtually eliminated distribution costs and distribution investment.
Non-material manufacturing and distribution costs are cut by an amazing 47%
(from $15 to $8). Approximately eight months of inventory has been almost
completely eliminated (except for raw material). Warehouses, intermediate
handling, insurance, shrinkage, transportation logistics and vehicle depreciation
and maintenance are now only memories of the past with this channel. Still, the
CCTC stage of the chain does add some infrastructure costs and investment.
Although the kiosks are more labor intensive for the first pair sold, the in-store
cost is substantially less per pair sold after allowing for repeat orders at zero
in-store cost. The net result, given the increase in price to $60, is a 467% increase
in pre-tax profit (from $6 to $34 per pair) which is truly remarkable.
Equally as remarkable is the impact on asset investment. Inventory of $12 per pair
sold (reflecting the eight months of average “pipeline”) is reduced to $1 (reflecting
only raw material requirements). Accounts receivable is now negative, since these
jeans are prepaid for the period between sale and delivery. Non-manufacturing
PP&E is reduced by 40%, from $22 per pair in Fig. 4 to $13 in Fig. 5.
When the impact on the numerator and the denominator of ROIC are combined,
the promises of lean thinking are fulfilled. A kiosk yields a greater than tenfold
increase in profitability over an Original Levi’s store (from 16% ROIC to 200%),
while still accomplishing the strategic purpose of the store – a research lab for
putting Levi’s in closer touch with the end-use customer.
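
The headline comparisons in this passage follow from the per-pair figures already quoted in the text (Figs 4 and 5). A quick verification sketch:

```python
# Checking the reported percentage changes against the per-pair figures quoted above.

def pct_change(old, new):
    return (new - old) / old

print(f"Non-material mfg./distribution cost: {pct_change(15, 8):.0%}")   # about -47%
print(f"Pre-tax profit per pair:             {pct_change(6, 34):.0%}")   # about +467%
print(f"Non-manufacturing PP&E per pair:     {pct_change(22, 13):.0%}")  # about -41%, roughly 40%
print(f"ROIC multiple (kiosk vs. store):     {200 / 16:.1f}x")           # 12.5x, greater than tenfold
```
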
By 1999, there were 60 Personal Pair™ kiosks across the U.S. and Canada, one
in each of the Original Levi’s stores. The program in 1997 was responsible for
25% of all women’s jeans sales in the 30 U.S. company-owned stores.
Delivery was averaging only 3–4 days. The “promise date” was cut from three
weeks to two. All orders are shipped via Fed Ex which picks up daily at the
factory located near the Fed Ex hub in Memphis.
Levi Strauss acquired CCTC for more than $2 million in October of 1995.
The acquisition ensured that CCTC would continue to work with Levi’s,
and only with Levi’s, to expand and improve this niche segment which
is based on computer-based custom fit, custom manufacturing, and direct
distribution.
In the Fall of 1996, Levi’s introduced Personal Pair™ in two stores in London
as the first overseas locations. The price was £19 higher than the regular £46
price. Manufacturing was still in the U.S. with distribution via Federal Express.
During 1997, the Personal Pair™ cutting and sewing operations were both
moved into the main Levi’s production plant and are now side-by-side with the
still-traditional bulk manufacturing business. Levi’s has made a decision not to
roll out the Personal Pair™ concept to its independent distributors. Evaluating the
rationale for that choice is beyond the scope of this paper.
We believe that Levi’s roll out of the Personal Pair™ concept demonstrates
successful use of the lean thinking management principles. Women’s jeans was
a $2 billion industry with only 24% customer satisfaction. The principles of lean
thinking helped Levi’s seize this opportunity. CCTC, not burdened by the “push”
focus of the existing Levi’s value chain, was able to design a system much leaner
and “pull”-based, initiated by customer response. CCTC was able to create a
process flow that reflected value from the customer’s perspective. But lean think-
ing theory, alone, does not directly address profitability. For that, an APL model
is needed that draws on SCM concepts to encompass the entire supply chain.
Conceptually, we argue that an SCM-based APL model is the way to demonstrate
profitability impact. The example here provides one preliminary validation of the
SCM/APL theory.

CONCLUSION

Clearly, value chain reconfiguration is a central topic for many firms today in many
industries. Although value chain analysis can be framed in accounting terms as
the classic make/buy problem, the traditional accounting literature with its focus
on relevant cost analysis (RCA) does not provide much help with the strategic
aspects of the dilemma. The management literature is rich in discussing those
strategic issues, but is very thin on the related financial analysis. This literature’s
focus on transaction cost economics (TCE) is very appealing conceptually, but not
of much pragmatic help.
In this paper, we propose a new theoretical approach which extends conven-
tional RCA and TCE analysis to a full cost ROIC basis spanning the entire value
chain. This approach disaggregates the level of analysis from the firm as a whole
to an individual product sold to a particular customer segment. It couples the
SCM framework with an APL model to explicitly address the profit impact of
the managerial actions at each stage along the supply chain. We apply this new
theoretical model to one particular decision context to demonstrate its practical,
as well as conceptual, usefulness.
Although our financial comparison across the wholesale channel, the Original
Levi’s channel and the Personal Pair™ kiosk is based upon estimates, limited
public information and best judgment, we feel that it is directionally correct and
approximately accurate.
We believe the SCM/APL framework is a very useful way to study changes
in value delivery systems, particularly when the unit of analysis can be framed
in end-use customer terms. This paper is intended as a first step in establishing
the new approach. We believe the next steps are further studies to demonstrate
its applicability in other supply chain configuration contexts. We encourage such
further research.

REFERENCES
Anthony, R. N., & Welsch, G. (1977). Fundamentals of management accounting. Irwin.
Ask, J. A., & Laseter, T. M. (1998). Cost modeling: A foundation purchasing skill. Strategy and
Business, 10. Booz Allen & Hamilton.
Atkinson, A. A., Banker, R. D., Kaplan, R. S., & Young, S. M. (1997). Management accounting.
Englewood Cliffs, NJ: Prentice-Hall.
Beamish, P. W., & Killing, P. J. (Eds) (1997). Cooperative strategies. European perspectives. San
Francisco: New Lexington Press.
Carr, L. P., & Ittner, C. D. (1992). Measuring the cost of ownership. Journal of Cost Management, Fall.
Clark, K. B., & Fujimoto, T. (1991). Product development performance. Boston, MA: Harvard Business
School Press.
Coase, R. H. (1937). The nature of the firm. Economica, 4.
D’Aveni, R. A. (1993). Hypercompetition. New York, NY: Free Press.
D’Aveni, R. A., & Ilinitch, A. V. (1992). Complex patterns of vertical integration in the forest products
industry: Systematic and bankruptcy risk. Academy of Management Journal, 35.
D’Aveni, R. A., & Ravenscraft, D. J. (1994). Economies of integration vs. bureaucracy costs: Does
vertical integration improve performance? Academy of Management Journal, 37.
Epstein, M. J., Kumar, P., & Westbrook, R. A. (2000). The drivers of customer and corporate profitabil-
ity: Modeling, measuring and managing the causal relationships. Advances in Management
Accounting, 9.
Gadiesh, O., & Gilbert, J. L. (1998). Profit pools: A fresh look at strategy. Harvard Business Review,
May–June.
Gulati, R. (1995). Does familiarity breed trust? The implications of repeated ties for contractual choice
in alliances. Academy of Management Journal, 38.
Hagedoorn, J. (1993). Understanding the rationale of strategic technology partnering: Interorganiza-
tional modes of cooperation and sectoral differences. Strategic Management Journal, 14(5).
Harrigan, K. R. (1983). Strategies for vertical integration. Lexington, MA: Heath & Lexington Books.
Hennart, J. F. (1988). Upstream vertical integration in the aluminum and tin industry. Journal of
Economic Behavior and Organization, 9.
Hergert, M., & Morris, D. (1989). Accounting data for value chain analysis. Strategic Management
Journal, 10.
Horngren, C. T., Foster, G., & Datar, S. (1997). Cost accounting: A managerial emphasis. Englewood
Cliffs, NJ: Prentice-Hall.
Johnson, T. H. (1992). Relevance regained. New York, NY: Free Press.
Johnson, T. H., & Kaplan, R. S. (1987). Relevance lost. The rise and fall of management accounting.
Cambridge, MA: Harvard Business School Press.
Kaplan, R. S., & Atkinson, A. A. (1989). Advanced management accounting. Englewood Cliffs, NJ:
Prentice-Hall.
Kaplan, R. S., & Cooper, R. (1998). Cost & effect. Using integrated cost systems to drive profitability
and performance. Boston, MA: Harvard Business School Press.
Khanna, T. (1998). The scope of alliances. Organization Science, May–June, Special Issue.
Kogut, B. (1988). Joint ventures: Theoretical and empirical perspectives. Strategic Management Jour-
nal, 9(4).
Kogut, B. (1991). Joint ventures and the option to expand and acquire. Management Science, 37.
Lorenzoni, G., & Baden Fuller, C. (1995). Creating a strategic center to manage a web of partners.
California Management Review, 37.
Mowery, D. C. (Ed.) (1988). International collaborative ventures in U.S. manufacturing. Cambridge,
MA: Ballinger.
Nohria, N., & Eccles, R. G. (Eds) (1992). Networks and organizations: Structure, form, and action.
Boston, MA: Harvard Business School Press.
Normann, R., & Ramirez, R. (1994). Designing interactive strategy. From value chain to value constella-
tion. Chichester, UK: Wiley.
Porter, M. E. (1985). Competitive advantage. New York, NY: Free Press.
Prahalad, C. K., & Hamel, G. (1994). Competing for the future. Boston, MA: Harvard Business School
Press.
Quinn, J. B. (1992). Intelligent enterprise. A knowledge and service based paradigm for industry.
New York, NY: Free Press.
Quinn, J. B., Doorley, T. L., & Paquette, P. C. (1990). Technologies in services: Rethinking strategic
focus. Sloan Management Review, 3(1).
Rackham, N., Friedman, L., & Ruff, R. (1996). Getting partnering right. How market leaders create
long-term competitive advantage. New York, NY: McGraw-Hill.
Shank, J. K., & Govindarajan, V. (1992). Strategic cost analysis: The value chain perspective. Journal
of Management Accounting Research.
Shank, J. K., & Govindarajan, V. (1993). Strategic cost management: The new tool for competitive
advantage. New York, NY: Free Press.
Shank, J. K., Spiegel, E. A., & Escher, A. (1998). Strategic value analysis for competitive advan-
tage. An illustration from the petroleum industry. Strategy and Business, 10. Booz Allen
& Hamilton.
Shillinglaw, G. (1982). Managerial cost accounting. Homewood, IL: Irwin.
Slywotzky, A. J. (1996). Value migration. How to think several moves ahead of the competition. Boston,
MA: Harvard Business School Press.
Slywotzky, A. J., & Morrison, D. J. (1997). The profit zone. How strategic business design will lead
you to tomorrow’s profits. New York, NY: Times Business.
Westney, D. E. (1988). Domestic and foreign learning curves in managing international cooperative
strategies. In: F. J. Contractor & P. Lorange (Eds), Cooperative Strategies in International
Business. Lexington, MA: Lexington Books.
Williamson, O. E. (1975). Markets and hierarchies. New York, NY: Free Press.
Williamson, O. E. (1985). The economic institutions of capitalism. New York, NY: Free
Press.
Williamson, O. E. (1986). Economic organization. Brighton: Wheatsheaf Books.
Womack, J. P., & Jones, D. T. (1996). Lean thinking. New York, NY: Simon & Schuster.
THE MEASUREMENT GAP IN PAYING
FOR PERFORMANCE: ACTUAL AND
PREFERRED MEASURES

Jeffrey F. Shields and Lourdes Ferreira White

ABSTRACT
What is measured gets managed – especially if rewards depend on it. For
this reason many companies (over 70% in this survey) have upgraded
their performance measurement systems so as to include a mix of financial
and non-financial metrics. This study compares how companies currently
measure performance for compensation purposes with how their managers
think performance should be measured. We find significant measurement
gaps between actual and preferred measures, and we find that larger
measurement gaps are related to lower overall performance. The choice
of performance measures for compensation purposes is also related to the
attitudes of managers towards manipulation of reported results.

INTRODUCTION
Performance measures are powerful means of conveying which aspects of perfor-
mance are important to a company and which areas a manager needs to focus on to
be evaluated as a top performer. Managers direct their attention to those measures
that most strongly influence their compensation. Recognizing the motivational
effects of performance measures, many companies have implemented major
changes to improve their performance measurement systems. According to the
2001 survey of the Cost Management Group of the Institute of Management
Accountants, 80% of the respondents reported that their organizations invested
in significant changes in their performance measures in the last three years;
33% of the respondents described those changes as a “major overhaul” or “new
performance measurement system” (Frigo, 2001).
For many companies revising their performance measurement systems, there
has been an increase in the relative weight placed on non-financial, strategic
measures in incentive compensation contracts. But even as the percentage of
companies that link pay to financial and non-financial performance consistently
increases, there is little agreement on how to measure performance effectively.
Even within organizations, managers in different hierarchical levels often disagree
as to how performance should be measured for compensation purposes.
These issues motivated us to conduct a survey of managers concerning their
companies’ actual practices and the managers’ own preferences regarding performance
measures. While other surveys have provided general evidence regarding the
performance measures that companies are currently using, we designed this
survey to address four research questions. First, which performance measures
are currently being used for the specific purpose of incentive compensation?
Second, which measures do managers think should be used to calculate their own
compensation? Third, is the choice of performance measures related to overall
performance (especially in cases where actual measurement practices deviate
from what managers would prefer)? Finally, what is the connection between the
choice of performance measures and the likelihood that managers will manipulate
the results?

LITERATURE REVIEW
Recent surveys on performance measurement have documented managers’
widespread dissatisfaction with current performance measures. For example, the
Institute of Management Accountants’ annual surveys on performance measure-
ment practices have consistently shown, since the 1990s, that more than half of
the respondents rate their companies’ performance measurement systems as poor
or, at best, adequate (see summary of the recent survey in Frigo, 2002). The IMA
survey in 2001 indicated that non-financial metrics related to customers, internal
processes and learning and growth (three perspectives proposed in the balanced
scorecard framework developed by Kaplan & Norton, 1992, 1996, 2001) received
lower ratings than financial metrics.
A large-scale study conducted by the American Institute of Certified Public
Accountants showed that only 35% of the respondents regarded their company’s
performance measurement system as effective (AICPA & Maisel, 2001). Conse-
quently, about a third of the respondents stated that their organizations have imple-
mented changes in their performance measurement system in the last two years.
Both the IMA and AICPA surveys revealed that managers believed that more
extensive use of non-financial measures would improve their organizations’
ability to assess and enhance performance in strategically critical areas such as
customer performance, product innovation and employee capabilities. In the 2001
IMA survey, 87% of the respondents argued that non-financial metrics should be
used more extensively in their businesses. A common theme in these surveys is
the assumption that non-financial performance measures are more future-oriented
than traditional financial measures, so they should assist managers in making
decisions that will benefit their organizations in the long run. Non-financial
measures are often deemed to be better leading indicators or drivers of future
performance, while financial measures serve the purpose of providing lagging
information about past performance (Epstein et al., 2000; Ittner & Larcker, 1998a;
Kaplan & Norton, 1996, 2001). In addition, non-financial metrics tend to capture
input or process measures, as opposed to outcomes (Simons, 2000).
A balanced use of financial and non-financial measures, carefully linked to
strategic objectives and integrated through cause-and-effect relationships, should
facilitate strategy implementation and motivate superior performance, as argued
by the increasing number of supporters of the balanced scorecard approach orig-
inally developed by Kaplan and Norton (1992). Yet, the literature on the linkages
between the choice of performance measures and actual overall performance is
only beginning to produce affirmative results (e.g. Banker et al., 2000b; Malina
& Selto, 2001). For example, recent studies have shown that sales performance
is positively driven by customer satisfaction (Ittner & Larcker, 1998b), on-time
delivery (Nagar & Rajan, 2001) and employee satisfaction (Banker et al., 2000a).
Some causal relationships among sets of performance measures are beginning
to surface: for example, Rucci et al. (1998) provide evidence that an improvement
in employee attitudes drives an increase in customer satisfaction which, in turn,
drives an increase in revenue growth. So far, however, despite many calls for
changes in performance measurement systems, there is little systematic evidence
on how financial and non-financial performance metrics have been integrated in
practice in the design of incentive compensation plans, and even less evidence on
their impact on organizational performance.
This study addresses this issue, and proposes a new potentially fruitful area
of research: we investigate the gap or lack of fit between current performance
measurement systems and managers’ preferred performance measures, and we
explore whether this fit is related to managerial performance and attitudes towards
manipulating results to achieve desired performance targets.
THEORETICAL DEVELOPMENT
The person-organization fit literature is well-established in the human resource
management field (e.g. Chatman, 1989; O’Reilly et al., 1991; Posner, 1992). Fit
is defined in that literature as “the congruence between the norms and values of
organizations and the values of persons” (Chatman, 1989). Going beyond simply
measuring fit, organizational researchers have studied how individual values
interact with situations (e.g. incentives) to affect the attitudes and behaviors of
people in the workplace (O’Reilly et al., 1991). Person-organization fit has been
shown to influence vocational and job choices, job satisfaction, work adjustment,
organizational commitment and climate, and employee turnover (Chatman,
1989, 1991; O’Reilly et al., 1991). The importance of person-organization fit
for the ethical work climate has also received much attention by researchers in
business ethics (e.g. Sims & Kroeck, 1994). Yet, in the accounting literature,
person-organization fit has received only limited attention.
Management accounting researchers have mainly focused on fit from a macro
perspective, using either contingency theory (e.g. Merchant, 1984) or national
culture (Chow et al., 1996, 1999) to examine the effectiveness of actual control
system characteristics. Those studies have contributed additional evidence to
suggest that lack of fit can potentially increase the costs of attracting and retaining
employees, and may induce behaviors contrary to the firm’s interests.
In this study we attempt to extend prior research on management control
system fit in three main ways. First, we take a more focused perspective and
study fit at the level of interactions between individual managers and the specific
performance measurement systems that govern their work. Second, instead of
assuming what management preferences might be, we actually asked managers
to state their preferences for seventeen performance metrics commonly used
in performance-based incentive plans. This allowed us to directly quantify the
measurement gap or lack of fit. Third, we address a common criticism of previous
performance measurement research by investigating the empirical relationships
between selection of performance measures and actual performance.
We expect that existing performance measurement systems vary in how closely
they match managers’ preferences for performance metrics. When organizations
make decisions about which control system practices they will adopt, their choices
reflect primarily the values and preferences of those in charge of designing control
systems. There is little guarantee that those choices will be similarly valued by
all managers subject to such control systems, as demonstrated in previous studies
of the person-organization fit in the performance measurement and incentive
compensation areas (Bento & Ferreira, 1992; Bento & White, 1998). While
selection, socialization and turnover are all powerful mechanisms to improve
person-organization fit, significant disagreements between individual managers’
preferences and actual performance measurement system characteristics may still
prevail and are thus worth investigating.
We also expect, based on person-organization interactional research (Chatman,
1989) that organizational performance and gaming attitudes will depend not only
on situational variables (the characteristics of extant performance measurement
systems implemented by companies) but also on personal preferences (based on
the values held by individuals working in those organizations). As performance
measurement systems more closely match managers’ preferences, a series of
positive effects (increased motivation, reduced job-related tension, and enhanced
organizational commitment) will take place, leading to improved performance.
By contrast, when a conflict is established by the lack of fit between managers’
preferences and existing performance measurement systems, there will be an
increased chance that managers will respond by engaging in various gaming
behaviors, in order to reach short-term performance targets without pursuing the
actions that the performance metrics were designed to induce. These expectations
are summarized by the following research proposition:
The magnitude of fit between a manager’s preference for performance measures and his or her
company’s actual use of such measures for incentive pay purposes is significantly associated
with performance and the ethical climate in the organization.

While the purpose of this exploratory study is not to provide a direct and
exhaustive test of this proposition, the results from this survey do provide initial
empirical evidence on the relationships among performance metrics, fit, and
organizational performance.

METHOD

Sample

We obtained support for this study from a sample of 100 managers in the mid-
Atlantic area. We conducted initial interviews with the managers to explain the
nature of the project, to ensure participation, and to verify that their positions in
their firms included budget responsibility.
The sample covered a wide range of industries (30% in manufacturing, 70%
in the service sector). To ensure confidentiality, we asked each participating
manager to complete the survey questionnaire and mail it to us anonymously. We
received a total of 64 completed questionnaires, yielding a 64% response rate.
We attribute this unusually high response rate (relative to other survey studies) to
our efforts to ensure participation prior to distribution of the questionnaires. The
managers in our survey are responsible for units with average budgeted revenues
of $40,000,000 and have been in their position for an average of five years.

Measures

Performance Measures
The survey questionnaire included nine financial measures (sales growth, volume
of orders, shipments, expense control, profitability, receivables management, in-
ventory management, efficiency gains, cash flow) and eight non-financial measures
(quality of products or services, customer satisfaction, new product introduction,
on-time delivery, accomplishment of project milestones, achievement of strategic
objectives, market share, and employee satisfaction) selected from the literature
on performance measurement.

Gap Measure
We asked the managers to rate, for each financial and non-financial performance
measure, the extent to which it actually affected his or her compensation (actual
measures). We also asked the managers to rate, for each measure, the extent
to which he or she would like that measure to affect his or her compensation
(preferred measures). Both types of ratings used a Likert-type scale ranging
from “1” (very little) to “7” (very much). Consistent with the literature on
person-organization fit (e.g. Chow et al., 1999), we computed the “measurement
gap” or lack of fit by subtracting actual use from preferred use of performance
measures for compensation purposes. A positive measurement gap indicates
that managers would prefer more emphasis on those measures. A negative gap
indicates that managers would prefer less emphasis on those measures.
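As a minimal illustration of this computation (the ratings below are invented, not survey responses), the gap for each measure is simply the preferred rating minus the actual rating on the 1 to 7 scale:

# Hypothetical 1-7 Likert ratings for a single respondent (not survey data).
actual = {"sales_growth": 6, "efficiency_gains": 3, "employee_satisfaction": 3}
preferred = {"sales_growth": 3, "efficiency_gains": 5, "employee_satisfaction": 5}

# Measurement gap = preferred use minus actual use for each measure.
# Positive gap: the manager wants more emphasis; negative gap: less emphasis.
gap = {m: preferred[m] - actual[m] for m in actual}
print(gap)  # {'sales_growth': -3, 'efficiency_gains': 2, 'employee_satisfaction': 2}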

Performance
We measured performance using a nine-item instrument that has been widely
applied by management accounting researchers (e.g. Brownell & Hirst, 1986;
Kren, 1992; Nouri et al., 1995). This instrument, while relatively subjective,
was employed because more objective performance indicators are usually not
available for lower-level responsibility centers. Matching self-evaluations with
supervisor evaluations was not possible given that the scope of this study (which
included earnings management issues) required that the various responsibility
center managers could rely on absolute survey confidentiality within their firms.

Gaming
We described various scenarios in which managers decide to improve reported
profits in the short-term, and asked the respondents to rate the likelihood that they
would make such decisions. We adapted the scenarios from previous studies on
earnings management (Bruns & Merchant, 1989, 1990). For example, the scenarios
described decisions directed at either increasing sales (e.g. shipping earlier than
scheduled in order to meet a budget target), or reducing expenses (e.g. postponing
discretionary expenses to another budgeting period).

PERFORMANCE MEASURES FOR


COMPENSATION PURPOSES
Table 1 presents the distributions of all performance measures (actual and pre-
ferred). All 17 measures included in the survey were used in the incentive plans
of at least some of the managers surveyed. Based on the response scales ranging
from 1 (very little) to 7 (very much), we calculated the percentage of respondents
who reported high use or high preference for each performance measure as the
cumulative percentage of respondents who rated actual usage or preference for
that metric as 5 or higher. This convention is used in Figs. 1 through 4.
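As a small sketch of this convention (with made-up ratings rather than the survey responses), the "high use" percentage for a measure is the share of respondents who rated it 5 or higher:

# Hypothetical 1-7 ratings from eight respondents for one measure.
ratings = [7, 5, 4, 6, 2, 5, 3, 7]

# "High use" (or "high preference") = percentage rating the measure 5 or higher.
high_use_pct = 100 * sum(r >= 5 for r in ratings) / len(ratings)
print(f"{high_use_pct:.1f}% report high use")  # 62.5% report high use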

Actual Measures

Figures 1 and 2 show the financial and non-financial measures the respondents
reported as the most commonly used for compensation purposes. At least half
of the managers reported that their compensation is influenced largely by their
performance measured by sales, receivables management, volume of orders, and
profitability. Over 70% of the respondents reported that non-financial measures
play a major role in determining their compensation. The three most widely
used non-financial measures are new product introduction, customer satisfaction,
and achievement of strategic objectives. This result is similar to the findings of
the AICPA survey (AICPA & Maisel, 2001) and the Conference Board’s survey
(Gates, 1999).

Preferred Measures

When asked what performance factors they would like to see being used to affect
their own compensation, over half of the managers reported preferences for
efficiency gains and profitability for financial measures (Fig. 3), while employee
satisfaction, achievement of strategic objectives and quality are the three most
preferred non-financial measures (Fig. 4). Their preferences may be explained
by several different factors. Managers may feel that these are the aspects of
Table 1. Distributions of Actual and Preferred Performance Measures.
Columns for each measure: theoretical range (the same for actual and preferred ratings), then actual measures (actual range, mean, standard deviation), then preferred measures (actual range, mean, standard deviation).
Sales growth 1–7 1–7 6.00 1.44 1–7 3.15 1.89
Volume of orders 1–7 1–7 4.48 1.96 1–7 3.32 2.01
Shipments 1–7 1–7 3.80 2.10 1–7 3.14 2.02
Expense control 1–7 1–7 3.33 2.15 1–7 3.66 1.84
Profitability 1–7 1–7 4.09 1.78 1–7 5.00 1.75
Receivables management 1–7 1–7 5.23 1.66 1–7 2.14 1.53
Inventory management 1–7 1–7 2.64 1.87 1–7 2.59 1.84
Efficiency gains 1–7 1–7 3.56 1.80 1–7 4.92 1.56
Cash flow 1–7 1–7 3.33 1.99 1–7 2.84 1.75
Quality of products or services 1–7 1–7 3.17 2.07 1–7 4.06 2.00
Customer satisfaction 1–7 1–7 4.95 1.86 1–7 4.06 2.02
New product introduction 1–7 1–7 5.19 1.82 1–7 2.84 2.02
On-time delivery 1–7 1–7 3.38 1.96 1–7 3.63 2.04
Accomplishment of project milestones 1–7 1–7 4.64 1.85 1–7 3.83 2.02
Achievement of strategic objectives 1–7 1–7 4.92 1.49 1–7 4.20 2.01
Market share 1–7 1–7 2.45 1.70 1–7 3.25 2.04
Employee satisfaction 1–7 1–7 3.11 1.90 1–7 4.70 1.60

Note: Extent to which the performance measure affects the manager’s compensation (1 = very little, 7 = very much).
Fig. 1. Actual Use of Financial Performance Measures for Compensation Purposes.

Fig. 2. Actual Use of Non-Financial Performance Measures for Compensation Purposes.
performance that they can most directly control or they may consider that
these factors most closely reflect the decisions that they make on a day-to-day
basis. Preferences for particular performance measures may also be driven
by the managers’ ability in those areas, so that managers seek a performance
measurement system that closely matches their skill set. Interestingly enough,
two of the most commonly used measures are reported among the least preferred:
receivables management and new product introduction.

Measurement Gap

We performed some additional analysis to calculate the level of fit between actual
practices and managerial preferences with respect to performance measures
affecting compensation. As explained in the method section above, the gap mea-
sure was obtained by subtracting actual use from preferred use of performance
measures for compensation purposes.

Fig. 3. Preferred Use of Financial Performance Measures for Compensation Purposes.

Fig. 4. Preferred Use of Non-Financial Performance Measures for Compensation Purposes.

Figure 5 shows the measurement gap for
financial measures, while the gap for non-financial measures appears in Fig. 6.
Considering both financial and non-financial performance measures combined,
the measurement gap is smallest for inventory management, expense controls,
and on-time delivery; it is largest for receivables management, new product
introduction, and sales. On average, there is greater disagreement (in absolute
terms) with the use of non-financial than financial measures when companies
determine how much managers are paid. The relative signs of the gap measure
provide additional information: the respondents reported that they would rather
have less emphasis on receivables management, new product introduction, and
sales for compensation purposes. One possible interpretation is that managers
would prefer less emphasis on these measures because they do not believe that
these measures adequately capture their performance. Conversely, the managers
responded that they would rather have more emphasis on employee satisfaction and
efficiency gains.

Balancing Financial and Non-financial Measures

In this survey we found that most companies are, in fact, trying to balance out
the use of financial and non-financial performance measures. As the correlation
Fig. 5. Financial Performance Measurement Gap: Preferred Use Minus Actual Use.

matrix of Table 2 shows, there are some significant correlations among financial
and non-financial metrics.
For example, the respondents who reported that their compensation is largely
influenced by sales also reported that their pay is contingent on achieving strategic
objectives. Similarly, managers who are paid based on how well they perform with
respect to efficiency gains also tend to be the ones who reported that employee
satisfaction plays a major role in determining their rewards. Managers in those
situations would have incentives to cut costs, but not in ways that would hurt
employee morale so much that the short-term cost savings would end up hurting
long-term profits and growth. Likewise, we found a strong correlation between the
emphasis on cash flows and quality for compensation purposes. This combination
encourages managers to invest in quality without losing sight of the need to
Fig. 6. Non-Financial Performance Measurement Gap: Preferred Use Minus Actual Use.

generate cash flows. Those results are consistent with existing evidence that com-
panies are striving to get a more balanced picture of performance in order to avoid
distortions caused by a concern directed exclusively toward short-term, historic-
based financial measures (see evidence summarized by Malina & Selto, 2001).
This balance depends on the availability of non-financial indicators that can
deliver accurate, relevant and timely information on how well managers are doing
in those areas. But in many companies those measures are not readily available.
Besides measurement difficulties, companies also face serious constraints with
respect to what their information systems can deliver. According to a survey by
The Conference Board and the international consulting firm A. T. Kearney, Inc.,
57% of the respondents said that their companies’ information technology limited
their ability to implement the necessary changes in their current performance
Table 2. Correlation Matrix for Actual Financial and Non-Financial Performance Measures Used.
Measures 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
1 Sales growth
2 Volume of orders 0.37***
3 Shipments 0.20 0.58***
4 Expense control 0.07 0.23* 0.55***
5 Profitability 0.10 0.25** 0.15 0.49***
6 Receivables management 0.28** 0.55*** 0.27** 0.30** 0.44***
7 Inventory management 0.07 0.32*** 0.36*** 0.44*** 0.26** 0.45***
8 Efficiency gains 0.19 0.06 −0.03 0.05 0.05 0.02 0.02
9 Cash flow −0.03 0.06 0.28** 0.59*** 0.30** 0.17 0.55*** −0.03
10 Quality of products or services 0.08 0.09 0.28** 0.60*** 0.27** 0.32*** 0.59*** 0.02 0.61***
11 Customer satisfaction 0.35*** 0.26** 0.44*** 0.38*** 0.28** 0.24** 0.16 0.22* 0.33*** 0.23*
12 New product introduction 0.30** 0.23* 0.40*** 0.30** 0.26** 0.27** 0.26** 0.18 0.11 0.18 0.53***
13 On-time delivery 0.34*** 0.57*** 0.40*** 0.15 0.19 0.26** 0.10 0.19 −0.01 0.07 0.37*** 0.36***
14 Accomplishment of project milestones 0.25** 0.13 0.26** 0.38*** 0.20 0.13 0.31** 0.29** 0.50*** 0.43*** 0.44*** 0.44*** 0.29**
15 Achievement of strategic objectives 0.75*** 0.28** 0.24* 0.20 0.18 0.14 −0.06 0.11 −0.04 0.10 0.37*** 0.31** 0.29** 0.30**
16 Market share 0.39*** 0.40*** 0.22* 0.22* 0.23* 0.18 0.16 0.51*** 0.03 0.18 0.30** 0.20 0.46*** 0.20 0.38***
17 Employee satisfaction 0.37*** 0.10 0.05 0.05 −0.08 0.00 −0.00 0.65*** −0.08 −0.06 0.33*** 0.28** 0.33*** 0.21* 0.28** 0.50***
∗ p < 0.10.
∗∗ p < 0.05.
∗∗∗ p < 0.01.
measurement systems (Gates, 1999). Similarly, more than half of the respondents
to the AICPA survey anticipated technology changes in their organizations in
the next year to 18 months, and 79% rated the quality of information in their
performance measurement systems as poor to adequate (AICPA & Maisel, 2001).

IMPACT ON PERFORMANCE
Firms invest large amounts of resources in designing, implementing and
maintaining performance measurement systems. It is thus critical to assess
whether the choice of performance measures has had any significant impact on
actual overall performance and on managerial decisions. In this study we used
Pearson correlation coefficients to evaluate the relationship between performance
measures and managerial performance.
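As a minimal sketch of this kind of test (the scores below are invented, not the survey data, and SciPy is assumed to be available), the Pearson coefficient and its two-tailed p value can be obtained as follows:

from scipy.stats import pearsonr

# Hypothetical data: emphasis placed on a measure (1-7) and overall performance
# scores for a handful of respondents (not the survey data).
emphasis = [6, 4, 7, 3, 5, 2, 6, 4]
performance = [5, 4, 6, 3, 5, 3, 6, 4]

# Pearson correlation coefficient and two-tailed significance (p value),
# the statistics reported in Tables 3 through 8.
r, p_value = pearsonr(emphasis, performance)
print(f"r = {r:.2f}, p = {p_value:.3f}")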

Table 3. Correlations Between Actual Use of Performance Measures and Performance.
Measures    Correlation Coefficient with Performance    2-Tailed Significance (p Values)

Panel A: Correlation between actual financial measures used for compensation purposes and
performance
Sales growth 0.13 (0.319)
Volume of orders 0.04 (0.782)
Shipments 0.17 (0.178)
Expense control 0.15 (0.224)
Profitability 0.17 (0.169)
Receivables management −0.03 (0.801)
Inventory management 0 (0.971)
Efficiency gains 0.26 (0.039)*
Cash flow −0.09 (0.483)
Panel B: Correlation between actual non-financial measures used for compensation purposes and
performance
Quality of products or services 0.16 (0.207)
Customer satisfaction 0.19 (0.128)
New product introduction 0.23 (0.063)*
On-time delivery 0.16 (0.198)
Accomplishment of project milestones 0.14 (0.258)
Achievement of strategic objectives 0.37 (0.003)*
Market share 0.33 (0.008)*
Employee satisfaction 0.04 (0.736)
∗ Significant at p < 0.10.
Panels A and B of Table 3 show that managers who have their compensation
primarily tied to efficiency gains, new product introduction, achievement of
strategic objectives, and market share tend to be the ones who perform the best
overall. This result is consistent with Kaplan and Norton’s central argument
that individual performance measures must be directly tied to strategy (Kaplan
& Norton, 2001). This evidence also confirms what many companies have
learned, albeit the hard way: there has to be a strong link between financial and
non-financial performance measures, lest the two work against each
other. For example, a company that may be doing well in terms of achieving
certain strategic objectives, introducing new products and controlling a significant
market share may still face financial difficulties, if efficiency is not properly
considered when evaluating and rewarding its managers.
In Panels A and B of Table 4 the correlations between managerial preferences
for performance measures and overall performance are reported. The three

Table 4. Correlations Between Preferred Use of Performance Measures and Performance.
Measures    Correlation Coefficient with Performance    2-Tailed Significance (p Values)

Panel A: Correlation between preferred financial measures used for compensation purposes and
performance
Sales growth 0.25 (0.049)*
Volume of orders 0.32 (0.009)*
Shipments 0.14 (0.283)
Expense control 0.27 (0.028)*
Profitability 0.15 (0.244)
Receivables management 0.08 (0.506)
Inventory management −0.05 (0.696)
Efficiency gains −0.04 (0.738)
Cash flow 0.31 (0.014)*
Panel B: Correlation between preferred non-financial measures used for compensation purposes and
performance
Quality of products or services 0.28 (0.023)*
Customer satisfaction 0.30 (0.018)*
New product introduction 0.14 (0.254)
On-time delivery 0.28 (0.024)*
Accomplishment of project milestones 0.47 (0.000)*
Achievement of strategic objectives 0.36 (0.003)*
Market share 0.11 (0.371)
Employee satisfaction −0.06 (0.654)
∗ Significant at p < 0.10.
strongest correlations between preferred performance measures and performance
are found in the accomplishment of project milestones, the achievement of strate-
gic objectives, and the volume of customer orders. In other words, top-performing
managers tend to prefer these measures when choosing which factors should
be used to determine their compensation. Again, we observe the relevance of
non-financial measures for the identification of superior performance.

Consensus and Performance

One major challenge in trying to implement changes in the performance measure-
ment system is to ensure buy-in from business-unit managers and other employees
(AICPA & Maisel, 2001; Kaplan & Norton, 1996; Leahy, 2000). But this challenge
does not necessarily mean that companies should select the performance measures

Table 5. Correlations Between Measurement Gap (Absolute Differences Between Preferred and Actual Use of Performance Measures) and Performance.
Measures    Correlation Coefficient with Performance    2-Tailed Significance (p Values)

Panel A: Correlation between measurement gap in financial measures used for compensation
purposes and performance
Sales growth −0.25 (0.044)*
Volume of orders −0.42 (0.000)*
Shipments −0.03 (0.827)
Expense control −0.06 (0.662)
Profitability −0.14 (0.255)
Receivables management −0.11 (0.401)
Inventory management −0.15 (0.226)
Efficiency gains −0.21 (0.092)*
Cash flow −0.08 (0.506)
Panel B: Correlation between measurement gap in non-financial measures used for compensation
purposes and performance
Quality of products or services 0.02 (0.860)
Customer satisfaction −0.19 (0.123)
New product introduction −0.18 (0.148)
On-time delivery 0.05 (0.707)
Accomplishment of project milestones 0.06 (0.630)
Achievement of strategic objectives −0.35 (0.004)*
Market share −0.15 (0.233)
Employee satisfaction −0.09 (0.461)
∗ Significant at p < 0.10.
that managers want to see as factors influencing their compensation. Some experts
argue that striving too hard to achieve consensus may paralyze an organization’s
effort to implement a new performance measurement system. Moreover, an ex-
cessive concern about consensus may lead to an incoherent measurement system,
one that uses diverse measures to please groups with different interests but has no
clear link to overall strategy.
We explored the issue of consensus by conducting further correlation analysis
on the relationship between the level of disagreement with the performance
measurement system and performance. The results in Table 5 show that the
magnitude of the disagreements do matter. Disagreements with the use of
volume of customer orders, strategic objectives, sales, and efficiency gains for
compensation purposes are associated with lower performance. One possible
explanation is that managers who disagree strongly about having order volume,
strategic objectives, sales, and efficiency gains influence their compensation lack
the motivation necessary to achieve high performance levels. These measurement

Table 6. Correlations Between Actual Use of Performance Measures and Gaming.
Measures    Correlation Coefficient with Gaming    2-Tailed Significance (p Values)

Panel A: Correlation between actual financial measures used for compensation purposes and gaming
Sales growth 0.18 (0.145)
Volume of orders 0.02 (0.862)
Shipments 0.01 (0.929)
Expense control 0.06 (0.638)
Profitability −0.05 (0.666)
Receivables management 0.12 (0.356)
Inventory management −0.04 (0.738)
Efficiency gains 0.16 (0.220)
Cash flow −0.02 (0.875)
Panel B: Correlation between actual non-financial measures used for compensation purposes and
gaming
Quality of products or services 0.05 (0.694)
Customer satisfaction −0.10 (0.422)
New product introduction 0.01 (0.933)
On-time delivery −0.12 (0.329)
Accomplishment of project milestones 0.08 (0.526)
Achievement of strategic objectives 0.30 (0.018)*
Market share 0.21 (0.096)*
Employee satisfaction 0.11 (0.398)
∗ Significant at p < 0.10.
gaps and their relationship with performance suggest that companies need to
devote more attention to the preferences of participants in pay-for-performance
plans, and involve them in the process of selecting performance measures to
be used for compensation purposes. This result is in sharp contrast with one
finding in the Conference Board survey mentioned above: while recognizing the
potential resistance to change in performance measurement systems, only 9% of
the respondents said they would “consider identifying key stakeholder reasons
for resisting the strategic performance measurement effort” (Gates, 1999, p. 6).

EARNINGS MANAGEMENT
Critics of non-financial performance measures often point out that these measures
are more easily manipulated than financial measures, since they are neither audited

Table 7. Correlations Between Preferred Use of Performance Measures and Gaming.
Measures    Correlation Coefficient with Gaming    2-Tailed Significance (p Values)

Panel A: Correlation between preferred financial measures used for compensation purposes and
gaming
Sales growth 0.13 (0.313)
Volume of orders 0.18 (0.158)
Shipments 0.07 (0.607)
Expense control −0.13 (0.299)
Profitability 0.10 (0.417)
Receivables management 0.03 (0.794)
Inventory management −0.04 (0.751)
Efficiency gains 0.06 (0.664)
Cash flow 0.21 (0.090)*
Panel B: Correlation between preferred non-financial measures used for compensation purposes and
gaming
Quality of products or services −0.00 (0.984)
Customer satisfaction 0.21 (0.089)*
New product introduction 0.38 (0.766)
On-time delivery 0.20 (0.108)
Accomplishment of project milestones 0.38 (0.002)*
Achievement of strategic objectives 0.28 (0.027)*
Market share 0.08 (0.516)
Employee satisfaction 0.06 (0.650)
∗ Significant at p < 0.10.
nor subject to generally accepted accounting principles. Hence we used Pearson
correlation coefficients to examine the extent to which the type of performance
measure used for compensation purposes is associated with managers engaging in
earnings-management practices to achieve short-term results.
Tables 6 and 7 show the correlations between the managers’ responses
concerning performance measures used for compensation purposes and the
likelihood that they would engage in gaming the results. In Tables 6 and 7, a
positive correlation coefficient means that a greater emphasis on those measures
is associated with a greater likelihood of gaming. As shown in Table 6, managers
whose compensation actually depends largely on the achievement of strategic
objectives and market share reported a higher likelihood that they would engage
in earnings-management practices to manipulate the results. Interestingly, these
same two measures were significantly correlated with performance (see Table 3).
In addition, Table 7 shows that managers who reported a higher preference for

Table 8. Correlations Between Measurement Gap (Preferred Minus Actual Use of Performance Measures) and Gaming.
Measures    Correlation Coefficient with Gaming    2-Tailed Significance (p Values)

Panel A: Correlation between measurement gap in financial measures used for compensation
purposes and gaming
Sales growth −0.01 (0.931)
Volume of orders 0.15 (0.225)
Shipments 0.06 (0.665)
Expense control 0.15 (0.228)
Profitability 0.14 (0.269)
Receivables management −0.08 (0.535)
Inventory management 0.00 (0.983)
Efficiency gains −0.10 (0.425)
Cash flow 0.19 (0.138)
Panel B: Correlation between measurement gap in non-financial measures used for compensation
purposes and gaming
Quality of products or services −0.04 (0.759)
Customer satisfaction 0.30 (0.018)*
New product introduction 0.02 (0.854)
On-time delivery 0.27 (0.029)*
Accomplishment of project milestones 0.28 (0.025)*
Achievement of strategic objectives 0.06 (0.666)
Market share −0.13 (0.306)
Employee satisfaction −0.07 (0.589)
∗ Significant at p < 0.10.
the use of project milestones, strategic objectives, customer satisfaction, and
cash flow for compensation purposes also reported that they are more likely
to manage earnings. This result suggests that some managers may prefer those
measures precisely because they believe that they can manipulate their reported
performance in those areas. Again, preferences for all four of these measures
were found to be significantly related to performance (see Table 4).
Furthermore, the results in Table 8 show that disagreements regarding the use
of customer satisfaction, project milestones, and on-time delivery as non-financial
performance factors that influence compensation are associated with a higher
likelihood of gaming. These correlations suggest that a larger measurement gap in
the use of these non-financial measures is positively related to managers reporting
that they are more likely to manipulate results. Disagreements related to the use
of financial measures were not found to be associated with gaming (see Panel
A of Table 8). One possible explanation for this surprising result is that financial
measures are perceived as less susceptible to manipulation; yet, recent corporate
scandals involving deceptive financial reporting have made this explanation
less plausible.

SUMMARY AND RELEVANCE OF THE FINDINGS


Recent studies show that companies are implementing significant changes in their
performance measurement systems. Those systems are becoming more complex as
organizations try to balance several measures, both financial and non-financial, to
figure out who the top performers are, and to motivate and reward their continued
performance. Over 70% of the companies that we surveyed place a strong emphasis
on non-financial performance measures when calculating how much to pay their
managers.
Our results indicate that several non-financial measures are significantly related
to superior overall performance. In particular, we find that the achievement of
strategic objectives is a metric significantly correlated with performance. This
measure is one of the three most commonly used non-financial measures, and one
of the three most preferred measures.
Managers who are most critical of the degree of emphasis placed on volume of
customer orders, strategic objectives, efficiency gains, or sales for compensation
purposes show the lowest performance. Given that those measures are among the
ones that most often influence managerial compensation, companies should pay
special attention to how they are used, or their intended incentive effects may
never materialize. Instead, managers’ dissatisfaction with the use of those mea-
sures may actually hamper performance. We also find that managers who disagree
the most with the degree of importance given to customer satisfaction, on-time
delivery, and project milestones are the ones most likely to manipulate reported
performance.
The results we obtained relating the measurement gap (the difference between
preferred and actual measures) to overall performance and gaming suggest that
the process of designing a measurement system may be at least as important as
the actual measures selected in the end. Companies need to involve those who will
participate in pay-for-performance plans in the process of selecting appropriate
performance measures. Teams in charge of designing measurement systems could
benefit from an awareness of managers’ preferences, as those preferences may
reveal the measures that most closely reflect what managers can control.
A well-managed design process should increase motivation and commitment,
promote improved performance, and reduce the incidence of a gaming attitude.
This is particularly true in companies that value participation and consensus. The
design process should encourage open, honest dialogue on strategic objectives, and
it should involve as many incentive plan participants as possible. When designers
and participants reach an agreement about strategic objectives, then they can
select specific measures based on those objectives without risking a loss of focus.
Managing this design process presents quite a challenge for an organization’s
executive team, for this team must strive to stimulate open discussion while
maintaining a coherent vision. The design process may be seriously constrained
by company politics. This process may become even more difficult when conflicts
arise concerning how non-financial measures should be defined. In our survey we
found that managers on average disagree more with their company’s use of non-
financial measures than with their use of financial measures. Such disagreement
results at least in part from inexperience with non-financial measurements.
Our finding of a high use of non-financial measures in pay-for-performance
plans presents an important challenge to the management accounting profession.
While most management accountants have been trained to produce, analyze and
communicate financial information, few are ready for the technical and human
adjustments necessary to deal with non-financial information that is useful for
managers. Not surprisingly, the AICPA survey anticipated that the level of effort
finance professionals will be devoting to performance measurement will increase
in the near future (AICPA & Maisel, 2001). Furthermore, the need to develop and
implement non-financial measures brings additional pressure to bear on informa-
tion systems to provide reliable information on how well managers are doing in
a wide range of areas. In this information age, it is increasingly important to have
information systems capable of supporting the complexities of cutting-edge perfor-
mance measurement systems. Current surveys show that such information systems
are in short supply.
Future research into the effectiveness of performance measurement systems
should benefit from examining management’s preferences regarding the choice of
performance metrics, as we introduced in this study. In relating performance met-
rics to managerial performance or other outcome variables, management account-
ing researchers need to pay adequate attention to the “measurement gap” variable
we proposed in this study. As suggested by the results from this survey, managers’
disagreements with the performance measures chosen to determine their pay sig-
nificantly influence performance and gaming. Omission of the measurement gap
variable may well explain some of the conflicting evidence in the existing literature
on the performance effects of performance metrics included in incentive plans.
The results of this study also indicate that caution is in order before including
non-financial performance measures in incentive plans. We found that managers
who reported a high preference for certain non-financial measures also reported
they would most likely engage in games to manipulate the short-term results and
thereby increase their pay. Hence designers of performance measurement systems
should carefully define these measures in order to ensure that they are as clear and
objective as possible, and that all of the parties involved understand and accept
them. This increased awareness and communication should also help ensure that,
as companies change their performance measurement systems for compensation
purposes, they will be able to narrow the measurement gap.

ACKNOWLEDGMENT
Research support from a grant by the Merrick School of Business is gratefully
acknowledged.

REFERENCES
American Institute of Certified Public Accountants & Maisel, L. (2001). Performance measurement
practices survey results. Jersey City, NJ: AICPA.
Banker, R., Konstans, C., & Mashruwala, R. (2000a). A contextual study of links between employee
satisfaction, employee turnover, customer satisfaction and financial performance. Working
Paper. University of Texas at Dallas.
Banker, R., Potter, G., & Srinivasan, D. (2000b). An empirical investigation of an incentive plan that
includes nonfinancial performance measures. The Accounting Review (January), 65–92.
Bento, R., & Ferreira, L. (1992). Incentive pay and organizational culture. In: W. Bruns (Ed.),
Performance Measurement, Evaluation and Incentives (pp. 157–180). Boston: Harvard
Business School Press.
Bento, R., & White, L. (1998). Participant values and incentive plans. Human Resource Management
Journal (Spring), 47–59.
Brownell, P., & Hirst, M. (1986). Reliance on accounting information, budgetary participation,
and task uncertainty: Tests of a three-way interaction. Journal of Accounting Research,
241–249.
Bruns, W., & Merchant, K. (1989). Ethics test for everyday managers. Harvard Business Review
(March–April), 220–221.
Bruns, W., & Merchant, K. (1990). The dangerous morality of managing earnings. Management
Accounting (August), 22–25.
Chatman, J. (1989). Improving interactional organizational research: A model of person-organization
fit. Academy of Management Review, 333–349.
Chatman, J. (1991). Matching people and organizations: Selection and socialization in public
accounting firms. Administrative Science Quarterly, 36, 459–484.
Chow, C., Kato, Y., & Merchant, K. (1996). The use of organizational controls and their effects
on data manipulation and management myopia: A Japan vs. U.S. comparison. Accounting,
Organizations and Society, 21, 175–192.
Chow, C., Shields, M., & Wu, A. (1999). The importance of national culture in the design of management
controls for multi-national operations. Accounting, Organizations and Society, 24, 441–461.
Epstein, M. J., Kumar, P., & Westbrook, R. A. (2000). The drivers of customer and corporate
profitability: Modeling, measuring, and managing the causal relationships. Advances in
Management Accounting, 9, 43–72.
Frigo, M. (2001). 2001 Cost management group survey on performance measurement. Montvale, NJ:
Institute of Management Accountants.
Frigo, M. (2002). Nonfinancial performance measures and strategy execution. Strategic Finance
(August), 6–9.
Gates, S. (1999). Aligning strategic performance measures and results. New York: Conference Board.
Ittner, C., & Larcker, D. (1998a). Innovations in performance measurement: Trends and research
implications. Journal of Management Accounting Research, 10, 205–238.
Ittner, C., & Larcker, D. (1998b). Are nonfinancial measures leading indicators of financial per-
formance? An analysis of customer satisfaction. Journal of Accounting Research (Suppl.),
1–35.
Kaplan, R., & Norton, D. (1992). The balanced scorecard: Measures that drive performance. Harvard
Business Review (January–February), 71–79.
Kaplan, R., & Norton, D. (1996). The balanced scorecard. Boston: Harvard Business School Press.
Kaplan, R., & Norton, D. (2001). The strategy-focused organization. Boston: Harvard Business School Press.
Kren, L. (1992). Budgetary participation and managerial performance. The Accounting Review,
511–526.
Leahy, T. (2000). All the right moves. Business Finance (April), 27–32.
Malina, M., & Selto, F. (2001). Communicating and controlling strategy: An empirical study of the
effectiveness of the balanced scorecard. Journal of Management Accounting Research, 47–90.
Merchant, K. (1984). Influences on departmental budgeting: An empirical examination of a
contingency model. Accounting, Organizations and Society, 9, 291–307.
Nagar, V., & Rajan, M. (2001). The revenue implications of financial and operational measures of
product quality. The Accounting Review (October), 495–513.
Nouri, H., Blau, G., & Shahid, A. (1995). The effect of socially desirable responding (SDR) on
the relation between budgetary participation and self-reported job performance. Advances in
Management Accounting, 163–177.
O’Reilly, C., Chatman, J., & Caldwell, D. (1991). People and organizational culture: A profile compar-
ison approach to assessing person-organization fit. Academy of Management Journal, 487–516.
Posner, B. (1992). Person-organization values congruence: No support for individual differences as a
moderating influence. Human Relations, 351–361.
Rucci, A., Kirn, S., & Quinn, R. (1998). The employee-customer-profit chain at Sears. Harvard
Business Review (January–February), 83–97.
Simons, R. (2000). Performance measurement & control systems for implementing strategy: Text and
cases. Upper Saddle River: Prentice-Hall.
Sims, R., & Kroeck, K. (1994). The influence of ethical fit on employee satisfaction, commitment and
turnover. Journal of Business Ethics, 13, 939–947.
AN EMPIRICAL EXAMINATION OF
COST ACCOUNTING PRACTICES USED
IN ADVANCED MANUFACTURING
ENVIRONMENTS

Rosemary R. Fullerton and Cheryl S. McWatters

ABSTRACT
Despite arguments that traditional product costing and variance analysis
operate contrary to the strategic goals of advanced manufacturing practices
such as just-in-time (JIT), total quality management (TQM), and Six Sigma, lit-
tle empirical evidence exists that cost accounting practices (CAP) are chang-
ing in the era of continuous improvement and waste reduction. This research
supplies some of the first evidence of what CAP are employed to support
the information needs of a world-class manufacturing environment. Using
survey data obtained from executives of 121 U.S. manufacturing firms, the
study examines the relationship between the use of JIT, TQM, and Six Sigma
with various forms of traditional and non-traditional CAP. Analysis of vari-
ance tests (ANOVA) indicate that most traditional CAP continue to be used
in all manufacturing environments, but a significant portion of world-class
manufacturers supplement their internal management accounting system with
non-traditional management accounting techniques.

Advances in Management Accounting, Volume 12, 85–113
Copyright © 2004 by Elsevier Ltd.
All rights of reproduction in any form reserved
ISSN: 1474-7871/doi:10.1016/S1474-7871(04)12004-2

INTRODUCTION
Firms competing in a global arena and adopting sophisticated manufacturing
technologies, such as total quality management (TQM) and just-in-time (JIT), re-
quire a complementary management accounting system (MAS) (Sillince & Sykes,
1995; Welfe & Keltyka, 2000). The MAS should support advanced manufacturing
technologies by providing integrated information to interpret and to assess
activities that have an impact on strategic priorities. The adoption of advanced
manufacturing practices suggests a shift away from a short-term, financially-
oriented product focus towards a modified, more non-financial, process-oriented
focus that fits operations strategies (Daniel & Reitsperger, 1991) and integrates
activities with strategic priorities (Chenhall & Langfield-Smith, 1998b).
Previous studies have reported that organizations using more efficient produc-
tion practices make greater use of non-traditional information and reward systems
(Banker et al., 1993a, b; Callen et al., 2002; Fullerton & McWatters, 2002; Ittner
& Larcker, 1995; Patell, 1987); yet, little empirical evidence exists that cost
accounting practices (CAP) of a firm’s MAS also are changing. The objectives
of traditional product costing and variance analysis seemingly operate contrary to
the strategic goals of continuous improvement and waste reduction embodied in
advanced manufacturing production processes. It is argued that the benefits from
JIT and TQM implementation would be captured and reflected more clearly by
the parallel adoption of more simplified, non-traditional CAP. However, minimal
evidence exists to support the assessment that current accounting practices are
harmful to the improvement of manufacturing technology. In fact, most studies
that have examined this issue find that companies continue to rely on conventional
accounting information, even in sophisticated manufacturing environments (e.g.
Baker et al., 1994; Cobb, 1992; McNair et al., 1989). Zimmerman (2003, p. 11)
suggests that managers must derive some hidden benefits from continuing to
use “presumably inferior accounting information” in their decision making. For
example, management control over operations provided by the existing MAS
may outweigh the benefits of other systems that are better for decision-making
purposes. The objective of this study is to explore whether specific CAP are
changing to meet the information needs of advanced manufacturing environments.
To examine the CAP used in advanced manufacturing environments, a survey
instrument was sent to executives representing 182 U.S. manufacturing firms.
Data from the 121 survey responses were analyzed to determine whether the
use of non-traditional CAP is linked to the implementation of JIT, TQM, and
Six Sigma. The results show that there have been minimal changes in the use
of traditional CAP. However, evidence exists that rather than replacing the
traditional, internal-accounting practices, supplementary measures have been
added to provide more timely and accurate information for internal planning and
control. Perhaps much of the criticism of CAP is unfounded, and the emergence of
supplemental financial and non-financial information, combined with traditional
accounting techniques, equips management with the appropriate decision-making
and control tools for an advanced manufacturing environment. This paper provides
some of the first empirical evidence of what CAP actually are being used in
conjunction with JIT, TQM, and Six Sigma.
The remainder of this paper is organized as follows: The following section exam-
ines the prior literature related to advanced manufacturing practices and CAP, and
identifies the research question. The next section describes the research method.
The following two sections present and discuss the empirical results. The final
section briefly summarizes the study and provides direction for future research.

RESEARCH QUESTION DEVELOPMENT

Information Needs for Advanced Manufacturing Environments

Advanced Manufacturing Environments Defined


World class manufacturing (WCM) was defined initially by Hayes and
Wheelwright (1984) and Schonberger (1986) as a competitive strategy employing
the best practices in quality, lean production, and concurrent engineering. Voss
(1995, p. 10) defines “Best Practices” as the most recent of three accepted
paradigms of manufacturing strategy. Encompassed within this paradigm are
the world-class elements of “JIT manufacturing which has evolved into lean
production, TQM, and concurrent engineering.” According to McNair et al. (1989,
p. 9), the “manufacturing competitiveness of the future is dependent upon a rapid
development of JIT and other advanced manufacturing technology principles.”
Both TQM and JIT have been described as organization-wide philosophies
that focus on continuous improvement. They often are linked as complementary
manufacturing strategies (see Banker et al., 1993a, b; Dean & Snell, 1996; Flynn
et al., 1995; Kristensen et al., 1999; Vuppalapati et al., 1995). TQM concentrates
on “systematically and continuously improving the quality of products, processes,
and services.” JIT’s emphasis on waste elimination and productivity improve-
ment is expected to increase organizational efficiency and effectiveness (Yasin
et al., 1997) through better quality, lower inventory, and shorter lead times. Six
Sigma is a more recent manufacturing technique that also has been described as “an
organization methodology for achieving total quality throughout the company”
(Witt, 2001, p. 10). Like TQM, Six Sigma’s focus is on the customer’s definition
of quality, as it establishes objectives for reductions in defect rates (Linderman
et al., 2003). In comparison to TQM, Six Sigma is a rigorous measurement
tool that depends on full commitment of top management, and evaluates its
effectiveness in terms of bottom-line results (Ellis, 2001; Hendricks & Kelbaugh,
1998). Its success is dependent upon the adoption of and commitment to lean
production practices (Voelkel, 2002).

Inadequacies of Traditional CAP


Considerable literature asserts that existing CAP do not support a WCM envi-
ronment (e.g. Drury, 1999; Green et al., 1992; Harris, 1990; Hedin & Russell,
1992; Johnson, 1992; Kaplan, 1983, 1984; McNair et al., 1989, 1990; Sillince &
Sykes, 1995; Wisner & Fawcett, 1991). Holzer and Norreklit (1991) state that the
“greatest impact on cost accounting in recent years has come about through rapid
advances in manufacturing technology and the adoption of a JIT management
philosophy.” The classical model of cost accounting was designed for a different
environment and encourages inappropriate behavior (Holzer & Norreklit, 1991;
Howell & Soucy, 1987). Cobb (1992) warns that if management accountants do
not respond to the information needs of the new manufacturing environment, they
will be relegated to a historical recording role, and another function within the
organization will be the forefront provider of internal information. Daniel and
Reitsperger (1996, p. 116) reiterated this view:
The skill-specific isolation of management accountants seems to be a significant constraint in
adjusting management control systems in the U.S. Our evidence shows that progressive man-
agement accountants are scrambling for a ‘new purpose’ in a rapidly changing manufacturing
environment, while traditionalists in the profession desperately seek to maintain systems that
are incompatible with the dynamism of JIT manufacturing.

Research studies support the increasing lack of reliance upon management
accounting. For example, case studies report that operations managers prepare
their own cost accounting data to support the implementation of new manufac-
turing practices, with little communication or respect for the internal accountants
(Chenhall & Langfield-Smith, 1998a; Sillince & Sykes, 1995).
More than a decade ago, standard costing and variance analysis were declared inadequate, and their continued use was said to contribute to lost competitiveness and other dysfunctional consequences. Standards often encompass waste and
encourage mediocrity, focusing on conformance to standards, rather than contin-
uous improvement (Hendricks, 1994). Unless updated and reported frequently,
standards lack timeliness.
Achieving standard direct cost ‘efficiency’ targets leads to larger batches, longer production
runs, more scrap, more rework, and less communication across process . . . In any plant striv-
ing to achieve manufacturing excellence, standard cost performance systems are anathema –
especially if incentive compensation is geared to controlling standard-to-actual variances
(Johnson, 1992, p. 49).

Research Question
The benefits that firms reap from implementing advanced manufacturing
techniques appear to be enhanced by complementary changes in their internal
accounting measures (Ahmed et al., 1991; Ansari & Modarress, 1986; Barney,
1986; Ittner & Larcker, 1995; Milgrom & Roberts, 1995). “Quality improvement
advocates argue that the organizational changes needed for effective TQM
require new approaches to management accounting and control,” with more
comprehensive distribution of new types of information that measure quality
and team performance (Ittner & Larcker, 1995, p. 3). This study examines the
association between the adoption of advanced manufacturing techniques and
specific CAP in terms of the following research question:
Do firms that implement advanced manufacturing techniques such as JIT, TQM,
and Six Sigma use more non-traditional cost accounting practices?

RESEARCH METHOD
Data Collection

Survey Instrument
To explore the research question, a detailed survey instrument was used to
collect specific information about the manufacturing operations, product costing
methods, information and incentive systems, advanced manufacturing practices
employed, and characteristics of the respondent firms.1 The majority of the
questions on the survey instrument were either categorical or interval Likert
scales. Factor analysis combined the Likert-scaled questions into independent
measures to test the research question. To evaluate the survey instrument for
readability, completeness, and clarity, a limited pretest was conducted by soliciting
feedback from business professors and managers from four manufacturing firms
that were familiar with advanced manufacturing practices. Appropriate changes
were made to reflect their comments and suggestions.

Sample Firms
The initial sample firms for this study constituted 253 pre-identified executives
from manufacturing firms that had responded to a similar survey study in 1997
(see Fullerton & McWatters, 2001, 2002). Of the original 253 firms, 66 (26%)
were no longer viable, independent businesses. Over half of the initial respondents
in the 187 remaining firms were no longer with the company. Replacements
were contacted in all but the five firms that declined to participate. Thus, 182
manufacturing executives were contacted a maximum of three times via e-mail,
fax, or mail.2 One hundred twenty-one usable responses were received, for an
overall response rate of 66%. The majority of the respondents had titles equivalent
to the Vice President of Operations, the Director of Manufacturing, or the Plant
Manager. They had an average of 19 years of management experience, including
12 years in management with their current firm.
The respondent firms had a primary two-digit SIC code within the manufacturing range of 20 to 39. As shown in Table 1, the majority (64%) of the respondent firms are from three industries: industrial machinery (SIC-35, 16%), electronics (SIC-36, 28%), and instrumentation (SIC-38, 20%).3

Advanced Manufacturing Practices

Discrete Measures for JIT, TQM, and Six Sigma


The advanced manufacturing production processes examined in this study are
JIT, TQM, and Six Sigma. Survey respondents were asked to indicate “yes” or
“no,” whether or not they had formally implemented JIT and TQM, and whether
or not they were using Six Sigma. Thirty-seven of the firms (31%) indicated
that they were not using any of the three techniques. As Table 1 demonstrates,
many of the firms had implemented a combination of these practices. Of the 121
respondents, only 15, 13, and 5 firms had exclusively implemented JIT, TQM,
or Six Sigma, respectively. Almost half of the firms had adopted TQM, 43%
identified themselves as JIT firms, while 35% indicated they were using Six
Sigma techniques. Adopting JIT and TQM together (one quarter of the firms) was the most common combination of advanced manufacturing processes. Only 16% of
the sample firms are using all three techniques.

Survey Measures for Level of JIT Implementation


In addition, respondents were asked to indicate the level to which their firms
were using JIT methods as measured by lean manufacturing practices, quality
improvements, and kanban systems. These data enabled the analysis of a broader
perspective of JIT implementation, rather than as an either/or proposition.
Although a universal set of JIT elements remains to be specified within the
research literature (Davy et al., 1992; White & Ruch, 1990), different practices
deemed important to successful JIT adoption are suggested in several studies
(Koufteros et al., 1998; Mehra & Inman, 1992; Moshavi, 1990; Sakakibara et al.,
1993; Spencer & Guide, 1995; Yasin et al., 1997). White and Ruch (1990) found
Table 1. Distribution of Sample Firms by Production Processes and Two-Digit SIC Codes.a
Industry    JIT Firms    TQM Firms    Six Sigma Firms    TQM & JIT Firms    TQM, JIT, & Six Sigma Firms    No JIT, TQM, or Six Sigma Firms    Total Sample

20 – Food 0 0 1 0 0 2 3
22 – Textiles 0 0 0 0 0 1 1
25 – Furniture & fixtures 5 3 2 3 2 0 5
26 – Paper & allied products 0 1 1 0 0 0 2
27 – Printing/publishing 0 1 0 0 0 0 1
28 – Chemicals & allied products 0 4 0 0 0 5 9
30 – Rubber products 1 1 0 1 0 0 1
32 – Nonmetallic mineral products 0 1 1 0 0 0 1
33 – Primary metals 3 4 5 2 2 3 9
34 – Fabricated metals 3 3 1 2 1 2 6
35 – Industrial machinery 11 10 5 8 2 6 19
36 – Electronics 14 16 16 7 7 9 34
37 – Motor vehicles & accessories 2 3 2 2 2 1 4
38 – Instrumentation 12 11 7 7 2 7 24
39 – Other manufacture 1 1 1 1 1 1 2
Totals 52 59 42 33 19 37 121

Supplemental information: 15 firms implemented JIT exclusively; 13 firms implemented TQM exclusively; 5 firms implemented six sigma exclusively;
13 firms implemented only TQM and six sigma; 5 firms implemented only JIT and six sigma.
a Classification of production processes were self-identified by survey respondents.

ten consensus JIT elements identified in the work of established JIT authors (e.g.
Hall et al.). These consensus elements used by White (1993), White et al. (1999),
White and Prybutok (2001), Fullerton and McWatters (2001, 2002) and Fullerton
et al. (2003) as JIT indicators are designated as follows: focused factory, group
technology, reduced setup times, total productive maintenance, multi-function
employees, uniform workload, kanban, JIT purchasing, total quality control, and
quality circles.4

Factor Analysis
Using the above-noted JIT indicators, eleven survey questions asked respondents
to identify their firm’s level of JIT implementation on the basis of a six-point Likert
scale, ranging from “no intention” of implementing the identified JIT practice to its
being “fully implemented.”5 Using the principal components method, these items
were subjected to a factor analysis. Three components of JIT with eigenvalues
greater than 1.0 were extracted from the analysis, representing 61% of the total
variance in the data.6 The first factor is a manufacturing component that explains the
extent to which companies have implemented general manufacturing techniques
associated with JIT, such as focused factory, group technology, uniform work loads,
and multi-function employees. The second JIT factor is a quality component that
examines the extent to which companies have implemented procedures for improv-
ing product and process quality. The third JIT factor comprises uniquely JIT practices and describes the extent to which companies have implemented JIT purchasing and kanban. For the results of the factor analysis, which are similar to
those of previous studies by Fullerton and McWatters (2001, 2002), see Table 2.
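
For readers who wish to see the mechanics of this extraction step, a minimal sketch is given below; it is purely illustrative, is not the authors' code, and uses hypothetical column names and simulated responses in place of the actual survey data. It applies the principal components method with the Kaiser criterion (eigenvalues greater than 1.0) described above; a VARIMAX rotation, as reported in Table 2, would then be applied to the retained loadings.

# Illustrative sketch only: principal-components extraction with the Kaiser
# criterion (eigenvalue > 1.0). Column names are hypothetical stand-ins for the
# ten retained six-point Likert items; the data here are simulated.
import numpy as np
import pandas as pd

def extract_components(responses: pd.DataFrame):
    """Return eigenvalues, the number of retained components, and explained variance."""
    corr = np.corrcoef(responses.to_numpy(), rowvar=False)  # item correlation matrix
    eigenvalues = np.linalg.eigh(corr)[0][::-1]             # largest first
    retained = int((eigenvalues > 1.0).sum())               # Kaiser criterion
    explained = eigenvalues[:retained].sum() / eigenvalues.sum()
    return eigenvalues, retained, explained

rng = np.random.default_rng(0)
items = ["focused_factory", "group_technology", "reduced_setups",
         "productive_maintenance", "multifunction_employees", "uniform_workload",
         "product_quality", "process_quality", "kanban", "jit_purchasing"]
data = pd.DataFrame(rng.integers(1, 7, size=(121, len(items))), columns=items)

eigs, k, var_explained = extract_components(data)
print(k, round(var_explained, 2))  # with the actual survey data, k = 3 and roughly 61% of variance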

Cost Accounting Practices

McNair et al. (1989) identified three trends that differentiate traditional and
non-traditional MAS: (1) a preference for process costing; (2) the use of actual
costs instead of standard costs; and (3) a greater focus on traceability of costs.
Traditional variance reports support maximum capacity utilization, which
contradicts the JIT objective to produce only what is needed, when it is needed.
Traditional CAP encourage production of goods with a high contribution margin
and ignore constraints or bottlenecks. For performance evaluation, shop-floor
cost accounting measures emphasize efficiency, and encourage large batches and
unnecessary inventory (Thorne & Smith, 1997). Cooper (1995) suggests that the
missing piece to the puzzle of Japanese cost superiority in lean enterprises is
the role of cost management systems. Western enterprises use cost accounting
systems, rather than cost management systems, which have different objectives.
Table 2. Factor Analysis (VARIMAX Rotation) Factor Loadings for JIT Variables.a
                               Factor 1     Factor 2    Factor 3
                               JITMANUFb    JITQLTYc    JITUNIQUEd

Focused factory                  0.700
Group technology                 0.761
Reduced setup times              0.618
Productive maintenance           0.727
Multi-function employees         0.509
Uniform work load                0.637
Product quality improvement                   0.887
Process quality improvement                   0.848
Kanban system                                             0.738
JIT purchasing                                            0.787

Notes: n = 121. All loadings in excess of 0.400 are shown.
a Respondents were asked to indicate the extent to which their firm had implemented the individual JIT
elements. Possible responses were: No intention = 1; Considering = 2; Beginning = 3; Partially = 4;
Substantially = 5; Fully = 6.
b Cronbach's Alpha = 0.794.
c Correlation Coefficients significant at p < 0.001: 0.772.
d Correlation Coefficients significant at p < 0.001: 0.438.

Cost accounting systems report distorted product costs without helping firms
manage costs. Cost management systems control costs by designing them out of
products through such techniques as target costing and value engineering.
Process costing (PROCESS), which simplifies inventory accounting procedures
by reducing the need to track inventory, is generally considered to be better suited
to the JIT environment than job-order costing. When Hewlett-Packard introduced
JIT, all products were produced and accounted for in batches. JIT gradually
transformed the manufacturing to a continuous process. The MAS changed to
process costing, and CAP were simplified and streamlined (Patell, 1987). Over
half of the sample firms in Swenson and Cassidy (1993) switched from job-order
costing to process costing after JIT implementation.
Related to the physical flow of inventory through the system is the parallel
recording to account for it. In a process environment, the traditional costing
methods have tracked work in process inventory in equivalent units. In an advanced manufacturing environment where both raw materials and work in process inventory are minimized, detailed tracking of inventory can be unnecessary
(Scarlett, 1996). The simplification of product costing and inventory valuation
in a JIT environment calls for backflush accounting (BCKFL), where inventory
costs are determined at the completion of the production cycle (Haldane, 1998).
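
As a minimal numerical sketch (not taken from the paper), the backflush idea can be illustrated as working backwards from completed output: costs are assigned to finished goods at standard cost only when units are completed, with no detailed tracking through work in process. The standard costs and the single completion trigger point below are hypothetical.

# Hypothetical backflush sketch with one trigger point at completion.
STANDARD_MATERIAL_COST = 40.0    # per unit, at standard (illustrative)
STANDARD_CONVERSION_COST = 25.0  # per unit, at standard (illustrative)

def backflush(units_completed: int) -> dict:
    """Flush costs to finished goods at standard when units are completed."""
    material = units_completed * STANDARD_MATERIAL_COST
    conversion = units_completed * STANDARD_CONVERSION_COST
    return {
        "credit_raw_and_in_process": material,   # relieve the combined RIP account
        "credit_conversion_costs": conversion,   # relieve applied conversion costs
        "debit_finished_goods": material + conversion,
    }

print(backflush(1_000))  # 1,000 completed units move $65,000 to finished goods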

Although still widely used, standard costing (STDRD) was developed in a differ-
ent business environment than currently exists. Standards in a traditional standard
costing system can incorporate slack, waste, and downtime, without encouraging
improvement. This system also allows for manipulation and mediocrity and may
not be appropriate for advanced manufacturing environments (McNair et al., 1989).
Rolling averages of actual performance, as benchmarks to monitor performance,
are preferred to estimates of what is expected (Green et al., 1992; Hendricks, 1994).
Haldane (1998) claimed that some uses of standard costing were “pernicious” and
actually “enshrine waste.” However, standard costing may be a good tool if it is
used properly to monitor trends for continuous improvement (Drury, 1999). Case
studies have shown that lean-manufacturing Japanese firms use standard costs,
but often adapt them continually to include Kaizen improvements (Cooper, 1996).
Related to standard costing is the use of variance analysis (VARAN). Variances
identify results that differed from initial expectations, not what caused the
deviation to occur. Avoidance of negative variances actually can impede the
implementation of lean manufacturing practices (Najarian, 1993). The use of
the traditional labor and machine utilization as volume and efficiency criteria
encourages overproduction and excess inventories (Drury, 1999; Fisher, 1992;
Haldane, 1998; Hendricks, 1994; Johnson, 1992; McNair et al., 1989; Wisner
& Fawcett, 1991). Standard-costing data used in traditional variance analysis
lack relevance; defects can occur before variances are noted and problems
corrected (Hendricks, 1994). In addition, the actual collection of variance
information is a non-value-added activity that increases cycle time (Bragg, 1996).
It is natural to conclude from this literature that the relevance of cost accounting
information would increase if managers spent more time checking the accuracy
of product costs, rather than reading variance reports.
Use of the absorption method (ABSORP) for inventory costing is required in
many jurisdictions to meet GAAP financial reporting requirements. Absorption
costing encourages inventory building by attaching all manufacturing overhead
costs to inventory and postponing the recording of the expense until the product is
sold. Building and storing inventory is contrary to lean manufacturing objectives,
yet can enhance net income. Traditional overhead allocation focuses on “overhead
absorption,” rather than on “overhead minimization” (Gagne & Discenza, 1992).
A suggested alternative for restraining the motivation to build inventory is the
replacement of absorption costing with variable (direct) or actual costing for
internal reporting of inventories (McWatters et al., 2001).
Since the mid-1980s, activity-based-costing (ABC) has been cited as a remedy
for the deficiencies of traditional cost accounting in advanced manufacturing
environments (Anderson, 1995; Cooper, 1994; Shields & Young, 1989). ABC
promotes decisions that are consistent with a lean manufacturing environment
through an enhanced focus on tracing and measuring costs of activities that
consume resources (Cooper & Kaplan, 1991; Gagne & Discenza, 1992).
The balanced scorecard (BSC) approach for measuring all key aspects of a
business, both financial and non-financial, was developed by Kaplan and Norton.
Atkinson et al. (1997) referred to it as one of the most significant developments in
management accounting. Examining measures from four perspectives, financial,
customer, internal process, and learning and growth, the scorecard can provide
managers with an integrative framework for controlling management operations
(Clinton & Hsu, 1997; Hoque & James, 2000). The internal business perspective
of the balanced scorecard is especially conducive to evaluating JIT and TQM
environments. Clinton and Hsu (1997, p. 19) pointed out that the implementation
of JIT is a major change in manufacturing control that needs the support of a
change in management control to avoid an “incongruent state that results in
inconsistent performance evaluation and dysfunctional behavior.” Frigo and
Litman (2001) explained that the downfall of companies is often their failure
to execute their business strategy, rather than the business strategy itself. They
propose the implementation of Six Sigma, lean manufacturing, and balanced
scorecard as practices that will help align business strategy with its execution.
Target costing (TGCST) for new product development concentrates on waste
and cost reduction in the initial planning stages of a product. The greater emphasis
by Japanese firms on cost management, rather than cost accounting is reflected
in their focus on target costing over product costing (Cooper, 1995, 1996). Lean
enterprises use target costing to encourage lower production costs, with higher
quality and faster response times (Cooper, 1996). Case studies have indicated
that simplification of the product-costing system and better integration of costing
with strategic objectives can occur when firms rely on target costing (Mackey &
Hughes, 1993; Sillince & Sykes, 1995).
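
The underlying relation, as stated in the general target-costing literature rather than in this paper, is that an allowable cost is derived from the market price instead of being built up from internal costs:

Target cost = Anticipated selling price − Required profit margin

For example, a product expected to sell for $100 with a required 20% margin would carry a target cost of $80, and design and sourcing decisions would then be worked backwards from that ceiling.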
Economic Value Added (EVA) is a variant of residual income. Developed by
Stern Stewart and Company,7 EVA adjusts the financial-reporting net-income
figure to take into account costs such as research and development and marketing
that are treated as period costs according to GAAP. It also deducts a charge for invested capital, computed at the weighted-average cost of capital, from the adjusted net income figure to capture economic value that is represented on both the income statement and the balance
sheet. The resulting calculation is said to capture the true value of shareholder
wealth (Tully, 1993). Pettit (2000) states that EVA is “an integrated performance
measurement, management, and reward system for business decision making.”
He demonstrates how EVA measures can gauge the effectiveness of Six Sigma
and lean production initiatives.
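
As a rough illustration of the calculation described above (the full set of Stern Stewart adjustments is proprietary and varies by firm, so the inputs and the single R&D adjustment here are hypothetical):

# Hypothetical EVA sketch: capitalize one example period cost (R&D) and deduct a
# capital charge at the weighted-average cost of capital (WACC).
def economic_value_added(net_income: float,
                         rd_capitalized: float,
                         invested_capital: float,
                         wacc: float) -> float:
    nopat_adjusted = net_income + rd_capitalized   # add back a period cost treated as an investment
    capital_charge = wacc * invested_capital       # charge for capital carried on the balance sheet
    return nopat_adjusted - capital_charge

# Example: $10M net income, $2M R&D added back, $60M invested capital, 9% WACC.
print(economic_value_added(10e6, 2e6, 60e6, 0.09))  # 6,600,000.0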
Life-cycle costing (LCC) is described as the tracking and accumulation of
product costs from a product’s inception to its abandonment. Proponents claim
that it better matches revenues and expenses, because it defers all initial costs of research, marketing, and start-up and allocates them to the future periods in which the units are actually produced and the benefits from these prior activities are expected to be received. These costs are expensed based on the
number of units expected to be sold. Similar to management accounting in the
world-class Japanese manufacturing firms, the MAS should be integrated with
corporate strategy, and LCC should be integrated with the MAS (Ferrara, 1990).
LCC provides better decision-making information, because it is more reflective
of today’s advanced manufacturing environments (Holzer & Norreklit, 1991;
Peavey, 1990). In fact, prediction of life-cycle costs is a requirement of the ISO 9000 quality-certification process (Rogers, 1998).
Life-cycle costing and value chain analysis (VCA) are related concepts. Value
chain analysis focuses on all business functions for a product, from research
and development to customer service, whether those functions reside in the same firm or in different
organizations (Horngren et al., 1997, p. 14). For example, TQM, business process
re-engineering, and JIT may not be as successful as anticipated, because the
necessary changes to support these processes have not been replicated for all of
the firms along the supply chain. It is not effective to try to optimize each piece of the supply chain in isolation. All successive firms in the manufacturing process, beginning
with the customer order, back to the purchase of raw materials, must be evaluated
and integrated. Using “lean logistics,” activities should be organized to move in
an uninterrupted flow within a pull production system (Jones et al., 1997).

Survey Measures for CAP


The 11 CAP categorical variables as outlined above (PROCESS, BCKFL, STDRD,
VARAN, ABSORP, ABC, BSC, TGCST, EVA, LCC, and VCA) are evaluated from
questionnaire responses. All of the variables except for PROCESS and ABSORP
were answered on the questionnaire as a “yes” or “no” with respect to their use.
For PROCESS, respondents indicated that they used either a process costing or
job-order costing system. For ABSORP, respondents chose between the use of
absorption or direct (variable) costing for internal estimation of inventory cost. All
of the variables are coded as “0” or “1,” with “1” representing the cost accounting
“preferred choice” in an advanced manufacturing environment.

Contextual Variables

Four contextual variables are also examined. Firm size (SIZE) affects most aspects
of a firm’s strategy and success; therefore, each sample firm’s net sales figure,
as obtained from COMPUSTAT data, is used to examine firm size. Whether
a firm follows a more innovative strategy can affect its willingness to make
changes. Innovation (INNOV) is measured by a firm’s response to the five-point
Likert-scaled question on the survey instrument as to whether the firm is a leader
or a follower in product technology, product design, and process design (Fullerton
& McWatters, 2002; Ittner & Larcker, 1997). Top management commitment has
been discussed as a necessary ingredient for successful implementation of JIT,
TQM, and Six Sigma. Several descriptive survey studies found the lack of support
from top management to be a serious problem in the JIT implementation process
(Ansari & Modarress, 1986; Celley et al., 1986; Im, 1989; Lee, 1997). For Six
Sigma to be successful, it must become a part of the organizational culture with
the unwavering support of top management (Ellis, 2001; Hendricks & Kelbaugh,
1998; Witt, 2002). Top management commitment (COMMIT) is measured by the
responses to three five-point Likert-scaled questions on the survey instrument
that ask how supportive (from indifferent to highly supportive) top management
is in initiating change programs, implementing lean manufacturing practices, and
providing training for new production strategies. The last contextual variable asks
the respondents to identify how satisfied they are with their firm’s management
accounting system (MAS), from “1” not at all, to “5” extremely.

RESEARCH RESULTS
ANOVA Results for CAP and Advanced Manufacturing Practices

On the survey instrument, respondents indicated whether or not (yes or no) their
firm had formally implemented the advanced manufacturing techniques of JIT,
TQM and Six Sigma. In addition, they were asked to rate their manufacturing
operations on a scale from 1 to 5, with 1 being traditional and 5 world class.
The sample was separated into JIT and non-JIT, TQM and non-TQM, and Six
Sigma and non-Six Sigma users to compare the firm mean differences in the use
of specific CAP practices. The sample was further segregated into firms using
both JIT and TQM or neither, as well as firms that had implemented one of the
three WCM methods in comparison to firms that had none of the three methods in
place. The ANOVA results for the five different classifications are fairly similar,
which may indicate that the same measurement and information tools are used to
support most advanced manufacturing practices (Tables 3–7).
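
As an illustrative sketch only (the survey data are not reproduced here), each comparison in Tables 3–7 corresponds to a one-way ANOVA of a CAP indicator across adopter and non-adopter groups; the data frame and column names below are hypothetical stand-ins.

# Illustrative one-way ANOVA comparing a 0/1 CAP indicator (e.g. use of value
# chain analysis) between JIT and non-JIT firms, in the spirit of Table 3.
import pandas as pd
from scipy import stats

def compare_groups(df: pd.DataFrame, cap_column: str, group_column: str):
    """Return the ANOVA F-statistic and p-value for a CAP measure across two groups."""
    adopters = df.loc[df[group_column] == 1, cap_column]
    non_adopters = df.loc[df[group_column] == 0, cap_column]
    return stats.f_oneway(adopters, non_adopters)

# Tiny made-up example: 1 = uses the practice / is a JIT firm, 0 otherwise.
survey = pd.DataFrame({
    "VCA": [1, 1, 0, 1, 0, 0, 1, 0, 0, 0],
    "JIT": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
})
print(compare_groups(survey, "VCA", "JIT"))  # with two groups, ANOVA reduces to a t-test (F = t²)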
Of the 11 CAP examined, three consistently show significantly more use in advanced manufacturing environments than in traditional manufacturing operations: EVA, VCA, and LCC. Significant mean differences among the remaining eight CAP evaluated are limited and inconsistent.
Process costing, as opposed to job-order costing, is used more in JIT firms than in
non-JIT firms, along with backflush costing. TQM and Six Sigma environments

Table 3. Comparison of CAP, Contextual Factors, and Production Processes Means Between JIT Firms and Non-JIT Firms.
Full Sample Means (n = 121)    JIT Firms Means (n = 46)    Non-JIT Firms Means (n = 75)    ANOVA F-Value    Sig. F

Cost accounting practices (CAP)


Process or backflush costing (1), PROCESS 0.624 0.726 0.546 4.044 0.047
Job-order costing (0)
Backflush costinga BCKFL 0.231 0.373 0.121 11.028 0.001
Direct (variable) costing (1), ABSORP 0.609 0.615 0.600 0.028 0.868
Absorption costing (0)
Standard costingb STDRD 0.076 0.058 0.091 0.450 0.504
Balanced scorecarda BSC 0.218 0.297 0.146 2.621 0.110
Target costinga TGCST 0.835 0.854 0.820 0.228 0.634
Variance analysisb VARAN 0.096 0.058 0.127 1.575 0.212
Activity based costinga ABC 0.571 0.532 0.603 0.535 0.466
Economic value addeda EVA 0.382 0.476 0.298 3.022 0.086
Value chain analysisa VCA 0.348 0.477 0.222 6.713 0.011
Life cycle costinga LCC 0.242 0.378 0.120 9.230 0.003
Contextual factors
Management commitment∗ COMMIT 3.712 4.135 3.379 17.272 0.000
Innovation strategy∗∗ INNOV 3.657 3.853 3.508 2.796 0.097
MAS Satisfaction∗∗∗ MAS 3.111 3.157 3.076 0.267 0.606
Firm sizec SIZE 961.240 1612.635 480.980 7.204 0.008
Production processes
Total quality managementa TQM 0.496 0.635 0.388 7.445 0.007
Six Sigmaa 6-SIG 0.342 0.453 0.254 5.361 0.022
World class manufacturing∗∗∗∗ WCM 3.202 3.558 2.925 17.238 0.000

Notes: Survey possible responses:


∗ How supportive is top management?

Indifferent = 1 . . . 2 . . . 3 . . . 4 . . . 5 = Highly Supportive.


∗∗ What represents your firm’s strategy related to product and process innovation?

Follower = 1 . . . 2 . . . 3 . . . 4 . . . 5 = Leader.
∗∗∗ How satisfied are you with your management accounting system?

1 = not at all; 2 = little; 3 = somewhat, 4 = considerably, 5 = extremely.


∗∗∗∗ How would you rate your manufacturing operations?

Traditional = 1 . . . 2 . . . 3 . . . 4 . . . 5 = World Class.


a yes = 1; no = 0.
b yes = 0; no = 1.
c Information provided from COMPUSTAT database (net sales in millions of dollars).

Table 4. Comparison of CAP, Contextual Factors, and Production Processes Means Between TQM Firms and Non-TQM Firms.
Full Sample Means (n = 121)    TQM Firms Means (n = 59)    Non-TQM Firms Means (n = 62)    ANOVA F-Value    Sig. F

Cost accounting practices (CAP)


Process or backflush costing (1), PROCESS 0.624 0.621 0.633 0.020 0.888
Job-order costing (0)
Backflush costinga BCKFLSH 0.231 0.293 0.167 2.687 0.104
Direct (variable) costing (1), ABSORP 0.609 0.579 0.644 0.511 0.476
Absorption costing (0)
Standard costingb STDRD 0.076 0.069 0.082 0.071 0.791
Balanced scorecarda BSC 0.218 0.250 0.180 0.571 0.452
Target costinga TARGET 0.835 0.907 0.750 4.892 0.029
Variance analysisb VARAN 0.096 0.070 0.121 0.839 0.362
Activity based costinga ABC 0.571 0.654 0.482 3.242 0.075
Economic value addeda EVA 0.382 0.500 0.271 5.182 0.025
Value chain analysisa VCA 0.348 0.475 0.240 5.656 0.020
Life cycle costinga LCC 0.242 0.370 0.120 8.798 0.004
Contextual factors
Management commitment∗ COMMIT 3.712 3.977 3.472 7.209 0.008
Innovation strategy∗∗ INNOV 3.657 3.922 3.383 7.122 0.009
MAS satisfaction∗∗∗ MAS 3.111 3.086 3.133 0.093 0.761
Firm size c SIZE 961.240 1504.484 432.472 6.172 0.015
Production processes
Just-in-timea JIT 0.437 0.559 0.317 7.445 0.007
Six-sigmaa 6-SIG 0.342 0.542 0.164 22.032 0.000
World class manufacturing∗∗∗∗ WCM 3.202 3.466 2.934 11.941 0.001

Notes: Survey possible responses:


∗ How supportive is top management?

Indifferent = 1 . . . 2 . . . 3 . . . 4 . . . 5 = Highly Supportive.


∗∗ What represents your firm’s strategy related to product and process innovation?

Follower = 1 . . . 2 . . . 3 . . . 4 . . . 5 = Leader.
∗∗∗ How satisfied are you with your management accounting system?

1 = not at all; 2 = little; 3 = somewhat, 4 = considerably, 5 = extremely.


∗∗∗∗ How would you rate your manufacturing operations?

Traditional = 1 . . . 2 . . . 3 . . . 4 . . . 5 = World Class.


a yes = 1; no = 0.
b yes = 0; no = 1.
c Information provided from COMPUSTAT database (net sales in millions of dollars).

Table 5. Comparison of CAP, Contextual Factors, and Production Processes Means Between Six Sigma Firms and Non-Six Sigma Firms.
Full Sample Means (n = 121)    Six Sigma Firms Means (n = 42)    Non-Six Sigma Firms Means (n = 79)    ANOVA F-Value    Sig. F

Cost accounting practices (CAP)


Process or backflush costing (1), PROCESS 0.624 0.610 0.636 0.080 0.788
Job-order costing (0)
Backflush costinga BCKFLSH 0.231 0.268 0.208 0.548 0.461
Direct (variable) costing (1), ABSORP 0.609 0.575 0.632 0.348 0.556
Absorption costing (0)
Standard costingb STDRD 0.076 0.073 0.077 0.005 0.942
Balanced scorecarda BSC 0.218 0.480 0.093 18.325 0.000
Target costinga TARGET 0.835 0.921 0.778 3.626 0.060
Variance analysisb VARAN 0.096 0.100 0.092 0.019 0.891
Activity based costinga ABC 0.571 0.649 0.522 1.573 0.213
Economic value addeda EVA 0.382 0.556 0.302 5.381 0.023
Value chain analysisa VCA 0.348 0.556 0.254 8.132 0.005
Life cycle costinga LCC 0.242 0.364 0.175 4.352 0.040
Contextual factors
Management commitment∗ COMMIT 3.712 4.133 3.506 10.338 0.002
Innovation strategy∗∗ INNOV 3.657 3.683 3.628 0.064 0.801
MAS Satisfaction∗∗∗ MAS 3.111 3.244 3.039 1.620 0.206
Firm sizec SIZE 961.240 1758.545 540.151 7.390 0.008
Production processes
Total quality managementa TQM 0.496 0.762 0.346 22.032 0.000
Just-in-timea JIT 0.442 0.585 0.3967 5.361 0.022
World class manufacturing∗∗∗∗ WCM 3.202 3.44 3.076 4.791 0.031

Notes: Survey possible responses:


∗ How supportive is top management?

Indifferent = 1 . . . 2 . . . 3 . . . 4 . . . 5 = Highly Supportive.


∗∗ What represents your firm’s strategy related to product and process innovation?

Follower = 1 . . . 2 . . . 3 . . . 4 . . . 5 = Leader.
∗∗∗ How satisfied are you with your management accounting system?

1 = not at all; 2 = little; 3 = somewhat, 4 = considerably, 5 = extremely.


∗∗∗∗ How would you rate your manufacturing operations?

Traditional = 1 . . . 2 . . . 3 . . . 4 . . . 5 = World Class.


a yes = 1; no = 0.
b yes = 0; no = 1.
c Information provided from COMPUSTAT database (net sales in millions of dollars).

Table 6. Comparison of CAP, Contextual Factors, and Production Processes Means Between Firms that have Implemented both TQM and JIT and Firms that have not Implemented Either.
Full Sample Means (n = 121)    JIT & TQM Firms Means (n = 29)    Non-JIT & Non-TQM Firms Means (n = 92)    ANOVA F-Value    Sig. F

Cost accounting practices (CAP)


Process or backflush costing (1), PROCESS 0.624 0.688 0.605 0.677 0.412
Job-order costing (0)
Backflush costinga BCKFLSH 0.231 0.438 0.151 11.730 0.001
Direct (variable) costing (1), ABSORP 0.609 0.645 0.600 0.192 0.662
Absorption costing (0)
Standard costingb STDRD 0.076 0.063 0.085 0.106 0.745
Balanced scorecarda BSC 0.218 0.292 0.182 1.181 0.280
Target costinga TARGET 0.835 0.967 0.775 5.803 0.018
Variance analysisb VARAN 0.096 0.031 0.119 2.082 0.152
Activity based costinga ABC 0.571 0.621 0.546 0.478 0.491
Economic value addeda EVA 0.382 0.560 0.308 5.056 0.027
Value chain analysisa VCA 0.348 0.577 0.250 9.478 0.003
Life cycle costinga LCC 0.242 0.444 0.159 9.313 0.003
Contextual factors
Management commitment COMMIT 3.712 4.400 3.477 20.823 0.000
Innovation strategy INNOV 3.657 3.984 3.528 3.879 0.051
MAS satisfaction MAS 3.111 3.063 3.128 0.142 0.707
Firm sizec SIZE 961.240 2249.886 526.790 12.902 0.000
Production processes
Six-sigmaa 6-SIG 0.342 0.594 0.258 12.712 0.001
World class manufacturing WCM 3.202 3.677 3.034 13.764 0.000
Notes: Survey possible responses:
∗ How supportive is top management?
Indifferent = 1 . . . 2 . . . 3 . . . 4 . . . 5 = Highly Supportive.
∗∗ What represents your firm’s strategy related to product and process innovation?
Follower = 1 . . . 2 . . . 3 . . . 4 . . . 5 = Leader.
∗∗∗ How satisfied are you with your management accounting system?
1 = not at all; 2 = little; 3 = somewhat, 4 = considerably, 5 = extremely.
∗∗∗∗ How would you rate your manufacturing operations?
Traditional = 1 . . . 2 . . . 3 . . . 4 . . . 5 = World Class.
a yes = 1; no = 0.
b yes = 0; no = 1.
c Information provided from COMPUSTAT database (net sales in millions of dollars).

employ target costing. Firms using the Six Sigma approach demonstrate a significantly stronger preference for balanced scorecard measures than firms that do not use Six Sigma. Little difference exists in the sample firms’ use of a standard costing
system, with over 90% of all the classifications indicating that standard costs
were used internally for estimating product costs. Also, around 90% of the three

Table 7. Comparison of CAP and Contextual Factors Means between Firms with no Advanced Manufacturing (AM) Processes and Firms with some AM Processes.
Full Sample Means (n = 121)    No AM Processes Firms Means (n = 39)    Some AM Processes Firmsc Means (n = 82)    ANOVA F-Value    Sig. F

Cost accounting practices (CAP)


Process or backflush costing (1), PROCESS 0.624 0.568 0.654 0.809 0.370
Job-order costing (0)
Backflush costinga BCKFLSH 0.229 0.108 0.284 4.547 0.035
Direct (variable) costing (1), ABSORP 0.609 0.722 0.563 2.683 0.104
Absorption costing (0)
Standard costingb STDRD 0.076 0.108 0.061 0.802 0.372
Balanced scorecarda BSC 0.218 0.083 0.273 3.622 0.061
Target costinga TARGET 0.835 0.794 0.842 0.373 0.543
Variance analysisb VARAN 0.096 0.114 0.086 0.218 0.642
Activity based costinga ABC 0.571 0.531 0.581 0.222 0.683
Economic value addeda EVA 0.382 0.241 0.443 3.441 0.067
Value chain analysisa VCA 0.348 0.207 0.410 3.651 0.059
Life cycle costinga LCC 0.242 0.034 0.328 10.438 0.002
Contextual factors
Management commitment∗ COMMIT 3.712 3.297 3.811 9.294 0.003
Innovation strategy∗∗ INNOV 3.657 3.284 3.811 5.876 0.017
MAS satisfaction∗∗∗ MAS 3.111 3.081 3.124 0.065 0.799
Firm sized SIZE 961.240 195.007 1278.641 5.312 0.023
Production processes
World class manufacturing∗∗∗∗ WCM 3.202 2.676 3.434 22.686 0.000
Notes: Survey possible responses:
∗ How supportive is top management?
Indifferent = 1 . . . 2 . . . 3 . . . 4 . . . 5 = Highly Supportive.
∗∗ What represents your firm’s strategy related to product and process innovation?
Follower = 1 . . . 2 . . . 3 . . . 4 . . . 5 = Leader.
∗∗∗ How satisfied are you with your management accounting system?
1 = not at all; 2 = little; 3 = somewhat, 4 = considerably, 5 = extremely.
∗∗∗∗ How would you rate your manufacturing operations?
Traditional = 1 . . . 2 . . . 3 . . . 4 . . . 5 = World Class.
a yes = 1; no = 0.
b yes = 0; no = 1.
c Firms that have implemented JIT, TQM, or Six Sigma exclusively or some combination of them.
d Information provided from COMPUSTAT database (net sales in millions of dollars).

advanced manufacturing production processes support their standard costing
system with some type of efficiency, price, or volume variance analysis. Direct
costing is used similarly by the majority of the sample firms in all manufacturing
environments to estimate internal inventory cost. Slightly over half of the firms
use an activity-based costing system, but there is only a marginally significant
difference in its use in TQM firms compared to non-TQM firms. In fact, although
not a significant difference, more non-JIT firms than JIT firms are using ABC. This
agrees with the study by Abernethy et al. (2001) that concluded firms adopting
advanced manufacturing technologies may not be well served by a sophisticated,
hierarchical ABC system.
The results support the argument that most firms identified as world-class manu-
facturers will adopt a combination of advanced manufacturing techniques. Each of
the advanced manufacturing processes (JIT, TQM, and Six Sigma) is implemented
significantly more in all of the advanced manufacturing environments examined.
In addition, the sample firms’ self-evaluated ratings as a world-class manufacturer
are significantly higher in JIT, TQM, and Six Sigma environments.
The contextual environments for firms practicing advanced manufacturing
techniques are very similar. These firms are larger in size, and the respondents
perceive their firms to be leaders in process and product development. They
also report significantly more support from top management in initiating and
implementing change, as well as providing necessary training in new production
strategies. All of the firms appear to be “somewhat satisfied” with their MAS. No
evidence exists that user satisfaction is dependent upon the type of manufacturing
environment in which the MAS is operating.

ANOVA Results for CAP and the Level of JIT Implemented

To more broadly examine the JIT manufacturing environment, comparisons were
made between CAP and the level of implementation of specific JIT practices. As
a result of the factor analysis, these practices were identified as a manufacturing
component, a quality component, and a uniquely JIT component. In addition, the
three factors were combined to represent a comprehensive JIT component.8 Each
of the CAP shows some significant relationship with the level of implementation
of at least one of the JIT components, except for ABC. However, the use of
absorption and standard costing systems is contrary to the expected relationships.
Those sample firms that have implemented higher levels of JIT practices are using
a standard costing system, and absorption costing more than direct (variable)
costing for internal decision making. Although the differences in the means shown in Table 3 between the JIT and non-JIT firms for the STDRD and ABSORP variables are not significant, the means of the cost accounting practice preferred in a JIT environment are higher for the non-JIT firms. Similar to the
JIT/non-JIT results, the accounting practices of VCA and LCC consistently
demonstrate the most significant differences between lower and higher levels of
JIT implementation. The results also reinforce the relationship of JIT to the other
advanced manufacturing practices of TQM and Six Sigma (Table 8).

Table 8. ANOVA Comparison of CAP Means with Level of JIT Implementation.
n = 121 JIT JIT JIT JIT
Manufacturing Quality Unique Combined

Full sample means 3.241 4.889 3.560 3.897


Cost accounting practices (CAP)
Process or backflush costing PROCESS 3.405 4.932 3.590 3.976
Job-order costing 2.938* 4.810 3.466 3.739
Backflush costing
Yes BCKFL 3.615 5.241 4.482 4.099
No 3.111* 4.781** 3.258*** 3.422***
Direct (variable) costing ABSORP 3.083 4.725 3.364 3.718
Absorption costing 3.500 5.211** 3.886 4.205**
Standard costing
Yes STDRD 3.306 4.940 3.638 3.973
No 2.444* 4.167* 2.438* 2.840**
Balanced scorecard
Yes BSC 3.719 5.250 4.125 4.348
No 3.156 4.903 3.298* 3.801*
Target costing
Yes TGCST 3.301 4.933 3.681 3.990
No 2.816 4.842 3.079 3.579*
Variance analysis
Yes VARAN 3.347 4.885 3.617 3.947
No 2.303** 4.900 3.227 3.439
Activity based costing
Yes ABC 3.333 4.975 3.441 3.927
No 3.085 4.733 3.739 3.850
Economic value added
Yes EVA 3.581 5.303 3.956 4.290
No 3.062* 4.777** 3.491 3.789**
Value chain analysis
Yes VCA 3.962 5.317 4.450 4.579
No 2.936*** 4.776** 3.339*** 3.680***
Life cycle costing
Yes LCC 3.971 5.386 4.250 4.537
No 3.064*** 4.847* 3.562* 3.829***
Production processes
Just-in-time
Yes JIT 3.706 5.226 4.480 4.458
No 2.865*** 4.609*** 2.849*** 3.445***
Total quality management


Yes TQM 3.515 5.219 3.711 4.148
No 2.962** 4.558*** 3.408 3.645**
Six Sigma
Yes 6-SIG 3.421 5.088 4.150 4.222
No 3.136 4.776 3.263** 3.727**

Notes: Respondents were asked to indicate the extent to which their firm had implemented the indi-
vidual JIT elements. Possible responses were:
No intention = 1; Considering = 2; Beginning = 3; Partially = 4; Substantially = 5;
Fully = 6.
∗ p < 0.05.
∗∗ p < 0.01.
∗∗∗ p < 0.001.

DISCUSSION OF THE RESULTS


This study seeks to determine if firms that adopt innovative manufacturing tech-
nologies also adopt complementary, innovative internal cost accounting practices.
The results of a survey of New Zealand manufacturing firms (Durden et al., 1999)
indicated no differences between the CAP of JIT and non-JIT firms. Moreover,
Yasin et al. (1997) reported that management accountants had the least impact of
all the plant departments examined in encouraging JIT implementation. Similar
to the findings of Lillis (2002), our research shows that world-class manufacturers
continue to use traditional CAP, but unlike their competitors, they are
supplementing and expanding their MAS. Despite criticisms, standard costing and
variance analysis still are widely used in advanced manufacturing environments.
In their recent study on the design of control systems for a changing man-
ufacturing system, Banker et al. (2000) hypothesized that the monitoring of
direct labor variances would be negatively associated with the adoption of new
manufacturing processes. However, their research did not support this hypothesis.
Their sample plants continued to use direct labor standards and variance analysis
while adopting advanced manufacturing techniques. Drury (1999) points out that
standard costing to monitor trends can be consistent with the lean concept of
continuous improvement. If used appropriately, it continues to be an effective
measurement tool. Cooper (1996) found that variance analysis often was used in
relatively traditional ways for operational control in JIT firms. In addition, some
Japanese firms embedded Kaizen improvements into their standards and used variance analysis to monitor Kaizen objectives. Cooper also found that despite
Japanese firms’ strong emphasis on cost management, their cost systems were
relatively traditional, rather than technically advanced.
Traditional accounting techniques may be more inadequate than outdated, and
need to be supplemented with new planning and control tools, such as target
costing, life-cycle costing, and value chain analysis. The research results suggest
that advanced manufacturing environments require information beyond what is
provided by traditional production-costing methods. For proper planning and
control, firms need to understand the full gamut of costs from product inception
to disposal, including costs for research and development, change initiatives, and
marketing. Advanced manufacturing systems, such as JIT, must be highly flexible
to respond to customer demand. In order to effectively operate a lean, pull system
that focuses on continuous improvement, firms must integrate and coordinate
their operations with both their suppliers and customers and have assurance that
similar quality initiatives will be exercised and supportive all along the value chain
(Kalagnanam & Lindsay, 1998). In evaluating their financial success, advanced
manufacturing firms want to examine the net added value to their firms using
measurement techniques such as EVA and make strategic decisions accordingly.
Life-cycle costing and value chain analysis perspectives are supported by
the tenets of target costing. According to this study’s results, target costing is
being used for new product development in TQM and Six Sigma environments.
Target-costing principles, which assist in the planning and control of costs during
the initial stages of product and process design, are considered an important
Japanese accounting tool (Ansari et al., 1997; Sakurai & Scarbrough, 1997), as
demonstrated by Cooper’s (1996) study of Japanese lean manufacturers.
An interesting result is the strong correlation between the use of Six Sigma and
the balanced scorecard. Six Sigma is a data-driven process that is highly tied to the
bottom line, but it is also much broader in its application than the measurement
of profitability alone. It supports continuous improvement through management,
financial, and methodological tools that improve both products and processes
(Voelkel, 2002). A balanced scorecard analysis would assist in evaluation of Six
Sigma efforts by providing information not only about profitability measures,
but also about internal manufacturing operations, customer satisfaction, and
employee contributions and retention.
As earlier studies have reported, top management support is key to making
changes and successfully implementing lean strategies (Ahmed et al., 1991;
Ansari & Modarress, 1986; Celley et al., 1986; Im, 1989; Willis & Suter, 1989).
The survey results support this idea, as those sample firms that have adopted
JIT, TQM, and/or Six Sigma have a significantly higher level of support from
top management for change initiatives and new strategies compared to firms that
have not implemented these techniques. In addition, larger firms that have more
resources to allocate to these practices are more successful in adopting them.
An innovative environment that would provide flexibility and empowerment also
appears to facilitate the adoption of advanced production processes.
The results indicate that world-class manufacturers integrate a combination of
advanced manufacturing techniques. Six Sigma is implemented to support the use
of TQM and JIT. The results support the argument of Vuppalapati et al. (1995)
that JIT and TQM should be viewed as integrated, rather than isolated strategies.
The two are complementary in their focus, “making production more efficient and
effective through continuous improvement and elimination of waste strategies.”
Research further shows that the integration of TQM, JIT, and TPM leads to
higher performance than does the exclusive implementation of each technique
(Cua et al., 2001).

Research Limitations

Specific research limitations might reduce the generalizability and applicability of
the findings. As in all survey research, a necessary assumption in data collection
is that the respondents had sufficient knowledge to answer the items and that they
answered the questions conscientiously and truthfully. Respondents might have
been unfamiliar with some of the questionnaire terms, and reluctant to take the
necessary time to examine the attached glossary explaining the terminology. In
addition, the sample firms are a subset of a previous research sample. Thus, the
sampling approach was not random. This departure from random sample selection
may limit the generalizability of the results to other U.S. manufacturing firms.

SUMMARY
Strategic use of information resources helps in customer service, product differen-
tiation, and cost competition (Narasimhan & Jayaram, 1998). Although the MAS
should support organizational operations and strategies, evidence in this study, as
in similar research (Banker et al., 2000; Clinton & Hsu, 1997; Durden et al., 1999;
Yasin et al., 1997), indicates that CAP are not changing substantially to support
lean practices. However, the results demonstrate that world-class manufacturing
firms are integrating additional, non-traditional information techniques into their
MAS, such as EVA, life-cycle costing, and value chain analysis.
In a survey of 670 UK firms, Bright et al. (1992) cited the following reasons
given for the lack of substantial change in cost accounting systems: (1) The
benefits do not appear to outweigh the costs. (2) New techniques have not proven
to provide better information. (3) There is already too much change and change
is difficult; they are comfortable with what they have. (4) The current system
is adequate; it just needs to be better utilized. (5) There is a lack of integration
between the factory and accounting information; thus, non-accountants do not use
accounting information for decision making. Also, there is a low expectation of
what accountants can offer. “If companies have simplistic cost systems, it may well
be that there is no need for a better system or that an existing need has not yet been
recognized. In most cases, systems will be improved before poor cost information
leads to consistently poor decisions” (Holzer & Norreklit, 1991, p. 12).
When surviving firms retain the same procedures over time, it is implicit that
the benefits derived therefrom exceed the costs. Moreover, the MAS has many
uses. It is plausible that control aspects of the system yield benefits that are
overlooked by those who decry the system’s inadequacy for decision making.
Further research is needed to determine the extent to which the implementation
of advanced manufacturing production processes motivates changes in a firm’s
internal accounting practices, as well as the impact of more extensive CAP on
firm profitability and competitiveness.
While this study provides evidence that some change is taking place in the
CAP of WCM firms, it appears to be from an expansion of traditional information
techniques, rather than from their replacement. Advanced manufacturing firms
must be experiencing benefits from the continued use of existing internal
accounting measures. The alleged limitations of these methods might stem more
from their application than from the methods themselves. Rather than abandoning
practices that have endured for decades, what is needed is greater acceptance
of management accounting’s role in supporting organizational strategy.

NOTES

1. The survey instrument was available over the Internet, or hard copies were
faxed or mailed. The executives contacted were asked to choose which alternative they
preferred for responding to the questionnaire. Initially, 128 of the sample firms were
contacted via the Internet, of which 97 responded. Forty-two firms were initially faxed
and 12 were mailed the questionnaire, with 17 and 6 respondents, respectively.
2. To check for non-response bias, the analyses were performed on the late responders.
No significant differences were found in the results.
3. The industry distribution for the respondent firms is similar to the total sample
industry distribution. Sixty-two percent of the firms sampled were from these same three
industries: industrial machinery, electronics, and instrumentation.
4. A definition of these terms was supplied with the questionnaire and can be found in
Fullerton et al. (2003).
5. Total quality control is represented by two questions on the survey: one is related to
process quality and the other to product quality.
6. All of the 11 elements loaded greater than 0.50 onto one of the three constructs except
for number 11, asking about the use of “quality circles.” Thus, this question was eliminated
from further testing of the JIT construct.
7. EVA is a registered trademark of Stern Stewart & Company.
8. Cronbach’s alpha (1951) for the combined measure and the three individual JIT factors
exceed the standard of 0.70 for established constructs (Nunnally, 1978).
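For reference, Cronbach’s alpha for a scale of k items is conventionally defined as

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),$$

where $\sigma^{2}_{Y_i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of the total (summed) scale score; values above the 0.70 benchmark cited above are conventionally taken to indicate acceptable reliability.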

ACKNOWLEDGMENTS
This research was made possible through a Summer Research Grant provided
by Utah State University. We gratefully acknowledge comments received from
participants at the AIMA Conference on Management Accounting Research in
Monterey, California (May 2003).

REFERENCES
Abernethy, M. A., Lillis, A. M., Brownell, P., & Carter, P. (2001). Product diversity and costing system
design choice: Field study evidence. Management Accounting Research, 12(1), 1–20.
Ahmed, N. U., Runc, E. A., & Montagno, R. V. (1991). A comparative study of U.S. manufacturing
firms at various stages of just-in-time implementation. International Journal of Production
Research, 29(4), 787–802.
Anderson, S. W. (1995). A framework for assessing cost management system changes: The case of
activity-based costing at General Motors, 1986–1993. Journal of Management Accounting
Research (Fall), 1–51.
Ansari, A., & Modarress, B. (1986). Just-in-time purchasing: Problems and solutions. Journal of
Purchasing and Materials Management (Summer), 11–15.
Ansari, S. L., Bell, J. E., & CAM-I Target Cost Core Group (1997). Target costing: The next frontier
in strategic cost management. Burr Ridge, IL: Irwin.
Atkinson, A. A., Balakrishnan, R., Booth, P., Cote, J. M., Groot, T., Malmi, T., Roberts, H., Uliana, E.,
& Wu, A. (1997). New directions in management accounting research. Journal of Management
Accounting Research, 9, 80–108.
Baker, W. M., Fry, T. D., & Karwan, K. (1994). The rise and fall of time-based manufacturing.
Management Accounting (June), 56–59.
Banker, R., Potter, G., & Schroeder, R. (1993a). Reporting manufacturing performance measures to
workers: An empirical study. Journal of Management Accounting Research, 5(Fall), 33–53.
Banker, R., Potter, G., & Schroeder, R. (1993b). Manufacturing performance reporting for continuous
quality improvement. Management International Review, 33(Special Issue), 69–85.
Banker, R., Potter, G., & Schroeder, R. (2000). New manufacturing practices and the design of control
systems. Proceedings of the Management Accounting Research Conference.
Barney, J. (1986). Strategic factor markets: Expectation, luck, and business strategy. Management
Science (October), 1231–1241.
Bragg, S. M. (1996). Just-in-time accounting: How to decrease costs and increase efficiency. New
York: Wiley.
Bright, J., Davies, R. E., Downes, C. A., & Sweeting, R. C. (1992). The deployment of costing
techniques and practices: A UK study. Management Accounting Research, 3, 201–211.
Callen, J. L., Fader, C., & Morel, M. (2002). The performance consequences of in-house productivity
measures: An empirical analysis. Working Paper, University of Toronto, Ontario.
Celley, A. F., Clegg, W. H., Smith, A. W., & Vonderembse, M. A. (1986). Implementation of JIT in
the United States. Journal of Purchasing and Materials Management (Winter), 9–15.
Chenhall, R. H., & Langfield-Smith, K. (1998a). Adoption and benefits of management accounting
practices: An Australian study. Management Accounting Research, 9, 1–19.
Chenhall, R. H., & Langfield-Smith, K. (1998b). Factors influencing the role of management ac-
counting in the development of performance measures within organizational change programs.
Management Accounting Research, 9, 361–386.
Clinton, B. D., & Hsu, K. C. (1997). JIT and the balanced scorecard: Linking manufacturing control
to management control. Management Accounting, 79(September), 18–24.
Cobb, I. (1992). JIT and the management accountant. Management Accounting, 70(February), 42–49.
Cooper, R. (1994). The role of activity-based systems in supporting the transition to the lean enterprise.
Advances in Management Accounting, 1–23.
Cooper, R. (1995). When lean enterprises collide. Boston, MA: Harvard Business School Press.
Cooper, R. (1996). Costing techniques to support corporate strategy: Evidence from Japan.
Management Accounting Research, 7, 219–246.
Cooper, R., & Kaplan, R. S. (1991). The design of cost management systems. Englewood Cliffs, NJ:
Prentice-Hall.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
Cua, K. O., McKone, K. E., & Schroeder, R. G. (2001). Relationships between implementation of
TQM, JIT, and TPM manufacturing performance. Journal of Operations Management, 19(6),
675–694.
Daniel, S. J., & Reitsperger, W. D. (1991). Linking quality strategy with management control systems:
Empirical evidence from Japanese industry. Accounting, Organizations and Society, 16(7),
601–618.
Daniel, S. J., & Reitsperger, W. D. (1996). Linking JIT strategies and control systems: A comparison
of the United States and Japan. The International Executive, 38(January/February), 95–121.
Davy, J., White, R., Merritt, N., & Gritzmacher, K. (1992). A derivation of the underlying constructs
of just-in-time management systems. Academy of Management Journal, 35(3), 653–670.
Dean, J. W., Jr., & Snell, S. A. (1996). The strategic use of integrated manufacturing: An empirical
examination. Strategic Management Journal, 17(June), 459–480.
Drury, C. (1999). Standard costing: A technique at variance with modern management? Management
Accounting, 77(10), 56–58.
Durden, C. H., Hassel, L. G., & Upton, D. R. (1999). Cost accounting and performance measurement
in a just-in-time production environment. Asia Pacific Journal of Management, 16, 111–125.
Ellis, K. (2001). Mastering six sigma. Training, 38(12), 30–35.
Ferrara, W. L. (1990). The new cost/management accounting: More questions than answers.
Management Accounting (October), 48–52.
Fisher, J. (1992). Use of non-financial performance measures. Cost Management (Spring), 31–38.
Flynn, B. B., Sakakibara, S., & Schroeder, R. G. (1995). Relationship between JIT and TQM: Practices
and performance. Academy of Management Journal, 38(October), 1325–1353.
Frigo, M. L., & Litman, J. (2001). What is strategic management? Strategic Finance, 83(6), 8–10.
Fullerton, R. R., & McWatters, C. S. (2001). The production performance benefits from JIT. Journal
of Operations Management, 19, 81–96.
Fullerton, R. R., & McWatters, C. S. (2002). The role of performance measures and incentive systems
in relation to the degree of JIT implementation. Accounting, Organizations and Society, 27(8),
711–735.
Fullerton, R. R., McWatters, C. S., & Fawson, C. (2003). An examination of the relationships between
JIT and financial performance. Journal of Operations Management, 21, 383–404.
Gagne, M. L., & Discenza, R. (1992). Accurate product costing in a JIT environment. International
Journal of Purchasing and Materials Management (Fall), 28–31.
Green, F. B., Amenkhienan, F., & Johnson, G. (1992). Performance measures and JIT. Management
Accounting (October), 32–36.
Haldane, G. (1998). Accounting for change. Accountancy, 122(December), 64–65.
Harris, E. (1990). The impact of JIT production on product costing information systems. Production
and Inventory Management Journal, 31(1), 44–48.
Hayes, R. H., & Wheelwright, S. C. (1984). Restoring our competitive edge. New York: Collier
Macmillan.
Hedin, S. R., & Russell, G. R. (1992). JIT implementation: Interaction between the production and
cost-accounting functions. Production and Inventory Management Journal (Third Quarter),
68–73.
Hendricks, C. A., & Kelbaugh, R. L. (1998). Implementing six sigma at GE. The Journal for Quality
& Participation (July/August), 48–53.
Hendricks, J. A. (1994). Performance measures for a JIT manufacturer. IIE Solutions, 26(January),
26–36.
Holzer, H. P., & Norreklit, H. (1991). Some thoughts on cost accounting developments in the United
States. Management Accounting Research, 2, 3–13.
Hoque, A., & James, W. (2000). Linking balanced scorecard measures to size and market factors:
Impact on organizational performance. Journal of Management Accounting Research, 12,
1–17.
Horngren, C. T., Foster, G., Datar, S. M., & Teall, H. D. (1997). Cost accounting. Scarborough, Ont.:
Prentice-Hall.
Howell, R. A., & Soucy, S. R. (1987). Cost accounting in the new manufacturing environment.
Management Accounting (August), 42–48.
Im, J. H. (1989). How does kanban work in American companies? Production and Inventory
Management Journal (Fourth Quarter), 22–24.
Ittner, C. D., & Larcker, D. F. (1995). Total quality management and the choice of information and
reward systems. Journal of Accounting Research (Supplement), 1–34.
Ittner, C. D., & Larcker, D. F. (1997). The performance effects of process management techniques.
Management Science, 43(4), 522–534.
Johnson, H. T. (1992). Relevance regained: From top-down control to bottom-up empowerment. New
York: Free Press.
Jones, D. T., Hines, P., & Rich, N. (1997). Lean logistics. International Journal of Physical Distribution
and Logistics Management, 27(3/4), 153–163.
Kalagnanam, S. S., & Lindsay, R. M. (1998). The use of organic models of control in JIT firms:
Generalising Woodward’s findings to modern manufacturing practices. Accounting, Organiza-
tions and Society, 24(1), 1–30.
Kaplan, R. S. (1983). Measuring manufacturing performance: A new challenge for managerial
accounting research. The Accounting Review (October), 686–705.
Kaplan, R. S. (1984). Yesterday’s accounting undermines production. Harvard Business Review, 62,
96–102.
Koufteros, X. A., Vonderembse, M. A., & Doll, W. J. (1998). Developing measures of time-based
manufacturing. Journal of Operations Management, 16, 21–41.
Kristensen, K., Dahlgaard, J. J., Kanji, G. K., & Juhl, H. J. (1999). Some consequences of just-in-time:
Results from a comparison between the Nordic countries and east Asia. Total Quality
Management, 10(1), 61–71.
Lillis, A. M. (2002). Managing multiple dimensions of manufacturing performance – An exploratory
study. Accounting, Organizations and Society, 27(6), 497–529.
Linderman, K., Schroeder, R. G., Zaheer, A., & Choo, A. S. (2003). Six Sigma: A goal-theoretic
perspective. Journal of Operations Management, 21, 193–203.
Mackey, J., & Hughes, V. H. (1993). Decision-focused costing at Kenco. Management Accounting,
74(11), 22–32.
McNair, C. J., Lynch, R. L., & Cross, K. F. (1990). Do financial and non-financial performance
measures have to agree? Management Accounting (November), 28–36.
McNair, C. J., Mosconi, W., & Norris, T. (1989). Beyond the bottom line: Measuring world class
performance. Homewood, IL: Irwin.
McWatters, C. S., Morse, D. C., & Zimmerman, J. L. (2001). Management accounting: Analysis and
interpretation (2nd ed.). New York: McGraw-Hill.
Mehra, S., & Inman, R. A. (1992). Determining the critical elements of just-in-time implementation.
Decision Sciences, 23, 160–174.
Milgrom, P., & Roberts, J. (1995). Complementarities and fit: Strategy, structure and organizational
change in manufacturing. Journal of Accounting and Economics, 19, 179–208.
Moshavi, S. (1990). Well made in America: Lessons from Harley-Davidson on being the best. New
York: McGraw-Hill.
Najarian, G. (1993). Performance measurement: Measure the right things. Manufacturing Systems,
11(September), 54–58.
Narasimhan, R., & Jayaram, J. (1998). An empirical investigation of the antecedents and consequences
of manufacturing goal achievement in North American, European, and Pan Pacific firms.
Journal of Operations Management, 16, 159–179.
Nunnally, J. (1978). Psychometric theory. New York: McGraw-Hill.
Patell, J. M. (1987). Cost accounting, process control, and product design: A case study of the
Hewlett-Packard personal office computer division. The Accounting Review, 62(4), 808–839.
Peavey, D. E. (1990). Battle at the GAAP? It’s time for a change. Management Accounting (February),
31–35.
Pettit, J. (2000). EVA and production strategy. Industrial Management (November/December), 6–13.
Rogers, B. (1998). Paperwork with a payoff. Manufacturing Engineering, 121(3), 16.
Sakakibara, S., Flynn, B. B., & Schroeder, R. G. (1993). A framework and measurement instrument
for just-in-time manufacturing. Production and Operations Management, 2(3), 177–194.
Sakurai, M., & Scarbrough, D. P. (1997). Japanese cost management. Menlo Park, CA: Crisp
Publications.
Scarlett, B. (1996). In defence of management accounting applications. Management Accounting,
74(1), 46–52.
Schonberger, R. J. (1986). World class manufacturing casebook: The lesson of simplicity applied.
New York: Free Press.
Shields, M. D., & Young, S. M. (1989). A behavioral model for implementing cost management
systems. Journal of Cost Management (Winter), 17–27.
Sillince, J. A. A., & Sykes, G. M. H. (1995). The role of accountants in improving manufacturing
technology. Management Accounting Research, 6(June), 103–124.
Spencer, M. S., & Guide, V. D. (1995). An exploration of the components of JIT. International Journal
of Operations and Production Management, 15(5), 72–83.
Swenson, D. W., & Cassidy, J. (1993). The effect of JIT on management accounting. Cost Management
(Spring), 39–47.
Thorne, K., & Smith, M. (1997). Synchronous manufacturing: Back to basics. Management
Accounting, 75(11), 58–60.
Tully, S. (1993). The real key to creating wealth. Fortune (September 20), 38–50.
Voelkel, J. G. (2002). Something’s missing: An education in statistical methods will make employees
more valuable to Six Sigma corporations. Quality Progress (May), 98–101.
Voss, C. A. (1995). Alternative paradigms for manufacturing strategy. International Journal of
Production Management, 15(4), 5–16.
Vuppalapati, K., Ahire, S. L., & Gupta, R. (1995). JIT and TQM: A case for joint implementation.
International Journal of Operations and Production Management, 15(5), 84–94.
Welfe, B., & Keltyka, P. (2000). Global competition: The new challenge for management accountants.
The Ohio CPA Journal (January–March), 30–36.
White, R. E. (1993). An empirical assessment of JIT in U.S. manufacturers. Production and Inventory
Management, 34(Second Quarter), 38–42.
White, R. E., Pearson, J. N., & Wilson, J. R. (1999). JIT manufacturing: A survey of implementations
in small and large U.S. manufacturers. Management Science, 45(January), 1–15.
White, R. E., & Prybutok, V. (2001). The relationship between JIT practices and type of production
system. Omega, 29, 113–124.
White, R. E., & Ruch, W. A. (1990). The composition and scope of JIT. Operations Management
Review, 7(3–4), 9–18.
Willis, T. H., & Suter, W. C., Jr. (1989). The five M’s of manufacturing: A JIT conversion life cycle.
Production and Inventory Management Journal (January), 53–56.
Wisner, J. D., & Fawcett, S. E. (1991). Linking firm strategy to operating decisions through performance
measurement. Production and Inventory Management Journal (Third Quarter), 5–11.
Witt, E. (2002). Achieving six sigma logistics. Material Handling Management, 57(5), 10–13.
Yasin, M. M., Small, M., & Wafa, M. (1997). An empirical investigation of JIT effectiveness: An
organizational perspective. Omega, 25(4), 461–471.
Zimmerman, J. L. (2003). Accounting for decision making and control (4th ed.). Burr Ridge, IL:
McGraw-Hill.
THE INTERACTION EFFECTS OF LEAN
PRODUCTION MANUFACTURING
PRACTICES, COMPENSATION, AND
INFORMATION SYSTEMS ON
PRODUCTION COSTS: A RECURSIVE
PARTITIONING MODEL

Hian Chye Koh, Khim Ling Sim and Larry N. Killough

ABSTRACT
The study re-examines whether lean production manufacturing practices (i.e.
TQM and JIT) interact with the compensation system (incentive vs. fixed
compensation plans) and information system (i.e. attention directing goals
and performance feedback) to reduce production costs (in terms of manufac-
turing and warranty costs) using a recursive partitioning model. Decision
trees (i.e. recursive partitioning algorithm using Chi-square Automatic
Interaction Detection or CHAID) are constructed on data from 77 U.S.
manufacturing firms in the electronics industry. Overall, the “decision tree”
results show significant interaction effects. In particular, the study found
that better manufacturing performance (i.e. lower production costs) can be
achieved when lean production manufacturing practices such as TQM and
JIT are used along with incentive compensation plans. Also, synergies do

result from combining TQM/JIT with more frequent performance feedback
along with attention directing goals. These findings suggest that if organisa-
tional infrastructure and management control systems are not aligned with
manufacturing practices, then the potential benefits of lean manufacturing
(i.e. TQM and JIT) may not be fully realised.

Advances in Management Accounting, Volume 12, 115–135
Copyright © 2004 by Elsevier Ltd.
All rights of reproduction in any form reserved
ISSN: 1474-7871/doi:10.1016/S1474-7871(04)12005-4

INTRODUCTION
In a US$5 million 5-year study on the future of the automobile, Womack et al.
(1990) provided an excellent summary of manufacturing practices since the late
1800s. At the beginning of the industrial age (and in fact even before that), manu-
facturing was dominated by craft production. It had the following characteristics:
(1) highly skilled craftspeople; (2) very decentralised organisations; (3) general-
purpose machine tools; and (4) very low production volume. Although craft
production had worked very well then, it had several drawbacks. These included
high production costs (regardless of volume) and low consistency and reliability.
The first revolution in manufacturing practices came after World War I in
the early 1900s when Henry Ford introduced new manufacturing practices
that could reduce production costs drastically while increasing product quality.
This innovative system of manufacturing practices was called mass production
(vis-à-vis craft production). The defining feature of mass production was the
complete interchangeability of parts and the simplicity of attaching them to each
other. This led to the division of labour in the production process (to repetitive
single tasks) and the construction of moving assembly lines. Mass production
spurred a remarkable increase in productivity (and a corresponding remarkable
decrease in cost per unit output), a drastic improvement in product quality and
a significant reduction in capital requirements. Mass production was eventually
adopted in almost every industrial activity around the world.
Problems with mass production started to become prominent in the mid-1900s.
The minute division of labour removed the career path of workers and resulted
in dysfunctions (e.g. reduced job satisfaction). Also, mass production led to
standardised products that were not suited to all world markets. The production
system was inflexible, as well as time-consuming and expensive to change. Finally,
intense competition and unfavourable macro-economic developments further
eroded the advantages of mass production.
While mass production was declining in the mid-1900s, the second revolution in
manufacturing practices took root in Toyota in Japan. The Toyota Production Sys-
tem, also referred to by Womack et al. (1990) as lean production, was established
by the early 1960s and since then has been incorporated by many companies and
industries world-wide. Lean production brings together the advantages of craft
production and mass production by avoiding the former’s high cost and the latter’s
rigidity. It employs teams of multi-skilled workers at all levels of the organisation
and uses highly flexible and increasingly automated machines to produce outputs
in enormous variety as well as with high quality. Specifically, lean production
is characterised by the following focus: (1) cost reductions; (2) zero defects;
(3) zero inventories; (4) product variety; and (5) highly skilled, motivated and
empowered workers.
Lean production manufacturing practices have often been referred to by
other names, such as total quality management (TQM) and just-in-time (JIT).
In particular, these manufacturing practices (i.e. TQM and JIT) have been used
by manufacturing firms striving for continual improvement. To date, however,
mixed results have been reported – while some firms have excelled because of
their use of TQM and JIT (e.g. Xerox and Motorola), other firms do not seem
to have improved their manufacturing performance (see, for example, Harari,
1993; Ittner & Larcker, 1995). Although there is an expanding literature on lean
manufacturing such as TQM or JIT implementation, there is little empirical
evidence that provides reasons for these mixed results (Powell, 1995).
Since the mid-1980s, there has been growing interest in how management
control systems can be tailored to the needs of manufacturing
strategies (Buffa, 1984; Hayes et al., 1988; Kaplan, 1990; Schonberger, 1986).
Empirical evidence shows that higher organisational performance is often the
result of a match between an organisation’s environment, strategy and internal
structures or systems. By and large, most studies on management control focus
on senior management performance or overall performance at strategic business
unit levels (Govindarajan, 1988; Govindarajan & Gupta, 1985; Ittner & Larcker,
1997). Given that successes in manufacturing strategies are often influenced
by activities at the shop floor or operational level, empirical studies linking
management control policies to manufacturing performance at the operational
level may provide useful information on the mixed findings related to lean
manufacturing practices. Accordingly, the objective of this study is to examine
whether manufacturing firms using lean production manufacturing practices
such as TQM and/or JIT achieve higher manufacturing performance (i.e. lower
production costs) when they accompany these practices with contemporary
operational controls. More formally, the study investigates the performance effect
(i.e. reduction in production costs) of the match between lean production manu-
facturing practices and compensation and information systems using a recursive
partitioning model.
RESEARCH FRAMEWORK
As mentioned earlier, TQM and JIT feature prominently in lean production. TQM
focuses on the continual improvement of manufacturing efficiency by eliminating
waste, scrap and rework while improving quality, developing skills and reducing
costs. Along similar lines, JIT emphasises manufacturing improvements via
reducing set-up and cycle times, lot sizes and inventories. These lean production
manufacturing practices require workers who are highly skilled, motivated
and empowered. In particular, workers are made responsible for improving
manufacturing capabilities and product and process quality (Siegel et al., 1997),
performing a variety of activities, and detecting non-conforming items. TQM
and JIT implementation are expected to improve manufacturing performance. In
addition, worker empowerment (which is an important part of TQM and JIT) is
expected to indirectly improve manufacturing performance via greater intrinsic
motivation (Hackman & Wageman, 1995). Among other things, improved
manufacturing performance translates to reduction in production costs.
Wruck and Jensen (1994) suggest that effective TQM implementation requires
major changes in organisational infrastructure such as the systems for allocating
decision rights, performance feedback and reward/punishment. Kalagnanam
and Lindsay (1998), on the other hand, suggest that a fully developed JIT
system represents a significant departure from traditional mass production
systems. They advocate that manufacturing firms adopting JIT must abandon a
mechanistic management control system and adopt an organic model of control.
Taken together, the literature suggests that management control systems in lean
manufacturing practices should be different from those under the traditional mass
production system.
Management control systems have been described as processes or tools for
influencing behaviour towards attainment of organisational goals or objectives.
A control system performs its function by controlling the flow of information,
establishing criteria for evaluation, and designing appropriate rewards and punish-
ment (Birnberg & Snodgrass, 1988; Flamholtz et al., 1985). As such, this study
focuses on the compensation system (incentive vs. fixed compensation plans)
and the information system (i.e. attention-directing goals and manufacturing
performance feedback).

Compensation System

Much research in management control is based on economic models that


assume that workers are rational, self-interested and utility-maximising agents.
Consequently, without monitoring and sanctions, self-interested workers will be


risk averse and they will also exhibit shirking behaviour. Prior research findings
indicate that incentives are a major motivator in this context (Fama & Jensen,
1983; Holmstrom, 1979; Jensen & Meckling, 1976).
More generally, designing compensation systems to match the needs of lean
manufacturing practices is consistent with the strategic compensation literature
(see, for example, Gomez-Mejia & Balkin, 1992; Milkovich, 1988). While
incentive compensation plans by themselves can be effective in enhancing
manufacturing performance (Coopers & Lybrand, 1992; MacDuffie, 1995),
Ichniowski et al. (1997) reported that workers’ performance was substantially
higher when incentive plans were also coupled with supportive work practices.
More recently, Sim and Killough (1998) found synergistic effects between
TQM/JIT implementation and the use of incentive compensation plans.
Given the above, it is hypothesised in this study that although incentive com-
pensation plans can be effective independently of TQM and JIT implementation,
it is the match between lean production manufacturing practices (i.e. TQM and
JIT) and the compensation system that leads to higher synergistic manufacturing
performance (e.g. lower production costs). In other words:

H1. Lean production manufacturing practices (i.e. TQM/JIT) interact with the
use of compensation plans to enhance manufacturing performance (i.e. to lower
production costs).

Information System (Goals and Feedback)

Locke and Latham (1990) found that goals positively influence the attention,
effort and persistence of workers. This finding is consistent across many studies
(e.g. Latham & Lee, 1986; Locke et al., 1981). Thus, if an organisation wants its
workers to achieve particular goals, then prior research findings suggest that the
presence of such goals can motivate workers to accomplish them. In practice, to
help workers achieve better manufacturing performance, attention-directing goals
or targets are often provided via the firm’s information system. Examples of such
goals include customer satisfaction and complaints, on-time delivery, defect rates,
sales returns, and cycle time performance.
From a learning standpoint, providing performance feedback helps workers
develop effective task strategies. Alternatively, feedback which shows that
performance is below the target can increase the motivation to work harder (Locke
& Latham, 1990). As a result, the combination of goals and feedback often
leads to better performance (Erez, 1977). Using a framework of management
controls, Daniel and Reitsperger (1991) concluded that manufacturing plants


adhering to zero defect strategies are more likely to provide goals and frequent
manufacturing performance feedback than manufacturing plants supporting a
more traditional model. Similarly, Banker et al. (1993) found that the reporting
of manufacturing performance feedback to workers is positively associated with
TQM and JIT practices. Both studies, however, did not examine the impact of
goals or performance feedback on manufacturing performance.
The strategic literature shows that for manufacturing strategies to be effectively
implemented, they must be integrated with day-to-day operational planning and
control mechanisms. Unless this coupling exists, the strategic plan will become
irrelevant with time (Gage, 1982). Thus, it can be argued that to improve manu-
facturing performance, it is imperative that the firm’s information system provides
attention-directing goals and manufacturing performance feedback to workers.
Accordingly, it is expected that attention-directing goals and performance feedback
will enhance manufacturing performance. In this aspect, Sarkar (1997) provided
evidence that process improvement in quality is enhanced when information
sharing is encouraged in the work place. It is further hypothesised in this study that
although goals or performance feedback can be effective independently of TQM
and JIT implementation, it is the match between lean production manufacturing
practices (i.e. TQM and JIT) and the information system (i.e. attention-directing
goals and manufacturing performance feedback) that leads to higher synergistic
manufacturing performance in terms of reduction in production costs (see also
Chenhall, 1997; Ittner & Larcker, 1995; Milgrom & Roberts, 1990, 1995). More
formally, the research hypothesis in the study can be stated as follows:
H2. Lean production manufacturing practices (i.e. TQM/JIT) interact with the
use of information system (i.e. attention-directing goals and/or manufacturing
performance feedback) to enhance manufacturing performance (i.e. to lower
production costs).

MATERIALS AND METHODS


To investigate the interaction effects between TQM/JIT and compensation
and information systems (i.e. goal setting and feedback) on manufacturing
performance, the following research materials and methods are employed.

Sample Selection

The electronics industry (SIC Code 36) was chosen as the primary industry for
the study for the following reasons. Unlike the TQM concept, the JIT concept
works well mainly in repetitive manufacturing such as automobiles, appliances


and electronic goods. Balakrishnan et al. (1996) reported that 68% of JIT firms
(i.e. those that have adopted the JIT concept for a substantial portion of their
operations) clustered within the three SIC codes 35, 36 and 38. The sample for
the study was drawn from the electronics industry (SIC code 36) since it has the
highest percentage of JIT firms.

Questionnaire

The questionnaire solicited information pertaining to manufacturing practices,


workplace practices, as well as several aspects of manufacturing performance.
Two stages were involved in a pilot test of the questionnaire. First, three production
engineers from a semiconductor plant were asked to fill out the questionnaire.
Since the information provided was based on the same plant, the responses were
compared and found to be consistent. Next, the questionnaire was reviewed by
four experts in the area of process improvement to check for relevancy or possible
ambiguity in the instrument. Feedback from the pilot test resulted in no major
changes, except for rephrasing of some statements. Appendix provides detailed
information about the questionnaire.
Letters requesting participation in the research study were sent to the directors of
manufacturing of 1,500 randomly selected firms located within the United States
with annual sales of ten million dollars and above. A total of 126 manufacturing
plants agreed to participate in the study and three firms wished to review the
research questionnaire prior to making a commitment to participate. As a result,
129 research questionnaires were mailed. About 50% of the plants replied within
four weeks. Six weeks after the initial mailing of the research questionnaires,
a status report together with a reminder was sent to all 129 plants. In total, 77
usable responses were received, giving a response rate of 59.7%. (For the analysis
involving warranty costs, only 76 responses are used because of missing data.)
The data collected pertain to a specific SBU and not the entire company.

Dependent Variable Measures

An effective management control system starts with defining the critical perfor-
mance measures. Firms using lean production manufacturing practices such as
TQM and JIT are expected to experience improved efficiencies such as lower
manufacturing costs. Also, they are expected to have improved product quality and
hence lower warranty costs. Thus, changes in manufacturing costs and warranty
costs (collectively termed as production costs here) were used as dependent
variables in the study. Respondents were requested to indicate the changes in
these production costs in the last three years, anchoring on a scale of 1–5, with 1
denoting “decrease tremendously,” 3 “no change” and 5 “increase tremendously.”
It is noted that this study focuses only on one key aspect of lean production,
namely, the reduction in production costs. This single aspect, however, captures
a large and important part of lean production.

Independent Variable Measures

Five independent variables (namely, total quality management, just-in-time, com-


pensation system, goal setting and feedback [the last two representing information
system]) were included in the study. Where a variable consists of multiple items,
an average score across the items represents the score for that variable.

Total Quality Management (TQM)


Following Sim and Killough (1998), TQM was assessed using a modified Snell
and Dean (1992) TQM scale anchored on a 7-point Likert scale. Eight of the
ten original TQM items along with two additional items were used in the study.
Both the deleted items were related to “information system,” which is already an
independent variable in the study. The first deleted item relates to “information
feedback” while the second deleted item relates to “the ability of a plant to keep
track of quality cost.” The items added were “preventive maintenance to improve
quality” and “quality related training provided to employees.”

Just-in-Time (JIT)
The JIT scale used in the study was a modified scale from Snell and Dean (1992)
(see Sim & Killough, 1998). Snell and Dean (1992) developed a 10-item scale
anchored on a 7-point Likert scale to measure JIT adoption. In this study, only
eight of the ten original JIT items were used. The first omitted item relates to
the extent to which the accounting system reflects costs of manufacturing. This
item was loaded onto the TQM construct in the Snell and Dean (1992) study and
did not seem to reflect a JIT construct. The second omitted item asked whether
the plant was laid out by process or product. This item was also loaded onto the
TQM construct and was deleted from the final TQM scale in Snell and Dean
(1992). For this study, an item that represented “time spent to achieve a more
orderly engineering change by improving the stability of the production schedule”
was added.

Compensation System
The independent variable “compensation system” consisted of two categories,
namely fixed compensation plans and incentive compensation plans. Specifically,
firms using fixed compensation plans were coded as “0,” while firms using incentive
compensation plans were coded as “1.” That is, compensation system was measured
as a dichotomous variable.

Information System (Goal Setting and Feedback)


To enhance manufacturing performance, contemporary information systems
set goals/targets for manufacturing performance and report manufacturing
performance measures to workers. In the study, respondents were
asked whether specific numeric targets and performance feedback were provided
on eight manufacturing performance measures that were related to customer
perceived quality, on-time delivery and waste. Also, following Daniel and
Reitsperger (1992), frequency of feedback was anchored on a 5-point Likert scale,
and goal setting was anchored between 0 and 1.
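As an illustrative sketch only (the file name, column names, and the cut-off used to dichotomise the 1–5 cost-change ratings below are hypothetical assumptions, not details taken from the authors' instrument), the variable scores described in this section might be assembled along the following lines:

```python
# Illustrative sketch only: hypothetical column names and data file,
# not the authors' actual instrument or coding scheme.
import pandas as pd

df = pd.read_csv("plant_survey.csv")                    # hypothetical survey file

tqm_items = [f"tqm_{i}" for i in range(1, 11)]          # ten 7-point Likert items
jit_items = [f"jit_{i}" for i in range(1, 10)]          # nine 7-point Likert items
goal_items = [f"goal_{i}" for i in range(1, 9)]         # eight 0/1 goal-setting items
feedback_items = [f"fb_{i}" for i in range(1, 9)]       # eight 5-point feedback items

# Multi-item variables are averaged across their items.
df["TQM"] = df[tqm_items].mean(axis=1)
df["JIT"] = df[jit_items].mean(axis=1)
df["goal_setting"] = df[goal_items].mean(axis=1)        # lies between 0 and 1
df["feedback"] = df[feedback_items].mean(axis=1)

# Compensation system is dichotomous: 0 = fixed plan, 1 = incentive plan.
df["incentive_plan"] = (df["comp_type"] != "fixed").astype(int)

# Dependent variables (1-5 change ratings) are dichotomised for the decision
# trees; treating ratings of 1-2 as "costs decreased" is an assumed cut-off.
df["mfg_cost_decreased"] = (df["mfg_cost_change"] <= 2).astype(int)
df["warranty_cost_decreased"] = (df["warranty_cost_change"] <= 2).astype(int)
```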

RESEARCH MODEL AND TESTING PROCEDURES


To investigate the interaction effects between lean production manufacturing
practices (i.e. TQM and JIT) and the compensation system and information system
(i.e. goal setting and feedback) on manufacturing performance (i.e. changes in
production costs in terms of manufacturing and warranty costs), decision trees are
constructed on the sample data. In particular, the recursive partitioning algorithm
using Chi-square Automatic Interaction Detection (CHAID) is employed. A
recursive partitioning model provides evidence on interactions or non-linearity
which otherwise may be difficult to capture using a conventional regression model
with interaction terms.
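To make the contrast concrete, a conventional moderated-regression test of, say, H1 would pre-specify explicit product terms, for instance (an illustrative specification only, not the model estimated in this study):

$$\Delta \text{Cost}_i = \beta_0 + \beta_1 \text{TQM}_i + \beta_2 \text{JIT}_i + \beta_3 \text{INC}_i + \beta_4 (\text{TQM}_i \times \text{INC}_i) + \beta_5 (\text{JIT}_i \times \text{INC}_i) + \varepsilon_i,$$

where INC denotes the incentive-compensation indicator and significant coefficients on the product terms would signal interaction. The recursive partitioning approach, by contrast, lets interactions and non-linearities emerge from the sequence of splits without pre-specifying such terms.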
The objective of decision trees is prediction and/or classification by dividing
observations into mutually exclusive and exhaustive subgroups. The division is
based on the levels of particular independent variables that have the strongest
association with the dependent variable. In its basic form, the decision tree
approach begins by searching for the independent variable that divides the
sample in such a way that the difference with respect to the dependent variable
is greatest among the divided subgroups. At the next stage, each subgroup is
further divided into sub-subgroups by searching for the independent variable
that divides the subgroup in such a way that the difference with respect to the
dependent variable is greatest among the divided sub-subgroups. The independent
variable selected need not be the same for each subgroup. This process of
division (or splitting in decision trees terminology) usually continues until
either no further splitting can produce significant differences in the dependent
variable in the new subgroups or the subgroups are too small for any further
meaningful division.
The subgroups and sub-subgroups are usually referred to as nodes. The end
product can be graphically represented by a tree-like structure. (See also Breiman,
1984, pp. 59–92; Ittner et al., 1999; Lehmann et al., 1998.) In the Chi-square
Automatic Interaction Detection (CHAID) algorithm, all possible splits of each
node for each independent variable are examined. The split that leads to the
most significant Chi-square statistic is selected. For the purpose of the study, the
dependent variables are dichotomised into two groups.
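As a minimal sketch of the split-selection step just described (hypothetical variable names and simulated data; not the authors' implementation), the search for the split with the most significant Chi-square statistic can be written as follows:

```python
# A minimal, hypothetical sketch of CHAID-style split selection: every candidate
# binary split of each independent variable is cross-tabulated against the
# dichotomised outcome, and the split with the most significant Chi-square
# statistic is chosen.
import numpy as np
from scipy.stats import chi2_contingency


def best_chaid_split(X, y, min_node_size=5):
    """Return (variable, threshold, p_value) for the most significant split.

    X : dict of variable name -> 1-D array of scores
    y : 1-D array of 0/1 outcomes (e.g. 1 = production costs decreased)
    """
    best_name, best_threshold, best_p = None, None, 1.0
    for name, values in X.items():
        for threshold in np.unique(values)[:-1]:        # candidate cut points
            left, right = y[values <= threshold], y[values > threshold]
            if len(left) < min_node_size or len(right) < min_node_size:
                continue
            # 2 x 2 contingency table: split group by outcome
            table = np.array([[np.sum(left == 0), np.sum(left == 1)],
                              [np.sum(right == 0), np.sum(right == 1)]])
            if (table.sum(axis=0) == 0).any():          # degenerate outcome column
                continue
            p_value = chi2_contingency(table)[1]
            if p_value < best_p:
                best_name, best_threshold, best_p = name, threshold, p_value
    return best_name, best_threshold, best_p


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 77                                              # sample size in the study
    X = {"TQM": rng.uniform(1, 7, n),
         "JIT": rng.uniform(1, 7, n),
         "goal_setting": rng.uniform(0, 1, n),
         "incentive_plan": rng.integers(0, 2, n).astype(float)}
    # Toy outcome: costs are more likely to decrease when goal setting is high.
    y = (rng.uniform(size=n) < 0.3 + 0.5 * X["goal_setting"]).astype(int)
    print(best_chaid_split(X, y))
```

A full CHAID implementation applies this search recursively to each resulting node, merges categories that do not differ significantly, and adjusts the p-values (e.g. with a Bonferroni-type correction); the sketch shows only the basic split search on a dichotomised outcome.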

Results

Table 1 – Panel A provides the job title of the respondents. A review of the
respondents’ job titles shows that most respondents were closely associated with
manufacturing operations, suggesting that they are the appropriate persons for
providing shop floor information. Table 1 – Panel B shows descriptive statistics of
lean manufacturing practices and the type of reward systems. Except for the 24
plants without a formal TQM program and the 30 plants without a formal JIT
program, the remaining plants had implemented some form of lean manufacturing.
Also, additional analysis shows that the majority of the sample plants had annual sales of between 10
and 15 million U.S. dollars. Further, manufacturing firms that made greater use of
lean production manufacturing practices such as TQM and JIT also made greater
use of incentive compensation plans, and more frequently set goals on operational
performance and more frequently provided performance feedback to their workers.
Results from the recursive partitioning algorithm are summarised in Fig. 1. As
shown, goal setting (p-value = 0.0003) is significantly associated with the change
in manufacturing costs. In particular, a higher level of goal setting is associated
with decreasing manufacturing costs. In addition, there is also a significant
interaction effect between goal setting and JIT (p-value = 0.0187). That is, a
higher level of goal setting coupled with a greater use of JIT is associated with
better manufacturing performance (in terms of decreases in manufacturing costs)
vis-à-vis a higher level of goal setting and a lower use of JIT.
The interaction effect of compensation plan can also be seen in Fig. 1. In partic-
ular, the combination of a fixed compensation plan with a high level of goal setting
and a moderately high use of JIT is associated with a lower probability of decreases
in manufacturing costs (p-value = 0.0362). Also, the combination of an incentive
compensation plan with a high level of goal setting but a low use of JIT and either
a very low use or a very high use of TQM (p-value = 0.0049) is associated with a
very low probability of decreases in manufacturing costs. It is noted that the latter
part of the findings is not consistent with conventional wisdom. Finally, there is
persuasive evidence (p-value = 0.1216) that a high level of goal setting, a relatively
Table 1.

Panel A: Job Title of Respondents

Job Title Used by Respondents                                   Number of Respondents   Percentage
Plant manager, manufacturing manager, or operations manager             24                31.1
VP of operations, VP of engineering, VP of manufacturing,               22                28.6
  or VP of quality
Director of operations, director of manufacturing, or director          13                16.9
  of manufacturing and engineering
CEO, president and CEO, executive VP, or president                       5                 6.5
Miscellaneous titles used – e.g. materials manager, test manager,        7                 9.1
  sourcing and fabrication manager, or product integrity manager
No information on job title                                              6                 7.8
Total respondents                                                       77               100

Panel B: Descriptive Statistics – Lean Manufacturing and Type of Compensation

Variables                  No Formal Program   1–2 Years   3–4 Years   4 Years
Years of TQM experience           24               20          18         15
Years of JIT experience           30               22          12         13

Variables            Fixed Pay   Fixed + Non-       Fixed + Individual-    Fixed + Group-
                                 Monetary Reward    Based Cash Reward      Based Cash Reward
Compensation type       34             6                   12                    25

low use of JIT and a moderately high use of TQM, coupled with both an incentive
compensation plan and a high level of feedback is associated with decreases in
manufacturing costs. This result does not come as a “surprise” – for example, the
hallmark of lean manufacturing is efficient use of resources (see Womack et al.,
1990). This means less space, fewer inventories, less production time (key aspects
of JIT), less waste, better quality, and continuously striving for manufacturing
excellence (key aspects of TQM). The literature suggests that JIT and TQM often
complement each other; also, a low JIT can be complemented with a high TQM, or
vice versa.

Fig. 1. Decision Tree Results for Manufacturing Costs.
Figure 2 summarises the decision tree results for warranty costs. As shown in
Fig. 2, a high level of TQM (p-value = 0.0139) is significantly associated with
the change in warranty costs. In particular, a high level of TQM is associated
with decreasing warranty costs. The results also show that a high use of JIT, even
in the presence of a low use of TQM, is associated with decreasing warranty
costs (p-value = 0.0237) (see earlier comments on the complementarity of
JIT and TQM).
Although not statistically significant, the effect of compensation plan can also
be seen in Fig. 2. In particular, with a moderately high use of TQM, an incentive
compensation plan seems to be more associated with decreasing warranty costs
than the case of a fixed compensation plan (p-value = 0.2023). Also, there is
a significant interaction effect between feedback and TQM and JIT practices
(p-value = 0.0156). That is, with low levels of TQM and JIT, the probability of de-
creases in warranty costs is higher for a low level of feedback than for a high level
of feedback. This finding illustrates that the best configurations of management
control systems are often contingent upon the type of production systems. For
example, for manufacturing plants which have not switched to lean manufacturing
practices, it is not necessary for them to reconfigure the accounting systems to pro-
vide more timely feedback. Finally, there is some persuasive evidence of interaction
effect between goal setting and TQM and compensation plan (p-value = 0.2069).
In particular, in the presence of a moderately high level of TQM, the use of an
incentive compensation plan coupled with a high level of goal setting appears to
be associated with decreasing warranty costs. (It can be argued that some results
are statistically insignificant primarily because of the relatively small sample
size in the relevant nodes.)
The above findings provide support for H1 and H2. Except for one unexpected
result, the findings are consistent with the underlying theory, i.e. the best
configurations of management control systems are contingent upon the type of
production systems.

Fig. 2. Decision Tree Results for Warranty Costs.

Discussion

Several theoretical papers have motivated this study (see Hemmer, 1995; Ittner &
Kogut, 1995; Milgrom & Roberts, 1990, 1995). In particular, Milgrom and
Roberts (1990, 1995) provide a theoretical framework that attempts to address the
issue of how relationships among parts of a manufacturing system affect perfor-
mance. They suggest that organizations often experience a simultaneous shift in
competitive strategy along with various elements of organizational design when


they move from mass production to lean manufacturing (i.e. JIT/TQM) or modern
manufacturing. In other words, organizational changes do not occur in isolation;
this implies that synergies or complementarities often arise within clusters of
organizational design choices that improve manufacturing performance. For example,
they argue that organizations are more likely to perform better if manufacturing is
made-to-order. Such firms should keep a low level of inventory, employ production
equipment with both low setup time and cost, and have shorter production runs,
less waste and higher product quality (i.e. the notion of lean manufacturing).
Also, it is equally important to have highly skilled workers, a mechanism where
information flow and sharing are encouraged, and policies that support and reward
positive behavior. In essence, Milgrom and Roberts’ (1995) framework suggests
that successful implementation of lean manufacturing (TQM/JIT) requires com-
plementary management control systems. Similarly, Hemmer (1995) suggests that
characteristics of a production system and management control systems should be
simultaneously designed. Given the complexity of the complementarities between
manufacturing production systems and management control systems, it is expected
that a recursive partitioning model should provide better insight into this still
poorly understood issue.
This study investigates if lean production manufacturing practices interact with
compensation and information systems to enhance manufacturing performance
(i.e. to lower production costs). Overall, results from the recursive partitioning
model (i.e. CHAID) provide support for such interaction effects. In particular,
the study found that better manufacturing performance (i.e. lower production
costs) can be achieved when lean production manufacturing practices such as
TQM and JIT are used along with incentive compensation plans. Also, synergies
do result from combining TQM/JIT with more frequent performance feedback
along with attention directing goals. These findings suggest that if organisational
infrastructure and management control systems are not aligned with manufac-
turing practices, then the potential benefits of lean manufacturing (i.e. TQM and
JIT) may not be fully realised. The findings are consistent with prior research
that shows that poor performance of many modern manufacturing practices is
partially attributable to the continued reliance on traditional management control
systems that do not provide appropriate information or reward systems (Banker
et al., 1993; Johnson & Kaplan, 1987; Kaplan, 1983, 1990).
In interpreting the results of the study, the following limitations should be borne
in mind. First, the sample size for the study (77 responses) is relatively small.
This, however, is mitigated by the relatively high response rate of 59.7% in the
final sample. Second, the usual caveats associated with a self-report questionnaire
survey apply. Incidentally, results of sensitivity tests show no indication of
non-response bias by geographical region and 4-digit SIC code. Third, the present
research design precludes inferences to be made with regards to the pattern of
changes in the warranty costs or manufacturing costs.
In this concluding section, it is appropriate to discuss some caveats to
recursive partitioning models such as CHAID. For example, the splitting of the
subgroups (or nodes) is results driven, i.e. it is not theory or decision driven.
Nevertheless, when interpreting the findings, a common approach is to use some
underlying theories to explain the results or patterns. Finally, since there is no
model-fit statistic, there exists a risk of over-fitting the model. Despite some
limitations, a recursive partitioning model such as CHAID is a potentially useful
tool when examining complex relationships in the real world and has proved to
be helpful in discovering meaningful patterns when large quantities of data are
available (see, for example, Berry & Linoff, 1997, p. 5).

REFERENCES
Balakrishnan, R., Linsmeier, T., & Venkatachalam, M. (1996). Financial benefits from JIT adoption:
Effects of customer concentration and cost structure. The Accounting Review, 71, 183–205.
Banker, R., Potter, G., & Schroeder, R. (1993). Reporting manufacturing performance measures to
workers: An empirical study. Journal of Management Accounting Research, 5, 33–55.
Berry, M., & Linoff, G. (1997). Data mining techniques – For marketing, sales, and customer support.
Wiley Computer Publishing.
Birnberg, J. G., & Snodgrass, C. (1988). Culture and control: A field study. Accounting, Organizations
and Society, 13, 447–464.
Breiman, L. (1984). Classification and regression trees. Belmont, CA: Wadsworth International Group.
Buffa, E. S. (1984). Meeting the competitive challenge. IL: Dow Jones-Irwin.
Chenhall, R. H. (1997). Reliance on manufacturing performance measures, total quality management
and organizational performance. Management Accounting Research, 8, 187–206.
Coopers & Lybrand (1992). Compensation planning for 1993. New York: Coopers & Lybrand.
Daniel, S., & Reitsperger, W. (1991). Linking quality strategy with management control systems:
Empirical evidence from Japanese industry. Accounting, Organizations and Society, 16,
601–618.
Daniel, S., & Reitsperger, W. (1992). Management control systems for quality: An empirical
comparison of the U.S. and Japanese electronic industry. Journal of Management Accounting
Research, 4, 64–78.
Erez, M. (1977). Feedback: A necessary condition for the goal setting-performance relationship.
Journal of Applied Psychology, 62, 624–627.
Fama, E. F., & Jensen, M. C. (1983). Separation of ownership and control. Journal of Law and
Economics, 26, 301–325.
Flamholtz, E. G., Das, T. K., & Tsui, A. (1985). Towards an integrative framework of organizational
control. Accounting, Organizations and Society, 10, 51–66.
Gage, G. H. (1982). On acceptance of strategic planning systems. In: P. Lorange (Ed.), Implementation
of Strategic Planning. NJ: Prentice-Hall.
Gomez-Mejia, L. R., & Balkin, D. B. (1992). Structure and process of diversification, compensation,
strategy and firm performance. Strategic Management Journal, 13, 381–387.
Govindarajan, V. (1988). A contingency approach to strategy implementation at the business-unit
level: Integrating administrative mechanisms with strategy. Academy of Management Journal,
31, 828–853.
Govindarajan, V., & Gupta, A. K. (1985). Linking control systems to business unit strategy: Impact
on performance. Accounting, Organizations and Society, 10, 51–66.
Hackman, J. R., & Wageman, R. (1995). Total quality management: Empirical, conceptual and
practical issues. Administrative Science Quarterly, 40, 309–342.
Harari, O. (1993). Ten reasons why TQM doesn’t work. Management Review, 82, 33–38.
Hayes, R. H., Wheelwright, S. C., & Clark, K. B. (1988). Restoring our competitive edge: Competing
through manufacturing. New York: Free Press.
Hemmer, T. (1995). On the interrelation between production technology, job design, and incentives.
Journal of Accounting and Economics, 19, 209–245.
Holmstrom, B. (1979). Moral hazard and observability. Bell Journal of Economics, 10, 74–91.
Ichniowski, C., Shaw, K., & Prennushi, G. (1997). The effects of human resource management
practices on productivity: A study of steel finishing lines. The American Economic Review, 87,
291–314.
Ittner, C., & Kogut, B. (1995). How control systems can support organizational flexibility. In: E.
Bowman & B. Kogut (Eds), Redesigning the Firm. New York: Oxford University Press.
Ittner, C., & Larcker, D. F. (1995). Total quality management and the choice of information and reward
systems. Journal of Accounting Research, 33(Suppl.), 1–34.
Ittner, C., & Larcker, D. F. (1997). The performance effects of process management techniques.
Management Science, 43, 522–534.
Ittner, C., Larcker, D. F., Nagar, V., & Rajan, M. (1999). Supplier selection, monitoring practices, and
firm performance. Journal of Accounting and Public Policy, 18, 253–281.
Jensen, M., & Meckling, W. (1976). Theory of the firm: Managerial behavior, agency costs, and
ownership structure. Journal of Financial Economics, 3, 305–360.
Johnson, H., & Kaplan, R. (1987). Relevance lost: The rise and fall of management accounting. MA:
Harvard Business School Press.
Kalagnanam, S. S., & Lindsay, R. M. (1998). The use of organic models of control in JIT firms: Gen-
eralising Woodward’s findings to modern manufacturing practices. Accounting, Organizations
and Society, 24, 1–30.
Kaplan, R. S. (1983). Measuring manufacturing performance: A new challenge for managerial
accounting research. The Accounting Review, 58, 686–705.
Kaplan, R. S. (1990). Measures for manufacturing excellence. MA: Harvard Business School Press.
Latham, G. P., & Lee, T. W. (1986). Goal setting. In: E. A. Locke (Ed.), Generalizing from Laboratory
to Field Settings. MA: Lexington Books.
Lehmann, D. R., Gupta, S., & Steckel, J. H. (1998). Marketing research. MA: Addison-Wesley.
Locke, E., & Latham, G. (1990). Goal setting theory and task performance. New York: Prentice-Hall.
Locke, E. A., Shaw, K. M., Saari, L. M., & Latham, G. P. (1981). Goal setting and task performance:
1969–1980. Psychological Bulletin, 90, 121–152.
MacDuffie, J. P. (1995). Human resource bundles and manufacturing performance: Organizational
logic and flexible production systems in the world auto industry. Industrial and Labor Relations
Review, 48, 197–221.
Milgrom, P., & Roberts, J. (1990). The economics of modern manufacturing: Technology, strategy
and organization. American Economic Review, 80, 511–528.
Milgrom, P., & Roberts, J. (1995). Complementarities and fit: Strategy, structure, and organizational
change in manufacturing. Journal of Accounting and Economics, 19, 179–208.
Milkovich, G. T. (1988). A strategic perspective to compensation management. Research in Personnel
and Human Resources, 9, 263–288.
Powell, T. C. (1995). Total quality management as competitive advantage: A review and empirical
study. Strategic Management Journal, 16, 15–37.
Sarkar, R. G. (1997). Modern manufacturing practices: Information, incentives and implementation.
Harvard Business School Working Paper.
Schonberger, R. J. (1986). World-class manufacturing: The lessons of simplicity applied. New York:
Free Press.
Siegel, D. S., Waldman, D. A., & Youngdahl, W. E. (1997). The adoption of advanced manufacturing
technologies: Human resource management implications. IEEE Transactions on Engineering
Management, 44, 288–298.
Sim, K. L., & Killough, L. N. (1998). The performance effects of complementarities between
manufacturing practices and management accounting systems. Journal of Management
Accounting Research, 10, 325–346.
Snell, S., & Dean, J. (1992). Integrated manufacturing and human resource management: A human
capital perspective. Academy of Management Journal, 35, 467–504.
Womack, J. P., Jones, D. T., & Roos, D. (1990). The machine that changed the world. New York:
Macmillan.
Wruck, K. H., & Jensen, M. C. (1994). Science, specific knowledge and total quality management.
Journal of Accounting and Economics, 18, 247–287.

APPENDIX: QUESTIONNAIRE
∗ Reverse Coding

Production Costs

In this section, we are interested in the extent to which the following performance attributes have changed during the past three years, using the 1–5 scale listed below (1 = Decrease Tremendously, 3 = No Change, 5 = Increase Tremendously).
Manufacturing Cost
Warranty Cost

Just in Time

(Anchored by 1 = Not at All or Very Little, 4 = To Some Extent, and 7 = Completely or A Great Deal)
(1) Are products pulled through the plant by the final assembly schedule/master
production schedule?
(2) How much attention is devoted to minimizing set up time?
(3) How closely/consistently are predetermined preventive maintenance plans adhered to?
(4) How much time is spent in achieving a more orderly engineering change by
improving the stability of the production schedule?
How much has each of the following changed in the past three years? (Anchored
by 1 = Large Decrease, 4 = Same, and 7 = Large Increase)
∗ (5) Number of your suppliers
∗ (6) Frequency of the deliveries
(7) Length of product runs
∗ (8) Amount of buffer stock
∗ (9) Number of total parts in Bill of Material

Total Quality Management

(Anchored by 1 = Very Little or None, 4 = Moderate, and 7 = A Great Deal or Consistent Use)
(1) How much time does the plant management staff devote to quality improve-
ment?
(2) How much time is spent working with suppliers to improve their quality?
∗ (3) How would you describe your current approach to providing quality products?
(Scale of 1–7, anchored by 1 = Built In, 4 = Some of Each, and 7 = Post Production Inspection)

(4) How much effort (both time and cost) is spent in preventive maintenance to
improve quality?
(5) How much effort (both time and cost) is spent in providing quality related
training to the plant’s employees?
(6)a What percentage of the plant’s manufacturing processes are under statistical
quality control? %
(7)a What percentage of the plant’s employees have quality as a major responsibility? %
How would you describe the level of use within your plant of the follow-
ing quality improvement methodologies? (Anchored by 1 = Little or None,
4 = Moderate Use, and 7 = Consistent Use)
(8) Quality Function Deployment
(9) Taguchi Methods
(10) Continuous Process Improvement
a The reported percentage was divided by 14.3 (i.e. 100/7 = 14.29) in order to convert it to a scale of 1–7.
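For readers replicating the scoring, a minimal sketch of the two transformations implied here is given below. The 14.3 divisor comes from the footnote above, while the reverse-coding rule for the starred items is the conventional (scale maximum + 1 − response) assumption rather than anything stated verbatim in the appendix.

```python
# Illustrative scoring helpers (the reverse-coding rule is an assumption).
def percent_to_scale(pct: float) -> float:
    """Convert a 0-100% response to the 1-7 scale using the 100/7 = 14.3 divisor."""
    return pct / 14.3

def reverse_code(response: int, scale_max: int = 7) -> int:
    """Reverse-code a starred item, assuming the usual (max + 1 - response) rule."""
    return scale_max + 1 - response

print(round(percent_to_scale(50), 2))  # 3.5
print(reverse_code(2))                 # 6
```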

Performance Feedback

In this section, we are interested in the availability and frequency of performance feedback provided to the shop floor personnel. Please indicate the frequency of feedback by circling the appropriate number from 1 to 5.
1 = Never 2 = Occasionally 3 = Monthly 4 = Weekly 5 = Daily

Customer Perception
 Customer perceived quality
 Customer complaints

Delivery
 On-time delivery

Quality
 Cost of scrap
 Rework
 Defect
 Warranty cost
 Sales return

Cycle Time
 Product development time
 Manufacturing lead time
 Work station setup time
Goals

Does your firm set specific numeric targets for the following performance
measures? (Anchoring on “Yes” or “No”)

Customer Perception
 Customer perceived quality
 Customer complaints

Delivery
 On-time delivery

Quality
 Cost of scrap
 Rework
 Defect
 Warranty cost
 Sales return

Cycle Time
 Product development time
 Manufacturing lead time
 Work station setup time

The Reward System

(1) How are plant workers currently being compensated? (please circle only one).
(a) Strictly individual fixed pay only
(b) Individual fixed pay + non-monetary reward
(c) Individual fixed pay + individual-based monetary incentive
(d) Individual fixed pay + group-based monetary incentive
COMPENSATION STRATEGY
AND ORGANIZATIONAL
PERFORMANCE: EVIDENCE
FROM THE BANKING INDUSTRY
IN AN EMERGING ECONOMY

C. Janie Chang, Chin S. Ou and Anne Wu

ABSTRACT
To survive in the turbulent, global business environment, companies must
apply strategies to increase their competitiveness. Expectancy theory
indicates that salary rewards can motivate employees to achieve company
objectives (Vroom, 1964). First, we develop an analytical model to predict
that companies using a high-reward strategy could outperform those using
a low-reward strategy. Then, we obtain archival data from banking firms
in Taiwan to test the proposed model empirically. We control the effects of
operating scale (firm size) and assets utilization efficiency (assets utilization
ratio). Empirical results show that salary levels and assets utilization
efficiency significantly affect banks’ profitability.

INTRODUCTION
The banking industry has played a critical role in many countries’ financial
operations. Since the early 1990s, many large banks in the world have been

involved in mergers and acquisitions to provide various services/products to their clients so that they can be competitive in the global market. For example, Sanwa
Bank, Tokai Bank, and Asahi Bank, the three largest banks in Japan, have merged
and become the third largest bank in the world; and Deutsche Bank acquired
Dresdner Bank AG, becoming the largest bank in Germany (Cheng, 2000).
Most significantly, in 1999, President Clinton signed the Gramm-Leach-Bliley Act, which allows universal banking operations in the United States. Until the Act became effective, U.S. banks had to establish separate entities, so-called holding companies, to conduct their non-banking business activities such as insurance and securities investments. Since the Act’s passage, many emerging economies have passed similar laws to enhance the competitiveness of their banks. All these changes have reshaped the landscape of the global banking industry. To
survive and prosper in the turbulent and competitive business environment, players
in commercial banking must develop strategies to attract highly qualified individ-
uals to join the industry. Expectancy theory suggests that employees are motivated
by the firm’s reward structure (Vroom, 1964). Kaplan and Atkinson (1998) state
that “pay-for-performance is an artifact that you want to motivate people to
pursue organization objectives.” Horngren et al. (2000) also indicate that reward
systems are critical in any performance evaluation model. Many prior studies have
examined the relationship between compensation strategy and organizational
performance at executive levels (e.g. Hadlock & Lumer, 1997; Jensen & Murphy,
1990; Lambert et al., 1993; Warner et al., 1988). However, empirical evidence
supporting this theory is sparse when it is applied to the general employee level.
According to Kaplan and Norton (1996), one important approach towards trans-
lating strategies into actions is to assure that the employee salary/reward system
is closely linked to organizational performance. In this study, we explore whether
salary rewards can be an effective motivator for employees to act in ways that
promote their firms’ objectives.
Rapid economic development in the Asian-Pacific region and the fact that many
countries in that region have joined the World Trade Organization (WTO) mean
that financial resources can be moved with relative freedom from one country to
another. That is, emerging markets in this region have drastically increased their
interactions with the advanced economies. Since many Asian-Pacific countries
are at comparable stages of economic development and all have strong interests
in developing their financial markets, examining one of the emerging markets in
the Asian-Pacific region can provide insights into the other emerging markets in
the region. The purpose of this study is to investigate empirically whether reward
strategies may affect the organizational performance of the banking industry in an
Asian-Pacific emerging market, Taiwan.
Based on prior literature, we first develop an analytical model to prove that companies using a high-reward strategy could outperform those using a low-reward
strategy. Then, we obtain archival data of 232 observations from banking firms
in Taiwan to test the proposed model empirically. Our results fully support the
conjecture we develop in the analytical model. The remainder of this paper is
organized as follows. In the next section, we review the literature and develop the
analytical model. Then, after describing our samples and variables, we report our
empirical results. Finally, we discuss the findings, conclude the study and identify
future research directions.

BACKGROUND AND ANALYTICAL MODEL DEVELOPMENT

Background

In general, there are two types of rewards for employees: intrinsic and extrinsic
(Deci & Ryan, 1985; Kohn, 1992; Pittman et al., 1982). Intrinsic rewards
relate to the environment within which the employee operates. Employees often
have high job satisfaction in a collegial organizational culture with positive
management styles. Extrinsic motivators include salaries, bonuses and financial
incentives, such as stock options. A number of recent studies have examined
whether different compensation schemes for top executives relate to corporate
profitability and other measures of organizational performance. The results are
quite consistent with Deci and Ryan’s prediction: there are only slight or even
negative correlations between compensation and performance (e.g. Hadlock &
Lumer, 1997; Jensen & Murphy, 1990; Lanen & Larcker, 1992). Conversely,
other studies have shown that rewards should be based on individual performance
and that the effects of such rewards can reflect on the company’s performance.
In a study on making decisions about pay increases, Fossum and Fitch (1985)
used three groups of subjects: college students, line managers, and compensation
managers. All three groups agreed that the largest consideration should be given to
individual performance – even over other relevant criteria, such as cost of living,
difficulties in replacing someone who leaves, seniority, and budget constraints.
In addition, in management accounting contexts, Kaplan and Atkinson (1998)
argue that it is the responsibility of accounting professionals to evaluate whether
employees’ rewards are appropriately associated with firm performance.
Recently, Fey et al. (2000) conducted a survey using both managers and non-
managers from 395 foreign firms operating in Russia. Their results show a direct
positive relationship between merit pay for both managers and non-managers and
the firm’s performance. However, they used only non-financial measures, such
as job security, internal promotion, employee training, and career planning, to
evaluate firm performance. Schuster (1985) conducted a survey with 66 randomly
sampled Boston-area high-tech firms; that survey’s results reveal a greater reliance
on special incentives (e.g. bonuses, stock options, and profit-sharing plans) in
financially successful high-tech companies than in unsuccessful ones.
Many prior studies have reported that reward/incentive systems are positively
related to firm performance (e.g. Arthur, 1992; Fey et al., 2000; Gerhart &
Milkovich, 1990; Pfeffer, 1994; Schuster, 1985). However, these studies either used non-financial measures of firm performance or relied on survey questionnaires, and did not investigate actual firm profitability or financial performance. Nor did they control for essential factors such as operating scale (e.g. firm size) and operating efficiency (e.g. asset utilization).
Martocchio (1998) states that two aspects need to be examined to determine
whether a firm’s compensation strategies are effective: in the short term, a
compensation strategy is effective if it motivates employees to behave the way
the firm expects them to; in the long term, the strategy should be able to boost the
firm’s financial performance. Hence, we develop an analytical model to examine
the impact of reward/incentive systems on a firm’s long-term performance.

Proposed Analytical Model

Similar to Ou and Lee (2001), we propose an analytical model to depict the asso-
ciation between a firm’s reward/salary strategy and its financial performance. We
assume that there are two types (t) of workers in the labor market: type h with high skills/ability and type l with low skills/ability (i.e. t = l, h). The productivity of type t workers is denoted x_t, with x_h > x_l > 0. The probability of finding workers of types l and h is f and 1 − f, respectively. The above-mentioned information is
available on the market. We further use the following notations:

a_t: the effort put in by type t workers
Y_t: the value of the service provided by type t workers
p_t: the pay of type t workers when the value of the service they provide is Y_t
π_t: the firm’s profit from the service provided by type t workers
a_t²/2: type t workers’ personal cost incurred with effort a_t
The value of the service provided by a type t worker (Y_t) is determined by his/her productivity (x_t) and effort (a_t); their relationship can be described by Y_t = x_t + a_t. While x_t and a_t are a worker’s private information, the firm can observe Y_t after hiring the worker and will pay the worker p_t based on the observed Y_t.
The term a_t²/2 means that the cost to a worker increases with effort at an increasing rate. Accordingly, the firm’s corresponding profit π_t is (Y_t − p_t), or (x_t + a_t − p_t). The objective function is to maximize the overall profit of the firm, which can be formulated as follows:

Max_{a_h, a_l, p_h, p_l}  f(x_l + a_l − p_l) + (1 − f)(x_h + a_h − p_h)   (1)

s.t.

p_l − a_l²/2 ≥ 0   (2)

p_h − a_h²/2 ≥ p_l − [a_l − (x_h − x_l)]²/2   (3)
Equation (2) states the constraint that type l workers will take any job when the wage of the job is greater than or equal to the associated personal cost (a_l²/2). Equation (3) indicates that type h workers will take any job that pays them properly; that is, the personal benefit earned by a highly skilled worker taking p_h is greater than or equal to the benefit from taking p_l. Note that when a highly skilled worker earns p_h, he/she must put in effort a_h with a personal cost of a_h²/2. We know that Y_l = x_l + a_l = x_h + a_l − (x_h − x_l). Therefore, when a highly skilled worker takes a low-paying job to produce Y_l, the effort required is only a_l − (x_h − x_l), so the associated personal cost is [a_l − (x_h − x_l)]²/2. Hence, the personal benefit a highly skilled worker can obtain by taking the low-wage job is p_l − [a_l − (x_h − x_l)]²/2. Using this model, we would like to prove that firms using high rewards to attract highly skilled workers can outperform those using low rewards to attract less-skilled workers.
We use the Lagrange multiplier method to solve the objective function (Eq. (1)). Let L represent the Lagrangian, and let λ and κ represent the Lagrange multipliers. The Lagrangian can be written as follows:

L = f(x_l + a_l − p_l) + (1 − f)(x_h + a_h − p_h) + κ(p_l − a_l²/2) + λ(p_h − a_h²/2 − p_l + [a_l − (x_h − x_l)]²/2)   (4)

Taking the derivatives with respect to a_l, a_h, p_l, and p_h, respectively, we get the following four equations:

f − κa_l + λ[a_l − (x_h − x_l)] = 0   (4a)

1 − f − λa_h = 0   (4b)
−f + κ − λ = 0   (4c)

−(1 − f) + λ = 0   (4d)

Besides, from Eq. (4), we know that:

κ(p_l − a_l²/2) = 0   (4e)

λ(p_h − a_h²/2 − p_l + [a_l − (x_h − x_l)]²/2) = 0   (4f)
Solving Eqs (4a)–(4f), we get κ = 1 and λ = 1 − f, and the values of p_l, p_h, a_l, and a_h listed as Eqs (5)–(8):

p_l = a_l²/2   (5)

p_h = 1/2 + a_l²/2 − (a_l − x_h + x_l)²/2   (6)

a_l = 1 − [(1 − f)/f](x_h − x_l)   (7)

a_h = 1   (8)
From Eqs (5) and (7), we know that the firm’s profit generated by a less-skilled worker, π_l, is

π_l = x_l + a_l − p_l = x_l + 1 − [(1 − f)/f](x_h − x_l) − a_l²/2   (9)

From Eqs (6) and (8), the firm’s profit generated by a highly skilled worker, π_h, is

π_h = x_h + a_h − p_h = x_h + 1 − [1/2 + a_l²/2 − (a_l − x_h + x_l)²/2]   (10)
Comparing Eqs (9) and (10), we get:

π_h − π_l = (x_h − x_l)²/(2f²) > 0   (11)

According to Eq. (11), we prove that firms using high rewards to attract highly skilled workers can generate higher profits than those using low rewards to attract less-skilled workers. This becomes our empirical hypothesis. We have
empirically tested this hypothesis using banking firms in Taiwan. The following
section describes the sample and variables used in the empirical study.
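As a quick numerical check of the closed-form solution, the sketch below plugs illustrative parameter values (our own assumptions, not figures from the study) into Eqs (5)–(10) and confirms the profit gap in Eq. (11); Python is used purely for illustration.

```python
# Numerical check of Eqs (5)-(11) with illustrative (assumed) parameter values.
f = 0.5               # probability of drawing a low-skill worker
x_l, x_h = 1.0, 1.2   # productivities, with x_h > x_l > 0

a_l = 1 - ((1 - f) / f) * (x_h - x_l)                    # Eq. (7)
a_h = 1.0                                                # Eq. (8)
p_l = a_l ** 2 / 2                                       # Eq. (5)
p_h = 0.5 + a_l ** 2 / 2 - (a_l - (x_h - x_l)) ** 2 / 2  # Eq. (6)

profit_l = x_l + a_l - p_l                               # Eq. (9)
profit_h = x_h + a_h - p_h                               # Eq. (10)

print(profit_h - profit_l)              # 0.08
print((x_h - x_l) ** 2 / (2 * f ** 2))  # 0.08, matching Eq. (11)
```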

SAMPLE AND VARIABLES

The Sample

The sample consists of 232 observations of banking firms listed on the Tai-
wan Stock Exchange or in the Taiwan Over-the-Counter market from 1991
to 1999. Table 1 provides the descriptive statistics of the sample firms. On
average, each firm has 1,645 employees. The means (standard deviations) of
net income and salary expenses are NT$1,543,126,000 (NT$1,979,132,000) and
NT$1,660,523,000 (NT$1,923,147,000), respectively.

Independent and Control Variables

The purpose of this study is to examine the relationship between salary rewards
and firm performance, especially profitability. The independent variable is a firm’s
salary level (SL), which is the mean salary expense per employee (SE/NE). We
include a couple of control variables in our empirical model. The well-known Du
Pont model decomposes a firm’s operating performance into two components:
profitability and efficiency. The efficiency component is the asset utilization
ratio (AUR), which measures a firm’s ability to generate sales from investment
in assets (Penman, 2001; Stickney & Brown, 1999). Bernstein and Wild (1998,
p. 30) state that “asset utilization ratios, relating sales to different asset categories,
are important determinants of return on investment.” One of the AURs suggested
by Bernstein and Wild (1998, p. 31) is net sales to total assets ratio (NS/TA).
Since our focus in the analytical model is firm profitability, we use this important
variable to control firm efficiency when analyzing our data. Table 2 defines all
the variables used in our empirical tests.
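For reference, the textbook Du Pont identity behind this decomposition (a standard relationship, not a formula reproduced from the study) is:

Return on assets = NI/TA = (NI/NS) × (NS/TA),

i.e. profitability (profit margin) times efficiency (the asset utilization ratio used here as a control).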
In addition, we include operating scale (Log(Total Assets)) as a control
variable in our model. Issues related to operating scale have continuously
generated much interest in the academic community (e.g. Altunbas, Evans &
Molyneux, 2001; Altunbas, Gardener, Molyneux & Moore, 2001; De Pinho,
2001). Theories of economies of scale suggest that the best efficiency and thus the
highest performance can be obtained when a firm operates at the optimal scale.
However, the operating scale (i.e. firm size) is difficult for individual employees
to influence, so we have decided to use it as a control variable for firm differences.
Table 1. Descriptive Statistics of the Sample.
Variables Mean Std. Dev. Maximum 3rd Quantile Median 1st Quantile Minimum

NS $16,995,306 $17,624,233 $80,714,132 $19,481,054 $10,560,396 $5,332,513 $568,302


OI $1,822,035 $2,380,777 $11,911,059 $2,414,852 $994,926 $506,836 $–4,950,170
PTI $1,825,293 $2,294,861 $11,911,059 $2,782,093 $1,042,781 $503,978 $–4,938,785
NI $1,543,126 $1,979,132 $11,432,059 $2,233,528 $868,631 $425,555 $–3,515,260
SE $1,660,523 $1,923,147 $8,432,252 $1,825,000 $916,080 $482,589 $93,308
TA $234,390,000 $252,430,000 $1,178,700,000 $249,680,000 $144,290,000 $77,872,764 $3,112,100
NE 1.645 1.540 6.662 2.003 1.167 0.605 0.237

Note: N = 232, Unit: 1000 (with NT$). NS = Net Sales, OI = Operating Income, PTI = Pretax Income, NI = Net Income, SE = Salary Expense,
TA = Total Assets, NE = Number of Employees.

Table 2. Variable Definitions.


Name Definition

Dependent variables
PM1 TL/TD (loan-to-deposit ratio)
PM2 OI/NE (operating income per employee)
PM3 PTI/NE (pretax income per employee)
PM4 NI/NE (net income per employee)
Independent variable
SL SE/NE (salaries expense per employee)
Control variables
OS Log (TA) (log for total assets)
AUR NS/TA (net sales to total assets ratio)
AUR Assets utilization ratio
LDR Loan-to-deposit ratio
NE Number of employees
NI Net income
NS Net sales
OI Operating income
OS Operating scale
PM1 Performance measure of a bank’s potential profitability
PM2 First accounting-based profitability measure (per employee)
PM3 Second accounting-based profitability measure (per employee)
PM4 Third accounting-based profitability measure (per employee)
PTI Pretax income
SE Salary expenses
SL Salary level
TL Total loans
TD Total deposits
TA Total assets

Dependent Variables

To evaluate bank performance concerning profitability, we use the loan-to-deposit ratio as one of the dependent variables. The traditional banking business generates
interest income from loans. A high loan-to-deposit ratio indicates high operating
performance in the banking industry. Banks often report increased earnings when
they have increased the loan volume (ABA Banking Journal, 1992). In practice,
the loan-to-deposit ratio has been used to monitor a bank’s potential profitability.
For example, the American Bankers Association’s Community Bankers Council
uses the loan-to-deposit ratio as one of several indicators to evaluate a bank’s loan
demand and thus its potential earnings (Steinborn, 1994). Fin and Frederick (1992)
146 C. JANIE CHANG ET AL.

specify that “Banks that want a strategic earning advantage must strive for a strong
loan-to-deposit ratio. They must cultivate loan business, maintain it, and attract
new business. Increasing the loan-to-deposit ratio by one percentage point will
likely add four or five basis points to net interest margins.” Hence, we use this
important indicator as one of our performance measures.
In addition, we use three accounting-based profitability measures as our
dependent variables: operating income per employee (OI/NE), pretax income
per employee (PTI/NE), and net income per employee (NI/NE). These measures
are commonly used by financial analysts to evaluate a firm’s performance.
Although prior research has suggested including market-based measures to
evaluate firm performance, the focus was to examine the relationship between
firm performance and executive compensations (Gorenstein, 1995; Jensen &
Murphy, 1990; McCarthy, 1995; Stock, 1994). Since the purpose of this study is
to explore a firm’s salary strategy on general employees, not on executives, we
focus on accounting-based performance measures.

EMPIRICAL RESULTS
Descriptive Statistics

Table 3 presents the descriptive statistics of all the variables, including the means,
standard deviations, maximum and minimum data values, and medians of the
sample’s loan-to-deposit ratio, operating income per employee, pre-tax income per
employee, net income per employee, salary levels, assets utilization ratio, and op-
erating scales. To assess the collinearity among independent and control variables
in our regression models, we obtained the correlation matrix using Pearson and
Spearman correlation coefficients. According to the results, none of the correla-
tions is high (below 0.9). Thus, we do not have much concern about collinearity.

Regression Models and Results

Recall that we use four performance indicators to measure bank performance in terms of profitability: the loan-to-deposit ratio (total loans/total deposits), operating income per employee, pre-tax income per employee, and net income per employee. They are denoted as PM_i, i = 1–4. That is, we use four regressions to examine our data. The regression model is as follows:

PM_i = β_0 + β_1 SL + β_2 AUR + β_3 OS + ε_i   (i = 1–4)   (12)

where SL is the salary level, AUR is the asset utilization ratio, and OS is the operating scale.
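As a minimal illustration of how Eq. (12) could be estimated, the sketch below assumes a hypothetical pandas DataFrame named banks containing the raw columns defined in Table 2 (TL, TD, OI, PTI, NI, SE, NE, NS, TA); the study’s archival data are not reproduced here, and statsmodels is simply one convenient OLS implementation.

```python
# Sketch only: `banks` is a hypothetical DataFrame with the Table 2 raw columns.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def build_variables(banks: pd.DataFrame) -> pd.DataFrame:
    df = pd.DataFrame()
    df["PM1"] = banks["TL"] / banks["TD"]   # loan-to-deposit ratio
    df["PM2"] = banks["OI"] / banks["NE"]   # operating income per employee
    df["PM3"] = banks["PTI"] / banks["NE"]  # pretax income per employee
    df["PM4"] = banks["NI"] / banks["NE"]   # net income per employee
    df["SL"] = banks["SE"] / banks["NE"]    # salary level (salary expense per employee)
    df["AUR"] = banks["NS"] / banks["TA"]   # asset utilization ratio
    df["OS"] = np.log(banks["TA"])          # operating scale
    return df

def fit_models(df: pd.DataFrame) -> dict:
    # One OLS regression per performance measure, mirroring Table 4
    return {pm: smf.ols(f"{pm} ~ SL + AUR + OS", data=df).fit()
            for pm in ["PM1", "PM2", "PM3", "PM4"]}
```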
Table 3. Descriptive Statistics of Variables.
Variables Mean Std. Dev. Maximum 3rd Quantile Median 1st Quantile Minimum

PM1 (TL/TD) 89 19 213 95 87 79 7


PM2 (OI/NE) 1,037 1,565 19,526 1,332 1,034 755 −4,705
PM3 (PTI/NE) 1,065 1,583 19,526 1,342 1,023 765 −4,611
PM4 (NI/NE) 900 1,489 18,741 1,143 853 642 −4,611
SL (SE/NE) 889 260 1,760 1,027 822 740 223
AUR (NS/TA) 0.0782 0.0437 0.4480 0.0781 0.0712 0.0676 0.0436
OS (Log(TA)) 18.7619 1.0819 20.8876 19.3356 18.7873 18.1705 14.9508

Note: N = 232. Unit: $1000 (with NT$).

Table 4. OLS Regression Results.

Independent and          Dependent Variables
Control Variables        PM1 (TL/TD)          PM2 (OI/NE)       PM3 (PTI/NE)      PM4 (NI/NE)

Intercept                28 (1.00)            −1959 (−0.94)     −1665 (−0.79)     −1859 (−0.93)
SL (SE/NE)               0.000394*** (5.64)   1.92*** (4.01)    2.00*** (4.12)    1.74*** (3.78)
AUR (NS/TA)              0.03 (0.02)          10930*** (4.56)   11162*** (4.61)   9522*** (4.14)
OS (Log(TA))             0.03 (1.56)          22.91 (0.19)      4.25 (0.04)       24.66 (0.21)
Adj-R2                   0.25                 0.13              0.13              0.11
F-Value                  25.88                12.33             12.61             10.65
p-Value                  0.0001               0.0001            0.0001            0.0001
N                        221a                 231               231               231

Note: Model: PM_i = β_0 + β_1 SL + β_2 AUR + β_3 OS + ε_i (i = 1–4). t-values in parentheses.
a Due to missing values, there are only 221 observations in the PM1 model.
*** Significant at 0.01.
According to the results in Table 4, a bank’s employee salary level is significantly related to all the profitability measures at the 1% level. Our empirical
results fully support the hypothesis we generated in the analytical model that
compensation strategy is a significant managerial variable in determining a firm’s
financial operating performance in the banking industry. In addition, the results
reveal that the asset utilization efficiency is also an important factor explaining
the variances in accounting-based profitability measures.

CONCLUSIONS

The global business environment has been extremely turbulent and competitive.
Companies must apply effective strategies to increase their competitiveness to
survive and prosper in such an environment. Expectancy theory indicates that salary
rewards can motivate employees to achieve company objectives. Accordingly, we
develop an analytical model to prove that companies using a high-reward strategy
could outperform those using a low-reward strategy. Then, we obtain archival data
from banking firms in Taiwan to empirically test the proposed model. Using four
different performance measures on profitability, we find that salary level and assets
utilization ratio significantly affect Taiwanese bank performance.
In this study, we have examined the effect of the salary strategies used by banks in Taiwan on their profitability performance. To generalize our findings, future
studies can look into this issue using firms in different industries and from
different countries. Also, does the relationship between employee compensation
strategies and firm performance depend on various firm characteristics, such as
service vs. manufacturing firms or labor-intensive vs. technology-intensive firms?


It is also critical for firms to operate efficiently to be competitive. Hence, we
should explore whether strategies of compensation for general employees affect
firms’ operating efficiency as well as their profitability.
Furthermore, there are different types of compensation, such as fixed salaries,
variable commissions/bonuses, and stock options. Future studies can look into
the impact of different types of compensation on firm performance. For example,
employees’ fixed salaries may relate highly to a firm’s short-term performance,
and strategies on employees’ bonus and stock options may affect a firm’s
long-term performance. In addition, it will be beneficial to most firms if solutions
can be derived to find an optimal combination of fixed and variable compensation
for different types of employees (e.g. salespeople vs. engineers).
In this study, we have used regression models to investigate the effects of
the independent variables on several dependent variables. Most of the previous
studies have employed a similar approach to examine the relationships between
compensation variables (e.g. cash compensation or total compensation) and
a specific performance indicator (e.g. an accounting-based or market-based
performance measure). However, different performance measures should be
incorporated simultaneously into the model, rather than separately tested, to fully
understand their relationship to compensation strategies. Future studies can fill
the gap by examining a simultaneous relationship between a set of compensation
variables and a set of performance measures.

REFERENCES
Altunbas, Y., Evans, L., & Molyneux, P. (2001). Bank ownership and efficiency. Journal of Money,
Credit, and Banking, 33(4), 926–954.
Altunbas, Y., Gardener, E. P. M., Molyneux, P., & Moore, B. (2001). Efficiency in European banking.
European Economic Review, 45(10), 1931–1955.
Arthur, J. B. (1992). The link between business strategy and industrial relations systems in American
steel minimills. Industrial and Labor Relations Review, 45(3), 488–506.
Bernstein, L. A., & Wild, J. J. (1998). Financial statement analysis. Boston: McGraw-Hill.
Cheng, Y. R. (2000). The environmental change of money and banking system on the new development
of Taiwan banking. Co Today – Taiwan Cooperative Bank, 26(5), 44–54.
De Pinho, P. S. (2001). Using accounting data to measure efficiency in banking: An application to
Portugal. Applied Financial Economics, 11(5), 527–538.
Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior.
New York: Plenum Press.
Fey, C. F., Bjorkman, I., & Pavlovskaya, A. (2000). The effect of human resource management prac-
tices on firm performance in Russia. International Journal of Human Resources Management,
11(1), 1–18.
Fin, W. T., & Frederick, J. B. (1992). Managing the margin. ABA Banking Journal, 84(4), 50–53.
Fossum, J. A., & Fitch, M. K. (1985). The effects of individual and contextual attributes on the sizes
of recommended salary increases. Personnel Psychology, 38(Autumn), 587–602.
Gerhart, B., & Milkovich, G. T. (1990). Organizational differences in managerial compensation and
financial performance. Academy of Management Journal, 33(4), 663–691.
Gorenstein, J. (1995). How the executive bonus system works. The Philadelphia Inquirer (July 9).
Hadlock, C. J., & Lumer, G. B. (1997). Compensation, turnover, and top management incentives:
Historical evidence. Journal of Business, 70(2), 153–187.
Horngren, C. T., Foster, G., & Datar, S. M. (2000). Cost accounting: A managerial emphasis (10th
ed.). Englewood Cliffs, NJ: Prentice-Hall.
Jensen, M. C., & Murphy, K. J. (1990). Performance pay and top-management incentives. Journal of
Political Economy, 98(2), 225–264.
Kaplan, R. S., & Atkinson, A. A. (1998). Advanced management accounting. Englewood Cliffs, NJ:
Prentice-Hall.
Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard: Translating strategy into action.
Harvard Business School Press, 73–84.
Kohn, A. (1992). No contest: The case against competition. Boston: Houghton Mifflin.
Lambert, R. A., Larcker, D. F., & Weigelt, K. (1993). The structure of organizational incentives.
Administrative Science Quarterly, 38, 438–461.
Lanen, W. N., & Larcker, D. F. (1992). Executive compensation contract adoption in the electric utility
industry. Journal of Accounting Research, 30(1), 70–93.
Martocchio, J. J. (1998). Strategic compensation – A human resource management approach.
Prentice-Hall.
McCarthy, M. J. (1995). Top 2 UAL officers got $17 million in 94 stock options. Wall Street Journal
(April 5).
Ou, C. S., & Lee, C. L. (2001). A study of the association between compensation strategy and labor
performance. Working Paper, National ChengChi University.
Penman, S. H. (2001). Financial statement analysis and security valuation. New York: McGraw-Hill/Irwin.
Pfeffer, J. (1994). Competitive advantage through people. Boston: Harvard Business School Press.
Pittman, T. S., Emery, J., & Boggiano, A. K. (1982). Intrinsic and extrinsic motivational orientations:
Reward-induced changes in preference for complexity. Journal of Personality and Social
Psychology (March).
Schuster, J. R. (1985). Compensation plan design: The power behind the best high-tech companies.
Management Review, 74(May), 21–25.
Steinborn, D. (1994). Earnings dip in first half. ABA Banking Journal, 86(9), 26–27.
Stickney, C. P., & Brown, P. R. (1999). Financial reporting and statement analysis. Fort Worth: Dryden.
Stock, C. (1994). Bottom lines: Did CEO earn his pay. The Philadelphia Inquirer (November 20).
Vroom, V. H. (1964). Work and motivation. New York: Wiley.
Warner, J., Watts, R. L., & Wruck, K. H. (1988). Stock prices and top management change. Journal
of Financial Economics, 20, 461–492.
ACCOUNTING FOR COST
INTERACTIONS IN DESIGNING
PRODUCTS

Mohamed E. Bayou and Alan Reinstein

ABSTRACT
Since quality cannot be manufactured or tested into a product but must
be designed in, effective product design is a prerequisite for effective
manufacturing. However, the concept of effective product design involves
a number of complexities. First, product design often overlaps with such
design types as engineering design, industrial design and assembly design.
Second, while costs are key variables in product design, costing issues often
arise that add more complexities to this concept.
The management accounting literature provides activity-based costing
(ABC) and target costing techniques to assist product design teams. However,
when applied to product design these techniques are often flawed. First, the
product “user” and “consumer” are not identical as often assumed in target
costing projects, and instead of activities driving up the costs, managers may
use budgeted costs to create activities to augment their managerial power through
bigger budgets and to protect their subordinates from being laid off. Second,
each of the two techniques has a limited costing focus, activity-based costing
(ABC) focusing on indirect costs and target costing on unit-level costs. Third,
neither technique accounts for resource interactions and cost associations.

This paper applies the new method of associative costing (Bayou & Reinstein, 2000) that does not contain these limitations. To simplify the in-
tricate procedures of this method, the paper outlines and illustrates nine steps
and applies them to a hypothetical scenario, a design of a laptop computer
intended for the college-student market. This method uses the well-known
statistical techniques of clustering, Full Factorial design and analysis-of-
variance. It concludes that in product design programs, the design team may
need to make tradeoff decisions on a continuum beginning with the design-
to-cost point and ending at the cost-to-design extreme, as when the best
perceived design and the acceptable cost level of this design are incongruent.

INTRODUCTION
Since quality cannot be manufactured or tested into a product but must be
designed in, effective product design is a prerequisite for effective manufacturing
(Cooper & Chew, 1996, p. 88; National Research Council, 1991, p. 7). But
designing and developing new products are a complex and risky process that must
be tackled systematically (Roozenburg & Eekels, 1995, p. xi). Ignoring this issue
can adversely affect the nation’s competitiveness (National Research Council,
1991, p. 1). In this process, cost is a primary driver (National Research Council,
1991, p. 15; Ruffa & Perozziello, 2000, p. 1). Over 70% of a product’s life-cycle
cost is determined during the design stage (Ansari et al., 1997, p. 13; National
Research Council, 1991, p. 1). Michaels and Wood (1989, p. 1) elevate cost to
“the same level of concern as performance and schedule, from the moment the
new product or service is conceived through its useful lifetime.”
Yet costing issues often add to the complexity of product design. For example, in
the defense industry, Ruffa and Perozziello (2000, p. 161) report that aircraft manu-
facturers recently stressed the importance of adopting improved design approaches
as a means to control product costs. However, cost advantages are often hard to
discover, as they (p. 161) state: “Only, when we attempted to quantify the specific
cost savings to which these [improved design approaches contributed], we were
often disappointed. While it intuitively seems that broader benefits should exist, we
found that they are not consistently visible.” How does the managerial accounting
literature help in reducing these product-design and costing complexities?
In managerial accounting, activity-based costing (ABC) and target costing are
often touted as valuable methods of accounting for product design. They help
management to develop cost strategies for product design programs to create
new products or improve existing ones. While value engineering, value analysis, and process analysis techniques identify, reduce, or eliminate non-value-added activities during the product’s lifecycle, ABC and target costing have serious shortcomings in theory and application when applied to product design programs. This paper
explains the limitations of activity-based costing (ABC) and target costing in
product design, and applies a more recent technique, i.e. associative costing
(Bayou & Reinstein, 2000), to overcome such limitations. The first section of the
paper explains the nature of product design since this concept is vaguely described
in the engineering and accounting literatures. The second section discusses
the shortcomings of ABC and target costing when applied to product design
programs. The associative costing model is then applied to a product design
scenario in the third section. Finally, a summary and conclusions are presented.

NATURE OF PRODUCT DESIGN


Product design as discussed in the engineering literature needs clarification.
Roozenburg and Eekels (1995, pp. 3, 53) define product design as the process
of devising and laying down the needed plans for manufacturing a product. This
definition shows that product design overlaps with several other types of design.
Roozenburg and Eekels (1995, p. xi) consider product design a narrower concept
than engineering design, which also includes designing chemical and physical
processes. Conversely, product design is a wider concept than industrial design,
which focuses on the usage and external appearances of products (Roozenburg &
Eekels, 1995, p. xi). In Redford and Chal’s (1994, p. 77) framework, design for
manufacturing (DFM) integrates product design and process planning into one
common activity. A central element in DFM is the design for assembly (DFA),
which addresses product structure simplification, since “the total number of parts
in a product is a key indicator of product assembly quality.” Redford and Chal
(p. 29) explain that product design affects assembly methods and processes.
Since all parts have at least three operations associated with them – grasping,
moving and inserting – reducing the number of parts must reduce the number of
operations and must result in improved assembly.
Understanding the product design concept requires clarifying the word
“design,” which Borgmann (1995, p. 13) narrowly views as “the excellence of
material objects.” Buchanon and Margolin (1995, p. ix) broaden the subject of
design to include social, cultural and philosophic investigations. They argue
that no one functional group can authoritatively speak for design; instead, many
different disciplines can participate in developing design concepts and practices.
In this paper, we adopt Akiyama’s (1991, p. 20) definition, which states that
design is “the form and planning the content of some human-made object in
accordance with goals or purposes of the object to be made.” He (p. 20) explains
that “design” is an activity that: (a) recognizes the goals or purposes of products
or systems; (b) shapes its objects – creates their forms – in accordance with the
goals or purposes of these objects; and (c) evaluates and determines the forms of
its objects and makes their contents universally comprehensible. Both form and
content are important in product and service design.
“Appearance” is a closely related concept to form, which Niebel and Draper
(1974, p. 21) assign a high value when they conclude: “Appearance must be
built into a product, not applied to a product.” Appearance then is an important
element for both product design (Niebel & Draper, 1974) and for industrial design
(Roozenburg & Eekels, 1995; Wood, 1993). These product design issues have im-
portant implications for target costing and ABC techniques, discussed as follows.

LIMITATIONS OF TARGET COSTING AND ABC IN PRODUCT DESIGN

When applied to product design programs, target costing and ABC have serious
fundamental limitations, including the following.

Limiting Assumptions

Target costing focuses on a target product. But the nature of this target product
from the manufacturer’s viewpoint differs from that of its customers. For example,
Morello (1995, p. 69) differentiates between the “user” and the “consumer.” He
(p. 70) argues that “[b]oth user and consumer have an explicit or implicit ‘project’
to use with efficacy and efficiency . . . But the project of the user is a microproject,
defined by many specific occasions, while the project of the consumer is, relatively,
a macroproject, for every possible occasion of use.” He (p. 70) adds: “the only way
to design properly is to have the user in mind; and the role of marketing . . . is to have
in mind the true project of the consumer, which paradoxically, is not to consume
but to be put in the condition to use properly.” Morello’s argument echoes the points
made in 1947 by Lawrence D. Miles, the founder of modern value engineering
(quoted in Akiyama, 1991, p. 9):
Customer-use values and customer-esteem values translate themselves into functions as far as
the designer is concerned. The functions of the product . . . cause it to perform its use and to
provide the esteem values wanted by the customer.
The use values require functions that cause the product to perform, while the esteem values
require functions that cause the product to sell.

Therefore, a target-costing project cannot assume that the characteristics of “user” and the “customer-use value” always coincide with those of the “customer” and “customer-esteem values.”1 Cost reduction focuses on driving this assumption in practice, since when the user and customer are the same, and when the customer-
use value and customer-esteem value are identical, less need for function analysis,
design and manufacturing arises. This assumption is also not inconsistent with
such a view as: “Target costing is not a management method for cost control in a
traditional sense, but it is one which intends to reduce cost” (Monden & Hamada,
1991, p. 18). It also underlies the well-known concept of “design to cost.” The
essence of this concept boils down to “making design converge on cost instead of
allowing cost to converge on design” (Michaels & Wood, 1989, p. 1). Neverthe-
less, this assumption may be responsible for many product problems. As Akiyama
(1991, p. 18) explains, product problems increase when the gap between the man-
ufacturer’s intention and customers’ desires widens, as an increasing number of
companies have experienced.
In turn, ABC rests on a fundamentally limiting assumption: as the name activity-based costing implies, activities are assumed to be the drivers of costs. However, in practice, the driving force may work in the opposite
direction. Some managers, protective of their subordinates and determined to
keep or increase their power through increasing their departmental budgets, may
“create” activities for their employees to help justify their budget demands to top
management. Hence, budgeted costs become the driver to create new activities
or intensify the existing ones. That is, an activity-based costing system may instigate
budgeted cost-based activities as a reaction to the often-expected and sharp ABC
recommendations: reduce (non-value-added) activities and consequently reduce
the manager’s budget. In effect, two systems exist, an accounting (or consultant)
ABC and an actual ABC. Paradoxically, an accounting ABC system installed to
reduce unneeded activities may drive managers to create new (possibly unneeded)
activities to keep or augment their power.

Limited Costing Focus

ABC focuses primarily on indirect (manufacturing overhead) costs. While indirect costs often form a substantial portion of a product or service’s total cost, they
exclude engineered costs that enter directly into the fabric of the product, e.g.
direct raw materials, components, modules, interfaces and direct labor. These unit-
level costs play an essential role in designing the product form (appearance) and
substance.
Target costing inherits a similar limitation, but with a different focus. Value engi-
neering, the primary tool of target costing, focuses primarily on direct (engineered)
costs, and less on indirect costs. For example, value engineers determine the types
156 MOHAMED E. BAYOU AND ALAN REINSTEIN

and combinations of parts of an automobile and may use time and motion studies
to determine the standard direct labor time and cost allowed for assembling a ve-
hicle. However, batch-level costs, e.g. machine setups, and facility-level costs, e.g.
factory cafeteria, factory security, facility cleaning and maintenance costs, are dif-
ficult to incorporate into the design of a unit of a concrete product.2 In short, with ABC focusing on indirect costs and target costing focusing on unit-level (engineered) costs, each is, by itself, an incomplete costing system for product design purposes.

Ignoring Strategic Interaction Effects

ABC and target costing do not account for strategic interactions among resources,
activities and their costs. For example, maintenance and testing activities frequently
interact so much so that a reduction in maintenance activities can lead to more de-
fective output units, which in turn may necessitate increased testing activities. Yet,
for practical reasons, ABC models do not account explicitly for these interactions
among activities. With only four groups of different activities, each at two levels,
high and low, 11 interactions (24 – 4 activity groups – 1) among these activities to
account for would arise, as explained in the following section. When considering
that the median number of activity-area-based cost pools in practice is 22 (Innes
et al., 2000, p. 352), accounting for the activity interaction effects becomes even
more impractical. ABC also is a cost traceability model where costs are traced to
the cost object. This leap of costing from input to output bypasses the manufactur-
ing process where resources interact and costs associate (Bayou & Reinstein, 2000,
p. 77). Similarly, value-engineering programs often do not account formally for
input interactions and cost associations. This weakness of target costing systems
when applied to product design arises, for example, with the type of metal (e.g.
steel vs. aluminum) that enters into the production of an automobile, which must
be associated with such other costs as fuel consumption, environmental problems,
safety and price fluctuations of the metal (de Korvin et al., 2001).
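As a quick check of the interaction count mentioned above, the short enumeration below (with placeholder factor names) confirms that four two-level activity groups yield 2^4 − 4 − 1 = 11 interaction terms.

```python
# Count the interaction terms among 4 two-level factors: 2**4 - 4 - 1 = 11.
from itertools import combinations

factors = ["A", "B", "C", "D"]   # placeholder activity groups
interactions = [combo for size in range(2, len(factors) + 1)
                for combo in combinations(factors, size)]
print(len(interactions))         # 11
```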
A method that does not contain these target-costing and ABC limitations is
shown below.

AN APPLICATION OF THE
ASSOCIATIVE COSTING MODEL
The associative costing model focuses on main factors and interaction effects
among these factors. This model “allows cost interactions to be designed, planned
and controlled in order to help apply the application of process-oriented thinking
and realize its continuous improvement goals” (Bayou & Reinstein, 2000, p. 75).
The ultimate goal of this model when applied to product design is to guide design
engineers and management in determining the optimum product design on the
basis of associating the most important factors, the interactions among these
factors and the costs of their combinations. Following Bhote and Bhote (2000,
p. 93), we label the most important factors Red X, the second-order factors Pink X,
the third-order factors Pale X and the dependent variable, i.e. the output of each
combination, Green Y. Table 1 lists the basic steps of applying this model.
The associative costing model employs well-known statistical methods of
clustering, classification and analysis of variance as illustrated by the following
hypothetical scenario. The object of design is a new model of a laptop computer
targeted for purchase by college students. The product has many elements that

Table 1. Basic Steps of the Associative Costing Model Application.


Step 1: Factor list. Compile an exhaustive list of all factors to be considered in product design of
a new product model or improving an existing one.
Illustration: Components: {A1 , A2 , . . . An . . . B1 , B2 , . . . Bn , . . . Z1 , Z2 , . . . Zn }
Step 2: Clustering: Using a clustering technique, classify the factors of Step 1 into homogeneous
groups of factors.
Illustration: Direct Material: {C1 , C2 , . . . C9 }
Step 3: Choose an appropriate output (Green Y) measure. Quantify this measure using a 1–9
Likert scale.
Illustration: Green Y: Degree of willingness to purchase the product.
Step 4: For each cluster, identify the Red X, Pink X and Pale X factors on the basis of the Green
Y measure using an appropriate methodology.
Illustration:
• Red X factors: C1 , C3 and C7
• Pink X factors: C2 , C4 and C6
• Pale X factors: C5 , C8 and C9
Step 5: Apply the full factorial method if a cluster has four or fewer Red X factors. This step
produces a list of experiments, each with a Green Y value (see Table 4).
Step 6: Using two cost values, low and high, for each factor, determine the cost of each
experiment combination (see Table 4).
Step 7: Summarize the results of Step 5 in terms of Red X, Pink X and Pale X factors, their main
effects, their significant interactions, their Green Y values and the corresponding cost of
each experiment.
Step 8: On the basis of Green Y and cost, choose the optimum combination.
Step 9: Cluster of clusters design: Treating the optimum combination of each cluster as a factor,
repeat Steps 5–8. The result of this step helps ranking the multitude of clusters in terms
of importance on the basis of their Green Ys and costs.
can have low and high values, e.g. the RAM size, computing capability, number
and kind of software packages installed on a unit, quality of material for the frame
and carrying case, and the electronic screen. The following discussion applies the
nine steps listed in Table 1 to the new laptop model.
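Before walking through the steps, here is a minimal sketch of what Steps 5 and 6 of Table 1 produce for a cluster with three Red X factors; the factor labels, the low/high cost figures, and the empty Green Y column are hypothetical placeholders rather than values from the paper.

```python
# Sketch of a two-level full factorial design table (Steps 5-6), with assumed costs.
from itertools import product
import pandas as pd

factors = ["C1", "C3", "C7"]                                       # Red X factors from Step 4
cost_low_high = {"C1": (40, 90), "C3": (25, 60), "C7": (10, 35)}   # hypothetical $ per level

rows = []
for levels in product([0, 1], repeat=len(factors)):                # 2**3 = 8 experiments
    cost = sum(cost_low_high[name][lvl] for name, lvl in zip(factors, levels))
    rows.append({**dict(zip(factors, levels)), "cost": cost})

design = pd.DataFrame(rows)
design["green_y"] = None   # to be filled in with the 1-9 willingness-to-buy rating
print(design)
```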

Step 1: Factor List

This step develops an exhaustive list of factors or components. The list can contain
many components since many types of resources are needed to design, manu-
facture and deliver a product to customers. The product design team has several
methods to compile this list. As a starting point, the method of reverse engineer-
ing of competitors’ products can provide insights to differentiate and improve on
competitors’ models. Another method is the rapid automated prototyping (RAP),
which is a new field that creates three-dimensional objects directly from CAD files,
without human intervention. According to Wood (1993, p. 1), prototypes, which
are integral to the industrial design cycle, have three purposes:

(1) Aesthetic visualization – to see how the product appears, especially a consumer
item that must look appealing when printed or packaged.
(2) Form-fit-and-function testing – to ascertain that the part fits and interfaces well
with other parts.
(3) Casting models – to make a casting model around the part for full-scale pro-
duction of replicas of the part.

The prototypes can enhance the design team’s imagination and enliven their
brainstorming sessions. To illustrate, this step develops a list of components: A1 ,
A2 , . . . An , B1 , B2 , . . . Bn , . . . Z1 , Z2 , . . . Zn (Table 1), as explained in Step 2.

Step 2: Clustering

Clustering means grouping of similar objects (Hartigan, 1975, p. 1). Its princi-
pal functions include naming, displaying, summarizing, predicting and seeking
explanation. Since “clustering” is almost synonymous with classification in that
all objects in the same cluster are given the same name, data are summarized
by referring to properties of clusters rather than properties of individual objects
(Hartigan, 1975, p. 6). The concept of “similarity” among members of a cluster
is crucial in a clustering approach (Kruskal, 1977, p. 17). Good (1977) provides
many dimensions to describe alternative clustering approaches.
But accountants who emphasize the scientific statistical methods to identify and measure the Red X, Pink X, Pale X and their interaction effects for product
design should note two major issues of clustering (Hartigan, 1975, p. 8). First,
design experts may insist that fancy data manipulations are not better than
subjective detailed knowledge. Second, clustering techniques themselves lack
sound probability models and their results are poorly evaluated and unstable
when evaluated. We argue that informed decision making requires combining a clustering approach that may rely greatly on detailed (subjective) knowledge and experience with statistical testing of the main effects and interaction effects among the elements of the chosen clusters, as demonstrated below.
The hierarchical/nonhierarchical facet is one common dimension of clustering
in Good’s (1977, p. 88) list. The value chain, which starts with R&D and ends with
customer services during the product lifecycle, is a hierarchical dimension. Each
major section of the value chain is a cluster of items with similar functions. In turn,
each cluster is divided into sub-clusters; each sub-cluster is segmented further into
sub-clusters; and so on. For example, manufacturing is a major cluster in the value
chain, consisting of direct material, direct labor, variable manufacturing overhead
and fixed manufacturing overhead sub-clusters. In turn, the direct-material
sub-cluster includes raw materials, in-house modules, outsourced modules,
interfaces and other parts sub-clusters. Each such sub-cluster can branch out into
more detailed sub-clusters, depending on the design team’s requested degree of
detail. In sum, using the value chain’s hierarchical structure, Step 2 develops six
clusters (R&D, Design, Manufacturing, Marketing, Administrative and Customer
Service). The Manufacturing cluster is classified into four sub-clusters (Direct Ma-
terials, Direct Labor, Variable Manufacturing Overhead and Fixed Manufacturing
Overhead). In turn, the Direct Materials sub-cluster is broken down into components,
C1, C2, . . ., Cn, that enter into the design of the laptop computer (Table 1).
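The cluster hierarchy just described can be captured in a simple data structure. The following is a minimal sketch in Python (ours, not the authors'); the component labels C1–C9 anticipate the direct-materials components analyzed in the later steps, and only the Direct Materials sub-cluster is expanded:

# Step 2 cluster hierarchy as a nested structure. Only the Direct Materials
# sub-cluster is expanded; the other sub-clusters are left empty because the
# illustration does not analyze them further.
value_chain_clusters = {
    "R&D": [],
    "Design": [],
    "Manufacturing": {
        "Direct Materials": [f"C{i}" for i in range(1, 10)],  # C1 ... C9
        "Direct Labor": [],
        "Variable Manufacturing Overhead": [],
        "Fixed Manufacturing Overhead": [],
    },
    "Marketing": [],
    "Administrative": [],
    "Customer Service": [],
}

# The sub-cluster chosen for the design-of-experiments work in Steps 4 and 5:
print(value_chain_clusters["Manufacturing"]["Direct Materials"])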

Step 3: Choosing and Quantifying the Green Y

Green Y is the output whose selection and measurement depend on the design
team’s views and corporate goals. To illustrate, the design team decides that in
order to be consistent with the company’s goals to maximize sales revenues and
market share, the output (Green Y) should be defined as the potential customer’s
degree of willingness to buy the product. Using a 1–9 Likert scale, the responses
are defined as follows:

1 = Definitely, Item Ci will NOT affect my decision to buy the product.


9 = Definitely, Item Ci will affect my decision to buy the product.

Respondents indicate their perception on the assumption that the price of the
product in question is affordable.3

Step 4: Red X, Pink X, and Pale X Identification

This step identifies critical factors in the chosen cluster. There are several ways
to conduct this step. In each of the following methods, respondents receive a
questionnaire seeking their degree of willingness to buy the product:
(1) Descriptive method: The basis for respondents’ judgments is a description of
several versions of the product, by varying one element at a time. This is the
least expensive method; yet it is also the weakest since respondents do not
physically examine the different versions of the product.
(2) The CAD prototype method: Respondents examine several versions of a three-
dimensional CAD replica (Wood, 1993), on which they express their willing-
ness to buy or not buy. This method is useful when the appearance of the
product or its elements is a key factor in their purchasing decision.
(3) The actual prototype method: Responses are based on an actual version of
the product. This method is the most effective because perception is based
on a real product; yet it is the most expensive since it requires producing
several product versions and experiments, by varying one element at a time.
For example, respondents compare two laptops, one basic with RAM of only
32 MB and one advanced, with 64 MB, then with 128 MB, then with 256 MB
and so on. This is the method we apply in the following illustration since it is
usually the most effective in designing such a relatively expensive (for many
students) product as a laptop computer.
We consider three samples of 30 college students each, from a private school
(PS), a small state school (SS) and a large state school (LS). To test the degree
of importance of each component (factor), Ci, in the component list of Step 2, a
statistical inference for one mean with a normal distribution at the 5% significance
level is applied to test the following hypotheses for Component Ci:

H0: μ ≤ 5
Ha: μ > 5
While a mean response equal to or less than 5 for the Green Y indicates a low
degree of perceived importance, an average response greater than 5 denotes a high
degree of importance. Table 2 applies this statistical procedure for Component C1 .
A similar table can be developed for each of the C1 –C9 components of Step 2.

Table 2. Testing Component C1 Hypothesis.

Statistical Inference: One Mean with Normal Distribution

Input Data (respondent responses by group)

Respondent    PS    SS    LS
1              4     8     5
2              6     6     7
3              9     5     9
4              9     8     6
5              6     6     7
6              8     9     8
7              9     8     7
8              6     9     9
9              9     5     9
10             8     8     7
11             9     6     6
12             8     8     8
13             8     9     6
14             5     4     8
15             6     7     9
16             9     6     9
17             7     8     8
18             8     6     9
19             9     8     7
20             9     8     9
21             8     7     6
22             7     9     9
23             8     6     7
24             9     7     8
25             8     6     8
26             9     4     7
27             7     6     9
28             8     9     6
29             8     7     8
30             9     7     7

Statistical Output: Sample Statistics
                                     PS        SS        LS
Sample mean                          6.5       7.5       6.0
Sample std. dev.                     3.5355    0.7071    1.414
Sample size                          30        30        30
Point estimate                       6.5       7.5       6.0
Standard error of point estimate     0.645     0.129     0.258

Confidence Interval (confidence level critical zone 0.012)
Half-width of confidence interval    0.008     0.002     0.003
Lower limit                          6.492     7.498     5.997
Upper limit                          6.508     7.502     6.003

Hypothesis Test
Test statistic (z)                   2.324     19.365    3.873
z-critical value (one-tailed, 5%)    1.645     1.645     1.645
p-value                              0.010     0.000     5.4E−05

Note: 5% significance level (one-tailed). PS = Private-school student sample; SS = Small state-school student
sample; LS = Large state-school student sample.
Responses: 1 = Definitely, Component C1 will NOT affect my decision to buy the product;
9 = Definitely, Component C1 will affect my decision to buy the product.

Table 2 shows that the hypothesis testing for Component C1 leads to rejecting the
null hypothesis, which means that this component is significantly important.
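The figures in Table 2 can be reproduced directly from the reported sample statistics. The following is a minimal sketch in Python (ours, not the authors' software) of the one-mean z-test of H0: μ ≤ 5 against Ha: μ > 5, using the means, standard deviations and sample sizes printed in Table 2:

import math
from statistics import NormalDist

# One-mean z-test of H0: mu <= 5 vs. Ha: mu > 5 for Component C1, using the
# sample statistics reported in Table 2 for the three respondent groups.
mu0 = 5.0
groups = {  # group: (sample mean, sample std. dev., sample size)
    "PS": (6.5, 3.5355, 30),
    "SS": (7.5, 0.7071, 30),
    "LS": (6.0, 1.414, 30),
}

z_crit = NormalDist().inv_cdf(0.95)  # one-tailed critical value at alpha = 0.05 (about 1.645)
for name, (mean, sd, n) in groups.items():
    se = sd / math.sqrt(n)           # standard error of the mean
    z = (mean - mu0) / se            # test statistic
    p = 1 - NormalDist().cdf(z)      # one-tailed p-value
    print(f"{name}: z = {z:.3f}, p = {p:.5f}, reject H0: {z > z_crit}")

Running the sketch gives z values of about 2.324, 19.365 and 3.873 for PS, SS and LS, matching the test statistics reported in Table 2.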
The results of the hypothesis testing of the nine components, C1 –C9 , in this
step are as follows (Table 3):

Table 3. Mean and p-Value of Each Product Component: Perception of Three Samples of Respondents.

Component       C1     C2     C3     C4     C5     C6     C7     C8     C9
PS   Mean       6.5    5.0    7.0    5.0    2.0    4.0    6.0    5.0    3.0
     p-value    0.00   0.00   0.01   0.01   0.03   0.00   0.00   0.00   0.02
SS   Mean       7.5    5.0    8.0    6.0    3.0    5.0    8.0    4.0    3.0
     p-value    0.00   0.04   0.00   0.03   0.00   0.01   0.03   0.00   0.03
LS   Mean       6.0    5.0    7.0    6.0    2.0    5.0    7.0    5.0    1.0
     p-value    0.00   0.00   0.00   0.01   0.00   0.04   0.00   0.05   0.01
Red X           X             X                           X
Pink X                 X             X             X
Pale X                                       X                   X      X

Note: Means and p-values at the 5% significance level. PS = Private-school student sample; SS = Small
state-school student sample; LS = Large state-school student sample.

Red X factors: C1, C3 and C7
Pink X factors: C2, C4 and C6
Pale X factors: C5, C8 and C9

Furthermore, the three student samples show consistent perceptions of the
importance of these nine components.

Step 5: Full-Factorial Experimental Design

Most design of experiment (DOE) experts consider the Full Factorial the purest
formal DOE technique because “it can neatly and correctly separate the quantity
of each main effect, each second-order, third-order, fourth-order, and higher order
interaction effect” (Bhote & Bhote, 2000, p. 282). The Full Factorial requires
2^n experiments for a randomized, replicated and balanced design, where n is the
number of factors. Thus, if n equals 3, 4, 5, 6 and 10, a Full Factorial design
would respectively require 8, 16, 32, 64 and 1,024 experiments. Accordingly, the
Full Factorial becomes impractical if the number of factors exceeds four (Bhote
& Bhote, 2000, p. 282).
The Full Factorial methodology requires selecting two levels for each factor, a
low and high level. For n = 3 factors, the number of experiments equals 2^3 or 8
combinations, where each level of each factor is tested with each level of all the
other factors (Bhote & Bhote, 2000, p. 234). Applying the Full Factorial method
to the three Red X factors, C1 , C3 and C7 of Step 4, Table 4 shows the ANOVA
data and results. To save space, only one sample is considered. The procedure
should be repeated, however, for all samples of respondents.
To examine the content of Table 4 more closely, respondents are asked to
indicate their willingness to purchase each of the eight versions of a laptop on
a 1–9 Likert scale, where:

1 = Very low degree of willingness to purchase the unit


9 = Very high degree of willingness to purchase the unit

The average response for each experiment (or combination) is listed in the output
(Green Y) column. In experiment 1, the three Red X factors are held at low levels
(each with a –1). This is a laptop version with its Red X factors at the lowest level.
The student group’s examination derived an average response of 1, indicating no
substantial potential customer demand for this laptop. Demand is strongest for
the laptop version of experiment 6, with an average score of 8.
Table 4. Full Factorial Experimental Design.

ANOVA (Full Factorial) Table

Experiment #    C1    C3    C7    C1×C3    C1×C7    C3×C7    C1×C3×C7    Output (Green Y)    Cost(a) in $
1               −1    −1    −1      1        1        1         −1              1                 100
2                1    −1    −1     −1       −1        1          1              1                 300
3               −1     1    −1     −1        1       −1          1              4                 240
4                1     1    −1      1       −1       −1         −1              5                 440
5               −1    −1     1      1       −1       −1          1              7                 160
6                1    −1     1     −1        1       −1         −1              8                 360
7               −1     1     1     −1       −1        1         −1              6                 300
8                1     1     1      1        1        1          1              7                 500

Low-level total    −18   −17   −11    −19      −19      −24        −20
High-level total    21    22    28     20       20       15         19
Net                  3     5    17      1        1       −9         −1

Full-Factorial Costing

Component    Low (−1)    High (+1)
C1             $50         $250
C3              40          180
C7              10           70
Total         $100         $500

(a) To illustrate, the cost column is computed as follows for the first three experiments:

             Experiment 1    Experiment 2    Experiment 3
C1               $50             $250            $50
C3                40               40            180
C7                10               10             10
Total cost      $100             $300           $240

The data in the interaction columns are developed as follows. For the C1 × C3
interaction for experiment 1, the −1 in the C1 column multiplied by the −1 in
the C3 column yields +1. For experiment 2, +1 multiplied by −1 under columns
C1 and C3, respectively, yields −1 for the C1 × C3 interaction column. Similar
calculations are made for the remaining cells of the interaction columns. The
bottom three rows of Table 4 are computed as follows for column C1 :

Low-level total: (−1 × 1) + (−1 × 4) + (−1 × 7) + (−1 × 6) = −18
High-level total: (+1 × 1) + (+1 × 5) + (+1 × 8) + (+1 × 7) = +21
Net main effect of factor C1 = +21 − 18 = +3

Similar calculations are performed for the remaining columns in Table 4.
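The totals in the bottom rows of Table 4 follow mechanically from the coded design matrix and the Green Y column. The sketch below (ours, not the authors' code) reproduces them with numpy; the coded levels, responses and factor labels are taken directly from Table 4:

import numpy as np

# Coded levels of C1, C3, C7 for the eight experiments, in the order of Table 4.
design = np.array([
    [-1, -1, -1], [ 1, -1, -1], [-1,  1, -1], [ 1,  1, -1],
    [-1, -1,  1], [ 1, -1,  1], [-1,  1,  1], [ 1,  1,  1],
])
green_y = np.array([1, 1, 4, 5, 7, 8, 6, 7])  # average Green Y per experiment

c1, c3, c7 = design.T
columns = {
    "C1": c1, "C3": c3, "C7": c7,
    "C1 x C3": c1 * c3, "C1 x C7": c1 * c7, "C3 x C7": c3 * c7,
    "C1 x C3 x C7": c1 * c3 * c7,
}

for name, col in columns.items():
    high = green_y[col == 1].sum()    # high-level total
    low = -green_y[col == -1].sum()   # low-level total (carried with a minus sign)
    print(f"{name}: low = {low}, high = {high}, net = {high + low}")

The output reproduces the low-level, high-level and net rows of Table 4, for example a net of 17 for C7 and −9 for the C3 × C7 interaction.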


Table 4 shows that factor C7 with a net effect of 17 is clearly the most important
Red X. Factors C1 and C3 , with a net result of 3 and 5, respectively, are less
important than C7 . Except for the C3 × C7 interaction with a net result of –9, all
interaction effects have small net values. This means that the C3 × C7 interaction
effect is the only one that must be accounted for in the laptop design. Figure 1
shows the shape of the interaction between these two factors. Factor interactions are interpreted
as follows (Bhote & Bhote, 2000, p. 292):

Fig. 1. Graphing the C3 × C7 Component Interaction.



• Two parallel lines mean no interaction effect;
• Two nonparallel lines mean a decided interaction effect; and
• Two crossing lines (as in Fig. 1) mean very strong interaction effects.

These interaction patterns have important implications for costing decisions. In
Fig. 1, the best result occurs when component C7 is at the high level and C3 is at
the low level, as shown in the following steps.

Step 6: Full-Factorial Costing

As indicated above, Table 4 shows the two costing levels low (−1) and high (+1)
for factors C1 , C3 , and C7 and the results for only one sample of respondents. The
last column of Table 4 shows the total cost of each experiment. To explain how
these costs are computed, let us consider only three experiments, experiments 1,
2, and 6, in Table 4. The average responses for the laptop versions of experiments
1, 2 and 6 are 1, 1 and 8, respectively, on the 1–9 Likert scale. The costs for these
experiments are computed as follows:

Component (Factor)    Experiment 1    Experiment 2    Experiment 6
C1                        $50             $250            $250
C3                         40               40              40
C7                         10               10              70
Total cost               $100             $300            $360
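A minimal sketch of this cost build-up, assuming only the component-level low/high costs listed in Table 4 (the function name is ours):

# Experiment cost = sum of each Red X component's low- or high-level cost (Table 4).
component_cost = {"C1": (50, 250), "C3": (40, 180), "C7": (10, 70)}  # (low, high) in $

def experiment_cost(levels):
    """levels maps each component to -1 (low level) or +1 (high level)."""
    total = 0
    for component, level in levels.items():
        low, high = component_cost[component]
        total += low if level == -1 else high
    return total

print(experiment_cost({"C1": -1, "C3": -1, "C7": -1}))  # experiment 1 -> 100
print(experiment_cost({"C1": +1, "C3": -1, "C7": -1}))  # experiment 2 -> 300
print(experiment_cost({"C1": +1, "C3": -1, "C7": +1}))  # experiment 6 -> 360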

Steps 7 and 8: ANOVA Results

The ANOVA table (Table 4) shows that experiment 6 is the optimum in terms of
the Green Y result. The average response of 8 indicates a high degree of willingness
to purchase the laptop version. This experiment may be replicated with different
sample groups to develop more confidence in this level of customer perception. The
cost of this laptop version is $360 (Table 4). These computations are simplified here
to illustrate the application of the associative-costing method. In some situations,
the product design team may need to make tradeoff decisions where the product
version with a high Green Y value may be too costly to produce, and the low-cost
version may have too low a Green Y value.4 In other words, a continuum exists that
begins at the design-to-cost point and ends at the cost-to-design point. Determining
the optimum point on this continuum is a multivariate decision problem, which
we recommend for further research.
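One simple way to organize such a tradeoff screen is to discard the versions whose average Green Y falls below a minimum acceptable level and rank the remainder by cost. The sketch below is only our illustration of that idea (the threshold of 6 is an assumed value), not the multivariate decision model the authors recommend developing:

# Screen the Table 4 experiments by a minimum acceptable Green Y, then rank by cost.
experiments = {  # experiment number: (average Green Y, cost in $), from Table 4
    1: (1, 100), 2: (1, 300), 3: (4, 240), 4: (5, 440),
    5: (7, 160), 6: (8, 360), 7: (6, 300), 8: (7, 500),
}

min_green_y = 6  # assumed acceptability threshold, for illustration only
acceptable = {e: gy for e, gy in experiments.items() if gy[0] >= min_green_y}
for exp, (green_y, cost) in sorted(acceptable.items(), key=lambda item: item[1][1]):
    print(f"Experiment {exp}: Green Y = {green_y}, cost = ${cost}")

Experiment 5 then appears as the cheapest acceptable version and experiment 6 as the highest-rated one; choosing between them is exactly the design-to-cost versus cost-to-design decision described above.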

Step 9: Cluster of Clusters Design

In Step 8, we find that experiment 6 provides the best combination of factors in
the direct-materials cluster. Through grouping the best combinations of the four
manufacturing sub-clusters (direct materials, direct labor, variable manufacturing
overhead and fixed manufacturing overhead) as explained above, we can perform
a DOE on this group by following Steps 5−8. The end result ascertains the best
combination of the manufacturing cluster. Similar procedures can determine the
best combination for each marketing, delivery, and customer services cluster, thus
helping management rank the various value-chain clusters in providing the highest
values according to customers’ perception.

SUMMARY AND CONCLUSIONS


Activity-based costing (ABC) and target costing management accounting techniques con-
tain many limitations when applied to product design programs. First are limiting
assumptions. Target costing often does not differentiate between
“users” and “customers” during the planning of product functions. While ABC
assumes that activities drive costs, in practice, managers may create activities for
their subordinates to help justify budget requests and protect such employees from
losing their jobs; that is, budget costs may instigate the creation of activities. In
effect, two ABC systems may exist, an accounting (or consultant) ABC, where
activities drive costs, and an actual ABC, where budgeted costs drive managers
to create activities. Second, both target costing and ABC have a limited costing
focus. ABC focuses on indirect costs and target costing mostly on unit-level costs.
This partial cost accounting is insufficient for product design projects where full
costs are important considerations. Finally, both ABC and target costing method-
ologically do not account for cost interactions.
This paper applies the new costing paradigm of associative costing (Bayou &
Reinstein, 2000) that does not contain these limitations. This method accounts
specifically for resource interactions and cost associations and uses the well-known
statistical techniques of clustering, Full Factorial design and analysis-of-variance.
Applying this method, the paper outlines and illustrates nine steps to design a
hypothetical product, a new laptop computer intended for the college-student
market. This illustration simplifies the intricate procedures of associative costing.


The paper concludes that a design team may need to make tradeoff decisions on a
continuum that begins with the design-to-cost point and ends at the cost-to-design
extreme. These decisions become necessary when the best perceived design and
the acceptable cost level of this design are incongruent. A multivariate decision
model to solve this issue is recommended for further research.

NOTES
1. According to Zaccai (1995, p. 6), during the first century of the industrial revolution,
many people’s most basic needs were met by products narrowly designed by technical
specialists. These products, although technically functional, often did not meet consumers’
requirements and “as a result could be undermined by more desirable alternatives.” This
problem is magnified today by the variety of products consumers can acquire and the
options and methods of financing or ownership (e.g. lease vs. buy) available to them.
2. Target costing is a concept mired in obscurity. First, an examination of Ansari et al.’s
(1997, pp. 45–48) delineation of costs for target costing purposes reveals a vague, yet
incomplete, set of cost perspectives, which includes:
• Value chain (organizational perspective).
• Life cycle (time) perspective.
• Customers (value) perspective.
• Engineered (design) perspective.
• Accounting (cost type) perspective.

This list contains overlapping functions and illustrates the common problems of func-
tional arrangements. One can add other perspectives, including micro (e.g. competition,
demand elasticity, and substitutes) and macro (e.g. industry and the economy) perspectives.
Second, the cost in the most common target-costing model, called “the deductive method”
(Kato, 1993, p. 36) (where Target Cost = Price − Profit) is a difference, which does not
exist for empirical measurement (Deleuze, 1994 [1964]; Sterling, 1970). (For a detailed
explanation of the Deleuzian difference in accounting, see Bayou & Reinstein, 2001.) This
means that a manufacturer does not and cannot measure (an empirical process) the target
cost of a target product. It can only determine this cost. But cost determination is a result
of calculation (a rational process) based on the design-to-cost view where design should
converge to cost, rather than vice versa, as explained above. In short, target costing is a
vague concept.
3. This assumption helps separate the product design from pricing issues. A target price,
the ground for the target-costing deductive method, is often a vague notion of affordability,
for several reasons. First, in some cases, this price may depend on the design, rather than
the design depending on the price; in other cases, the price and design are locked into a
circular interdependent relationship. Second, an affordable price is often a fuzzy variable that
changes within a wide range (Bayou & Reinstein, 1998). Finally, positing an affordable price
is valid when the target group of customers is homogenous in terms of tastes, preferences,
income, demand elasticity and available means of financing in the short run and the long run.
In the West, this high degree of customer homogeneity is rarely found for many products
and services. In short, the target price, which is a key element in the target-cost deductive
method, is often a vague target. Separating design issues from pricing issues makes the
marketing function and component suppliers more crucial in producing and selling the
designed product at acceptable profits.
4. This tradeoff decision is consistent with Bayou and Reinstein’s (1997) discussion of
the circular price-cost interaction and their “tiger chasing his tail” metaphor.

REFERENCES
Akiyama, K. (1991). Functional analysis. Cambridge, MA: Productivity Press.
Ansari, S. L., Bell, J. E., & the CAM-I Target Cost Core Group (1997). Target costing, the next frontier
in strategic cost management. Chicago: Irwin.
Bayou, M. E., & Reinstein, A. (1997, September/October). Formula for success: Target costing for
cost-plus pricing companies. Journal of Cost Management, 11(5), 30–34.
Bayou, M. E., & Reinstein, A. (1998). Applying Fuzzy set theory to target costing in the automobile
industry. In: P. H. Siegel, K. Omer, A. de Korvin & A. Zebda (Eds), Applications of Fuzzy Sets
and the Theory of Evidence to Accounting, II (Vol. 7, pp. 31–47).
Bayou, M. E., & Reinstein, A. (2000). Process-driven cost associations for creating value. Advances
in Management Accounting (Vol. 9, pp. 73–90). New York: JAI Press/Elsevier.
Bayou, M. E., & Reinstein, A. (2001). A systemic view of fraud explaining its strategies, anatomy and
process. Critical Perspectives on Accounting (August), 383–403.
Bhote, K. R., & Bhote, A. K. (2000). World class quality (2nd ed.). New York: American Management
Association.
Borgmann, A. (1995). The depth of design. In: R. Buchanan & V. Margolin (Eds), Discovering Design:
Explanation in Design Studies (pp. 13–22). Chicago: University of Chicago Press.
Cooper, R., & Chew, W. B. (1996). Control tomorrow’s costs through today’s designs. Harvard Business
Review (January–February), 88–97.
de Korvin, A., Bayou, M. E., & Kleyle, R. (2001). A fuzzy-analytic-hierarchical-process model for the
metal decision in the automotive industry. Paper N. 01IBECB-2, Society of Automotive Engi-
neers (SAE) IBEC 2001 International Body Engineering Conference and Exhibition, Detroit,
Michigan, October, 16–18.
Deleuze, G. (1994). Difference and repetition. P. Patton (Trans.). New York: Columbia University
Press.
Good, I. J. (1977). The botryology of botryology. In: J. Van Ryzin (Ed.), Classification and Clustering
(pp. 73–94). New York: Academic Press.
Hartigan, J. A. (1975). Clustering algorithms. New York: Wiley.
Innes, J., Mitchell, F., & Sinclair, D. (2000, September). Activity-based costing in the UK’s largest
companies: A comparison of 1994 and 1999 survey results. Management Accounting Research,
11(3), 349–362.
Kato, Y. (1993). Target costing support systems: Lessons from leading Japanese companies. Manage-
ment Accounting Research, 4, 33–47.
Kruskal, J. (1977). The relationship between multidimensional scaling and clustering. In: J. Van Ryzin
(Ed.), Classification and Clustering (pp. 17–44). New York: Academic Press.
Michaels, J. V., & Wood, W. P. (1989). Design to cost. New York: Wiley.
Monden, Y., & Hamada, K. (1991). Target costing and kaizen costing in Japanese automobile compa-
nies. Journal of Management Accounting Research, 3(Fall), 16–34.
Morello, A. (1995). ‘Discovering design’ means [re-]discovering users and projects. In: R. Buchanan &
V. Margolin (Eds), Discovering Design: Explanation in Design Studies (pp. 69–76). Chicago:
University of Chicago Press.
Redford, A., & Chal, J. (1994). Design assembly: Principles and practice. London: McGraw-Hill.
Roozenburg, N. F. M., & Eekels, J. (1995). Product design: Fundamentals and methods. Chichester:
Wiley.
Ruffa, S. A., & Perozziello, M. J. (2000). Breaking the cost barrier. New York: Wiley.
Sterling, R. (1970). A theory of measurement of enterprise income. Iowa Printing.
Wood, L. (1993). Rapid automated prototyping: An introduction. New York: Industrial Press.
Zaccai, G. (1995). Art and technology: Aesthetics redefined. In: R. Buchanan & V. Margolin (Eds),
Discovering Design: Explanation in Design Studies (pp. 3–12). Chicago: University of Chicago
Press.
RELATIONSHIP QUALITY: A CRITICAL
LINK IN MANAGEMENT ACCOUNTING
PERFORMANCE MEASUREMENT
SYSTEMS

Jane Cote and Claire Latham

ABSTRACT
Performance measurement has benefited from several management ac-
counting innovations over the past decade. Guiding these advances is the
explicit recognition that it is imperative to understand the causal linkage that
leads a firm to profitability. In this paper, we contend that the relationship
quality experienced between two organizations has a measurable impact
on performance. Guided by prior models developed in distribution channel
and relationship marketing research (Cannon et al., 2000; Morgan & Hunt,
1994), we build a causal model of relationship quality that identifies key
relationship qualities that drive a series of financial and non-financial
performance outcomes. Using the healthcare industry to illustrate its
applicability, the physician practice – insurance company relationship is
described within the context of the model’s constructs and causal linkages.
Our model offers managers employing a causal performance measurement
system, such as the balanced scorecard (Kaplan & Norton, 1996) or the
action-profit-linkage model (Epstein et al., 2000), a formal framework to
analyze observed outcome metrics by assessing the underlying dynamics in
their third party relationships. Many of these forces have subtle, but tangible
impacts on organizational performance. Recognizing them within performance
measurement theory adds explanatory power to existing performance
measurement systems.

INTRODUCTION
Performance measurement has benefited from several management accounting
innovations over the past decade. Guiding these advances is the explicit recognition
that it is imperative to understand the causal linkage that leads a firm to profitability.
One of the most significant innovations is the integration of non-financial variables
into performance measurement systems. Non-financial performance measures help
firms recognize how specific actions or outcomes impact profitability. Some models
such as the Action-Profit-Linkage (Epstein et al., 2000) and the Service Profit
Chain (Heskett et al., 1997) develop quantitative causal models that demonstrate
how a unit change in non-financial variables impacts profitability and other financial
variables. Other models, such as the Balanced Scorecard (Kaplan & Norton, 1996)
represent the interrelationships among financial and non-financial performance
variables as a set of causal hypotheses that serve as guideposts for managerial
decision making. What all of these performance models have in common is the
need for organizations to clearly understand the factors that drive performance
outcomes. In this paper, we focus on one specific domain inherent in most of
these performance models, inter-organizational transactions, to introduce a causal
model that links the quality of inter-organizational relationships to financial and
non-financial outcomes.
Successful organizations develop exchange relationships with other organizations
that persist over time, creating a network that provides reliable
sources of goods or services. These relationships often develop to accommodate
specific strategic goals such as when a manufacturer outsources portions of
its value chain. Alternatively, inter-organizational exchange relationships also
arise from unique interdependencies that compel organizations to cooperate to
produce a product or service. In healthcare, for example, insurance companies and
physician practices are dependent upon each other to provide healthcare services
to patients. These relationships are recognized as important performance drivers
by most causal performance measurement models. For instance, the Balanced
Scorecard (BSC), in the internal-business-process perspective, acknowledges
the critical role that vendors play in providing quality inputs efficiently. The
Action-Profit-Linkage Model (APL) is a framework that “links actions taken
by the firm to the profitability of the firm within its market environment”
(Epstein et al., 2000, p. 47). A subset of firm actions comprises the inter-organizational
relationships developed to enhance the delivered product or service. As such,
they form a link in the causal chain between firm actions and profitability which
needs to be measured to recognize its role in the organization. Therefore, it
is important to understand the causal forces that impact inter-organizational
relationship quality.
Relationship quality is a function of a series of subtle, but powerful variables
that impact the working relationships between two organizations. Often a manager
can identify arrangements with other organizations that are positive or negative
but not be able to specifically point to the factors that make one relationship
successful and the other troublesome. The statistics reported by the performance
measurement system may be indicative of insidious actions and relationship
structures that are not currently measured, hence overlooked as sources of tension
within the relationship. As managers seek explanations for deviations from
expected vendor (or any partnering organization) performance, management
accounting theory does not guide them towards examination of relationship
quality. It has not been an explicit component of performance measurement
systems, not due to a failing of management accounting, but rather because the
introduction of non-financial performance measures is relatively recent. Now that
non-financial performance measures are widely accepted in management accounting
theory and practice, the opportunity exists to explore the causal factors that form
the foundation of successful inter-organizational relationships.
Substantial research examining inter-organizational relationships has been con-
ducted in the marketing distribution channel literature (Frazier, 1999; Rindfleisch
& Heide, 1997). Numerous dimensions affecting the channel relationships have
been identified, including legal contracts, power, conflict, and communication. To
explore the connection between distribution-channel research and management
accounting performance measurement systems, we introduce an adaptation of the
commitment-trust theory of relationship marketing (Morgan & Hunt, 1994) as
a relevant model for measuring the qualitative factors that underlie management
accounting metrics. The model defines the antecedents to trust and commitment
between two organizations and recognizes the inherent causal link between trust
and commitment and several outcome measures. As such, it provides a first step
towards understanding the core structure and actions that form the foundation
upon which inter-organizational relationships are developed and provides insight
into how this structure impacts both financial and non-financial outcomes.
Management accounting performance measurement systems can benefit from
an interdisciplinary approach (Epstein et al., 2000). With respect to relationship
quality, the rich research foundation developed in the marketing channel’s domain
offers potential for management accounting researchers to enhance the knowledge
of the fundamental causality that underlies outcome metrics commonly used
in current performance measurement contexts. Relationship quality is formed
through a series of subtle forces, many of which are not cognitively recognized in
a cohesive framework. Introducing them as a network of constructs that support
traditional outcome measures offers managers opportunities to pro-actively
examine the underlying structure of their inter-organizational relationships as
drivers of the performance they expect from the arrangement.
We demonstrate how developing a relationship quality framework can be
beneficial in a managerial setting and how it can impact financial and non-financial
performance measures using the healthcare industry as an example. Relationship
quality among organizations in the healthcare value chain is at a tenuous state.
Specifically, many physicians cite difficulties with insurance companies as a
major impediment to patient care (Pascual, 2001; Sharpe, 1998a, b; Shute, 2002).
Many of the problems center on procedure reimbursement and authorization. Prior
research demonstrated that, within one physician practice, substantial variance
in these performance metrics exists among insurance companies and that elements of
relationship quality are associated with the outcomes (Cote & Latham, 2003).
Thus, the physician-insurer relationship is a timely and relevant context to
illustrate the applicability of a relationship quality framework in management
accounting performance measurement systems.
The remainder of the paper is organized as follows. The next section provides
the conceptual foundation for the model of relationship exchange quality,
identifying the key mediating variables, antecedents and outcomes and discussing
hypothesized direction of effects. We then demonstrate applicability of the
model to the healthcare exchange relationship between physicians and insurers,
including the steps taken in the model refinement and validation process. The
paper concludes with a discussion of potential directions for future research.

CONCEPTUAL FOUNDATION
To build a model of relationship quality that augments current performance mea-
surement frameworks, several studies drawn from the marketing literature provide
direction. For instance, as a starting point, legal contracts are viewed as one of the
primary governance structures that safeguard an exchange while maximizing ben-
efits for the relationship partners. In their “plural form” thesis, however, Cannon
et al. (2000) argue for viewing the contract as just one of a variety of mechanisms
that provide the building blocks for governance structures in relationships; that
focusing on the legal contract alone is a deficient approach to governing modern
exchanges. Their research on purchasing professionals examines the interaction
of contracts and relationship norms in various contexts and demonstrates that
legal bonds and social norms were effective, in combination, in enhancing performance.
Expanding further the definitions and interplay of social norms, the
commitment-trust model of relationship marketing developed in Morgan
and Hunt (1994) provides the pivotal theory that can guide management ac-
counting’s understanding of the correlation between such relationship building
activities and performance metrics. For organizations to be competitive they
must have a network of cooperative relationships comprised of partners such as
suppliers, service people, customers, and investors (Solomon, 1992). Successful
relationship building is advantageous because it lowers costs and improves both quality
and the timeliness of responses to organizational needs. Morgan and Hunt (1994) identify
commitment and trust as two key mediating variables in their model of relationship
success.
A new model of relationship quality is presented here that builds on the
work of Cannon et al. (2000) and Morgan and Hunt (1994) and adds insights to
existing management accounting performance measurement systems. Figure 1
identifies antecedent variables comprised of contracting and normative, tangible
and intangible, constructs: legal bonds, relationship termination costs, relationship
benefits, shared values and communication. These antecedents are shown influ-
encing the core of the relationship, commitment and trust. Commitment and trust,
as mediators, are then shown to have an effect on relationship quality outcomes
that describe whether a relationship is viewed as successful or problematic:
acquiescence, propensity to leave, cooperation, financial statement impact,
functional conflict and uncertainty. A discussion of commitment and trust, the
antecedents and their respective proposed influence on these mediators, as well
as relationship outcomes follows.

Commitment and Trust: Mediating Variables

Morgan and Hunt (1994) posit that the key mediating variables in a relational
exchange are commitment and trust. Relationship commitment is defined as
“an exchange partner believing that an ongoing relationship with another is so
important as to warrant maximum efforts at maintaining it; that is, the committed
party believes the relationship is worth working on to ensure that it endures
indefinitely” (Morgan & Hunt, 1994, p. 22). Relationship trust exists when
one exchange partner “has confidence in an exchange partner’s reliability and
integrity” (Morgan & Hunt, 1994, p. 23).
Drawing from the definitions above, these two constructs are positioned as
mediating variables given their central role in influencing partners to: (1) preserve
the relationship through cooperation; (2) favor a longer time horizon or working to
ensure it endures; and (3) support potential high-risk transactions in the exchange,
given the partners’ beliefs that neither will act in an opportunistic fashion. The
authors further note that trust is a determinant of relationship commitment, that is,
trust is valued so highly that partners will commit to relationships which possess
trust. Thus, they theorize that the presence of both commitment and trust is what
separates the successful from the failed outcomes. Building commitment and
trust to reach relationship marketing success requires devoting energies to careful
contracting, specific cooperative behaviors and other efforts that both partners
invest. We now turn to our discussion of these antecedents.

Fig. 1. Trust and Commitment Model of Relationship Quality.

Antecedents

Legal Bonds
Legal bonds or legal contracting refers to the extent to which formal contractual
agreements incorporate the expectations and obligations of the exchange partners.
A high degree of contract specificity, as it relates to roles and obligations, places
constraints on the actions of exchange partners. It is this specificity and attention
to detail that typically supports a willingness by partners to invest time in an
exchange relationship. Exchange partners who make the effort to work out details
in a contract have a greater dedication to the long-term success of the partnership
(Dwyer et al., 1987). Thus, a higher degree of contract specificity is expected to
have a positive influence on relationship commitment.

Relationship Termination Costs


Relationship termination costs refer to the expected losses from dissolution and
such costs are widely defined in the literature. For example, Heide and John (1992),
refer to specific investments made by the agent in principal-agent relationships
and replacements of commission income. Anderson and Narus (1990) view it as
a measure of relative dependence (e.g. there are other manufacturers available to
firm x who sell product lines comparable to those of our company). Morgan and
Hunt (1994) include non-economic costs as the loss of social satisfaction from
the association as well as the socio-psychological costs of worry, aggravation and
perceived loss of reputation or face. In essence, relationship termination costs
are switching costs. A higher measure of switching costs presents a deterrent to
ending the relationship and strengthens the perceived value of committing to the
relationship. Hence, relationship termination costs will have a positive correlation
with relationship commitment.

Relationship Benefits
Firms that receive superior benefits from their partnership relative to other
options will be committed to the relationship. As with relationship termination
costs, partnership benefits have been measured along many dimensions. For
example, Morgan and Hunt (1994) capture relationship benefits as an evaluation
of the supplier on gross profit, customer satisfaction and product performances.
Alternatively, Anderson and Narus (1990) discuss benefit as satisfaction from the
perspective of whether the company’s working relationship with the exchange
partner, relative to others, has been a happy one. Finally, Heide and John (1992)
refer to the “norm of flexibility” where parties expect to be able to make adjust-
ments in the ongoing relationship to cope with changing circumstances. Morgan
and Hunt (1994) propose that benefits which relate to satisfaction and/or global
satisfaction generally show a strong relationship with all forms of commitment.
It is then expected that as the benefits to the relationship increase, relationship
commitment will be stronger.

Shared Values
Shared values are “the extent to which partners have beliefs in common about
what behaviors, goals, and policies are important or unimportant, appropriate or
inappropriate, and right or wrong” (Morgan & Hunt, 1994, p. 25). Dwyer et al.
(1987) note that contractual mechanisms and/or shared value systems ensure sus-
tained interdependence. Shared values are shown to be a direct precursor to both
relationship commitment and trust, that is, exchange partners who share values are
more committed to their relationships.

Communication
Communication refers to the formal and informal sharing of “meaningful and
timely information between firms” (Anderson & Narus, 1990, p. 44). Mohr and
Nevin (1990) note that communication is the “glue” that holds a relationship to-
gether. Anderson and Narus (1990) see past communication as a precursor to trust
but also that the building of trust over time leads to better communication. Hence,
relationship trust is positively influenced by the quality of communication between
the organizations.

Opportunistic Behavior
Opportunistic behavior is “self-interest seeking with guile” (Williamson, 1975,
p. 6). Opportunistic behavior is problematic in long-term relationships, eroding
trust concerning future interactions. Where opportunistic behavior exists, partners
can no longer trust each other, which leads to decreased relationship commitment.
We therefore expect a negative relationship between opportunistic behavior
and trust.

In summary, trust and commitment are a function of specific efforts both
organizations invest in the relationship to improve the value they derive from
the arrangement. When a long term association is expected many organizations
recognize the benefits that come from developing a strong bond of trust and
commitment. For the effort to be worthwhile both must recognize substantial
benefits from their joint association and have some common views related to the
values they employ in business conduct. Perceptions of opportunism on either side
will dampen the potential for trust within the relationship. Alternatively, where
switching costs related to developing substitute relationships are substantial,
partners will make more concerted efforts to maintain commitment to the existing
dyad. Energies devoted to legal contracting and communication then serve
to strengthen the commitment and trust bonds. We now turn to the outcomes
observed through the presence of trust and commitment in the relationship.

Outcomes

Acquiescence
Acquiescence is the extent to which a partner adheres to another partner’s requests
(Morgan & Hunt, 1994). This is an important construct in relationship quality
because when organizations are committed to successful relationships, they
recognize that the demands made by each other are mutually beneficial. Where
requests are perceived as extraordinary, those in a committed relationship are
willing to acquiesce because they value the relationship. Morgan and Hunt (1994)
found support for higher levels of acquiescence in highly committed relationships.

Propensity to Leave
Commitment creates a motive to continue the relationship. The investments to
create the committed relationship, described as the antecedents in the model,
directly impact the perceptions that one or both partners will dissolve the rela-
tionship in the near future. Partners in relationships expected to terminate in the
near term behave differently than those that perceive that both are invested in the
relationship for the long term. Thus propensity to leave, resulting from the level of
relationship commitment, is an outcome variable with performance implications.

Financial Statement Impact


Activity-based costing has successfully demonstrated that inter-organizational
arrangements have heterogeneous effects on profitability (Shapiro et al., 1987).
Intuitively most managers recognize differential financial impacts among their
third party interactions and recently many have begun to strategically structure
terms with these organizations to enhance the financial benefits (Morton, 2002).

Similarly, relationship quality can be expected to have direct and indirect
effects on revenues and expenses. Specifically, we propose that the levels of
trust and commitment will have a positive impact on financial performance
indicators.
Trust has been previously defined as “confidence in an exchange partner’s
reliability and integrity” (Morgan & Hunt, 1994, p. 23). With a trusting relation-
ship, the partners do not need to continually verify adherence with agreed upon
arrangements and procedures. Hence the need for costly monitoring systems is
diminished. Likewise, commitment or “the enduring desire to maintain a valued
relationship” (Morgan & Hunt, 1994, p. 23), can impact profitability. When a
longer term relationship is expected, there are incentives for organizations to pro-
vide each other with favorable terms. For instance, favorable pricing, delivery or
service terms may be present within committed relationships because the partners
are confident that throughout the relationship a variety of benefits will flow in both
directions. Alternatively, when relationship commitment is low fewer incentives
exist to offer favorable financial terms or services. This behavior is evident in sit-
uations where one exchange partner is considered a “backup supplier,” contacted
only when other more favorable exchange partners are not available (Kaplan,
1989). In these circumstances, managers must either negotiate to improve relation-
ship commitment or they must evaluate the implications for creating an alternative
working relationship.
Both trust and commitment are important forces influencing profitability. The
financial statement impact may enhance revenues or reduce costs depending on
the environmental circumstances. Recognizing the role trust and commitment
perform in a performance measurement system has the potential to improve the
causal linkage between firm actions and profitability.

Cooperation
Cooperation refers to the exchange parties working together to reach mutual goals
(Anderson & Narus, 1990). Even if partners have ongoing disputes concerning
goals, they will continue to cooperate because both parties’ termination costs are
high. Cannon et al. (2000) use the term “solidarity” which encompasses “the extent
to which parties believe that success comes from working cooperatively together
vs. competing against one another” (Cannon et al., 2000, p. 183). Though both
are outcome variables, Morgan and Hunt (1994) point out that cooperation is
proactive in contrast to acquiescence which is reactive. Organizations committed to
relationships and trusting of their partners, cooperate to make the relationship work.
Once trust and commitment are established, exchange partners will be more likely
to undertake high-risk coordinated efforts (Anderson & Narus, 1990) because they
believe that the quality of the relationship mitigates the risks.
Table 1. Variables and Proposed Direction of Effect.

Impact on mediating variables (antecedent variable → mediating variable, proposed direction):

Legal bonds → Relationship commitment (+): Exchange partners having a higher degree of contract specificity have a greater commitment to the relationship.
Relationship termination cost → Relationship commitment (+): Exchange partners having a higher measure of relationship termination costs have a greater commitment to the relationship.
Relationship benefits → Relationship commitment (+): Exchange partners possessing a higher measure of relationship benefits have a greater commitment to the relationship.
Shared values → Relationship commitment (+): Exchange partners possessing a higher measure of shared values have a greater commitment to the relationship.
Shared values → Trust (+): Exchange partners with a higher measure of shared values have greater relationship trust.
Communication → Trust (+): Exchange partners with a higher degree of formal and informal communication have greater trust.
Opportunistic behavior → Trust (−): Exchange partnerships where a higher degree of opportunistic behavior exists have less trust.

Impact on outcomes (mediating variable → outcome variable, proposed direction):

Relationship commitment → Acquiescence (+): Exchange partners who have a higher measure of relationship commitment are more willing to make relationship-specific adaptations (higher measure of acquiescence).
Relationship commitment → Propensity to leave (−): Exchange partners who have a higher measure of relationship commitment are less likely to end the relationship.
Relationship commitment → Cooperation (+): Exchange partners who have a higher measure of relationship commitment are more likely to cooperate.
Trust → Cooperation (+): Exchange partners who have a higher measure of trust are more likely to cooperate.
Trust → Functional conflict (+): Exchange partners who have a higher measure of trust are more likely to resolve disputes in an amicable manner (functional conflict).
Trust → Decision-making uncertainty (−): Exchange partners who have a higher measure of trust are less likely to have decision-making uncertainty.

Functional Conflict
The resolution of disputes in a friendly or amicable manner is termed functional
conflict and is a necessary part of doing business (Anderson & Narus, 1990). Morgan
and Hunt (1994) show that trust leads an exchange partner to believe that future
conflicts will be functional, rather than destructive. When an organization is con-
fident that issues that arise during the conduct of their arrangement with the other
organization will be met with positive efforts to reach a mutual solution, they
perceive the relationship quality to be higher and expect tangible benefits to result.

Uncertainty
Decision-making uncertainty encompasses exchange partners’ perceptions
concerning relevant, reliable, and predictable information flows within the
relationship. The issue relates to whether the exchange partner is receiving
enough information, in a timely fashion, which can then be used to confidently
reach a decision (Achrol, 1991; Morgan & Hunt, 1994). Cannon et al. (2000)
conclude that uncertainty creates information problems in exchange. They further
divide uncertainty into external and internal, where external refers to the degree of
variability in a firm’s supply market and internal refers to task ambiguity. Morgan
and Hunt (1994) support a negative relationship between trust and uncertainty. The
trusting partner has more confidence that the exchange partner will act reliably and
consistently.
In summary, we have discussed the antecedent variables legal bonds, relation-
ship termination costs, relationship benefits, shared values and communication.
These antecedents have been shown to influence commitment and trust. Com-
mitment and trust, as mediators, are then shown to have an effect on relationship
quality outcomes: acquiescence, propensity to leave, cooperation, financial state-
ment impact, functional conflict and uncertainty. Table 1 provides an overview of
this discussion of variables and direction of effect.
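For readers who want to carry the hypothesized structure in Table 1 into empirical work, the paths can be written down directly as signed cause–effect pairs. The sketch below is our illustration only (the shortened construct names are ours, and the trust → commitment and financial statement impact paths are taken from the surrounding text rather than Table 1); the list could then be handed to any structural equation modeling package, although the paper does not prescribe an estimation method:

# Hypothesized paths of the relationship quality model, as (cause, effect, expected sign).
paths = [
    # Antecedents -> mediating variables (Table 1)
    ("legal_bonds", "commitment", "+"),
    ("termination_costs", "commitment", "+"),
    ("relationship_benefits", "commitment", "+"),
    ("shared_values", "commitment", "+"),
    ("shared_values", "trust", "+"),
    ("communication", "trust", "+"),
    ("opportunistic_behavior", "trust", "-"),
    # Trust as a determinant of commitment (per the text)
    ("trust", "commitment", "+"),
    # Mediating variables -> outcomes (Table 1 and the text)
    ("commitment", "acquiescence", "+"),
    ("commitment", "propensity_to_leave", "-"),
    ("commitment", "cooperation", "+"),
    ("commitment", "financial_statement_impact", "+"),
    ("trust", "cooperation", "+"),
    ("trust", "functional_conflict", "+"),
    ("trust", "decision_uncertainty", "-"),
    ("trust", "financial_statement_impact", "+"),
]

for cause, effect, sign in paths:
    print(f"{cause} --({sign})--> {effect}")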

APPLYING THE MODEL TO THE HEALTHCARE INDUSTRY
The relationship quality model has wide applicability in a variety of industry
and organizational contexts. We demonstrate its relevance using the healthcare
industry because currently, issues related to relationship quality and the implica-
tions for healthcare funding and access are primary concerns for many healthcare
practitioners. Specifically, we select the physician practice – insurance company
relationship as the inter-organizational dyad that best illustrates relationship
quality and its impact on performance measurement metrics.

The physician – insurer relationship is symbiotic. Physician practices gain
improved patient access and simplified accounts receivable procedures through
their arrangements with insurance companies. Patients without health insurance
often seek only emergency healthcare, complicating comprehensive patient care.
Prior to the widespread use of health insurance, physician practices collected
payment in full from each patient directly, and many physicians were reluctant to
demand payment from financially strapped patients, causing erratic cash flow for
the physician practice. Concurrently, the insurance companies benefit from devel-
oping a relationship with the physician practice because they achieve economies
of scale through their integrated systems for authorization and payments.
Though symbiotic, the relationship is replete with friction. Rapid changes in
the healthcare funding system have led physicians to perceive loss of autonomy
with corresponding demands to re-direct their attention from patients towards
profits (Pascual, 2001; Sharpe, 1998a, b; Shute, 2002). Insurers, caught between
employers seeking to control health insurance costs, physicians seeking to
provide quality healthcare, and patients seeking access, have placed stringent
controls on their funding systems to achieve operating efficiencies. These systems
have created friction in the physician – insurer relationship that will
require complex and comprehensive solutions. Prior research has established that
relationship quality is a critical component that distinguishes successful from
unsuccessful healthcare partnerships (Burns, 1999; Cote & Latham, 2003). It
is then this inter-organizational dyad that provides a rich context for exploring
the application of the relationship quality model as a causal link in performance
measurement systems.

Antecedents

To be successful, physician practices must contract with a broad selection of insur-
ance providers. Each insurer has unique procedures and systems requiring separate
legal contracts that detail the terms of the relationship. The contract forms the basis
for each interaction requiring substantial investment from both sides to negotiate
terms (Cannon et al., 2000; Cote & Latham, 2003; Leone, 2002). It is through this
process that the physician practice and insurer define the legal level of commitment.
Relationship benefits and termination costs become relevant constructs for
physicians and insurers. From the physician’s perspective, the larger insurers cover
a substantial fraction of the patients within their geographical area, necessitating
willingness for the physician practice to invest substantial efforts to assure the
relationship is successful. Likewise, there are often large physician groups that
insurers need to be associated with in order to compete within a geographical
area. These environmental characteristics create substantial termination costs and
relationship benefits that motivate the physician and insurers to develop a long
term, committed relationship.
Relationships between physicians and insurers often break down or endure
substantial friction due to mismatched values. Expectation gaps concerning
procedure authorization, reimbursement, and general patient care are evidence
that the physician and insurer do not completely share each other’s values in
healthcare delivery. When such gaps occur, physician practices often must make repeated
oral and written contact to convince insurers to acquiesce to their position. As
this conflict is replicated over a series of patients, trust begins to deteriorate and
the physician practice begins to assess their level of commitment to the insurer.
When values are aligned, both the insurer and physician practice are confident that
judgments made by one side will be accepted by the other and the interactions are
relatively seamless.
Trust in the physician – insurer relationship is influenced both by commu-
nication and opportunistic behavior. Communication occurs frequently through
procedure authorizations and receivable claims and periodically through practice
management advice, processing updates, and office visits. Some insurers provide
consistently accurate responses to physician practice inquiries, leading the
practice to trust the insurer (Cote & Latham, 2003). Others give conflicting
advice, dependent on the insurance representative responding to the inquiry. This
destabilizes the relationship, forcing the practice to make multiple inquiries about a
single issue and document each interaction precisely, creating decision-making
uncertainty. Opportunistic behavior is exemplified in claims processing experi-
ences. Receivable turnover is legally defined, in number of days, by most state
insurance commissioners. An insurer must remit payment on a “clean claim”
within the statutory period. Clean claims are those with no errors, regardless
of the source of the error (Cote & Latham, 2003). If an error is detected, the
statutory time period is reset to the beginning. Insurers acting opportunistically
will return claims to the physician practice frequently with small errors or errors
emanating from their own electronic processing system, thus extending the
statutory receivable turnover period. When this happens consistently with an
insurer, physician practices begin to doubt the sincerity of insurers’ behavior.

Outcomes

Similar to the antecedent constructs, the physician practice-insurer relationship
experiences the outcomes as pertinent issues driving the structure of both the
relationship and operating systems. For instance, there is a trend whereby physician
practices eliminate their relationships with insurers, creating a practice structure
that is analogous to a law firm (Pascual, 2001; Sharpe, 1998a, b; Shute, 2002).
Patients pay a retainer for immediate access to the physician. The physician accepts
cash for services and patients must seek insurance reimbursement on their own.
This represents the extreme case where trust and commitment have dissolved and
the physician has refused to acquiesce to insurers' demands and has completely left
the system. Most physician practices have not resorted to such extremes, yet they are
still influenced by the model's outcomes.
Cooperation, functional conflict, and decision making uncertainty are ever
present in the physician – insurer relationship. As stated earlier, the relationship
is symbiotic; each needs to cooperate with the other to provide patient care.
Often the physician practice administrators can trace specific issues related to
cooperation and conflict back to the level of trust with the insurer (Cote & Latham,
2003). Patient care is complicated, with each patient having unique needs. In a
trusting relationship where there is a high degree of confidence that the insurer
is reliable and will respond faithfully to patient cases, the physician practice can
predict how certain treatment options will be handled. Without trust, there is a
degree of randomness in the responses from the insurer, making it difficult for the
practice to prepare inquiries to the insurer and anticipate their success.
Practice administrators acknowledge revenue and cost heterogeneity among insurers
(Cote & Latham, 2003). For instance, approval for a particular medication,
termed formulary, must be obtained from each insurance company to assure that
it will be a covered expense. Some insurers require extensive paperwork prior
to formulary approval, whereas others use a more streamlined approach. Time
and paperwork create a measurable financial statement impact for the physician
practice. Relationship quality, as indicated by the levels of trust and commitment
built within the relationship, is often a factor affecting the ease with which such
authorizations are accomplished.
In summary, the physician – insurer relationship is built on trust and commit-
ment. This model describes the factors inherent in the relationship and provides a
causal chain that indicates how these factors interact to form relationship quality.
The model is particularly relevant to the healthcare industry because it is highly
dependent on a network of inter-organizational alliances. From a performance
measurement perspective, this model provides managers with the framework for
diagnosing the root causes of observed performance metrics.

MODEL REFINEMENT AND VALIDATION


Model refinement occurred through an iterative process, from practice consultation
to literature review and back, over approximately a two-year time
frame. We worked with a physician-owned practice, which is one of the largest
physician groups in its community. It comprises 8.5 full-time-equivalent (FTE)
physicians who each see approximately 18–22 patients daily. Six
insurance providers, half of which are preferred provider organizations (PPOs) and
the other half health maintenance organizations (HMOs), represent the
majority of their patient base that utilizes employer-provided insurance. Initially
the global issue of third party relationships and performance measurement was
presented to the practice manager and chief financial officer. The research was
motivated by the growing trend towards physician practices terminating all
association with insurance companies (Pascual, 2001; Sharpe, 1998a, b; Shute,
2002). Explanations for these strategic moves did not fit with a cost-benefit or
transaction cost analysis argument (e.g. Williamson, 1985). Clearly there were
underlying frictions affecting the relationship's performance that were ill-defined.
At this first stage the goal was to determine how the physician practice-insurance
company relationship functions, to identify the key issues the parties face in managing
the interactions, and to assess correspondence with constructs from prior research.
These initial interviews motivated an empirical analysis of billing records to
identify the existence of factors differentiating insurance company performance
(Cote & Latham, 2003).
A second stage of interviews diagnosed the underlying causes of the factors
leading to the differential performance among insurance companies. Further
review of several parallel literature streams indicated the applicability of the
distribution channel model and relationship marketing as analogous to the
physician practice-insurance company relationship. In particular, the theory of
commitment and trust in third party relationships (Morgan & Hunt, 1994) closely
mapped the situations we were observing. Integrating this theory with our observations
and corresponding research (e.g. Cannon et al., 2000; Epstein et al., 2000),
we adapted prior research findings to develop the model presented in this paper.
Further interviews were conducted to assess applicability and refine construct
definitions.
A second stage of interviews diagnosed the underlying causes of the factors
leading to the differential performance among insurance companies. Further
review of several parallel literature streams indicated the applicability of the
distribution channel model and relationship marketing as analogous to the
physician practice – insurance company relationship. In particular, the theory of
commitment and trust in third party relationships (Morgan & Hunt, 1994) closely
mapped the situations we were observing. Integrating this theory with our obser-
vations and corresponding research (e.g. Cannon et al., 2000; Epstein et al., 2000),
we adapted prior research findings to develop the model presented in this paper.
Further interviews were conducted to assess applicability and refine construct
definitions.
At this point we had considerable interaction with the physician practice side
of the relationship. To ascertain the model's applicability from the insurance
company perspective, we conducted an interview with insurance company
executives at a large regional organization. This insurer is one of the largest
insurance providers in its community, currently representing an average of
approximately 20–25% of the patient population in medical practices in the
geographical region. They concurred with the physician practice executives that
the model was a fair representation of the causal network that defines the dyad's
relationship structure. Coincidentally, they had a system and philosophy in place to
build trust and commitment with local physician practices and hospitals, with the
belief that these investments had a financial return to the insurance company.

This iterative process involving literature search and qualitative interviews is an


integral step in theory building (Eisenhardt, 1989). It links theory to practice in an
organized and thoughtful process. Empirical analysis can then be conducted using
construct definitions that have been developed concurrently with observations
in practice. Theory developed using this process “is likely to have important
strengths like novelty, testability, and empirical validity, which arise from the
intimate linkage with empirical evidence” (Eisenhardt, 1989, p. 548).

DIRECTIONS FOR FUTURE RESEARCH AND CONCLUSIONS

Existing performance measurement systems, such as APL and BSC, have made
two important contributions to management accounting: the integration of non-
financial metrics and the development of a causal network that links performance
outcomes into an interdependent system. We build upon this foundation by in-
troducing a causal model that incorporates relationship constructs that are subtle,
but compelling forces affecting organizational performance. As often stated, “if
you can’t measure it, you can’t manage it” (Kaplan & Norton, 1996, p. 21). Thus,
raising relationship dynamics from the manager’s subconscious to a measurable,
interconnected level of awareness gives the manager opportunities to address the
root causes directly.
The model can be implemented in numerous ways depending on the centrality
of inter-organizational relationships within the firm. Where there is a strong
direct relationship between the quality of inter-organizational relationships and
financial success (e.g. the healthcare industry), the model may be employed as
a stand-alone tool that assists managers in understanding the sources of friction
in their relationships. Where inter-organizational relationships are of secondary
importance, the model can be nested within current BSC or APL frameworks to
provide insight into the underlying causal links between key vendor or customer
performance measures.
Further research requires empirical testing to assess the strengths of the causal
links in the model as well as integrating it with existing performance measurement
systems. Within the existing literature related to inter-organizational
relationships, studies generally examine one dyad within an industry that has the
potential to generalize its findings across multiple environmental contexts. When
replicated across multiple industry-specific settings, it is possible to assess the
model's robustness as well as the quality of the construct definitions and measures.
Additional research could also consider the managerial implications. For instance,
how does this model improve managerial decision making? Are there defined

environmental contexts where this model has greater relevance than others? The
model represents an initial step towards measuring the role that relationship
dynamics have in organizations by considering them as a part of performance
measurement systems. Further research can bring refinements that will build an
integrated and insightful perspective to performance measurement systems.

REFERENCES
Achrol, R. (1991). Evolution of the marketing organization: New forms for turbulent environments.
Journal of Marketing, 55(October), 77–94.
Anderson, J. C., & Narus, J. A. (1990). A model of distributor firm and manufacturer firm working
partnerships. Journal of Marketing, 54(January), 42–58.
Burns, L. R. (1999). Polarity management: The key challenge for integrated health systems. Journal
of Healthcare Management, 44(January–February), 14–34.
Cannon, J., Achrol, R., & Gundlach, G. (2000). Contracts, norms and plural form governance. Journal
of the Academy of Marketing Science, 28(Spring), 180–194.
Cote, J., & Latham, C. (2003). Exchanges between healthcare providers and insurers: A case study.
Journal of Managerial Issues, 15(Summer), 191–207.
Dwyer, F. R., Schurr, P. H., & Oh, S. (1987). Developing buyer-seller relationships. Journal of Marketing,
51(April), 11–27.
Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management
Review, 14(October), 532–550.
Epstein, M. J., Kumar, P., & Westbrook, R. A. (2000). The drivers of customer and corporate
profitability: Modeling, measuring, and managing the causal relationships. Advances in
Management Accounting, 9, 43–72.
Heide, J. B., & John, G. (1992). Do norms matter in marketing relationships? Journal of Marketing,
56(April), 32–44.
Heskett, J. L., Sasser, W. E., Jr., & Schlesinger, L. A. (1997). The service profit chain: How leading
companies link profit and growth to loyalty, satisfaction, and value. New York, NY: Free Press.
Kaplan, R. S. (1989). Kanthal (A). Harvard Business School Case #190-002. Harvard Business
School Press.
Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard. Boston, MA: Harvard Business School
Press.
Leone, A. J. (2002). The relation between efficient risk-sharing arrangements and firm characteristics:
Evidence from the managed care industry. Journal of Management Accounting Research, 14,
99–118.
Mohr, J., & Nevin, J. (1990). Communication strategies in marketing channels: A theoretical
perspective. Journal of Marketing, 54(October), 36–51.
Morgan, R. M., & Hunt, S. D. (1994). The commitment-trust theory of relationship marketing. Journal
of Marketing, 58(July), 20–38.
Morton, W. (2002). The unprofitable customer: How you can separate the wheat from the chaff. The
Wall Street Journal (October 28), A1.
Pascual, A. M. (2001). The doctor will really see you now. Business Week (July 9), 10.
Rindfleisch, A., & Heide, J. (1997). Transaction cost analysis: Past, present, and future applications.
Journal of Marketing, 61(October), 30–54.
Shapiro, B. P., Rangan, V. K., Moriarty, R. T., & Ross, E. B. (1987). Manage customers for profits
(not just sales). Harvard Business Review, 65(September–October), 101–107.
Sharpe, A. (1998a). Boutique medicine: For the right price, these doctors treat patients as precious.
Their consultancy signals rise of a system critics say favors the wealthy, practicing HMO
avoidance. The Wall Street Journal (August 12), A1.
Sharpe, A. (1998b). Health care: Discounted fees cure headaches, some doctors find. The Wall Street
Journal (September 15), B1.
Shute, N. (2002). That old time medicine. U.S. News and World Report (April 22), 54–61.
Solomon, R. C. (1992). Ethics and excellence. Oxford: Oxford University Press.
Williamson, O. (1975). Markets and hierarchies: Analysis and antitrust implications. New York, NY:
Free Press.
Williamson, O. (1985). The economic institutions of capitalism: Firms, markets, and relational
contracting. New York, NY: Free Press.
MEASURING AND ACCOUNTING
FOR MARKET PRICE RISK
TRADEOFFS AS REAL OPTIONS IN
STOCK FOR STOCK EXCHANGES

Hemantha S. B. Herath and John S. Jahera Jr.

ABSTRACT
The flexibility of managers to respond to risk and uncertainty inherent
in business decisions is clearly of value. This value has historically been
recognized in an ad hoc manner in the absence of a methodology for more
rigorous assessment of value. The application of real option methodology
represents a more objective mechanism that allows managers to hedge
against adverse effects and exploit upside potential. Of particular interest to
managers in the merger and acquisition (M&A) process is the value of such
flexibility related to the particular terms of a transaction. Typically, stock
for stock transactions take more time to complete as compared to cash given
the time lapse between announcement and completion. Over this period, if
stock prices are volatile, stock for stock exchanges may result in adverse
selection through the dilution of shareholder wealth of an acquiring firm or a
target firm.
The paper develops a real option collar model that may be employed by
managers to measure the market price risk to their shareholders
in offering or accepting stock. We further discuss accounting issues related
to this contingency pricing effect. Using an acquisition example from the U.S.
banking industry, we illustrate how the collar arrangement may be used
to hedge market price risk through the flexibility to renegotiate the deal by
exercising managerial options.

Advances in Management Accounting, Volume 12, 191–218
© 2004 Published by Elsevier Ltd.
ISSN: 1474-7871/doi:10.1016/S1474-7871(04)12009-1

INTRODUCTION
An important area of research in management accounting is the implementation
of strategic management decisions and managerial behavior that focus on ways
to improve management and corporate performance. With the increased reliance
of firms on financial instruments to manage business risk, the measurement and
disclosure of such risk have become increasingly important in accounting. This
research examines one element of business risk: the measurement of market price
risk to shareholders in a merger and acquisition transaction using an emerging
capital budgeting tool, real option methodology, together with the related accounting
issues of market price risk to the acquiring firm.
Merger and acquisition activity has increased sharply since the 1990s. Strikingly
noticeable in the merger and acquisition activity of this decade is that companies
are increasingly paying for acquisitions with stock rather than cash. Stock for
stock offers typically take more time from announcement to completion than
cash offers. This is particularly noticeable in regulated mergers, as in the banking
industry. Over this extended time from announcement to completion, the target
and acquiring firm stock prices can change dramatically due to various factors
even though they are in the same industry. Especially if the acquiring firm's stock
is highly volatile, it can significantly affect the value of the deal at consummation
if there are no price protection conditions built into the deal.
Published literature dealing with price protection in mergers and acquisitions
is sparse. However, the contingent pricing effect on the value of a deal in a stock
for stock exchange due to stock price volatility has important risk management
implications for managers and both sets of shareholders. A practical way to
provide price protection to both acquiring and target firm shareholders is to set
conditions for active risk management by managers. For example, one possibility
is to provide managers the flexibility to renegotiate the deal and hedge the market
price risk by specifying a range within which the deal is allowed to fluctuate as in a
collar type arrangement.
This paper investigates how to better structure a stock for stock offer as a collar
using real option methodology when stock prices are volatile and when there
is considerable time lapse between announcement and final consummation. We
propose that managers use real option analysis to measure the price risk
to their shareholders. The main advantage of using real option analysis is that it can
capture and measure the value of intangibles, such as maintaining flexibility, when
there is high uncertainty. We argue that explicit valuation of managerial flexibility
included in the terms of the deal may enhance deal value for both parties and
encourage favorable managerial behaviors.
We also discuss accounting issues related to business combinations and in
particular accounting for these intangibles. We consider whether these real
options should perhaps be accounted for as contingencies. Using a recent acquisition
case example from the U.S. banking industry, the paper illustrates how the proposed
collar is used to hedge the market price risk and how this acquisition structure avoids
earnings per share (EPS) dilution to both sets of shareholders.
The paper is organized as follows. First, we discuss stock price variability and its
valuation effects on stock for stock transactions. Second, we introduce real option
theory and managerial flexibility in M&A decisions. Third, we present well-known
formulas for optimal exchange ratios of a target and an acquiring firm. Fourth, we
discuss the proposed real options collar model for valuing managerial flexibility in
stock for stock transactions. Fifth, we discuss accounting issues related to business
combinations and in particular accounting for contingencies. Sixth, we apply the
real option collar model to value the recent acquisition decision of BankFirst
Corporation by BB&T. Finally, we discuss our findings and future research.

STOCK PRICE VARIABILITY AND VALUATION EFFECTS

Stock for stock transactions generally take more time to complete than cash
transactions. If the stock price variability of both the buying and selling firm is high,
the value of the respective shares may change between the time an exchange ratio is
determined and the acquisition date. If a fixed exchange ratio is used to determine a
target's compensation, an acquiring firm overpays when its stock price is higher
on the merger date than on the agreement date or in the event that a target's stock price
is lower on the merger date than on the agreement date. Alternatively, a target loses if an
acquiring firm's stock price is lower on the merger date than on the agreement date or when
a target's stock price has risen by the merger date. Hence, the time to consummation
and the underlying price volatility can result in over- or underpayment.
One solution to this problem suggested by Gaughan (1999) is to arrange a
provision to adjust the exchange ratio if stock prices go above or below a certain
threshold. Typically, an adjustment provision has to be negotiated between the two
firms and agreed upon at the time the deal is negotiated. Consequently, an adjustment
provision in an agreement is more useful when one or both stock prices of a
target and/or an acquiring firm are highly volatile. The above adjustment to the
exchange ratio is a naïve approach, although it works similarly to a collar-type
arrangement.

REAL OPTIONS – MANAGERIAL FLEXIBILITY

Valuation of an M&A requires a bidding firm to forecast the free cash flows of a
target. A target's cash inflows should exceed the investment cost or price to be paid
by the acquiring firm. Also, it should provide a return that is at least equal to
or greater than the current return to the acquiring firm. The operating cash flows of
the combined company due to synergies are the fundamental value driver that
determines the market value of the combined company (Herath & Jahera, 2001;
Houston & Ryngaert, 1996; Rappaport & Sirower, 1999).
Real option methodology is an emerging area of research in management
accounting (Booth, 1999; Brabazon, 1999; Busby & Pitts, 1997; Stark, 2000;
Zhang, 2000). In recent years, practitioners and academics have argued that the
traditional discounted cash flow models do not adequately capture the value
of managerial flexibility to delay, grow, scale down or abandon projects. For
example, Myers (1977, 1984) and Kester (1984) argue that a substantial portion of
the market value of a company arises not from assets in place but from the present
value of future growth opportunities. Real option methodology combines strategy
with valuation. The insight is that an investment opportunity can be conceptually
compared to a financial option.
Real options methodology is conceptually better than conventional capital
budgeting techniques for M&A valuations since it takes into account managerial
flexibility to respond to uncertainties (Copeland & Antikarov, 2001; Herath &
Jahera, 2001; Lambrecht, 2001). Managerial flexibility in M&A decisions can
be explicitly considered through innovative deal structures. For example, Herath
and Jahera (2001) considered the value of managerial flexibility as an exchange
real option to offer stock or cash for a target when an acquiring firm’s stock is
highly volatile.
In the real options framework a M&A investment decision is valued as the sum
of two value components. The first value component is standard net present value
(NPV) of a M&A investment decision which is the value without managerial
flexibility. The second value component is the value of managerial flexibility to
respond to uncertainties. In financial options terminology, NPV and managerial
flexibility to delay/modify an investment are equivalent to an option’s intrinsic
and time values. Consequently, the value of an M&A investment with managerial
flexibility, or strategic NPV, can be considered as the sum of two value components:

Strategic NPV = Base NPV without flexibility + Value of managerial option(s)

OPTIMAL EXCHANGE RATIOS IN STOCK FOR STOCK TRANSACTIONS

In this section we present well-known formulas for exchange ratios. In stock for
stock transactions, an acquiring firm purchases a target firm by exchanging its
stock for that of the target. An acquiring firm may offer a premium to the target to make
an offer attractive. The exchange ratio is determined prior to the acquisition date.
In a conventional stock for stock transaction the exchange ratio remains fixed,
independent of the stock price variability of both buyer and seller. An exchange ratio
gives the number of acquiring firm shares that will be exchanged for each target share.
The exchange ratio (F) is given by

$$F = \frac{N_o}{N_B} = \frac{S_B(1 + Z)}{S_A}$$

where N_o = the number of acquiring firm shares offered in exchange for target
shares; N_B = the number of shares outstanding of the target firm; N_A = the number
of shares outstanding of the acquiring firm; S_A = the market price of the acquiring
firm's stock; S_B = the market price of the target firm's stock; and Z = the merger premium.
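As a minimal illustration of this definition, the Python sketch below computes the fixed exchange ratio and the implied number of acquirer shares offered. The function names and the numerical inputs are hypothetical placeholders chosen only for demonstration, not values from the paper.

```python
def exchange_ratio(s_b: float, s_a: float, premium: float) -> float:
    """Fixed exchange ratio F = S_B * (1 + Z) / S_A."""
    return s_b * (1.0 + premium) / s_a

def shares_offered(f: float, n_b: float) -> float:
    """Number of acquiring-firm shares offered, N_o = F * N_B."""
    return f * n_b

if __name__ == "__main__":
    # Hypothetical inputs for illustration only (not taken from the paper).
    s_a, s_b = 40.0, 18.0        # acquirer and target stock prices
    premium = 0.30               # 30% merger premium
    n_b = 10_000_000             # target shares outstanding

    f = exchange_ratio(s_b, s_a, premium)
    print(f"Exchange ratio F = {f:.4f}")
    print(f"Shares offered N_o = {shares_offered(f, n_b):,.0f}")
```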
While an exchange ratio is typically fixed when a deal is agreed upon, in fixing
the ratio it is of interest to both an acquiring firm and a target to determine the
optimal exchange ratios that will not dilute the shareholder wealth of the two firms. To
determine optimal exchange ratios one assumes constant stock prices. An acquiring
firm is interested in the largest exchange ratio (F_max) that will not reduce post
acquisition shareholder value or stock price. This ratio is given by:

$$F_{max} \le \frac{1}{N_B}\left[\frac{E_A N_A + E_B N_B}{S_A}\left(\frac{P}{E}\right)_c - N_A\right]$$

Target firm shareholders benefit from a higher exchange ratio resulting in more
shares for each of their stock. The minimum exchange ratio (F_min) that preserves the
post-acquisition target's shareholder value is given by:

$$F_{min} \ge \frac{S_B N_A}{(E_A N_A + E_B N_B)(P/E)_c - S_B N_B}$$

where E_A = the pre-merger earnings per share (EPS) of the acquiring firm; E_B =
the pre-merger earnings per share (EPS) of the target firm; and (P/E)_c = the post
acquisition P/E ratio.
The expression for F_max is an increasing linear function of the post acquisition
(P/E) ratio while the expression for F_min is a decreasing convex function. The
relationships between F_max and (P/E)_c and between F_min and (P/E)_c are shown in
Fig. 1. Of particular interest is the region where shareholders of both firms will
benefit. An exchange ratio f* (F_min ≤ f* ≤ F_max) for some (P/E)_c that lies in the
optimal region will theoretically increase the post acquisition shareholder wealth of
both parties. The minimum post completion price to earnings ratio (P/E)*_c is where
the two expressions equate (F_min = F_max). As shown in Fig. 1, an exchange ratio
(f) that is greater than the minimum exchange ratio (F_min) and less than the maximum
exchange ratio (F_max), for any post completion (P/E)_c ratio that is greater than the
minimum post completion price to earnings ratio (P/E)*_c, should be negotiated.

Fig. 1. Dependency of Acquiring and Target Firm Wealth on Exchange Ratio and Post
Merger P/E Ratio.
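A minimal sketch of these two bounds in Python follows. The function names are ours rather than the paper's, and the inputs are hypothetical placeholders; they simply evaluate the F_max and F_min expressions above for a given assumed post-acquisition (P/E) ratio.

```python
def f_max_acquirer(e_a, n_a, e_b, n_b, s_a, pe_c):
    """Largest exchange ratio that does not dilute the acquirer's stock price:
    F_max <= (1/N_B) * [ (E_A*N_A + E_B*N_B) * (P/E)_c / S_A - N_A ]."""
    return ((e_a * n_a + e_b * n_b) * pe_c / s_a - n_a) / n_b

def f_min_target(e_a, n_a, e_b, n_b, s_b, pe_c):
    """Smallest exchange ratio that preserves target shareholder wealth:
    F_min >= S_B*N_A / [ (E_A*N_A + E_B*N_B)*(P/E)_c - S_B*N_B ]."""
    return s_b * n_a / ((e_a * n_a + e_b * n_b) * pe_c - s_b * n_b)

if __name__ == "__main__":
    # Hypothetical inputs for illustration only.
    e_a, n_a, s_a = 2.00, 100_000_000, 30.0   # acquirer EPS, shares, price
    e_b, n_b, s_b = 1.00, 10_000_000, 12.0    # target EPS, shares, price
    pe_c = 15.0                               # assumed post-acquisition P/E

    print(f"F_max = {f_max_acquirer(e_a, n_a, e_b, n_b, s_a, pe_c):.4f}")
    print(f"F_min = {f_min_target(e_a, n_a, e_b, n_b, s_b, pe_c):.4f}")
```

Any fixed ratio negotiated between the two printed bounds lies in the optimal region described above.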

MANAGERIAL FLEXIBILITY AS A REAL OPTION COLLAR ARRANGEMENT

A better way to structure a stock for stock acquisition is to minimize the consideration
paid by a bidding firm and maximize the deal value to the target over the period from
announcement to completion of a deal. The theoretical value of an acquisition is
defined as the deal value based on a fixed exchange ratio. Notice that the theoretical
price will depend on the acquiring firm's stock price at consummation. Consequently, if
the stock price of the acquiring firm increases or decreases, the theoretical deal value will
either increase or decrease accordingly. In order to minimize the value of a deal,
an acquiring firm could buy a call option, or a cap, which guarantees that it pays no
more than an agreed maximum deal value. A target, on the other hand, could consider a put
option, or a floor, which ensures that the holder receives no less than an agreed minimum
deal value. Consequently, managerial flexibility to both parties can be structured as a
collar arrangement, going long on a cap and shorting a floor.
An acquiring firm should buy a real call option on the theoretical value with a
strike price equal to the deal value that caps the exchange ratio at the maximum
exchange ratio that will not dilute the post acquisition stock value for acquiring firm
shareholders. The underlying asset is the theoretical deal value. The cap would
guarantee that the deal value at any given time would be the minimum of the
theoretical value or the deal value based on the maximum exchange ratio. On
the other hand, to maximize its payment a target should hold a real put option on
the theoretical value with a strike price based on the minimum exchange ratio.
In this way the floor guarantees that a target would receive the maximum of the
theoretical value or the deal value using the minimum exchange ratio.
In order to price the cap and the floor, we use the binomial lattice framework
for pricing options on a stock. The real option prices are thus consistent with
risk-free arbitrage pricing. Let S_A and S_B denote the stock prices of an acquiring firm
and a target firm at announcement of the deal. The stock price of firm (i), S(i),
follows a random walk. The time between the announcement (t_0) and the actual
closing of the deal (t_1) is denoted by Δt, where Δt = t_1 − t_0. Assume that there
are four decision points (T = 0, 1, 2, 3) pertaining to when a deal may be closed.
We divide the time period Δt into equal periods of length ΔT = Δt/3, which
may be measured in weeks or months. In the binomial option pricing model, the
formulas to compute the risk neutral probability (p_i), the upward movement u_i and
the downward movement d_i for stock price (i) are as follows:

$$u_i = e^{\sigma_i \sqrt{\Delta T}}, \qquad d_i = e^{-\sigma_i \sqrt{\Delta T}}, \qquad p_i = \frac{e^{r_f \Delta T} - d_i}{u_i - d_i},$$

where σ_i is the stock price volatility of firm (i). The short-term interest rate (r_f), the
stock price volatility of the acquiring firm (σ_A) and that of the target firm (σ_B) are assumed
to remain constant in the model. Upward and downward movements in stock price
are represented by the state variable (k) and denoted by (+) and (−) respectively.
Using the above parameters we next develop three period binomial lattices for the
movement in the stock prices of an acquiring firm and a target firm as shown in Fig. 2.

Fig. 2. Three Step Binomial Lattices for Stock Prices.
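The lattice construction can be sketched in Python as follows; the helper names are ours. For reference, plugging in the case-study inputs quoted later in the paper (σ_A = 35.7%, r_f = 6%, ΔT of about one month) yields a risk-neutral probability of roughly 0.499, consistent with the value used in the application section; the starting price of 26.68 is likewise used there only for illustration.

```python
import math

def binomial_params(sigma: float, rf: float, dt: float):
    """CRR-style up/down factors and risk-neutral probability for one stock."""
    u = math.exp(sigma * math.sqrt(dt))
    d = math.exp(-sigma * math.sqrt(dt))
    p = (math.exp(rf * dt) - d) / (u - d)
    return u, d, p

def price_lattice(s0: float, u: float, d: float, steps: int):
    """lattice[t][j] = price after t steps with j down-moves (j = 0..t)."""
    return [[s0 * u ** (t - j) * d ** j for j in range(t + 1)]
            for t in range(steps + 1)]

if __name__ == "__main__":
    sigma_a, rf, dt = 0.357, 0.06, 1.0 / 12.0
    u_a, d_a, p_a = binomial_params(sigma_a, rf, dt)
    print(f"u = {u_a:.4f}, d = {d_a:.4f}, p = {p_a:.3f}")   # p comes out near 0.499

    # Three-step lattice for an acquiring-firm stock price (illustrative start value).
    for t, row in enumerate(price_lattice(26.68, u_a, d_a, 3)):
        print(t, [round(s, 2) for s in row])
```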

For any exchange ratio (f) the deal value (P_T^k) at any period (T) in state (k) can be
calculated as follows:

$$P_T^k = f N_B S_A^k$$

As per our notation, the state variable (k) would be either + or −. The deal
values based on the maximum and minimum exchange ratios are given by
U_T^k = N_B S_A^k F_max and L_T^k = N_B S_A^k F_min respectively. The theoretical deal value
based on a fixed exchange ratio is given by V_T^k = N_B S_A^k F. Notice that the values of
U_T^k and L_T^k are also a function of the acquiring firm stock price. Alternatively, a
target and an acquiring firm may agree upon minimum and maximum deal values based
on the acquiring firm's stock price at announcement. In this case, the maximum and
minimum deal values will be equal to the following constant expressions:
U = N_B S_A F_max and L = N_B S_A F_min.
Notice that the stock prices of both the target and acquiring firms can vary from the
time a deal is announced to when it is completed. In addition, using the historic
price volatility, the ratio of the projected stock price of the target firm to that of the
acquiring firm under each state (k) and time (T) can be computed. We refer to this variable
exchange ratio as the critical exchange ratio, defined by (f^k) where

$$f^k = \frac{S_B^k}{S_A^k}$$

Notice that the critical exchange ratio (f^k), which is based on projected stock prices, can
be used to determine the target's market value at any state k. The target's market
value (W_T^k) based on the variable exchange ratio at any period (T) in state (k) can be
obtained by substituting f^k in the expression for P_T^k. The target's market value is

$$W_T^k = f^k N_B S_A^k$$
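To make this notation concrete, the hedged sketch below (Python; function names ours, inputs placeholders) evaluates the theoretical deal value, the constant cap and floor levels, and the critical exchange ratio at a single lattice node.

```python
def theoretical_deal_value(f_fixed: float, n_b: float, s_a_node: float) -> float:
    """V_T^k = F * N_B * S_A^k : deal value at a node under the fixed exchange ratio."""
    return f_fixed * n_b * s_a_node

def deal_bounds(n_b: float, s_a_announce: float, f_max: float, f_min: float):
    """Constant cap and floor levels agreed at announcement:
    U = N_B * S_A * F_max and L = N_B * S_A * F_min."""
    return n_b * s_a_announce * f_max, n_b * s_a_announce * f_min

def critical_exchange_ratio(s_b_node: float, s_a_node: float) -> float:
    """f^k = S_B^k / S_A^k : projected target-to-acquirer price ratio at a node."""
    return s_b_node / s_a_node

if __name__ == "__main__":
    # Placeholder values for illustration only.
    n_b, s_a0, f_fixed = 10_000_000, 30.0, 0.45
    f_max, f_min = 0.50, 0.40

    u_cap, l_floor = deal_bounds(n_b, s_a0, f_max, f_min)
    s_a_up, s_b_up = 33.0, 13.5                      # projected prices at some node
    print(f"U = {u_cap:,.0f}, L = {l_floor:,.0f}")
    print(f"V at node = {theoretical_deal_value(f_fixed, n_b, s_a_up):,.0f}")
    print(f"critical f^k = {critical_exchange_ratio(s_b_up, s_a_up):.4f}")
```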

Real Call Option – Cap

A buying firm would want flexibility to minimize the theoretical deal value at
completion. Thus a cap would guarantee that a deal is valued as the minimum of
the theoretical deal value or the agreed upon value based on the maximum exchange
ratio, given by min[U, V_T^k] = min[N_B S_A F_max, N_B F S_A^k]. An acquiring firm which
holds the cap should pay a target a terminal payoff of max[N_B F S_A^k − N_B S_A F_max,
0] = max[N_B F S_A^k − U, 0] to receive the benefit of having the right to pay
min[N_B S_A F_max, N_B F S_A^k]. Notice that U is analogous to the exercise price of the call
option when the underlying asset in option terminology is the theoretical deal value V_T^k.

Real Put Option – Floor

A target should hold a floor which would guarantee the maximum of the theoretical
value or the agreed upon deal value based on the minimum exchange ratio, given
by max[L, V_T^k] = max[N_B S_A F_min, N_B F S_A^k]. The floor would provide a target
the managerial flexibility to maximize the value of the deal to its shareholders. A
target should pay an acquiring firm a floor premium max[F_min N_B S_A − N_B F S_A^k,
0] = max[L − N_B F S_A^k, 0] to have the right to receive the benefit of max[N_B S_A F_min,
N_B F S_A^k]. Here L is analogous to the exercise price of the put option.
Given that the theoretical post acquisition market price of a target and acquiring
firm will depend on post acquisition earnings and the (P/E) ratio, the following
optimal terminal cap and floor values can be identified.

Proposition 1. If the exchange ratio is greater than the maximum exchange ratio
then post acquisition shareholder wealth will be lower; that is, S_A > S_c. The call
option (cap) to an acquiring firm has value (X_T^k > 0) and the put option (floor)
to the target has zero value (Y_T^k = 0).

Proof: To prove this result let us assume that f > F_max, where

$$F_{max} \le \frac{1}{N_B}\left[\frac{E_A N_A + E_B N_B}{S_A}\left(\frac{P}{E}\right)_c - N_A\right]$$

then

$$f > \frac{1}{N_B}\left[\frac{E_A N_A + E_B N_B}{S_A}\left(\frac{P}{E}\right)_c - N_A\right]$$

$$f N_B S_A > (E_A N_A + E_B N_B)\left(\frac{P}{E}\right)_c - N_A S_A$$

$$(f N_B + N_A) S_A > (E_A N_A + E_B N_B)\left(\frac{P}{E}\right)_c$$

$$S_A > \frac{E_A N_A + E_B N_B}{f N_B + N_A}\left(\frac{P}{E}\right)_c$$

$$S_A > E_c \left(\frac{P}{E}\right)_c$$

$$S_A > S_c$$

where the post acquisition earnings per share of the acquiring firm is given by

$$E_c = \frac{E_A N_A + E_B N_B}{f N_B + N_A}$$

When f > F_max, an acquiring firm would prefer to cap the deal at the maximum deal
value of X_T^k = min[N_B S_A F_max, N_B f S_A^k]. Since we assumed that both parties
agreed on the fixed exchange ratio at announcement, by substituting f = F we obtain
X_T^k = min[U, V_T^k]. The terminal call option payoff is then equal to max[N_B F S_A^k
− N_B S_A F_max, 0] = F N_B S_A^k − N_B S_A F_max > 0. For a target, the terminal
payoff of its put option is Y_T^k = max[F_max N_B S_A − N_B F S_A^k, 0] = 0. Therefore
if f > F_max it is beneficial for the acquiring company to cap the deal since it
will otherwise dilute the post acquisition EPS of its shareholders. □

Proposition 2. If the exchange ratio is less than the minimum exchange ratio then
the deal does not preserve the post acquisition shareholder wealth of target
shareholders. The call option (cap) to the acquiring firm has no value (X_T^k = 0)
and the put option (floor) has value (Y_T^k > 0).

Proof: To prove this result let us assume that f < F_min, where

$$F_{min} \ge \frac{S_B N_A}{(E_A N_A + E_B N_B)(P/E)_c - S_B N_B}$$

then

$$f < \frac{S_B N_A}{(E_A N_A + E_B N_B)(P/E)_c - S_B N_B}$$

$$(E_A N_A + E_B N_B)\left(\frac{P}{E}\right)_c f < S_B N_A + S_B N_B f$$

$$(E_A N_A + E_B N_B)\left(\frac{P}{E}\right)_c f < S_B (N_A + N_B f)$$

$$\frac{E_A N_A + E_B N_B}{N_A + N_B f}\left(\frac{P}{E}\right)_c f < S_B$$

$$E_c \left(\frac{P}{E}\right)_c f < S_B$$

Substituting for the variable exchange ratio f = F = N_o/N_B we obtain the
following:

$$E_c \left(\frac{P}{E}\right)_c N_o < N_B S_B$$

$$P_c N_o < N_B S_B$$

This completes the proof; the target shareholders' wealth after the acquisition
is less than the target shareholders' wealth prior to the acquisition. When f < F_min, a
target firm would prefer to receive the maximum deal value equal to Y_T^k = max[N_B
S_A F_min, N_B f S_A^k]. Again, by substituting f = F, we get Y_T^k = max[L, V_T^k]. The
terminal put option payoff is then equal to max[N_B S_A F_min − N_B F S_A^k, 0] =
N_B S_A F_min − N_B F S_A^k > 0. For an acquiring firm the terminal payoff of its call
option is X_T^k = max[F N_B S_A^k − F_min N_B S_A, 0] = 0. Therefore if f < F_min it
is beneficial for the target company to hold a floor to maximize its deal value.
Otherwise it will dilute the post acquisition EPS of target shareholders. □
Proposition 3. If the critical exchange ratio lies between the minimum exchange
ratio and the maximum exchange ratio (F_min < f < F_max) then both the acquiring firm
and the target firm shareholders gain. There is no EPS dilution to either party and
the collar has no value.

Proof: To prove this result let us first assume that f < F_max, where

$$F_{max} \le \frac{1}{N_B}\left[\frac{E_A N_A + E_B N_B}{S_A}\left(\frac{P}{E}\right)_c - N_A\right]$$

then

$$f N_B S_A < (E_A N_A + E_B N_B)\left(\frac{P}{E}\right)_c - N_A S_A$$

$$(f N_B + N_A) S_A < (E_A N_A + E_B N_B)\left(\frac{P}{E}\right)_c$$

$$S_A < \frac{E_A N_A + E_B N_B}{f N_B + N_A}\left(\frac{P}{E}\right)_c$$

$$S_A < E_c \left(\frac{P}{E}\right)_c$$

$$S_A < S_c$$

The post acquisition stock price of the acquiring firm is greater than its stock price
before the acquisition. □

Proof: Next, let us assume that f > F_min, where

$$F_{min} \ge \frac{S_B N_A}{(E_A N_A + E_B N_B)(P/E)_c - S_B N_B}$$

then

$$f > \frac{S_B N_A}{(E_A N_A + E_B N_B)(P/E)_c - S_B N_B}$$

$$(E_A N_A + E_B N_B)\left(\frac{P}{E}\right)_c f > S_B N_A + S_B N_B f$$

$$\frac{E_A N_A + E_B N_B}{N_A + N_B f}\left(\frac{P}{E}\right)_c f > S_B$$

$$E_c \left(\frac{P}{E}\right)_c f > S_B$$

Substituting for the variable exchange ratio f = F = N_o/N_B we obtain the following:

$$P_c N_o > N_B S_B$$

The post acquisition shareholder wealth of a target firm is greater than its shareholders'
wealth before the acquisition. Therefore, if F_min < f < F_max then there is
no dilution in the post acquisition shareholder wealth of an acquiring firm or
a target firm. By substituting f = F we obtain the following. When F < F_max
the value of the cap, given by X_T^k = max[F N_B S_A^k − N_B S_A F_max, 0], is 0. Similarly,
when F > F_min the value of the floor, given by Y_T^k = max[N_B S_A F_min − N_B F S_A^k, 0],
is 0. The value of the collar can be found by holding a cap and shorting a floor;
collar = cap − floor, which is equal to zero. In Table 1 we summarize the terminal
payoffs of the cap and the floor. □

Pricing the Cap and Floor

Using the terminal payoff values in Table 1, we can now price the cap and the floor
based on risk-free arbitrage pricing. The payoff values (X_T^k) of the call option in
each period (T) and state (k) can be calculated as:

$$X_T^k = \max[N_B S_A^k F - U, 0]$$

The payoff values (Y_T^k) of the put option in each period (T) and state (k) are given
by:

$$Y_T^k = \max[L - N_B S_A^k F, 0]$$

Table 1. Terminal Payoffs to the Buyer and Seller.

Case 1: F > F_max and S_A^k > S_A
  Acquiring firm real call option (cap):
    Objective:   min[U, V_T^k] = F_max N_B S_A
    Best value:  with the maximum exchange ratio (F_max)
    Payoff:      max[F N_B S_A^k − F_max N_B S_A, 0] = N_B S_A^k F − U
  Target real put option (floor):
    Objective:   max[U, V_T^k] = F N_B S_A^k
    Best value:  with the fixed exchange ratio (F)
    Payoff:      max[F_max N_B S_A − F N_B S_A^k, 0] = 0

Case 2: F < F_min and S_A^k > S_A
  Acquiring firm real call option (cap):
    Objective:   min[L, V_T^k] = F N_B S_A^k
    Best value:  with the fixed exchange ratio (F)
    Savings:     max[F N_B S_A^k − L, 0] = 0
  Target real put option (floor):
    Objective:   max[L, V_T^k] = F_min N_B S_A
    Best value:  with the minimum exchange ratio (F_min)
    Payoff:      max[F_min N_B S_A − F N_B S_A^k, 0] = L − N_B S_A^k F

In Fig. 3, we present the payoff values in each state (k) and time (T) for a real call
and a real put (shown in boxes). Next, we employ a risk neutral approach to price
the real call and real put options. Since we have assumed that the time between
announcement and final consummation is divided into four decision points, we
need to find the managerial flexibility available to both buyer and seller in each period
T = 1, 2 and 3. The value of managerial flexibility to the buyer can be priced
as three European calls with respective maturities at time T = 1, 2 and 3. Let X_T
denote the value of a T period call. For example, a two period call to offer the minimum
deal value in period T = 2 can be valued as follows.
The real call X_2 is valued by finding the terminal payoff values at T = 2,
X_2^{++}, X_2^{+−}, X_2^{−−}, and folding back two periods. For the payoffs at T = 2 we obtain

$$X_2^{++} = \max[N_B S_A^{++} F - U, 0]$$

$$X_2^{+-} = \max[N_B S_A^{+-} F - U, 0]$$

$$X_2^{--} = \max[N_B S_A^{--} F - U, 0]$$

Fig. 3. Payoff Values for the Real Call and Real Put (in Box).

Using the standard binomial approach, we next find the values of X_2^+ and X_2^− at the end
of period 1 by risk-neutral discounting. Specifically,

$$X_2^{+} = e^{-r_f \Delta T}[p_A X_2^{++} + (1 - p_A)X_2^{+-}]$$

$$X_2^{-} = e^{-r_f \Delta T}[p_A X_2^{+-} + (1 - p_A)X_2^{--}]$$

Thereafter, by applying risk-neutral discounting one more time, we compute the
value of the real call X_2 as:

$$X_2 = e^{-r_f \Delta T}[p_A X_2^{+} + (1 - p_A)X_2^{-}]$$

Similarly, using the risk neutral procedure we can calculate the flexibility to minimize
deal values at T = 1 and 3, given by X_1 and X_3 respectively.
The value of managerial flexibility to a target can be priced as three European
puts with respective maturities at time T = 1, 2 and 3. Let Y_T denote the value of
a T period real put option. For example, the one period put to receive the maximum
deal value in period T = 1, Y_1, can be valued by finding the terminal payoff values
at T = 1, Y_1^+ and Y_1^−, and folding back one period. For the payoffs at T = 1 we obtain

$$Y_1^{+} = \max[L - N_B S_A^{+} F, 0]$$

$$Y_1^{-} = \max[L - N_B S_A^{-} F, 0]$$

Using risk neutral discounting, we next find the value of the one period put option as

$$Y_1 = e^{-r_f \Delta T}[p_A Y_1^{+} + (1 - p_A)Y_1^{-}]$$

Similarly, by employing the risk neutral discounting procedure we can calculate
the flexibility to maximize deal values at T = 2 and 3, denoted by Y_2 and Y_3
respectively.
respectively.

Pricing the Collar

In order to consider the value of the managerial flexibility available to both the target
and the acquiring firm to benefit by renegotiating the deal at a future decision point,
we can go long on the real call and short the real put. This would cap the theoretical
deal value at the maximum exchange ratio to benefit the buyer while guaranteeing
a deal value based on the minimum exchange ratio to benefit the seller. Since the cap
and the floor have different strike prices, U_T^k = N_B S_A^k F_max and L_T^k = N_B S_A^k F_min, by
holding a cap and selling a floor we can effectively create a collar. Therefore, the
value of the collar at any period T can be computed as the difference between the value
of managerial flexibility to a buyer (call) and the value of flexibility to a seller (put),
given by φ_T = X_T − Y_T. Thus the deal value including managerial flexibility to
both buyer and seller (V) can be calculated as:

$$V = V_0 + \sum_{T=1}^{3} \phi_T$$

where (V_0) is the acquisition value without managerial flexibility agreed between
the two parties at time T = 0, given by V_0 = F N_B S_A, where F is the fixed exchange
ratio, and φ_T is the net value of flexibility: (value of flexibility to buyer) − (value
of flexibility to seller). Notice that the value of flexibility to a buyer will increase the
purchase price of an acquisition while the value of flexibility to a seller would reduce
it. The value of an acquisition if only the buyer's managerial flexibility to renegotiate
the deal by exercising the real call option is considered is equal to

$$V_A = V_0 + \sum_{T=1}^{3} X_T$$

Similarly, the value of an acquisition if only the seller's managerial flexibility to
renegotiate the deal by exercising the real put option is considered is equal to

$$V_B = V_0 - \sum_{T=1}^{3} Y_T$$

A real option collar can alternatively be valued instead of separately pricing each
cap and floor. In order to do so, one can find the value of the collar (φ_T) at each time
period (T). Consider the terminal collar payoffs at each state (k) and time (T), which
are equivalent to max[N_B S_A^k F − U, 0] − max[L − N_B S_A^k F, 0], and then price the
collar at each period (T) using risk neutral discounting. This method will
yield an equivalent value for the real option collar.
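Under the same assumptions as the sketch above, the aggregation reduces to a few lines of Python; here `cap_values` and `floor_values` stand for X_T and Y_T prices obtained from a routine like the earlier one, and the numbers are placeholders.

```python
def collar_values(cap_values, floor_values):
    """phi_T = X_T - Y_T for each decision point T."""
    return [x - y for x, y in zip(cap_values, floor_values)]

def deal_value_with_flexibility(v0, cap_values, floor_values):
    """V = V_0 + sum_T (X_T - Y_T): fixed-ratio value plus the net collar value."""
    return v0 + sum(collar_values(cap_values, floor_values))

if __name__ == "__main__":
    v0 = 135_000_000.0                         # placeholder fixed-ratio deal value
    caps = [3.0e6, 5.5e6, 7.2e6]               # placeholder X_1, X_2, X_3
    floors = [0.2e6, 0.4e6, 0.9e6]             # placeholder Y_1, Y_2, Y_3
    print([f"{phi:,.0f}" for phi in collar_values(caps, floors)])
    print(f"V with flexibility = {deal_value_with_flexibility(v0, caps, floors):,.0f}")
```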

ACCOUNTING ISSUES: BUSINESS COMBINATIONS AND CONTINGENT CONSIDERATIONS

Types of Business Combinations

Merger and acquisition transactions are treated for accounting purposes as business
combinations. In a business combination transaction one economic unit unites
with another or obtains control over another economic unit. Accordingly, there are
two forms of business combinations: purchase of net assets, where an enterprise
acquires net assets that constitute a business; and purchase of shares, where an
enterprise acquires sufficient equity interest in one or more enterprises and obtains
control of that enterprise or enterprises. The enterprise(s) involved in a business
combination can be either incorporated or unincorporated. However, the purchase
of some (less than 100%) of an entity's assets is not a business combination. The
form of consideration in a business combination could be cash or a future promise
to pay cash, other assets, common or preferred stock, a business or a subsidiary, or
any combination of the above.
In a purchase of assets, an enterprise may buy only the net assets and leave the
seller with cash or other consideration, and liabilities. Alternatively, a buyer may
purchase all the assets and assume all the liabilities. The more common form of
business combination, however, is the purchase of shares. In a purchase of shares
transaction, the acquiring firm's management makes a tender offer to target shareholders
to exchange their shares for cash or for acquiring firm shares. The target
continues to operate as a subsidiary. In both a purchase of net assets and a purchase of
shares the assets and liabilities of the target are combined with the assets and liabilities
of the acquiring firm. If the acquiring firm obtains control by purchasing net
assets, the combining takes place in the acquirer's books. If the acquirer achieves control
by purchasing shares, the combining takes place when the consolidated financial
statements are prepared. The types of business combinations are summarized in Fig. 4.

Fig. 4. Purchase of Net Assets and Purchase of Share Transactions.

Accounting for Business Combinations

Prior to July 1, 2001, in the U.S., there were two alternative approaches to account
for business combinations. The pooling method was used when an acquirer could
not be identified. Stock deals were accounted for by the pooling method. The
pooling method was only possible in a share exchange since, if cash was offered,
then the company offering cash became the acquirer and the purchase method had
to be used. Under the pooling method, the price paid was ignored, fair market
values were not used, and the book values of the two companies were added together.
The pooling method avoided creating goodwill and ignored the value created in a
business combination transaction. Also, reported earnings were higher.
Since July 1, 2001, revised accounting standards in the U.S. allow only the
purchase method to account for mergers and acquisitions. Regardless of the
purchase consideration, if one company can be identified as the acquirer, the
purchase method has to be used. Under this method, the acquiring company
records the net assets of the target at the purchase price paid. The purchase price
may include cash payments, the fair market value of shares issued, and the present
value of promises to pay cash in the future. Goodwill is the difference between the
purchase price paid and the fair market value of the net assets of the target. Goodwill
is reviewed for impairment and any impairment loss is charged against earnings.
Future reported earnings under the purchase method are lower.
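As a simple numerical illustration of the goodwill definition above (hypothetical figures, not taken from the BB&T case discussed later), the calculation under the purchase method can be sketched as follows.

```python
def goodwill(purchase_price: float, fmv_net_assets: float) -> float:
    """Goodwill = purchase price paid - fair market value of the target's net assets."""
    return purchase_price - fmv_net_assets

if __name__ == "__main__":
    # Hypothetical figures for illustration only.
    purchase_price = 200_000_000.0      # cash plus fair value of shares issued, etc.
    fmv_net_assets = 160_000_000.0      # fair market value of identifiable net assets
    print(f"Goodwill recorded = {goodwill(purchase_price, fmv_net_assets):,.0f}")
```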
The purchase method is recommended due to the following limitations of the
pooling method (Hilton, 2003). In the pooling method, fair market values
are ignored and thus do not reflect the value exchanged in an M&A transaction.
Pooling information is less complete, and it is often difficult to compare two
methods of accounting for business combinations under what is essentially the
same economic event. Also, while future reported earnings would be different
under the two methods, future cash flows are essentially the same. Finally,
regardless of whether cash or stock is offered as purchase consideration, the
method of recording an acquisition should be the same.
After identifying the acquiring firm, the acquisition cost has to be allocated
to the assets and liabilities that are acquired. The acquisition cost may consist
of cash paid, the present value of any promise to pay cash in the future, and/or the
fair market value of shares issued. If the fair market value of shares cannot be
ascertained, the fair market value of net assets is used. In addition, the acquisition
cost may also include contingent considerations. We next discuss accounting for
contingencies in business combinations. In the previous sections we illustrated
how one could measure the market price risk tradeoffs to the acquiring and target firm
as real options in a stock for stock exchange. We discussed why the managerial
flexibility to hedge market price risk by exercising the appropriate managerial
options should be considered and reported. Lastly, we consider whether these real
options to hedge market price risk should be treated for accounting purposes as
contingencies in business combinations.

Contingencies

In some situations the consideration promised in an acquisition may be adjustable
depending upon future events. For example, business combination terms may require
an additional cash payment or a share issue contingent on some specific
future event. If the outcome is reasonably certain to occur and can be estimated
on the date of acquisition, the estimated amount is recorded as part of the purchase cost.
However, if on the date of the combination the contingent amount cannot be
estimated and the outcome is uncertain, it is recorded at some future date when the
outcome becomes payable. Whether the contingency amount is considered as part
of the purchase cost or as a capital transaction will depend on the specific circumstances,
as illustrated below.
If the contingency is based on earnings, it is considered an additional cost
of the purchase. The total acquisition cost will be the purchase cost that is incurred
when the acquisition is complete plus the contingency when it is recorded. Since
the contingency is recorded later, a question may arise as to how to treat this
additional amount. The logical treatment would be to consider it as goodwill. If
the contingency is based on future share prices, any consideration arising at the
future date will be recorded at fair market value but as a reduction in the amount of
the original share issue. The consideration is then not considered as an additional
cost of the purchase.

CASE STUDY: BB&T CORPORATION'S ACQUISITION OF BANKFIRST CORPORATION (BKFR)

In this section we present a recent acquisition that occurred in the U.S. banking industry
to illustrate the real option collar model. Herath and Jahera (2001) used this case
to demonstrate how a more flexible deal structure to exchange cash for the fair market
value of a deal may have reduced the dilution of value to acquiring bank shareholders.
On August 23, 2000, BB&T Corporation announced the acquisition of
BankFirst Corporation (BKFR) for $149.7 million in stock. Based on BB&T's
closing price of $26.81, BKFR, with $848.8 million in assets, was valued at
$12.21 per BankFirst share. The closing price of BKFR on announcement day
was $11½. The exchange ratio was fixed at 0.4554 shares of BB&T stock for each share
of BankFirst Corporation. The financial data pertaining to the BB&T and
BKFR acquisition given in Table 2 are used to determine the optimal exchange
ratios for the real option collar model.
Using the above data, the number of shares of BB&T offered for BKFR
is equal to 5,579,216. The post acquisition EPS based on the combined earnings
of the two firms is equal to $1.94 and the post acquisition (P/E) ratio is found
to be 13.75. Since the financial markets are efficient, bootstrapping of EPS is
not possible and the post-acquisition (P/E) ratio will be the weighted average
of the pre-acquisition (P/E) ratios. Incidentally, the post acquisition (P/E) ratio is
the minimum post completion price to earnings ratio (P/E)*_c where the two
expressions for the optimal exchange ratios equate (F_min = F_max). The theoretical
post-merger stock price can be computed using the post-merger (P/E) ratio and the
post-merger EPS relationship as $26.60.

Table 2. Financial Data for BB&T and BKFR.

                                   BB&T (A)         BKFR (B)
Present earnings ($ million)       767              8.754
Shares outstanding                 395,360,825      12,260,500
Earnings per share (EPS)           $1.94            $0.72
Stock price (average)              $26.71           $9
P/E ratio                          13.9             12.50
Target's premium                                    35%
The deal announced on August 23, 2000 closed on December 28, 2000 for
$216.2 million in stock. Accordingly, on a per share basis the deal was valued at
$17.42 per BKFR share based on the BB&T closing price of $38.25. Notice that over
the four-month period from announcement to closing, the deal value increased
by $66.6 million, a 44% increase. Theoretically, this increase is a hidden loss
to the shareholders of the acquiring bank since they are effectively paying more for
essentially the same net assets than if the deal had closed at the original value of $149.7
million. Notice that the fundamental economics of the acquisition have not changed
since the expected post acquisition cash flows of the target remain unchanged. The
prices paid in a stock swap are real prices, and as such there is greater dilution
of the equity interest of acquiring firm shareholders. From a purely accounting
perspective, however, it would not make a difference since the transaction would
have been recorded using the pooling method at the historic book values of net assets.
What would have been the deal value if both BB&T's and BKFR's managerial
flexibility to renegotiate the deal by exercising real call and put options had been
considered? How much would that flexibility be worth? The data for our model
were obtained from company annual reports. The volatilities of BB&T and BankFirst
stocks were estimated using stock price data from August 1998 to December
2000. BankFirst Corporation went public in August 1998. Although for illustration
purposes we used stock price data from August 1998 to December 2000, notice
that stock price data beyond the announcement date should not be used.
Data for the model are summarized as follows:
• BB&T Corporation annual stock price volatility σ_A estimated using historic data
is 35.7%. BankFirst Corporation annual stock volatility σ_B using historic data is
found to be 33%. The binomial lattices for BBT and BKFR along with the actual
stock prices (high) are presented in Appendix Exhibit 2. Volatility is measured
as the standard deviation of log returns of the stock price over the period.
• A constant risk-free rate, r_f = 6%, is assumed.
• The fixed exchange ratio is F = 0.4554 BBT shares for each BKFR share.
• The fair market value of BankFirst Corporation net assets is N_B F S_A = $149,500,000.
• The number of BKFR common shares outstanding is N_B = 12,260,500 shares.
• The time from announcement to closing is Δt = 4 months since the deal was
announced at the end of August and closed at the end of December. Thus Δt is divided
into 4 periods of equal length (ΔT = 1 month or 0.0833 years).
• The market price of BB&T stock at announcement is S_A = $26.81.
Stock price data of BB&T and BKFR used to estimate the volatility, and the resulting
binomial parameters and lattices for the movements of the stock prices of the acquiring
bank and the target, are shown in Appendix Exhibits 1 and 2. Once we develop the
binomial trees pertaining to each stock, we next compute the variable stock
exchange ratio (f^k) in each state (k). For example, the variable stock exchange
ratio at T = 0 is computed as f^k = 12/26.68 = 0.4498. The variable exchange
ratios in spreadsheet format are shown in Table 3. In spreadsheet format, an upward
movement is shown directly to the right and a downward movement is shown
directly to the right but one step down.

Table 3. Variable Exchange Ratio of BKFR/BBT Stock Price (f^k).

T = 0      T = 1      T = 2      T = 3      T = 4
0.4498     0.4463     0.4428     0.4394     0.4360
           0.4533     0.4498     0.4463     0.4428
                      0.4568     0.4533     0.4498
                                 0.4604     0.4568
                                            0.4640
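Under the volatility and time-step assumptions listed above, the Table 3 entries can be regenerated with a short script such as the following sketch (helper names are ours). With σ_A = 35.7%, σ_B = 33%, ΔT = 0.0833 and starting prices of 12 and 26.68, the computed ratios agree with the Table 3 values up to rounding in the fourth decimal.

```python
import math

def ratio_lattice(s_b0, s_a0, sigma_b, sigma_a, dt, steps):
    """f^k = S_B^k / S_A^k over a recombining lattice; entry j in each row has j down-moves."""
    u_b, d_b = math.exp(sigma_b * math.sqrt(dt)), math.exp(-sigma_b * math.sqrt(dt))
    u_a, d_a = math.exp(sigma_a * math.sqrt(dt)), math.exp(-sigma_a * math.sqrt(dt))
    table = []
    for t in range(steps + 1):
        row = []
        for downs in range(t + 1):
            s_b = s_b0 * u_b ** (t - downs) * d_b ** downs
            s_a = s_a0 * u_a ** (t - downs) * d_a ** downs
            row.append(s_b / s_a)
        table.append(row)
    return table

if __name__ == "__main__":
    # Inputs as described in the case study (starting prices 12 and 26.68).
    lattice = ratio_lattice(12.0, 26.68, 0.33, 0.357, 0.0833, 4)
    for t, row in enumerate(lattice):
        print(f"T={t}:", [round(f, 4) for f in row])
```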

Fig. 5. Optimal Exchange Ratios.


In order to price each cap and floor, one needs to find the maximum and minimum
exchange ratios. The relationship between these two exchange ratios and the
post merger (P/E) ratio is shown in Fig. 5. Notice that the fixed exchange
ratio of 0.4554 will only benefit BB&T. Since the minimum post merger (P/E)
ratio is 13.75, for both firms' shareholders to benefit we selected a post-merger
(P/E) ratio of 13.8, which falls in the optimal region. The corresponding
maximum and minimum exchange ratios are 0.4591 and 0.3358. The optimal
exchange ratios and the resulting minimum and maximum agreed upon deal values are
as follows:
• Maximum exchange ratio F_max = 0.4591.
• Minimum exchange ratio F_min = 0.3358.
• Agreed upon minimum deal value L = $110.4 million.
• Agreed upon maximum deal value U = $150.9 million.
In order to price the four real call options (caps) and the four real put options (floors)
pertaining to each decision point T = 1, 2, 3 and 4, we use the formulas for X_T^k and Y_T^k
at each state (k) and time (T). The terminal payoff for the cap at T = 2, k = ++
is X_2^{++} = max{(12,260,500)(32.79)(0.4554) − 150,900,000, 0} = 32,155,266.
Similarly, we can compute the terminal payoff of the floor at T = 4, k = −−−− as
Y_4^{−−−−} = max{110,400,000 − (12,260,500)(17.67)(0.4554), 0} = 11,737,537.
The terminal payoff values for pricing the cap and the floor are presented
in Table 4.
Table 4. Terminal Payoff Values.

                              T=0   T=1          T=2          T=3          T=4
Cap payoff values (X_T^k)
                                    14,228,892   32,155,266   52,027,630   74,057,231
                                    0            0            14,228,892   32,155,266
                                                 0            0            0
                                                              0            0
                                                                           0
Floor payoff values (Y_T^k)
                                    0            0            0            0
                                    0            0            0            0
                                                 0            0            0
                                                              1,029,571    0
                                                                           11,737,537

Using the formulas developed in the preceding section, we next calculate the values of the four caps. This is done by finding the expected terminal payoff values using the risk-neutral probabilities and discounting by the short-term interest rate. More specifically, a two-period cap is valued by first finding the risk-neutral discounted payoffs at T = 1:

X_1^+ = e^{−(0.06)(0.0833)}[0.499(32,155,266) + (0.501)(0)] = $15,950,563
X_1^− = e^{−(0.06)(0.0833)}[0.499(0) + (0.501)(0)] = $0

and then applying risk-neutral discounting one more time to find the value of the cap at T = 0, X_2:

X_2 = e^{−(0.06)(0.0833)}[0.499(15,950,563) + (0.501)(0)] = $7,912,249

Similarly, by using the formulas developed in the preceding section, we can calculate the values of the four floors (Y_T). The value of managerial flexibility available to BB&T (X_T) and the value of managerial flexibility available to BKFR (Y_T) at each decision point T = 1, 2, 3 and 4 are presented in Table 5. By going long on a cap and shorting a floor, we find the value of the real option collar (φ_T). The value of the real option collar at each decision point includes the value of managerial flexibility available to both BB&T (the acquiring firm) and BKFR (the target). The four real option collar values are presented in Table 5.
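For readers who want to reproduce the cap and floor values in Table 5, the sketch below (our own illustration, not the authors' code) runs the risk-neutral backward induction on the BBT lattice using the inputs stated above; small differences from Table 5 reflect rounding of the lattice prices and probabilities.

```python
import math

N_B, F = 12_260_500, 0.4554          # BKFR shares outstanding and fixed exchange ratio
L, U = 110_400_000, 150_900_000      # agreed-upon minimum and maximum deal values
r_f, dt, p = 0.06, 0.0833, 0.499     # risk-free rate, subinterval, risk-neutral probability
disc = math.exp(-r_f * dt)

def bbt_prices(t, s0=26.68, u=1.1086, d=0.9021):
    """BBT prices at time t, ordered from the most up-moves to the most down-moves."""
    return [s0 * u ** (t - k) * d ** k for k in range(t + 1)]

def real_option_value(maturity, payoff):
    """Time-0 value of a European real option on the deal, by backward induction."""
    values = [payoff(s) for s in bbt_prices(maturity)]
    for _ in range(maturity):
        values = [disc * (p * values[i] + (1 - p) * values[i + 1])
                  for i in range(len(values) - 1)]
    return values[0]

for T in (1, 2, 3, 4):
    cap = real_option_value(T, lambda s: max(N_B * s * F - U, 0.0))    # buyer's flexibility X_T
    floor = real_option_value(T, lambda s: max(L - N_B * s * F, 0.0))  # seller's flexibility Y_T
    print(f"T={T}: cap={cap:,.0f}  floor={floor:,.0f}  collar={cap - floor:,.0f}")
```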
The conventional stock for stock offer agreed by BB&T and BKFR did not take into consideration the real options available to both buyer and seller in structuring the acquisition. Thus the value of the acquisition without the flexibility to renegotiate was V_0 = $149.5 million.

Table 5. Cap, Floor and Collar Values.

Description                        T=1          T=2          T=3           T=4           Total

Cap (X_T)
  Value of buyer's flexibility     $7,058,217   $7,912,249   $11,591,413   $12,317,427   $38,879,306
Floor (Y_T)
  Value of seller's flexibility    0            0            $127,898      $727,536      $855,435
Collar (φ_T)
  Combined value of buyer and
  seller flexibility               $7,058,217   $7,912,249   $11,463,515   $11,589,891   $38,023,872
DISCUSSION
The original deal, which was valued at $149.7 million, was closed on December 28, 2000 for $216.2 million in stock, a dilution of $66.6 million to BB&T shareholders. If the acquisition had been structured to include only BB&T Corporation's managerial flexibility to cap the deal value and hedge the dilution to its shareholders, the deal would be valued at $188.6 million. Alternatively, if the deal were structured with only BKFR's flexibility to benefit its shareholders, it would be worth $148.9 million. Ideally, the acquisition could have been fairly valued by considering the managerial flexibility available to both BB&T and BKFR. This would result in a deal valued at $187.72 million. The fair deal value based on the real option collar model is thus $28.5 million lower than the actual closing value.
Although we considered managerial flexibility to hedge market price risk avail-
able to both parties as a collar, at consummation of the deal only a cap or a floor
will be exercised since they are mutually exclusive managerial actions. Therefore,
when negotiating, the collar arrangement should clearly provide a contractual
provision that allows for the final purchase cost to be computed as the deal value
without flexibility plus the real call or put value that is relevant. For example, let
us assume that the deal was consummated four months later when BBT’s stock
price is $38.25. This would indicate that the acquiring firm’s management would
exercise a four period call to cap the deal. The value of the four period call is
$12 million and the purchase cost to the acquiring firm would be $161 million
(deal value of $149.7 million without flexibility plus cost of the call option of
$12 million). At the time of consummation the acquiring firm’s management will
exercise the call option and cap the deal to the agreed maximum value of $150.9
million. By exercising the call option, the acquiring firm has hedged a dilution
of $65 million ($216 million less $150.9 million) at a cost of $12 million. The
corresponding managerial actions for the two firms at each decision state (k) are
shown in Table 6. The corresponding payoff diagrams of the collar are shown
in Fig. 6.

Table 6. Optimal Exercise Decisions.

T=0    T=1    T=2      T=3      T=4
       CAP    CAP      CAP      CAP
       –      –        CAP      CAP
              –        –        –
                       FLOOR    –
                                FLOOR
Fig. 6. Hedging the Risks.

We have demonstrated that a better way to structure a stock for stock transaction subject to stock price variability is to consider the value of managerial flexibility in the acquisition structure. Conventional stock for stock transactions ignore the value of managerial flexibility available to both parties, which may be significant. Using real option methodology, we demonstrated how this managerial flexibility could be valued using market-based data. In this paper we propose that these real options should also be accounted for as contingencies in business combination transactions so that managers are held accountable. In summary, we have argued the case for real options as responses to anticipated managerial actions, which provide a mechanism to commit managers to desirable behaviors that mitigate EPS dilution over the period between the fixing of the exchange ratio and the completion of the acquisition.
Real option methodology is a significant step forward in conceptualizing and valuing managerial flexibility in strategic investment decisions. The real option methodology is conceptually superior, especially when there is a high degree of uncertainty associated with investment decisions, but it also has limitations. While much of the academic research in real options to date has been done by corporate finance academics, there is scope for extending and applying real option methodology in management accounting areas such as capital budgeting for IT investments, research and development, and performance measurement and evaluation.
ACKNOWLEDGMENTS
We thank Harjeet Bhabra, Graham Davis, Bruce McConomy and Peter Wilson
for their helpful suggestions. The paper also benefited from comments received at presentations at the 2002 Northern Finance Association meeting in Banff, Canada, the 2003 AIMA Conference on Management Accounting Research in Monterey, California, and the 2003 CAAA Annual Conference in Ottawa, Canada.

APPENDIX

Exhibit 1. Monthly Stock Prices and Returns for BBT and BKFR (August 1998
to December 2000).
Month    BBT Stock (Acquirer)                 BKFR Stock (Target)
         Closing Price ($)   Return r_i       Closing Price ($)   Return r_i

1 26.23 – 12.00 –
2 28.05 0.0669 11.38 −0.0535
3 33.70 0.1835 11.00 −0.0335
4 34.82 0.0327 9.75 −0.1206
5 38.00 0.0874 8.94 −0.0870
6 35.75 −0.0611 10.31 0.1431
7 35.87 0.0033 10.81 0.0473
8 34.27 −0.0456 10.00 −0.0782
9 37.99 0.1032 10.00 0.0000
10 34.72 −0.0900 9.22 −0.0813
11 34.90 0.0051 9.31 0.0101
12 33.73 −0.0342 9.00 −0.0342
13 32.05 −0.0509 8.88 −0.0140
14 30.97 −0.0342 9.50 0.0681
15 35.02 0.1226 9.00 −0.0541
16 31.05 −0.1204 9.38 0.0408
17 26.35 −0.1639 8.63 −0.0834
18 27.29 0.0349 8.38 −0.0294
19 22.80 −0.1797 8.00 −0.0458
20 27.23 0.1774 7.31 −0.0898
21 26.02 −0.0454 8.63 0.1650
22 28.65 0.0962 7.75 −0.1070
23 23.33 −0.2052 8.25 0.0625
24 24.58 0.0519 9.13 0.1008
25 26.68 0.0824 12.00 0.2739
26 29.69 0.1066 13.75 0.1361
27 31.67 0.0646 14.00 0.0180
28 33.16 0.0460 15.00 0.0690
29 37.07 0.1115 17.13 0.1325
Mean return 0.0123 0.0127
Standard deviation 0.1030 0.0971
Annual standard deviation 35.7% 33.6%
Annual mean 14.8% 15.2%
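The annualized volatilities in the exhibit follow from the standard deviation of the monthly log returns, scaled by √12. A small sketch of that calculation for the BBT series (our own code; it should reproduce a value close to the 35.7% shown above):

```python
import math

def annualized_volatility(prices):
    """Annualized sample standard deviation of monthly log returns (scaled by sqrt(12))."""
    rets = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)   # sample variance
    return math.sqrt(var) * math.sqrt(12)

bbt_prices = [26.23, 28.05, 33.70, 34.82, 38.00, 35.75, 35.87, 34.27, 37.99, 34.72,
              34.90, 33.73, 32.05, 30.97, 35.02, 31.05, 26.35, 27.29, 22.80, 27.23,
              26.02, 28.65, 23.33, 24.58, 26.68, 29.69, 31.67, 33.16, 37.07]
print(f"BBT annualized volatility: {annualized_volatility(bbt_prices):.1%}")
```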
Exhibit 2. Binomial Parameters and Stock Prices in the Four-Step Lattice.

Binomial Parameters                    BBT      BKFR

Risk-free rate (r_f)                   6%       6%
Volatility (σ)                         35.7%    33%
Subinterval (ΔT)                       0.0833   0.0833
Proportion of upward movement (u)      1.1086   1.0999
Proportion of downward movement (d)    0.9021   0.9091
Growth rate during period ΔT (a)       1.0050   1.0050
Risk-neutral probability (p)           0.499    0.502

                        T=0      T=1      T=2      T=3      T=4
Lattice for BBT         26.68    29.58    32.79    36.35    40.29
                                 24.07    26.68    29.58    32.79
                                          21.71    24.07    26.68
                                                   19.58    21.71
                                                            17.67
Actual stock price      26.68    29.68    31.67    33.16    37.07

Lattice for BKFR        12.00    13.20    14.52    15.97    17.57
                                 10.91    12.00    13.20    14.52
                                          9.92     10.91    12.00
                                                   9.02     9.92
                                                            8.20
Actual stock price      12       13.75    14       15       17.13
CONNECTING CONCEPTS OF
BUSINESS STRATEGY AND
COMPETITIVE ADVANTAGE TO
ACTIVITY-BASED MACHINE COST
ALLOCATIONS

Richard J. Palmer and Henry H. Davis

ABSTRACT
As manufacturers continue to increase their level of automation, the issue of
how to allocate machinery costs to products becomes increasingly important
to product profitability. If machine costs are allocated to products on a
basis that is incongruent with the realities of machine use, then income and
product profitability will be distorted. Adding complexity to the dilemma of
identifying an appropriate method of allocating machine costs to products
is the changing nature of machinery itself. Depreciation concepts were
formulated in days when a machine typically automated a single operation on
a product. Today’s collections of computer numerically controlled machines
can perform a wide variety of operations on products. Different products
utilize different machine capabilities which, depending on the function used,
put greater or less wear and tear on the equipment. This paper presents
a mini-case that requires management accountants to consider alternative
machine cost allocation methods. The implementation of an activity-based
method allows managers to better match machine cost consumption to products.
Better matching of machine costs to products enables better strategic
decisions about pricing, mix, customer retention, capacity utilization, and
equipment acquisition.

Advances in Management Accounting, Volume 12, 219–236
Copyright © 2004 by Elsevier Ltd.
All rights of reproduction in any form reserved
ISSN: 1474-7871/doi:10.1016/S1474-7871(04)12010-8

INTRODUCTION
This paper presents a mini-case that requires management accountants to consider
alternative methods of machine cost allocation and how activity-based logic may
assist modern businesses in connecting machine costs in fixed-cost intensive
environments to products based on the demands products place on the machine.
Better matching of machine costs to products enables better strategic decisions
about pricing, mix, customer retention, capacity utilization, and equipment
acquisition.

PAT’S CAR RENTALS

Pat is the first person in the neighborhood to buy a new car. Impressed by the vehicle,
all of Pat’s neighbors express an interest in renting the car to satisfy their various
travel needs. Being a friendly (and entrepreneurial) neighbor, Pat wants to rent the
car to each neighbor and establish “Pat’s Car Rentals.” However, while renting the
car to each neighbor at the same price may maintain harmony on the block, Pat
knows that each neighbor plans to use the car quite differently. For example:

• Neighbor A is a teenager who wants to borrow the car for dating purposes. Pat knows Neighbor A will drive at a high rate of speed about town, blast the CD player, and accelerate and decelerate quickly while making many stops to pick up friends.
• Neighbor B wants to borrow the car for a short vacation driving in the nearby mountains. Neighbor B plans to attach his recreational camper by tow ball to Pat's car for the vacation.
• Neighbor C plans to deliver telephone books throughout town. This driving will entail many stops and starts as well as ignitions and re-ignitions of the engine.
• Neighbor D wants to take the car on a trip to a large city near Pat's town. In the city, Neighbor D will primarily be engaged in start-and-stop city driving.
• Neighbor E wants to use the car to take a vacation at the beach. This will entail a long straight drive on the flat Interstate highway from Pat's town to the coast. Neighbor E will drive the car at the posted speed limits.
• Neighbor F wants to use the car for short "off-road" adventures. Neighbor F will drive the car off of the main roads to various hunting and fishing locations within the region.
• Neighbor G is an elderly person who plans limited driving about town, primarily to nearby church social events.

Pat estimates that the total automobile cost will equal $40,000. Pat believes
that the useful life of the auto depends upon the type of miles driven. In other
words, Pat recognizes that the diminution of the value and physical life of the
car will differ depending upon the lessee’s pattern of use. If lessees put “easy
miles” on the car, Pat estimates the car will go 200,000 miles before disposal;
if “hard miles” are put on the car, Pat estimates the car has a useful life of
100,000 miles.
Pat has evaluated the local rental car agency pricing scheme as a guide to
determine how much should be charged per mile. The local rental agency charges
a flat rate per mile to all lessees. However, Pat feels that the agency’s “one size fits
all” mileage rate doesn’t make sense given the wide differences in vehicle usage
by customers. Pat believes that knowledge of the lessees’ use of the automobile
can be used to obtain a competitive advantage and that accounting records should
reflect the economic fact that different customers consume the car to different
degrees. For example, Pat could use knowledge about customer driving habits to
offer a lower rental rate to Neighbor G (the grandparent) than that neighbor could
obtain at the local rental agency. By doing so, Pat will attract more neighborhood
customers who put “easy miles” on the automobile like Neighbor G. Conversely,
Pat’s rental rates for Neighbor A (the teenager) and Neighbor B (heavy load puller)
will likely be higher than the local car rental agency, driving these customers to
the lower rates of the local car rental agency (which are, in effect, subsidized by
neighbor G-type customers). If the local rental agency does not understand and
use information about the automobile usage patterns of its customers for pricing
decisions, it will eventually be driven out of business by its "hard miles"
customer base. Pat reckons that variable costs to operate the vehicle are irrelevant,
since these costs are borne by the lessee.
Pat seeks your assistance in developing a reliable estimate of the relevant costs
associated with automobile use. First, Pat wants to develop a costing method that
measures the diminution in value associated with their particular auto usage patterns.
Second, Pat’s Car Rentals must develop a method for monitoring the actual
manner of car use by customers. Though presently confident of the neighbors’
planned uses for the automobile, Pat is not confident that potential customers will
accurately reveal their usage patterns when different rental rates are applied to
different types of auto usage.
Fig. 1. Machine and Machine-Related Cost Center Allocation Model.

Another friend provided Pat with a diagram of how an organization should,
in theory, allocate machine-related costs (see Fig. 1). Pat doesn’t understand
what is meant by intensity, duration, and transaction allocation methods, but has
identified the following items as important determinants that decrease the useful
life of an automobile: load pulled, number of starts and stops, number of ignitions,
average speed, rate of acceleration, miles driven and off-road activities, or type
of terrain traversed. In Pat’s mind, each type of automobile use is a “cost driver”
that contributes equally to the diminution in the value of the vehicle (e.g. load
pulling wears out a car in the same degree as start/stop traffic) and, if performed
on a consistent basis, would reduce vehicle life by about 33,333 miles.

Required

(1) Explain the concept of activity-based "cost drivers."
(2) Explain the concepts of: (a) intensity; (b) duration; and (c) transaction drivers.
Classify Pat’s determinants of auto usage into one of these three categories
and justify your classification scheme.
(3) Develop a costing method that recognizes differential auto consumption due
to different rental usage patterns.
(4) Compare and contrast the cost method developed in (3) with that traditionally
assigned to rental cars.
(5) Devise a method to accurately monitor actual auto usage patterns of renters.
(6) How does Pat’s alternative costing/pricing method provide Pat with a compet-
itive advantage in the rental car business?

PAT'S CAR RENTALS: TEACHING NOTES
Background for Instructional Use

Faced with the need to raise productivity to survive, especially against low cost
competitors in nations such as China, North American companies are pushing to-
ward fully automated production processes (known as “lights-out” or “unattended
operations” manufacturing) (see, for example, Aeppel, 2002). As manufacturers
continue their inexorable movement toward increased automation, the issue of how
to allocate machine costs to products becomes increasingly important to organiza-
tional understanding of profitability dynamics. Ignorance of profitability dynamics
is dangerous in highly competitive manufacturing contexts. If machine costs are
allocated (i.e. depreciated) to products on a basis that is incongruent with the re-
alities of the machine use, then income and product profitability will be distorted
over the life of the asset.
Pat’s Car Rentals presents a simple scenario about the increasingly important
business and accounting problem of allocating fixed costs to products produced
or services rendered. We have found that this case is easily comprehended by and
provides useful insights to both undergraduate and graduate students, especially
those enrolled in advanced cost or strategic cost management courses.
There are two major hurdles to negotiate in teaching the case. The first hurdle
is to get students to understand the analogy between the “car” and advanced
manufacturing equipment. Like advanced manufacturing equipment, the car
performs multiple operations based on the demands of different customers.
Further, customer demands place differing amounts of stress on machinery (autos)
– in ways not always commensurate with machine hours (driving time). Once
this analogy is comprehended, students tend to view depreciation of machinery
quite differently. The second hurdle relates to the topic itself. The significance of
depreciation is not adequately addressed in current accounting texts. Occasionally
our students have argued that depreciation represents an irrelevant sunk cost and
that any effort to allocate these charges is counterproductive.
The next sections provide a brief historical perspective on the concept of
depreciation and modern application of activity-based principles to the case of
Pat’s Car Rentals. Additionally, the machine cost assignment model presented
in the case is described in greater detail as a mechanism to obtain a better match
between machine resource consumption and machine cost assigned.

A BRIEF HISTORY OF THE MACHINE COST ALLOCATION DILEMMA
Pat's concerns about the relationship between the neighbors' use and the
physical life of the automobile are non-trivial and shared by managers of
automation-intensive manufacturing operations. For some time, direct labor has
been decreasing while manufacturing overhead (and in particular, machine and
machine-related costs) has been an increasing component of product cost. The
cost of goods sold at Ford Motor Company’s Romeo Engine Plant, for example,
is comprised of 2% direct labor, 23% manufacturing overhead, and 75% direct
material (Kaplan & Hutton, 1997). In high technology computer manufacturing,
labor and material are even smaller components of cost (Cooper & Turney, 1988).1
The most common incarnation of inaccurate machine cost allocation, an
artifact of the popular time-based depreciation method, is product costs that are
unrealistically high and product line income figures that are unrealistically low in
the early years of production.2 Since the movement toward increased automation
is unlikely to abate (particularly given the positive impact of automation on
quality), the problem of machine cost allocation and its impact on product costs
will only increase in importance.3
Adding complexity to the dilemma of identifying an appropriate allocation of
machine cost to products is the changing nature of the machinery itself. Deprecia-
tion concepts were formulated in the days when a machine largely automated one
particular operation on a product. If all products undergo the exact same operation
on the machine, allocating machine costs based on (for example) machine hours is a
reasonable approach to assigning machine costs to products. However, today’s ma-
chines are very different from days past. Computer numerically controlled (CNC)
machines perform a wide variety of operations on products. Collections of CNC
machines (each designed for a broad range of functions), connected by a material
handling system and controlled by a central computer system, comprise a flexible
manufacturing system (FMS). Different products utilize different machine capa-
bilities, which, depending on the capabilities used, put greater or less wear and
tear on the equipment.
POPULAR DEPRECIATION
FRAMEWORKS FOR MACHINERY
Understanding machine resource consumption and the method of allocating the consumption of machinery among products are choices organizations make within accounting, legal, and regulatory boundaries. There are numerous ways to calculate depreciation; accounting rules require only that the method by which asset consumption is measured be rational and systematic (i.e. consistently applied across time periods). Until recently, the measurement and allocation of depreciation charges to products were typically an immaterial issue. However, as noted above, increased levels of investment in machinery now make depreciation methodology a key element of accounting policy (see, for example, Brimson, 1989).
The most popular machine cost allocation schemes are time-based or volume-
based single factor models. However, both of these methods distort the cost of
products produced by the machine. Specific problems with the use of a single
factor method of depreciation, whether time or volume-based, are discussed below.

Time-Based Depreciation Models

Depreciation methods that recover costs over a fixed period of time assume that
value added to the product is independent of individual products and the actual
utilization of technology during the recovery period. Time-based models of de-
preciation dominate modern corporate accounting practice primarily because they
guarantee (assuming that a conservative estimate of useful life is employed) that
machine cost will be assigned (or “recovered”) by the end of a specified deprecia-
ble life. In addition, time-based methods are simple to calculate, require minimal
on-going data collection, and are agreeable to virtually all regulatory reporting
agencies.4
However, there are several important and well-known problems with the use
of time-based depreciation models. First, time-based methods tend to increase
current period expense (and hence product costs) compared to volume-based or
activity-driven cost allocation schemes. This occurs because productivity often
declines significantly in the first year when compared with the prior technology.
The productivity drop in the first year is associated with the learning curve that
companies must scale as they familiarize themselves with new machinery (Hayes
& Clark, 1985; Kaplan, 1986). Hence, time-based depreciation charges are partic-
ularly onerous on individual product costs and return on investment calculations
in the early years.5
Time-based approaches may also lead to dysfunctional behavior. Managers,
conditioned to equate time with cost, will fear that idle machinery is a drain on
profitability (Brimson, 1989). Thus, managers may feel compelled to overproduce,
resulting in excess inventory.
Business cyclicality also highlights weaknesses in time-based depreciation
methods. The problematic interaction of straight-line depreciation with business
cycles was identified by a manager at Cummins Engines (Hall & Lambert, 1996)
as follows:
During the 1980s, the cyclicality of Cummins’ business became more frequent, more pro-
nounced, and longer-lived, to the extent that straight-line depreciation of production equipment
over 12 years no longer resulted in a systematic and rational method of allocating cost. The
straight-line method over 12 years significantly understated product cost in years of high de-
mand while overstating costs significantly in years of low demand (p. 33).
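The Cummins observation is easy to see with stylized numbers. The sketch below (hypothetical figures, not Cummins data) compares the depreciation charge per unit under straight-line and units-of-production methods across a demand cycle: straight-line understates unit cost in high-volume years and overstates it in low-volume years.

```python
machine_cost = 1_200_000                 # hypothetical machine cost
useful_life_years = 12
expected_lifetime_units = 600_000

straight_line_per_year = machine_cost / useful_life_years    # $100,000 per year
units_rate = machine_cost / expected_lifetime_units          # $2.00 per unit

annual_volumes = [80_000, 60_000, 40_000, 20_000, 50_000]    # one stylized demand cycle
for year, units in enumerate(annual_volumes, start=1):
    per_unit_straight_line = straight_line_per_year / units
    print(f"Year {year}: straight-line ${per_unit_straight_line:.2f}/unit "
          f"vs. units-of-production ${units_rate:.2f}/unit")
```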

Volume-Based, Single-Factor Models

A popular alternative to a time-based method is to depreciate machine cost on the
basis of a measure of expected volume. Volume measures may be inputs (such
as actual machine utilization) or outputs (such as standard machine hours for
units produced). Volume-based approaches provide a better match of automation
expenses to changing economic conditions, resulting in the lessening of the im-
pact of economic cycles. There are, however, problems inherent with use of this
methodology.
Problems with depreciation methods that employ a single volume-based factor
are both technical and behavioral. The technical issues relating to depreciation
based solely on output volume measures were summarized by Staubus (1971):
Rarely would one of these [output] measures be completely satisfactory because the units of
service provided by a particular asset are not homogeneous. Every mile traveled by a motor
vehicle is not identical to every other mile traveled; a mile is not a constant measure of service.
This is more obviously true when the asset has a long life so that the conditions of its use change
a great deal (p. 57, emphasis added).

By extension, all machine hours are not equal. The case of Pat’s Rentals, for
example, recognizes that some customers will put “hard miles” (or hours) on the
rented automobile while other customers will put "easy miles" (or hours) on the
car. Likewise, in manufacturing contexts, differences in speed and feed rates of
CNC equipment subject the machinery to differing amounts of stress. Further,
depreciation amounts based on any one volume-related factor fail to address
situations where the machine is not in use. Typically, the machine loses service
potential without regard to actual use because of deterioration or obsolescence.6
Dysfunctional employee behavior may also be encouraged by the use of any sin-
gle volume-based metric for overhead allocation. Cooper and Kaplan (1999) argue,
for example, that the use of machine hours alone to allocate depreciation encour-
ages acceleration in speed and feed rates to increase production with a minimum
number of machine hours. This behavior can damage machinery, reduce product
quality, choke downstream bottleneck operations, and encourage outsourcing of
machined parts that may result in a “death spiral” for the manufacturer.
Staubus suggested that it may be possible to create a refined measure of
depreciation that adjusts for unwelcome variations either by refining the
measurement unit itself or by arbitrarily varying the weights of measured service
units. An example of the former case is John Deere Component Works, where
kilowatt-hours were multiplied by a calculated machine “load” factor to assign
utility costs to four different machines (Kaplan, 1987). An example of the latter
approach would be to use an “adjustment factor” that assigns higher costs to the
early miles in the life of transportation equipment.
Milroy and Walden (1960) recognized multiple causal factors associated with
the consumption of capital resources and suggested “it may well be possible that
a [scientific] method could be devised in which consideration would be given to
variation in the contribution of service units to firm revenue” (p. 319). One means
by which an organization can recognize several significant causes of depreciation
of high technology machinery is to develop a multiple-factor model.

MULTIPLE COST DRIVERS FOR MACHINE COST ALLOCATION
Over fifty years ago, Finney and Miller (1951) stated that no one factor alone is
necessarily the primal causal factor of the consumption of the machine resource.
Their support for the use of a multi-factor approach invoked the following logic:
Although physical deterioration is undoubtedly affected by use, and although it is proper to
adjust depreciation charges to give consideration to varying levels of operations, it does not
follow that the depreciation charges should be exactly proportionate to use. The normal peri-
odical depreciation charge may include provisions for: (a) deterioration caused by wear and
tear; (b) deterioration caused by the action of the elements; and (c) obsolescence. If a machine
which normally is operated for one eight-hour shift daily is, for some reason, operated on a
twenty-four hour basis, the proper depreciation may be more or less than three times the normal
charge (emphasis added, p. 452).

Historically, some innovative companies have used variations of multi-factor
depreciation models. In the early days of the railroad industry, for example, the cost
of transportation equipment was allocated on the basis of ton-miles as opposed to
linear miles (Kaplan, 1985). More recently, Cummins Engines (Hall & Lambert,
1996) managers described their use of a modified units-of-production method
that assumes that depreciation of an asset is a function of both time and usage.
Consequently, depreciation in periods of extremely low production volume is a
fixed amount; yet, as production volume increases above low levels, depreciation
is increasingly attributable to volume.
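One way to formalize such a modified units-of-production rule (our reading of the description above, with hypothetical numbers rather than Cummins' actual schedule) is to charge the greater of a time-based minimum and a volume-driven amount:

```python
def modified_units_depreciation(cost, life_years, lifetime_units, units_this_year):
    """Fixed time-based charge in very low-volume periods; volume-driven above that level."""
    time_based_floor = cost / life_years
    volume_based = (cost / lifetime_units) * units_this_year
    return max(time_based_floor, volume_based)

# Hypothetical machine: $1.2 million cost, 12-year life, 600,000 lifetime units.
for units in (10_000, 50_000, 90_000):
    charge = modified_units_depreciation(1_200_000, 12, 600_000, units)
    print(f"{units:,} units -> depreciation ${charge:,.0f}")
```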
Though previous multi-factor depreciation models are both important and
practical, they do not provide an overarching framework that can be applied to
a broader set of business contexts. The comprehensive multi-factor depreciation
model presented in Pat’s Rentals is built upon the activity-based costing logic
of Cooper and Kaplan (1999). According to Cooper and Kaplan, there are three
types of activity cost drivers: transaction, duration, and intensity. Transaction
drivers, such as the number of setups, number of receipts, and number of products
supported, count how often an activity is performed. Duration drivers, such as
setup hours, inspection hours, and machine hours, represent the amount of time
required to perform an activity. Intensity drivers directly charge for the resources
used each time an activity is performed, such as an engineering change notice or
creation of a pound of scrap. Intensity drivers are the most accurate activity cost
drivers but are the most expensive to implement because they, in effect, require a
direct charging via job order tracking of all resources used each time an activity is
performed.7

A GENERAL MODEL FOR MACHINE COST ALLOCATION
There are two significant challenges relating to machine depreciation: (1)
identifying the appropriate measure of depreciation (as discussed in earlier
sections of this paper); and (2) identifying the best way to assign depreciation
(however measured) to the products that pass through the machine's transformation
process. Applying Cooper and Kaplan's activity driver concepts to the
machine cost allocation problem, the instructor can review the two-stage causal
model of machine resource consumption presented in Fig. 1.
In the first stage, the consumption of the machine resource is measured by ref-
erence to the machine activities that have transpired (thus the dotted line between
intensity, duration, and transaction machine cost activity drivers). In the second
stage, the cost of the machine resource consumed is allocated to products or cus-
tomers on the basis of duration, transaction, and intensity cost drivers associated
with their use of the machine. Further definition of these activity cost drivers is
discussed next.
Duration Drivers

An important factor exhausting the machine resource is volume – typically measured
as the machine's actual processing time or physical production. Machine utilization
time probably is the most popular single activity factor prescribed by proponents
of activity-based costing in machine-intensive environments (see, for example,
Cooper & Kaplan, 1999). A CNC machine or machine center has an estimated
number of productive hours under normal operating conditions. Thus, an hourly
machine utilization rate for productive time is an important measure of the con-
sumption of technology resources.
However, as noted above, all machine hours are not equal. One factor that
modifies the impact of the machine’s actual processing time on the consumption
of the technology resource is the “intensity” (or “load”) placed on the machine.
This aspect of machine utilization is discussed next.

Intensity Drivers

The intensity of machine use (commonly referred to as "load") is defined as the
product of the speed and feed rates at which an operation is performed. Any
departure from the average load will either place additional strain on or conserve
the resource. Within an operation, the path, speed, and feed rate of tool heads are
determined by operators. Identifying the intensity of machine use is analogous
to understanding wear and tear factors on Pat’s rental automobile. Driving an
automobile over mountainous terrain while pulling a trailer or driving at very high
speeds will shorten the life of an automobile. Analogously, operating equipment
at higher than normal prescribed speed and feed rates is a driver of inordinate
consumption of any machine resource. Thus, an intensity factor must be combined
with machine hours to address the weaknesses of depreciation attribution based
solely on machine hours.8

Transaction Drivers

Transaction drivers count how often an activity is performed. A machine operation
is a set of machining tasks that can be completed without repositioning the part.
The number of operations needed to produce a product is one driver of technology
resource consumption. For example, flexible manufacturing systems are capable
of performing many different operations. A typical operation contains several
transformation activities, such as drilling, boring, and reaming. The ability to
perform multiple transformation activities on the same machine is one of the
greatest benefits of FMSs. Some machines have as many as 500 different tools
in their magazine.
A flat charge rate could be established for each machine operation utilized to
produce a product. Charging products on the basis of the number of operations
would motivate the use of the full range of FMS capabilities by encouraging mini-
mal product re-positioning within and across different operations. In addition, such
charges would encourage simplification in product design and the manufacturing
process since unneeded features or operations become more costly. This trans-
action activity consumes the resource by wear and tear associated with changing
from one operation to another. In effect, it is a setup charge – the new workpiece
has to be fetched by the machine and inserted automatically. Upon completion of
that operation the workpiece is ejected, and another workpiece is inserted for the
next operation.
A second transaction driver of machine resource consumption is the number
of geometric movements a machine must make within an operation. This factor
captures the physical complexity of the product. Every movement or shift within
an operation places an additional element of wear and tear on the machinery. For
example, one circuit board may require many movements (twists, turns, rotations,
or shifts) or changes in the tool path to complete diode insertion. On the other
hand, the product recipe of another circuit board may place much less demand on
the machine. Therefore, a shift rate is a relevant factor to capture the wear-and-tear
on an FMS machine. By allocating a portion of the machine cost on the basis of
number of shifts or movements, programmers are given an incentive to program
the geometric path of the machine more efficiently. In the case of Pat’s Rentals,
transaction drivers would include the number of starts and stops and the number
of ignitions.

AN ABC MACHINE COST ALLOCATION MODEL TO SOLVE PAT'S CAR RENTAL DILEMMA
Step One: Identify duration, intensity, and transaction drivers and measures related
to the machinery in use. Table 1 shows the activity drivers and measures in the
case of Pat’s Rentals.
Step Two: Calculate hourly machine cost allocation rates or rates per unit of
output based on combinations of different machine utilization patterns. Table 2
shows how Pat’s rental could assign different depreciation costs of the automobile
to different customers based upon duration, intensity, and transaction activities.
Table 1. Identification of Transaction, Intensity, and Duration Drivers, Cost Driver Categories, Asset Attributes, and Activity Measures Related to Automotive Machinery.

Automobile Machinery                Activity Cost     Asset           Activity
Activity Drivers                    Driver Category   Attribute       Measure

Amount of time car is driven        Duration          Service life    Engine hours
Amount of miles driven              Duration          Physical life   Odometer
Load being pulled                   Intensity         Service life    Average engine RPM; hookups to trailer
Off-road activities                 Intensity         Physical life   Global positioning system (GPS) tracking/shock and strut sensors
Speed                               Intensity         Service life    Global positioning system tracking
Rate of acceleration/deceleration   Intensity         Service life    Brake/tire sensors
Terrain traversed                   Intensity         Physical life   GPS/Average RPM
Start/stops                         Transaction       Service life    Brake sensors
Ignitions/re-ignitions              Transaction       Service life    Ignition sensor

Table 2. Allocation of $40,000 Automobile Costs Depending on Transaction, Intensity, and Duration Drivers.a

        Automobile Consumption Score
(a)        (b)         (c)           (d) Cost           (e) Expected            (f) Depreciation Rate Per
Duration   Intensity   Transaction   Adjustment Score   Automobile Life         Mile on $40,000 Vehicle
                                     (a + b + c)        in Miles                ($40,000 divided by (e))

0          0           0             0                  200,000 (easy) miles    $0.20
0          0           1             1                  166,667                 0.24
0          1           0             1                  166,667                 0.24
1          0           0             1                  166,667                 0.24
0          1           1             2                  133,333                 0.30
1          1           0             2                  133,333                 0.30
1          0           1             2                  133,333                 0.30
1          1           1             3                  100,000 (hard) miles    0.40

a For illustrative purposes only. See section on assumptions and limitations regarding the measurement of resource consumption.

Step Three: As shown in Table 3, apply the different rates calculated in Step Two to assign costs to customers and measure rental profitability. Pat will now show a profit on rental to Customer A if the rental charge includes a depreciation component that is above $0.40 per mile, while profitable rentals may be made to Customer G for half that amount. Other customers fall somewhere in between, depending on usage patterns.

Table 3. Assigning Machine Costs Based on Customer Usage of the Machine – The Case of Pat's Rentals.

Customer                       Machine Activity Cost Driver         Customer     Depreciation
                               Duration   Intensity   Transaction   Use Score    Charge Per Mile

A (teenager)                   1          1           1             3            0.40
B (trailer tow in mountains)   0          1           1             2            0.30
C (delivery service)           0          1           1             2            0.30
D (city driving)               0          0           1             1            0.24
E (high speed highway)         1          0           0             1            0.24
F (off-road)                   0          1           0             1            0.24
G (grandparent)                0          0           0             0            0.20
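The arithmetic behind Tables 2 and 3 can be stated in a few lines of code. The sketch below is our restatement of the case assumptions: each activated driver category shortens expected vehicle life by roughly 33,333 miles, and the $40,000 cost is spread over the resulting life.

```python
VEHICLE_COST = 40_000
EASY_LIFE_MILES = 200_000
LIFE_LOST_PER_DRIVER = 33_333        # case assumption: each driver category contributes equally

def depreciation_rate_per_mile(duration, intensity, transaction):
    """Depreciation charge per mile given 0/1 scores on the three activity-driver categories."""
    score = duration + intensity + transaction
    expected_life = EASY_LIFE_MILES - score * LIFE_LOST_PER_DRIVER
    return VEHICLE_COST / expected_life

customers = {                        # (duration, intensity, transaction) scores from Table 3
    "A (teenager)": (1, 1, 1),
    "B (trailer tow in mountains)": (0, 1, 1),
    "C (delivery service)": (0, 1, 1),
    "D (city driving)": (0, 0, 1),
    "E (high speed highway)": (1, 0, 0),
    "F (off-road)": (0, 1, 0),
    "G (grandparent)": (0, 0, 0),
}
for name, scores in customers.items():
    print(f"{name}: ${depreciation_rate_per_mile(*scores):.2f} per mile")
```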

THE CHALLENGE OF MEASUREMENT


Recommendations that add complexity to internal record-keeping systems are
rarely welcome in the marketplace and the willingness of companies to bear the
additional costs of data collection to support a multi-factor depreciation model
– computer programming, data collection, and data analysis – is questionable.
However, utilization of the proposed model can be justified on the basis of
several important facts. First, machine costs are a very large component of total
manufacturing charges in some industries. At high technology manufacturing
organizations, machine cost allocations can make or break customer or product
profitability and alter machine investment decisions. Second, product margins
in globally competitive manufacturing are thin; thus, the need for
companies to understand the connection between products and their consumption
of machine resources has never been higher. Third, the cost of technology to
measure machine activities continues to drop. Many of the commentators who
identified the need of better machine cost allocation lived in the pre-computerized
manufacturing world where the ability to gauge technology consumption was
non-existent. That problem has slowly dissipated with increasingly sophisticated
manufacturing and measurement tools.9 Fourth, the implementation of a multi-factor
model is easier to accomplish in advanced manufacturing environments since
human resistance to change is minimized in highly automated contexts.
APPLICATION OF MODEL TO
MACHINE-RELATED COSTS
The concept that underlies the activity-based machine cost allocation model can be
applied to other significant machine-related costs such as repairs and maintenance,
tooling (and associated costs of purchasing, material handling, tool crib, etc.) and
utilities (including compressed air, water, and power where not more accurately
measured by separate metering).

LIMITATIONS AND ASSUMPTIONS


The activity-based depreciation model presented in Fig. 1 assumes that each ma-
chine resource consumption factor contributes equally to the deterioration of the
machine itself. Further, it assumes that there is no multiplicative impact associated
with various combinations of the resource consumption factors. However, viola-
tion of these assumptions does not undermine the general model. The accountant
simply needs to work with production personnel to understand the variety of
possible production scenarios and the machine resource consumption pattern
associated with each scenario. Engineering simulations may prove valuable in
this context.

DISCUSSION AND CONCLUSION


The purpose of this paper was to present an instructional mini-case that requires
students to contemplate accounting thought on machine cost allocations and
consider how activity-based logic may assist modern businesses in connecting
machine costs in fixed-cost intensive operations to products based on the
demands products place on the machine. Examination of the underlying issues
revealed that, while problems with depreciation measurement are not new, their
significance has grown commensurate with increases in the level of factory
automation. Furthermore, advances in the technology by which to measure
machine activity drivers enable organizations to apply activity-based accounting
principles to more accurately connect product and customer demands with the
consumption of machinery. A better understanding of how to match machine costs
to products and customers will enable firms to make better strategic decisions
relating to pricing, product mix, customer retention, capacity utilization, and
equipment acquisition.
Ultimately, the logic embodied in this case should be employed to further the
development of more refined models of machine cost allocation. Simulations of
machine use and maintenance by engineers, analogous to that presented in this
paper, could also provide a useful estimate of machine resource consumption. For
purposes of depreciation estimation, accountants may one day defer judgment to
these engineering simulations, much as they do now with geologists estimating
the oil and gas “reserves” still in the ground. Provided the depreciation method
is consistently applied on a uniform basis, such an accounting mechanism would
pass muster for financial statement purposes and create no more additional work
than is commonly found in reconciling financial statement and tax income. Finally,
it should be noted that not every feature of the multi-factor model described in
this paper needs to be employed. The model is intended to portray the need for a
more comprehensive consideration of machine resource consumption. Companies
would be best served to use the model as a general guide, customizing their
own depreciation methodology in a manner consistent with their manufacturing
realities.

NOTES

1. Further evidence of the significance of machine cost in business-to-business commerce
can be found in special adjustments made for the cost of equipment in cost-based pricing
situations (see, for example, the Shionogi and Company case by Cooper, 1996).
2. The unattractive product margins in the early years of the machine investment have
been criticized by those who think that accounting practices discourage managers from
investing in new machines as soon as necessary (see, for example, Porter, 1992).
3. While not within the scope of this paper, the well-publicized Enron debacle has
increased scrutiny of accounting measures and increases the need for more accurate
measures of current period income. For instance, Microsoft (Wall Street Journal, 2002b)
has been chastised for under-reporting income and the SEC has suggested that adherence to
GAAP is no justification for inaccurate financial statements (Wall Street Journal, 2002a).
4. The certainty of complete cost recovery by time-based models should be contrasted
to production-based models that assign costs to products based upon, for example, machine
hours. The production-based models require reasonably accurate estimates of future
production activities. Errors in these estimates can create a situation where technology
costs are illogically assigned to products or left unassigned to production entirely.
5. Managerial anticipation of an over-allocation of machine-related costs to products
in the early year of a machinery investment (when time-based depreciation methods are
employed) will have a significant impact on investment selection in “by the numbers”
capital budgeting contexts. Managers who justify projects on the basis of expected ROI will
either reject the project outright or accept the project provided that the technology cost be
depreciated over an unjustifiably long time frame (Keys, 1986). Under the latter alternative,
the understatement of depreciation in the early years will result in an overstatement of ROI
and income in those years. Unfortunately, when the technological (but not physical) life of
the investment is over, managers have a double incentive to hold on to the old equipment
– to avoid the loss on disposition and the higher depreciation charges associated with
new equipment. Further, time-based approaches to depreciation typically are inconsistent
with the assumptions used in the investment justification decision. For consistency with
external reports, most companies use the arbitrary depreciable life span provided in the tax
code, rather than depreciating equipment over the shorter of the technological or physical
life of the equipment. This incongruity can become more distorted when the company has
invested in a FMS that produces products with short product life cycles.
6. In addition, there is greater potential for denominator forecast error when a single
volume factor is employed (Staubus, 1971).
7. Cooper and Kaplan (1999) state that some ABC analysts, rather than actually tracking
the time and resources required for an individual product or customer, may simulate
a duration or intensity driver with a weighted index approach that utilizes complexity
indexes. This technique might, for example, entail asking employees to estimate the
relative complexity of performing a task for different products or customers. Thus, a standard
product might get a weight of 1, a moderate complexity product or customer a weight of
3, and a very complex product or customer a weight of 5.
8. In fact, these limits are designed in by manufacturers of advanced manufacturing
technologies. In most cases, information concerning these limits can be obtained from the
manufacturer.
9. An example of the shrinking cost of measurement technology (applicable to Pat’s
Rentals) is global positioning technology. Global positioning systems are now routinely
used in the trucking industry and have been used by one car rental agency to penalize
lessees for speeding violations (Wall Street Journal, 2001).

ACKNOWLEDGMENTS
Special thanks to Marvin Tucker, Professor Emeritus of Southern Illinois
University, and Mahendra Gupta of Washington University in St. Louis for contributions
to the concept in this paper.

REFERENCES
Aeppel, T. (2002, November 19). Machines still manufacture things even when people aren’t there.
Wall Street Journal, B1.
Brimson, J. A. (1989, March). Technology accounting. Management Accounting, 70(9), 47–53.
Cooper, R. (1996). Shionogi & Company, Ltd: Product and kaizen costing systems. Harvard Business
School Case.
Cooper, R., & Kaplan, R. (1999). The design of cost management systems (2nd ed.). Prentice-Hall.
Cooper, R., & Turney, P. B. B. (1988). Tektronix: Portable Instruments Division (A) and (B). HBS
Cases 188–142 and 143.
Finney, H. A., & Miller, H. E. (1951). Principles of intermediate accounting. Englewood Cliffs, NJ:
Prentice-Hall.
Hall, L., & Lambert, J. (1996, July). Cummins engines changes its depreciation. Management Account-
ing, 30–36.
Hayes, R. H., & Clark, K. B. (1985). Exploring the sources of productivity differences at the factory
level. In: K. B. Clark, R. H. Hayes & C. Lorenz (Eds), The Uneasy Alliance: Managing the
Productivity Dilemma. Boston, MA: Harvard Business School Press.
Kaplan, R. S. (1985). Union Pacific (A). Harvard Business School Case 186–177.
Kaplan, R. S. (1986, March–April). Must CIM be justified by faith alone? Harvard Business Review,
64(2), 87–93.
Kaplan, R. S. (1987). John Deere Component Works (A) and (B). HBS Cases #9–187–107 and 108.
Kaplan, R. S., & Hutton, P. (1997). Romeo Engine Plant (Abridged). HBS Case 197–100.
Milroy, R. R., & Walden, R. E. (1960). Accounting theory and practice: Intermediate. Cambridge,
MA: Houghton Mifflin Company.
Porter, M. E. (1992). Capital choices: Changing the way America invests in industry. Research Report
co-sponsored by Harvard Business School and the Council on Competitiveness.
Staubus, G. (1971). Activity costing and input-output accounting. Homewood, IL: Irwin.
The Wall Street Journal (08/28/2001). Big brother knows you’re speeding – Rental-car companies
install devices that can monitor a customer’s whereabouts.
The Wall Street Journal (2/12/02a). SEC accounting cop’s warning: Playing by rules may not ward off
fraud issues.
The Wall Street Journal (2/13/02b). SEC still investigates whether Microsoft understated earnings.
CHOICE OF INVENTORY METHOD
AND THE SELF-SELECTION BIAS

Pervaiz Alam and Eng Seng Loh

ABSTRACT
We examine sample self-selection and the use of the LIFO or FIFO inventory
method. For this purpose, we apply the Heckman-Lee’s two-stage regression
to the 1973–1981 data, a period of relatively high inflation, during which the
incentive to adopt the LIFO inventory valuation method was most pronounced.
The predicted coefficients based on the reduced-form probit (inventory choice
model) and the tax functions are used to derive predicted tax savings in the
structured probit. Specifically, the predicted tax savings are computed by
comparing the actual LIFO (FIFO) taxes vs. predicted FIFO (LIFO) taxes.
Thereafter, we estimate the dollar amount of tax savings under different
regimes. The two-stage approach enables us to address not only the man-
agerial choice of the inventory method but also the tax effect of this decision.
Previous studies do not jointly consider the inventory choice decision and the
tax effect of that decision. Hence, the approach we use is a contribution to the
literature. Our results show that self-selection bias is present in our sample
of LIFO and FIFO firms, and correcting for the self-selection bias shows that
the LIFO firms, on average, had $282 million of tax savings, which explains
why a large number of firms adopted the LIFO inventory method during
the seventies.

Advances in Management Accounting, Volume 12, 237–263
Copyright © 2004 by Elsevier Ltd.
All rights of reproduction in any form reserved
ISSN: 1474-7871/doi:10.1016/S1474-7871(04)12011-X
INTRODUCTION
Management accounting provides critical accounting information for day-to-day
managerial decision-making. The choice of the inventory method influences
managerial behavior for purchasing, cash flow management, and tax manage-
ment. For instance, based on managerial accounting information regarding
expected cash flows on various products and services, managers may decide
that it is optimal to forego LIFO (last-in, first-out) tax savings. Thus, managers
need to have a good understanding of the expected cash inflows and outflows
from various segments of the business. They should also have information on
available tax saving vehicles, including depreciation, interest, tax losses and
the effect of these variables on taxable earnings. Not only could management
accounting provide information on the initial selection of the inventory method, but
it could also assist in deciding whether to continue with the inventory accounting method
currently in use. One important role of managerial accounting in this area is in
monitoring inventory levels to prevent LIFO layer liquidation in the event LIFO
is used.
Over the past twenty years, researchers in accounting have examined various
issues arising from a firm’s choice of accounting methods. Much of this literature
has been on the choice of the inventory costing method. Early research on inventory
method selection estimated the tax effects of the LIFO vs. FIFO (first-in, first-out)
method under the assumption that operating, financing, and investing activities
remain unaffected as a result of a change in inventory method. Researchers have
previously recognized that this ceteris paribus assumption ignores endogeneity
and self-selection of LIFO and FIFO samples (see Ball, 1972; Hand, 1993;
Jennings et al., 1992; Maddala, 1991; Sunder, 1973). The endogeneity problem
in the choice of LIFO vs. FIFO method is particularly important because the tax
effects of the inventory method may affect firm valuation (see Biddle & Ricks,
1988; Hand, 1995; Jennings et al., 1992; Pincus & Wasley, 1996; Sunder, 1973).
However, it is not possible to observe what the managerial decision would have
been had they used an inventory method different from the method currently in
use. Hence, a number of studies have developed “as-if” calculations to estimate the
tax effects of LIFO vs. FIFO (see Biddle, 1980; Dopuch & Pincus, 1988; Morse
& Richardson, 1983).1 These studies estimate a LIFO firm’s taxes as if it was a
FIFO firm and LIFO taxes for a FIFO firm. The “as-if” approach assumes that a
firm’s managerial decisions would have remained unchanged with the use of an
alternative method.
The purpose of this study is to re-examine the choice of LIFO vs. FIFO by using
Heckman (1976, 1979) and Lee’s (1978) two-stage method that incorporates
self-selection and endogeneity. The Heckman-Lee approach is used to incorporate
the endogenous choice of inventory method in the tax estimation equation. The
explicit inclusion of the inter-dependence of the choice of the inventory method
and the tax effects of the choice distinguishes this paper from those extant in the
literature. We use Lee and Hsieh’s (1985) probit model for the first-stage inventory
choice model and develop a tax estimation function using prior literature for the
second-stage analysis. Finally, we compare Heckman-Lee based measure of tax
savings to those developed using ordinary least squares (OLS).
Our analysis leads to the following results. First, following Lee and Hsieh (1985),
we derive the reduced-form probit estimates for the inventory choice method.
The significance and sign of the coefficients of our probit estimates are generally
similar to those of Lee and Hsieh. Second, the self-selection bias is a significant
factor in analyzing whether firms choose to use LIFO or FIFO. Firms, on average,
choose the inventory method that gives them the largest tax benefit. Third, the
results of the structural probit (not reported) show that predicted tax savings have a
significantly positive coefficient, suggesting that firms are likely to select LIFO when
predicted FIFO taxes exceed actual LIFO taxes. Fourth, correcting for self-
selection bias enables us to infer that, on average, LIFO firms would pay more
taxes if they were using FIFO, and FIFO firms would pay lower taxes if they were
using LIFO.
Our results for the selectivity-adjusted approach show mean tax savings of $282.2
million (median $297.4 million) for LIFO firms and $12.3 million (median
$1.2 million) of tax savings foregone for FIFO firms. Results based on ordinary
least squares (OLS) estimates indicate that both LIFO firms (mean $40.6 million,
median $2.3 million) and FIFO firms (mean $11.3 million, median $5.0 million)
had foregone tax savings by choosing the method they were using. In other
words, these firms would have had tax savings had they used the alternative
method.
We have greater confidence in the results of the selectivity approach because
these results are econometrically derived and are based on variables that are
largely accepted in the LIFO and tax literature. The larger LIFO tax savings
under the selectivity-based approach explain why a large number of
firms adopted the LIFO inventory valuation method during the seventies. We
also recognize that FIFO firms could have obtained sizable tax savings had they
switched to LIFO. However, firms do operate under various restrictive conditions,
which suggest that tax minimization is not the only objective function a firm’s
management desires to achieve (Scholes & Wolfson, 1992). Hence, it appears
that a firm’s choice of the inventory method is a rational economic decision even
when an alternative method could have produced larger tax savings.2

PRIOR RESEARCH
A number of studies have attempted to explain why firms do not use the LIFO
method in periods of rising prices and thus forego the opportunity of potential
tax savings (see Abdel-khalik, 1985; Hunt, 1985; Lee & Hsieh, 1985). One
explanation advanced is that FIFO firms are concerned about the drop in stock
prices upon the adoption of LIFO. Still another explanation is that the cost of
LIFO conversion may be more than the tax benefits of adoption. For instance,
using 1974–1976 LIFO data, Hand (1993) estimated that the cost of LIFO
adoption for his sample of firms was as high as 6% of firm value, a sizable cost for
most firms.
Whether LIFO tax savings are valued by investors has been studied extensively.
Many of these studies suffer from the problems of event date
specification, contaminating events, firm size, etc. (Lindahl et al., 1988). Kang’s
(1993) model predicts that positive price reaction to LIFO adoption occurs only
when the expected LIFO adoption costs are less than expected LIFO tax savings.
He argues that the positive stock price reaction will occur because the switch to
LIFO will recoup previously lost LIFO tax savings. Some studies demonstrate
positive stock price reaction surrounding LIFO adoption (Ball, 1972; Biddle &
Lindahl, 1992; Hand, 1995; Jennings et al., 1992; Sunder, 1973) while other
studies have reported negative market reaction to LIFO adoption (see Biddle &
Ricks, 1988; Ricks, 1982). Pincus and Wasley's (1996) results show some degree
of market segmentation: they found positive market reaction for OTC firms and
negative market returns for NYSE/ASE firms. Finally, Hand's (1993) results
indicate that the LIFO adoption or non-adoption decision resolves uncertainty
regarding LIFO tax savings.
Contracting cost theory also provides reasons why firms may not adopt the
LIFO inventory method. LIFO adoption decreases asset values and net income,
potentially causing some firms to be in violation of debt covenants. Furthermore,
managers on bonus contracts may not want lower LIFO earnings because the
use of LIFO may reduce their total compensation. Abdel-khalik (1985), Hunt
(1985), and Lee and Hsieh (1985) provide some evidence that debt covenants
help to explain the choice of inventory method but compensation plans do not.3
Another reason why firms decide to use a particular method is that firms differ
systematically in the nature of the production-investment opportunity set available
to them. Therefore, the LIFO method is an optimal tax reporting choice for some
firms and not for others. A common empirical approach to estimate the amount of
tax savings firms could have obtained from an alternate method, other than their
observed choice of inventory accounting method, is the as-if method (see Biddle,
1980; Biddle & Lindahl, 1982; Dopuch & Pincus, 1988; Morse & Richardson,
1983; Pincus & Wasley, 1996). We implement an alternative approach, which
relies on the work of Heckman (1976, 1979) and Lee (1978).

CONCEPTUAL FRAMEWORK
This section presents the conceptual basis for the empirical analysis that follows.
Assume firms have only two inventory valuation methods available: LIFO or FIFO.
A typical firm’s decision to adopt LIFO depends on its own assessment of the ben-
efits to be gained and the costs that must be incurred. As previously stated, LIFO
costs are associated, among others, with implementation, negative market reaction,
and contracting costs. In addition, there may be LIFO layer liquidation costs
resulting from price decline (e.g. electronics industry). As a result, a firm would
rationally choose to use LIFO only if the expected benefits outweigh the expected
costs.4 Otherwise, it would remain as a FIFO firm. With the LIFO or FIFO status
thus determined, the firm's LIFO benefits depend on its operating and financial
characteristics, the nature of the industry, and the provisions of the tax code.
Let the benefit of adopting the LIFO method be measured by the tax savings
received by the firm. We assume that tax savings is a benefit to the firm because the
increased cash flow widens the set of feasible production-investment opportunities
and, thus, improves the firm’s long-term prospects. However, foregoing tax bene-
fits may be an optimal strategy for a firm in a framework of Scholes and Wolfson
(1992) or when the inventory valuation method is used to signal firm value.5
Let the total taxes T paid by each type of firm be written as:
T_L = aX_L + e_L    (1)
T_F = bX_F + e_F    (2)
where the subscripts L and F denote the LIFO and FIFO firms, X is a vector of
explanatory variables common to both groups of firms, a and b are vectors of
coefficients, and e is the random error term. Firms choose the type of inventory
valuation method that will maximize their overall tax benefits given other con-
straining factors. Thus, they choose the LIFO method if
TS = (T_F − T_L) > C    (3)
where TS is the positive dollar tax savings (assuming T_F > T_L), and C is the
associated dollar cost, of choosing LIFO. There is unlikely to be a single observed
number in the firm-level data set to represent the cost of choosing LIFO, although,
in practice, many reasons can be found to justify the assumption that choosing LIFO
is not cost-free. Prior literature suggests that the dollar cost of LIFO adoption may
reduce management compensation (Hunt, 1985), lead to possible violation of debt
covenants (Hunt, 1985; Morse & Richardson, 1983), and increased fixed costs
of computing ending inventory value (Hand, 1993; Morse & Richardson, 1983).
Using these arguments, we assume that there is a systematic relationship between
LIFO costs (C) and these factors; that is,

C = cY + n (4)

where Y is a vector of regressors, c is a vector of coefficients, and n is the error


term. Substituting Eq. (4) in Eq. (3), we get

T_F − T_L > cY + n    (5)

This expression may be written as a probit equation:

I = LIFO if I* > 0
I = FIFO if I* ≤ 0

where

I* = α₀ + α₁(T_F − T_L) + α₂Y − μ    (6)

where μ is the error term of the probit function. In practice, we observe the tax
payments of either the LIFO or FIFO firm, but never both simultaneously, implying
that the probit Eq. (6) cannot be estimated directly. One way to proceed is to
estimate the tax Eqs (1) and (2), via ordinary least squares (OLS) and use the
predicted values in Eq. (6). However, because of the self-selected nature of the
LIFO and FIFO firms, the expected means of the error terms in the tax equations
are non-zero, i.e. E(e_L | I = LIFO) ≠ 0 and E(e_F | I = FIFO) ≠ 0. Thus, the OLS
estimation of the tax equations leads to inconsistent results. There is no guarantee
that the estimated coefficients will converge to the true population values even in
large samples. To avoid this bias, we proceed via the two-stage method suggested
by Heckman (1976, 1979) and Lee (1978).6 We begin by estimating the reduced
form probit equation found by substituting Eqs (1) and (2) into Eq. (6),

I = LIFO if I* > 0
I = FIFO if I* ≤ 0

where

I* = γ₀ + γ₁X + γ₂Y − (μ − e).    (7)
Equation (7) is the form of the estimation equation commonly found in the LIFO
determinants literature. After estimating (7), we derive the inverse Mills' ratios,

λ_L = −φ(u)/Φ(u),    (8)

λ_F = φ(u)/(1 − Φ(u))    (9)

where u is the predicted value of the error term from the reduced-form probit, φ is
the standard normal probability density function (pdf) for u, and Φ its cumulative
density function (cdf). In the second stage, the coefficients of the lambda terms in
the tax functions serve as sample covariances between the tax function and the
criterion I* (i.e. a₂ = σ_Lμ and b₂ = σ_Fμ). Hence, the following tax functions
are obtained:

T_L = a₁X_L + a₂λ_L + ε_L    (10)
T_F = b₁X_F + b₂λ_F + ε_F.    (11)
Equations (10) and (11) are then estimated using the self-selectivity and the OLS
approaches to construct predicted FIFO tax payments for observed LIFO firms and
predicted LIFO taxes for observed FIFO firms.7 In essence, the Heckman-Lee two-
stage procedure treats the self-selection bias as arising from a specification error: a
relevant variable λ is omitted from each of the tax equations. Statistical significance
of a₂ and b₂ shows that these covariances are important and that management
selection of the LIFO or the FIFO inventory valuation method is not random. In
short, self-selection is present. Interpretation of the firms' behavior depends on
the signs of σ_Lμ and σ_Fμ, which may be either positive or negative. For instance,
if σ_Lμ < 0, then firms whose expected LIFO taxes are lower than average should
have a lower chance of being a FIFO firm. Similarly, if σ_Fμ < 0, then firms whose
expected FIFO taxes are lower than average should have a lower chance of being
LIFO firms. Although these covariances can bear any sign, model consistency
requires that σ_Lμ > σ_Fμ (see Trost, 1981). This condition on the covariance terms
(σ_Lμ > σ_Fμ) ensures that the expected FIFO taxes of FIFO firms will be less than
their expected taxes if they switched to LIFO status. Similarly, the expected tax
payments of LIFO firms will remain less than their expected tax payments if they
switched to FIFO.
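As an illustration of the mechanics just described, the following Python sketch estimates the reduced-form probit, forms the inverse Mills' ratios of Eqs (8) and (9), and runs the second-stage tax regressions of Eqs (10) and (11) with the lambda terms included. It is a minimal sketch under stated assumptions, not the authors' original code (which, per Note 7, was implemented in LIMDEP): the DataFrame `df`, its column names, and the use of the statsmodels and scipy libraries are our own choices.

```python
# Minimal sketch of the Heckman-Lee two-stage correction (Eqs (7)-(11)).
# Assumptions: a pandas DataFrame `df` with a 0/1 column "lifo" (1 = LIFO),
# a log-tax dependent variable "lntxpay", probit regressors in X_COLS and
# tax-equation regressors in Z_COLS; all column names are illustrative only.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

X_COLS = ["lgtasst", "invvar", "lev", "relasst", "ci", "invm", "cprice", "incvar"]
Z_COLS = ["fixed", "ndts", "lats", "invts", "tlcf"]

def heckman_lee(df):
    # Stage 1: reduced-form probit for the inventory choice, Eq. (7).
    X = sm.add_constant(df[X_COLS])
    probit = sm.Probit(df["lifo"], X).fit(disp=0)
    u = np.asarray(probit.fittedvalues)          # linear index X'gamma

    # Inverse Mills' ratios, following Eqs (8) and (9).
    lam_L = -norm.pdf(u) / norm.cdf(u)           # attached to LIFO observations
    lam_F = norm.pdf(u) / (1.0 - norm.cdf(u))    # attached to FIFO observations

    # Stage 2: tax equations with the lambda terms, Eqs (10) and (11),
    # estimated separately on the LIFO and FIFO subsamples.
    fits = {}
    for label, flag, lam in [("LIFO", 1, lam_L), ("FIFO", 0, lam_F)]:
        m = (df["lifo"] == flag).to_numpy()
        Z = df.loc[m, Z_COLS].copy()
        Z["LAMBDA"] = lam[m]
        Z = sm.add_constant(Z)
        # HC0 is only a rough stand-in here; the paper's second-stage t-values
        # use the full Heckman (1979) variance-covariance correction.
        fits[label] = sm.OLS(df.loc[m, "lntxpay"].to_numpy(), Z).fit(cov_type="HC0")
    return probit, lam_L, lam_F, fits
```

In this sketch the estimated LAMBDA coefficients in the LIFO and FIFO fits play the role of σ_Lμ and σ_Fμ discussed above.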
An important assumption of the Heckman-Lee model is that the error terms in
the structural equations (ε_L, ε_F, and μ) are joint-normally distributed. Inconsistent
estimates result if the underlying population distribution is non-normal (although,
strictly speaking, this problem exists with any parametric model). Given the
perceived rigidity of the joint-normality assumption, researchers have suggested
alternative approaches based on nonparametric and semi-parametric estimators
(see, for example, Duncan, 1986; Heckman, 1976, 1979; Manski, 1989, 1990;
Ming & Vella, 1994). Unfortunately, these approaches are often limited in their
applicability. For instance, the nonparametric bounds model in Manski (1990)
is defined for only two regressors. Thus, one might gain robustness in estimates
but may not be able to exploit the breadth of data available. This issue remains
unsettled at the present time, leaving the Heckman-Lee model as the accepted
dominant vehicle for the empirical analysis of selection bias. In this study, we
use Pagan and Vella (1989) for the joint normality test. The results of this test are
reported in Note 23.
In order to assess tax savings under different regimes we use three different
approaches: (1) the estimated tax savings is calculated as the difference between
predicted and actual taxes and is used as an independent variable in a
structural probit; (2) the coefficients from selectivity-adjusted tax equations are
used to calculate alternative dollar FIFO tax savings for observed LIFO firms, and
vice versa; and (3) the coefficients from OLS tax equations are used to compute
LIFO (FIFO) dollar tax savings for FIFO (LIFO) firms.8
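Building on the two-stage sketch above, approaches (2) and (3) amount to predicting each firm's taxes under the alternative regime and comparing them with actual taxes. The helper below is purely illustrative: it reuses the hypothetical objects returned by `heckman_lee`, it works in log-tax units (the dollar figures reported later in Table 7 would require converting back to levels), and the treatment of the selectivity term in the counterfactual prediction is one possible choice rather than the authors' stated procedure.

```python
# Illustrative "actual minus predicted" comparison under the alternative regime.
import numpy as np
import statsmodels.api as sm   # imports repeated so the helper stands alone

def counterfactual_tax_gap(df, fits, lam_L, lam_F, z_cols):
    """Actual minus predicted log taxes under the alternative inventory regime
    (a negative value indicates tax savings, as in Table 7)."""
    gaps = {}
    for own, flag, other, lam_other in [("LIFO", 1, "FIFO", lam_F),
                                        ("FIFO", 0, "LIFO", lam_L)]:
        m = (df["lifo"] == flag).to_numpy()
        Z = df.loc[m, z_cols].copy()
        Z["LAMBDA"] = lam_other[m]      # one possible treatment of the selectivity term
        Z = sm.add_constant(Z)
        predicted_alt = np.asarray(fits[other].predict(Z))
        gaps[own] = df.loc[m, "lntxpay"].to_numpy() - predicted_alt
    return gaps
```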

MODEL SPECIFICATION AND DATA SELECTION


Determinants of Inventory Method Selection

We use Lee and Hsieh’s (1985) model for purposes of estimating the reduced-form
probit model described in Eq. (7). We select their model because of the comprehen-
siveness of the variables examined and the theoretical justification for the selection
of those variables. They test the joint effect of political cost, agency theory, and
Ricardian theory on the LIFO-FIFO decision, by using eight proxy variables and
an industry dummy to capture the features of the production-investment opportu-
nity set that are pertinent to the choice of the inventory accounting method. The
variables they use are: firm size, inventory variability, leverage, relative firm size,
capital intensity, inventory intensity, price variability, income variability, and in-
dustry classification.9 Thus in this study, the inventory choice model is expressed
as follows:

I = γ₀ + γ₁LGTASST_it + γ₂INVVAR_it + γ₃LEV_it + γ₄RELASST_it + γ₅CI_it
    + γ₆INVM_it + γ₇CPRICE_it + γ₈INCVAR_it + γ₉IDNUM_t + ε_it    (12)

(Expected signs: LGTASST +; INVVAR ?; LEV −; RELASST +; CI +; INVM −; CPRICE +; INCVAR −; IDNUM ?)

where:
I = 1 for LIFO firms and I = 0 for FIFO firms;
LGTASST = firm size computed as the log value of total assets;
INVVAR = inventory variability computed as the coefficient of variation (vari-
ance/mean) for year-end inventories;
LEV = agency variable derived as the ratio of long-term debt less capitalized
lease obligations to net tangible assets;
CI = capital intensity variable computed as the ratio of net fixed assets
to net sales;
RELASST = relative firm size derived as the ratio of a firm’s assets to the total
industry assets;
INVM = inventory intensity computed as the ratio of inventory to total assets;
CPRICE = price variability derived as the relative frequency of positive price
change for each four-digit SIC industry code;
INCVAR = accounting income variability as the coefficient of variation (vari-
ance/mean) of before tax accounting income; and
IDNUM = captures the industry effect by assigning a dummy variable to each
of the 19 two-digit industries. Thus, IDNUM is a vector of 19 two
digit SIC codes.
Table 1 provides further description of the variables used in Eq. (12). The table
lists the variables used, their description, and the Compustat data items used to
derive the variables.
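For concreteness, Eq. (12) could be estimated with the formula interface of statsmodels, with the industry dummies entering through a categorical term. This is a hedged sketch: the DataFrame `df` and its lowercase column names (including `idnum`) are assumptions, not the authors' actual data layout.

```python
# Reduced-form probit of Eq. (12): LIFO/FIFO choice on the Lee-Hsieh regressors.
import statsmodels.formula.api as smf

formula = ("lifo ~ lgtasst + invvar + lev + relasst + ci + invm"
           " + cprice + incvar + C(idnum)")      # C(idnum): two-digit industry dummies
choice_model = smf.probit(formula, data=df).fit(disp=0)
print(choice_model.summary())                    # coefficients comparable to Table 4
```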
A brief description of the reasons for the selection of the regressors used in
the probit function (Eq. (12)) follows.10 Ceteris paribus, larger firms (proxied by
LGTASST) are likely to adopt LIFO because of their comparative advantage in
absorbing costs of LIFO conversion and related bookkeeping and tax-reporting
costs. The high inventory variability (INVVAR) suggests that the cost of inventory
control may be higher because of the possibility of liquidation of LIFO layers or
possible excess inventories. On the other hand, it is likely that adopting LIFO may
lead to lower inventory variability in order to maintain LIFO layers. Hence, it is
difficult to predict the association of the INVVAR variable to LIFO use. The lever-
age variable (LEV) serves as an agency proxy. Firms with higher leverage are more
likely to default on debt covenant restrictions (Smith & Warner, 1979), driving them
to choose income-increasing accounting methods. Hence, the leverage variable is
likely to be negatively related to the use of the LIFO method. The relative firm size
(RELASST) is a measure of size with respect to industry. It is expected that rela-
tively larger firms in an industry will have a comparative advantage in using LIFO.
Firms with high values of capital intensity (CI) generally possess necessary
resources to engage in extensive financial and production planning needed to

Table 1. Operationalization of Variables Used for Probit and Tax Functions.


Variable Definition

I Dependent variable used in probit estimation where I = 1 if the firm uses LIFO and 0
if the firm uses FIFO (59).
LNTXPAY Dependent variable used in tax functions. Defined as the natural logarithm of total
income tax expenses (16).
LGTASST Firm size measured as log of total assets (6).
INVVAR Coefficient of variation (variance/mean ratio) for year-end inventories.
LEV Leverage ratio computed by dividing long-term debt less capitalized lease obligations
to net tangible assets (9−84)/(6−33).
RELASST Ratio of a firm’s assets to the total of industry assets based on the SIC four-digit
industry codes.
CI Capital intensity measured as net property, plant, and equipment divided by net sales
(8/12).
INVM Inventory materiality computed as a ratio of inventory to total assets (3/6).
CPRICE Relative frequency of positive price changes for each SIC four-digit industry code
over the sample period. Producer Price Index was obtained from the publications of
the U.S. Department of Commerce.
INCVAR Coefficient of variation (variance/mean ratio) of before tax accounting income.
FIXED Property, plant, and equipment, net (8) divided by the market value of equity at the
fiscal year-end (24 × 199).
NDTS Non-debt tax shield computed as the ratio of the sum of depreciation and investment
tax credits (103 + 51) to earnings before interest, taxes, and depreciation (172 + 15 +
16 + 103).
LATS Available tax savings measured as natural logarithm of tax loss carryforwards (52)
multiplied by cost of goods sold (41).
INVTS Inventory to sales measured as inventories (3) to net sales (12).
TLCF Net operating tax loss carryforward measured as a ratio of net operating tax
carryforward to net income before interest, taxes, and depreciation expenses (52/(172
+ 15 + 16 + 103)).

Note: Compustat data item numbers in parentheses.

use LIFO. Hagerman and Zmijewski (1979), Lee and Hsieh (1985), and Dopuch
and Pincus (1988) suggest that large capital-intensive firms have a comparative
advantage in adopting LIFO. The inventory to total assets ratio (INVM) serves as
a proxy for measuring how efficiently the inventory has been managed. Following
Lee and Hsieh (1985), INVM is expected to be negatively associated with the
use of the LIFO method. The price variability (CPRICE) variable is a proxy for
inflation. The higher the inflation rate, the higher the likelihood that firms would
adopt LIFO. Lee and Hsieh (1985) argue that production-investment opportunity
sets will vary from industry to industry. Therefore, a dummy variable is assigned
to each of the two-digit SIC industries.
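The Compustat-item definitions in Table 1 translate into straightforward column arithmetic. The sketch below assumes a DataFrame `raw` whose columns `item3`, `item6`, and so on hold the corresponding Compustat data items; those column names, and the reading of LATS as the log of (tax loss carryforward × cost of goods sold), are our assumptions about an ambiguous wording rather than the authors' stated construction.

```python
# Building selected Table 1 variables from Compustat-style items (names assumed).
import numpy as np
import pandas as pd

def build_variables(raw: pd.DataFrame) -> pd.DataFrame:
    df = pd.DataFrame(index=raw.index)
    df["lgtasst"] = np.log(raw["item6"])                                   # log of total assets (6)
    df["lev"] = (raw["item9"] - raw["item84"]) / (raw["item6"] - raw["item33"])  # (9-84)/(6-33)
    df["ci"] = raw["item8"] / raw["item12"]                                # net PP&E / net sales (8/12)
    df["invm"] = raw["item3"] / raw["item6"]                               # inventory / total assets (3/6)
    df["invts"] = raw["item3"] / raw["item12"]                             # inventory / net sales (3/12)
    df["fixed"] = raw["item8"] / (raw["item24"] * raw["item199"])          # net PP&E / market equity (24 x 199)
    ebitd = raw["item172"] + raw["item15"] + raw["item16"] + raw["item103"]
    df["ndts"] = (raw["item103"] + raw["item51"]) / ebitd                  # non-debt tax shield
    df["tlcf"] = raw["item52"] / ebitd                                     # tax loss carryforward / EBITD
    df["lats"] = np.log(raw["item52"] * raw["item41"])                     # one reading of the LATS definition
    df["lntxpay"] = np.log(raw["item16"])                                  # log of total income tax expense (16)
    return df
```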

Specification of Tax Functions

The regressors for the tax functions were identified based on the review of the
relevant tax literature (see Biddle & Martin, 1985; Bowen et al., 1995; Dhaliwal
et al., 1992; Trezevant, 1992, 1996). The tax functions are listed below:

T_L = δ₀ + δ₁FIXED_Lt + δ₂NDTS_Lt + δ₃LATS_Lt + δ₄INVTS_Lt + δ₅TLCF_Lt + ε_Lt    (13a)

T_F = δ₀ + δ₁FIXED_Ft + δ₂NDTS_Ft + δ₃LATS_Ft + δ₄INVTS_Ft + δ₅TLCF_Ft + ε_Ft    (13b)

where:
TL or TF = the logarithmic value of total taxes for LIFO or FIFO firms,
respectively;
FIXED = net property, plant, and equipment divided by the market value of
equity;11
NDTS = non-debt tax shield derived as the sum of depreciation and investment
tax credits divided by earnings before interest, taxes, and depreciation;
LATS = available tax savings measured as the logarithmic value of tax loss
carryforwards times cost of goods sold;
INVTS = inventory-to-sales ratio measured as inventories to net sales; and
TLCF = net operating loss carryforward to net income before interest, taxes,
and depreciation.
The variables used in tax functions (13a) and (13b) are based on the assumption that
a firm’s taxes depend upon net fixed assets, non-debt tax shield, tax savings avail-
able from adopting LIFO, efficiency of inventory management, and the amount of
the tax loss carryforward. It is important to note that firms do trade-off or substitute
various tax shields to minimize the marginal tax rate. The model used in this study
examines not only the effect of the individual coefficients in the tax function but
also the joint effects of these coefficients. Thus, when high tax shields increase the
possibility of tax exhaustion, the firm is likely to have a lower marginal tax rate
which may decrease the likelihood of LIFO use.
Ceteris paribus, we expect that firms with relatively high values of net property,
plant, and equipment scaled by the market value of equity (FIXED) are likely to
pay lower taxes. FIXED is a measure of debt securability (Dhaliwal et al., 1992;
Trezevant, 1992). Firms with a larger proportion of their assets represented by
fixed assets are likely to raise larger amounts of debt or lower the cost of financing
(Titman & Wessels, 1988). Therefore, the variable FIXED provides a tax shield by
enhancing the possibility of increased debt financing, which increases the level of
interest deductibility, and consequently the FIXED variable indirectly lowers taxes.
Assuming no available substitution of tax shields, the higher proportion of non-
debt tax shield (NDTS) would lower the marginal tax rate, and therefore taxes will
be lower.12 The variable LATS is a proxy measure for tax savings. It is computed
by multiplying the cost of goods sold with tax loss carryforwards. This measure is
based on the argument that firms with relatively higher cost of goods sold and tax
loss carryforwards are likely to pay lower taxes (see Bowen et al., 1995). Therefore,
the expected sign of the coefficient for the LATS variable is negative.13
The variable INVTS represents efficiency in inventory management (Lee &
Hsieh, 1985). The INVTS coefficient is expected to be negatively associated with
taxes. This relationship could be best explained by using the following illustration.
Suppose net sales increase from $150,000 to $175,000, and cost of goods sold
and ending inventory remain unchanged at $80,000 and $20,000, respectively.
This would cause inventory to sales ratio (INVTS) to decrease from 13.3% to
11.4%, increasing gross margin from $70,000 to $95,000 thereby increasing taxes
assuming that the marginal tax rate is the same as in the previous year.
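The arithmetic of this illustration can be verified directly; the figures below are simply those used in the text.

```python
# Numbers from the INVTS illustration in the text.
sales_before, sales_after = 150_000, 175_000
cogs, ending_inventory = 80_000, 20_000

invts_before = ending_inventory / sales_before      # 0.133 -> 13.3%
invts_after = ending_inventory / sales_after        # 0.114 -> 11.4%
gross_margin_before = sales_before - cogs           # 70,000
gross_margin_after = sales_after - cogs             # 95,000
print(f"INVTS: {invts_before:.1%} -> {invts_after:.1%}; "
      f"gross margin: {gross_margin_before:,} -> {gross_margin_after:,}")
```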
Finally, the variable TLCF is expected to be negatively associated with taxes
for LIFO firms. In other words, firms are less likely to use LIFO if they have
tax loss carryforwards, which could be used to shield taxes.14 Auerbach and
Poterba (1987) indicate that firms expecting persistent loss carryforwards are
likely to experience lower marginal tax rates. On the other hand, firms are
more likely to use FIFO or other income-increasing methods even if they have
tax loss carryforwards in the event the alternative available tax shields are tax
exhaustive. Thus, the expected sign of the TLCF coefficient cannot be predicted.
Table 1 gives further description of the variables used in the tax functions. It also
gives the data item numbers used for extracting financial statement values from
Compustat tapes.
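Under the same hypothetical data layout used earlier, the OLS versions of Eqs (13a) and (13b) amount to fitting the same specification separately on the LIFO and FIFO subsamples; the selectivity-adjusted versions in Table 5 simply add the LAMBDA term constructed from the first-stage probit. This is a sketch of that specification, not the authors' code.

```python
# OLS tax functions (13a)/(13b), estimated separately by inventory-method group.
import statsmodels.formula.api as smf

tax_formula = "lntxpay ~ fixed + ndts + lats + invts + tlcf"
ols_fits = {
    label: smf.ols(tax_formula, data=df[df["lifo"] == flag]).fit(cov_type="HC0")
    for label, flag in [("LIFO", 1), ("FIFO", 0)]
}
# The selectivity-adjusted counterparts in Table 5 add "+ LAMBDA" to the formula,
# with LAMBDA taken from the first-stage probit as in Eqs (8) and (9).
```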

Sample and Data Collection

The data for the variables used in this study were obtained from the back data
Compustat files. The sample firms were drawn from the years 1973 to 1981, a
period of historically high inflation rates in the United States during which firms
adopting LIFO could obtain substantial tax savings. This yielded an initial sample
of 10,777 observations for firms using either the LIFO or FIFO inventory method.
Since firms use a combination of inventory methods for financial reporting, LIFO
firms are those who use LIFO for most of their inventory accounting and FIFO

Table 2. Sample Selection and Distribution by Industries (1973–1981).


Panel A: Sample selection
Number of observations on the back data compustat files 10,777
for which the inventory method is either LIFO or FIFO
from 1973 to 1981
Number of missing observations 4,687
Number of observations where the primary inventory 6,090
method is either LIFO or FIFO
Number of LIFO observations (number of firms = 247) 1,050
Number of FIFO observations (number of firms = 1006) 5,040
Panel B: Sample Distribution by Industries
Industry Number of Observations

LIFO FIFO

01–19 (Agricultural products) 17 204


20–39 (Manufacturing) 872 3,255
40–49 (Transportation & utilities) – 127
50–59 (Wholesale & retail products) 225 699
60–69 (Finance, insurance & real estate) 16 110
70–89 (Services) 20 563
90–99 (Public administration) – 82
Total number of sample observations 1,050 5,040

firms are those who use FIFO as their predominant inventory valuation method.
After eliminating missing observations, the total sample consists of 6,090 obser-
vations. Of this number, 1,050 observations (247 firms) are from LIFO firms and 5,040
observations (1,006 firms) are from FIFO firms. Table 2, Panel A lists the sample
selection procedure.15
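A hedged sketch of the Panel A screening logic is shown below. It assumes a DataFrame `obs` of firm-year observations that already carries a text label `inv_method` for each firm-year's primary inventory method and a firm identifier `firm_id`; the actual screen was based on the Compustat inventory-method item (item 59), and these column names and labels are placeholders only.

```python
# Sketch of the Panel A sample screen; column names and labels are placeholders.
import pandas as pd

def select_sample(obs: pd.DataFrame, required_cols) -> pd.DataFrame:
    universe = obs[obs["inv_method"].isin(["LIFO", "FIFO"])]   # primary method LIFO or FIFO, 1973-1981
    sample = universe.dropna(subset=list(required_cols))       # drop observations with missing data
    print(sample.groupby("inv_method").agg(observations=("firm_id", "size"),
                                           firms=("firm_id", "nunique")))
    return sample   # Panel A reports 1,050 LIFO and 5,040 FIFO observations after screening
```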
Table 2, Panel B shows the two-digit SIC code industry composition of the LIFO
and FIFO samples. The LIFO group consists of 1,050 observations distributed over
five different two-digit industry groups, with the manufacturing industries being the largest
followed by the wholesale and retail industries. The FIFO observations are dis-
tributed over seven different two-digit industry groups, where the largest concentration
is also in the manufacturing industries, followed by the wholesale and retail in-
dustries. Overall, the sample distribution for the two groups of firms appears to be
concentrated in the manufacturing, wholesale, and retail industries.
Table 3 presents descriptive statistics for the variables used for multivariate
analyses, classified by LIFO and FIFO groups. We corrected for price inflation
whenever a variable entered as a dollar value, using 1982 as the base-year and

Table 3. Sample Descriptive Statistics.


Variable LIFO FIFO t-Value
Mean Std. Dev. Med. Mean Std. Dev. Med.

Lntxpay 1.294 3.926 2.004 −0.961 4.461 0.260 15.2∗∗∗


Lgtasst 4.837 1.954 4.550 3.184 1.921 3.081 25.28∗∗∗
Invvar 0.134 0.121 0.096 0.486 0.737 0.253 15.40∗∗∗
Lev 0.148 0.115 0.144 0.183 0.228 0.150 4.24∗∗∗
CI 0.243 0.185 0.195 0.320 0.973 0.171 2.51∗∗∗
Relasst 0.005 0.017 0.000 0.003 0.018 0.000 −3.85∗∗∗
Invm 0.277 0.141 0.269 0.270 0.164 0.278 1.26
Cprice 6.828 4.194 6.300 5.587 3.921 5.000 5.86∗∗∗
Incvar 0.138 3.095 0.183 0.303 5.398 0.338 0.95
Fixed 0.883 3.143 0.140 0.774 3.336 0.080 −0.97
Ndts 0.292 1.918 0.186 0.152 1.545 0.142 2.54∗∗∗
Lats 2.408 3.408 1.954 5.590 3.694 4.716 5.91∗∗∗
Invts 0.166 0.086 0.155 0.186 0.135 0.176 4.62∗∗∗
Tlcf −0.012 0.722 0.000 −0.229 12.343 0.000 −0.57
Tasst 1177.25 4362.33 94.597 209.597 928.597 21.777 14.27∗∗∗
Sales 1509.14 6518.58 173.926 293.798 1439.72 33.049 11.92∗∗∗
Invent 170.04 529.41 26.485 50.935 247.80 4.789 11.14∗∗∗

Note: Tasst, sales, and inventories represents total assets, sales, and inventories in millions of dollars.
All other variables are defined in Table 1. The t-value tests differences in means between LIFO
and FIFO samples. ∗∗∗ , ∗∗ significant at 0.01, and 0.05 levels, respectively. Dollar values adjusted
for price inflation using 1982 as the base year.

CPI as the index. Table 3 shows the mean, median, and standard deviation
values for each of the two groups. Also given is the t-value for testing significant
differences in mean values for each of the variables. The t-test shows that with
the exception of INVM, INCVAR, FIXED, and TLCF, the remaining variables
used in the probit and tax functions are significantly different between LIFO
and FIFO groups thereby suggesting that the two groups differ from each other
on several dimensions.16 Table 3 also provides statistics on selected financial
statement variables for each of the two groups of firms. The total assets (TASST)
of LIFO firms (median value of $94.6 million) are about four times the value
of total assets for FIFO firms (median value of $21.8 million). Similarly, the
size of LIFO inventory (median value of $26.5 million) is more than five times the
size of inventory for FIFO firms (median value of $4.8 million). The results
of the Kolmogorov-Smirnov and Shapiro-Wilk tests show that the observed
distribution of individual variables is not significantly different from a normal
distribution.
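The two data-preparation steps mentioned here, deflating dollar amounts to 1982 dollars with the CPI and screening individual variables for normality, can be sketched as follows. The CPI levels shown are approximate annual CPI-U values, and the DataFrame and column names are assumptions rather than the authors' data.

```python
# Illustrative deflation to 1982 dollars and a univariate normality screen.
import pandas as pd
from scipy import stats

CPI = {1973: 44.4, 1981: 90.9}      # approximate annual CPI-U levels; intermediate years omitted here
CPI_1982 = 96.5                     # approximate 1982 level (1982-84 = 100 base)

def deflate(df: pd.DataFrame, dollar_cols, year_col: str = "year") -> pd.DataFrame:
    factors = df[year_col].map(lambda y: CPI_1982 / CPI[y])
    return df.assign(**{c: df[c] * factors for c in dollar_cols})

def normality_check(series: pd.Series):
    w_stat, p_value = stats.shapiro(series.dropna())   # Shapiro-Wilk test
    return w_stat, p_value
```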

EMPIRICAL RESULTS
Estimates of Reduced-Form Probit Equations

Table 4 shows the estimated coefficients for the reduced-form probit Eq. (12) in
which the dependent variable is a dichotomous dummy variable defined to be unity
if the firm is LIFO and zero if FIFO. We present the results without the coefficient
for the industry variable. Columns 1 and 2 provide estimated regression coefficients
and t-values.
Table 4, Columns 1 and 2 show that sample firms are more likely to be
LIFO firms because of price-level increases. The inflation (CPRICE) variable is
statistically significant at the 1% level. Columns 1 and 2 also show that firms
may not use the LIFO inventory method because of high variability of inventories
(INVVAR), high leverage (LEV), high capital intensity (CI), and relatively high
inventory as a component of total assets (INVM). The table shows that with the

Table 4. Estimated Coefficients for Reduced-Form Probit Equations.


I = γ₀ + γ₁LGTASST_it + γ₂INVVAR_it + γ₃LEV_it + γ₄RELASST_it + γ₅CI_it + γ₆INVM_it
    + γ₇CPRICE_it + γ₈INCVAR_it + ε_it

Variable Coefficient t-Value


1 2

INTERCEPT 0.001 0.008


LGTASST (+) 0.025 1.194
INVVAR (?) –2.824 –11.446***
LEV (−) –0.787 –4.795***
RELASST (+) –0.663 –0.563
CI (+) –0.198 –1.786*
INVM (−) –1.113 –5.932**
CPRICE (+) 0.025 4.555***
INCVAR (−) –0.008 –1.519
No. of observations 6090
Chi-square 906.4***
Log-likelihood –2346.3
Estimated R2 16.0%
Percent correct classification 83.0%

Note: Expected sign of the coefficients is in parentheses. CPRICE is the relative frequency of price
increases in each industry during the 1973–1981 period. See Table 1 for definition of variables.
∗ Significant at 0.10 level.
∗∗ Significant at 0.05 level.
∗∗∗ Significant at 0.01 level.
exception of the CI variable, which is significant at 10%, the coefficients for
INVVAR, LEV, INVM variables are statistically significant at less than 5% and
the sign is generally in the expected direction.17

Estimates of Income Tax Equations

Table 5 presents the estimates of selectivity-adjusted income tax equations in


Columns 1 and 2. The dependent variable in both equations is the natural logarith-
mic value of the total income taxes for the year.18 All regressors are as defined in

Table 5. Estimated Coefficients for Income Tax Equations.


T = γ₀ + γ₁FIXED_t + γ₂NDTS_t + γ₃LATS_t + γ₄INVTS_t + γ₅TLCF_t + ε_t

Variable Selectivity-Adjusted OLS


(Expected Sign)
LIFO FIFO LIFO FIFO

1 2 3 4

Intercept 6.666 −2.152 3.205 0.761


(18.07)*** (−12.65)*** (11.13)*** (6.04)***
FIXED (−) −0.145 −0.103 −0.071 −0.130
(−4.04)*** (−4.39)*** (−1.32) (−4.47)***
NDTS (−) −0.294 0.027 −0.270 0.047
(−5.37)*** (0.55) (−2.64)** (0.46)
LATS (−) −0.518 −0.233 −0.513 −0.196
(−6.55)*** (−5.96)*** (−3.29)*** (−3.99)***
INVTS (−) −4.603 −5.158 −9.255 −3.863
(−3.67)*** (−8.43)*** (−5.49)*** (−6.47)***
TLCF (?) 0.558 0.024 0.565 0.029
(4.04)*** (3.44)*** (1.27) (2.47)**
LAMBDA −3.612 −9.614
(−13.09)*** (−29.43)***
Num. of obs. 1050 5040 1050 5040
Adj. R-square 0.243 0.243 0.123 0.042
F-value 57.17*** 270.33*** 27.72*** 34.74***

Note: See Table 1 for definition of variables. Lambda is the inverse Mills' ratio derived as λ =
−φ(u)/Φ(u), where u is the predicted value of the reduced-form probit, φ is the standard
normal probability density function for u, and Φ is its cumulative density function. Figures
in parentheses are t-values based on the heteroskedasticity-consistent variance-covariance matrix
derived in Heckman (1979), Columns 1 and 2, and White (1980), Columns 3 and 4.
∗∗ Significant at 0.05 level.
∗∗∗ Significant at 0.01 level.

Table 1. As stated in the Conceptual Framework section, the LAMBDA terms are the inverse Mills ratios.19
The statistical significance of the LAMBDA term in Columns 1 and 2 indicates a
non-zero covariance between the error term in the inventory choice and the LIFO
(FIFO) tax equations. In short, the self-selection of firms into the LIFO (FIFO)
category is confirmed.20 The negative signs of the LAMBDA terms show that firms
that expect to pay higher-than-average taxes in the LIFO (FIFO) category are
less likely to be LIFO (FIFO) firms. Thus, the average firm's choice of the LIFO or
the FIFO method for inventory valuation in this sample is consistent with rational
decision-making. The typical LIFO or FIFO firm in the sample would have been
worse off had it chosen the alternate method. Note also that the sign pattern on
the LAMBDA terms satisfies the condition for model consistency: σ_Lμ > σ_Fμ. The
lambda coefficient for LIFO firms is −3.612 and for FIFO firms it is −9.614, and
both these values are statistically significant.
Other results shown in Table 5, Column 1 are noteworthy. In Column 1, all
of the remaining regressors are significant at the 1% level and the signs of the
regression coefficients are generally in the expected direction for LIFO firms, with
the exception of TLCF. We find that LIFO firms' taxes are inversely related to the tax
shield provided by fixed assets (FIXED), the non-debt tax shield (NDTS), available
tax savings (LATS), and the inventory-to-sales ratio (INVTS), and positively related to tax
loss carryforwards (TLCF). In Table 5, Column 2, the results for FIFO firms
show that the signs of the coefficients are in the expected direction for each of the
regressors, with the exception of TLCF. As with LIFO firms, the positive sign
of the TLCF coefficient suggests that firms paying higher taxes also have higher
values of TLCF on their books. Taken together, these reported sign patterns are
largely consistent with the theoretical predictions listed in the Model Specification section.21
For comparison, Columns 3 and 4 in Table 5 show the corresponding estimates
for the specifications without correcting for self-selection. The results are
generally consistent with those of the selectivity-adjusted regressions reported in
Columns 1 and 2. The extent of self-selection bias is assessed by comparing the
corresponding coefficients in Column 1 vs. Column 3 and in Column 2 vs. Column 4 of Table 5.
The implied elasticity of the firms' tax payments with respect to each of the
independent variables (measured by multiplying the estimated coefficients by their
respective means) is used to estimate the differences in the size of the coefficients
between the selectivity-adjusted and OLS estimates.22 Our calculations reveal that
for LIFO firms the implied elasticity for the FIXED and INVTS variables is sizably
different between the selectivity-adjusted and OLS regressions. For instance, a 10%
increase in inventory to sales (INVTS) is expected to decrease taxes by 7.64%
(−4.603 × 0.166) using the selectivity approach and by 15.36% (−9.255 × 0.166)
using OLS. Large differences in implied elasticity between selectivity-
adjusted and OLS regressions are also found for FIFO firms. A 10% increase in
available tax savings (LATS) will decrease taxes by 13.02% (−0.233 × 5.59) under
the selectivity approach and by 10.96% (−0.196 × 5.59) using the OLS approach.
Also, a difference of about 24% in implied elasticity is found for the INVTS variable
between the selectivity-adjusted and OLS categories. In addition, we performed
the Wald test for the overall differences between the selectivity-adjusted and OLS
regressions in each of the LIFO and FIFO categories. Our results show significant
differences (p < 0.001) across the selectivity-adjusted and OLS estimates.23
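The implied-elasticity comparison can be reproduced in a few lines; the coefficients and means below are taken directly from Tables 3 and 5.

```python
# Implied elasticities: estimated coefficient x sample mean (see Note 22).
invts_mean_lifo, lats_mean_fifo = 0.166, 5.590          # Table 3 means
print(-4.603 * invts_mean_lifo)   # -0.764: selectivity-adjusted INVTS elasticity, LIFO firms
print(-9.255 * invts_mean_lifo)   # -1.536: OLS INVTS elasticity, LIFO firms
print(-0.233 * lats_mean_fifo)    # -1.302: selectivity-adjusted LATS elasticity, FIFO firms
print(-0.196 * lats_mean_fifo)    # -1.096: OLS LATS elasticity, FIFO firms
```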

Estimates of the Structural Probit and Sensitivity Tests

The estimates of the structural probit model (not reported) are based on the results of
the reduced-form probit. Thus, using the reduced-form probit, we derive the estimated
value for predicted tax savings under LIFO and FIFO approaches. These economet-
rically computed predicted tax savings are introduced as an added variable in the
structural probit equations. The results of the estimated structural probit equations
based on Eq. (12) show that the sign and significance of the coefficients are largely
similar to those for the reduced-form probit, except for the size variable (LGTASST),
which is significant at the 1% level, and the inflation variable (CPRICE), which is
negatively and significantly associated with LIFO use. As previously noted, in the
reduced-form probit the LGTASST variable was not significant and the CPRICE variable
was significantly positive.
The results of the structural probit (not reported) also show the coefficient for the
predicted tax savings, PRTXSAV, calculated as the difference between predicted
less actual taxes under the two inventory valuation methods for each firm in the
sample. For LIFO firms, it is calculated as the difference between predicted FIFO
taxes and actual LIFO taxes. For FIFO firms, it is the difference between predicted
LIFO taxes and actual FIFO taxes. For identifiability reasons, the structural probit
equations exclude the INCVAR variable. The estimation results (not reported)
reveal that the predicted tax savings variable has the most significant and positive
coefficient in explaining the use of LIFO, indicating that increases in the tax
benefit of adopting LIFO increase the likelihood that firms will choose the LIFO
method.
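A minimal sketch of this second-round estimation is given below. It assumes a column `prtxsav` has already been constructed as predicted minus actual taxes (as described above) and added to the hypothetical DataFrame `df`, with INCVAR dropped for identification as in the text; the column names remain our assumptions.

```python
# Structural probit: inventory choice on predicted tax savings plus the Eq. (12)
# regressors, with INCVAR excluded for identification (column names assumed).
import statsmodels.formula.api as smf

structural = smf.probit(
    "lifo ~ prtxsav + lgtasst + invvar + lev + relasst + ci + invm + cprice",
    data=df,
).fit(disp=0)
print(structural.params["prtxsav"], structural.tvalues["prtxsav"])  # cf. Table 6
```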
Since the results of the structural probit may depend on the choice of the variable
excluded from the inventory choice equation, we conducted sensitivity tests by
re-estimating the structural probit model with a different variable excluded in
turn. The results of these tests are reported in Table 6. Each row in the table
represents a separate estimation of the structural model. The excluded variable
in each case is listed in the first column. Clearly, all of the predicted tax savings

Table 6. Sensitivity Test of the Structural Probit Equation.


Variable PRTXSAV t-Value

INCVAR 0.296 35.764***


INVVAR 0.294 35.427***
RELASST 0.295 38.850***
INVM 0.296 35.885***
CI 0.294 35.917***
LGTASST 0.286 36.243***
LEV 0.292 36.363***
CPRICE 0.296 35.780***

Note: Dependent variable equals 1 if the firm is LIFO, 0 if FIFO. Each coefficient reported above
represents predicted tax savings computed as the difference between predicted taxes less actual
taxes in separate estimates of the probit equation. The variable removed from the structural
probit equation is named in the first column. For example, the coefficient on predicted taxes in
Table 6 is reported as the first entry labeled INCVAR when INCVAR is removed.
∗∗∗ Significant at the 0.01 level.

coefficients in Table 6 are positive and significant at less than the 1% level and qualitatively
similar in magnitude in each case. Thus, the positive effect of the predicted
tax savings variable appears to be robust with respect to differences in the
specification of the structural probit equations.

Predicted Tax Savings Under Different Regimes

The implications of self-selection in an inventory method selection study cannot


be easily dismissed. The presence of a significant lambda term in the selectivity-
adjusted model suggests that the tax savings or tax savings foregone could be
potentially large. Thus, the model could then be used to estimate what the tax
savings or tax savings foregone would have been had a firm chosen the alternate
method. As an illustration, we calculate the differences between the actual tax
payments for each LIFO (FIFO) firm and what they would have paid had they
been in the other category, with and without adjustment for self-selection. For
consistency, we define the tax saving as actual LIFO (FIFO) taxes less predicted
FIFO (LIFO) taxes. The calculated differences so obtained are reported in Table 7.
The table provides the answer to the question: how much more or less tax would
LIFO (FIFO) firms have paid had they been in the other category?
The predicted tax savings or tax savings foregone for LIFO and FIFO firms under
the selectivity-adjusted and the OLS methods are presented in Table 7. Columns 1

Table 7. Predicted Dollar Tax Savings Under Different Regimes for LIFO and
FIFO Firms.
Selectivity-Adjusted OLS
LIFO FIFO LIFO FIFO
1 2 3 4

Mean −282.20 12.300 40.563 11.301


Median −297.44 1.242 2.259 4.994
Std Dev 190.35 43.000 128.4 43.023
Maximum 709.31 449.45 999.12 498.70
Minimum −723.67 −0.315 −18.320 −13.591
InterQuar-tile range 188.59 5.975 22.589 5.630
No. of obs. 1032 5031 1032 5031

Note: Tax savings for the LIFO (FIFO) firms is the difference between the actual LIFO (FIFO) taxes
and predicted FIFO (LIFO) taxes. A negative sign indicates tax savings and a positive sign indi-
cates tax savings foregone. Selectivity-adjusted values are based on the Heckman (1976, 1979)
and Lee (1978) procedure. OLS estimates are ordinary least squares regression-based estimates. To eliminate
the effect of extreme values, 18 observations were dropped from the LIFO sample and 9 observations
from the FIFO sample. Our results remain qualitatively the same with or without these outliers.

and 2 show the selectivity-adjusted tax savings/tax savings foregone while Columns 3
and 4 show the corresponding tax savings/tax savings foregone when the OLS estimates
are used. Selectivity-adjusted values (Columns 1 and 2) show that for LIFO firms
the mean tax savings are $282.2 million (median $297.4 million) and that the mean tax
savings foregone for FIFO firms are $12.3 million (median $1.2 million).24 These results
suggest that FIFO firms would have had sizable tax savings had they been LIFO
firms.
Had the non-random nature of the sample been ignored, the corresponding OLS
estimated mean tax savings foregone would have been $40.6 million (median
$2.3 million) for LIFO firms and $11.3 million (median $5.0 million) for FIFO
firms (Columns 3 and 4). Compared to the selectivity approach, the OLS results are
consistent for FIFO firms but not for LIFO firms.25
Compared to the OLS approach, we have greater confidence in the results of the
selectivity-based approach because it is econometrically derived and it utilizes a
set of explanatory variables in the regression models that are well defined in the
accounting and tax literature. We believe that the large selectivity-based tax savings
provide a partial explanation of why a number of firms adopted LIFO during the
mid-1970s. Finally, the results in Table 7 for FIFO firms are fairly uniform. Under
either the selectivity-adjusted or the OLS approach, the FIFO firms would have had
tax savings had they used the LIFO inventory method.

SUMMARY AND CONCLUSIONS


The managerial decision to use an inventory accounting method is based on a number of
variables. Some of these variables may be linked to tax strategy, compensation
policy, debt covenants, stock prices, and working capital. Whichever inventory
method is used (e.g. LIFO, FIFO), the firm needs to design its managerial ac-
counting system to optimize its choice of the accounting method. For instance, the
decision to adopt LIFO or FIFO suggests that a firm has the ability to forecast cash
flows under either of these choices. In addition, a firm planning to use LIFO expects
to remain profitable and that the investment in inventories is likely to increase due
to expected inflation. Thus, management accounting needs to do careful profit
planning for business units and segments and be able to examine the implications
of the inventory method on firm taxes, managerial compensation, debt covenants,
and inventory management. This discussion demonstrates that the choice of
the inventory method affects a firm’s management accounting system in a wide
variety of ways.
Prior research examining a firm’s choice of either LIFO or FIFO as an inventory
valuation method has concentrated on two major areas. One line of inquiry has
measured the effects of this choice while the other has examined the determinants
of the choice. This study extends LIFO-FIFO research by developing a model
in which the choice of an inventory valuation method and the effects of this
choice are jointly determined. Approaching the subject area in this way makes the
self-selection bias arising from firms’ choice of LIFO and FIFO the central issue
of our analysis. Self-selection occurs when the observed assignment of firms into
these two accounting categories is due to a priori, unobserved decision processes.
Ignoring the self-selection of firms and treating the LIFO-FIFO status as exoge-
nous introduces bias into the empirical estimates. In dealing with this issue, we
applied the two-stage regression procedure developed by Heckman (1976, 1979)
and Lee (1978) to 1973–1981 data, a period during which the incentive to adopt
LIFO was most pronounced. We estimated tax equations for LIFO and FIFO firms
separately, simultaneously correcting for self-selection bias. The unbiased param-
eters from the tax functions are then used to create a measure of the tax benefits
from adopting LIFO.
Self-selection bias arises because sample items have been pre-selected. The
researcher, as a result, has no opportunity to select the sample items randomly.
In the case of the selection of LIFO and FIFO firms, we could not select the sample
firms randomly. We used firms in our sample for which the managers had already
selected either LIFO or FIFO as the predominant inventory method.
Hence, the researcher ends up using a non-random sample. The self-selection
approach used here attempts to measure whether there is self-selection bias when
using pre-selected LIFO and FIFO firms. Knowing the self-selection bias, we are
able to estimate the tax savings that would have resulted had the firms not been
pre-selected and the sample been randomly drawn. The estimated Mills ratio
(lambda term) based on the two-stage method identifies the extent of self-selection
bias and then provides the basis for computing the predicted tax savings used in
the structural probit estimation.
Our analysis yields the following results. First, we find strong evidence that
self-selection is present in our sample of LIFO and FIFO firms, consistent with the
hypothesis that the managerial decision to choose the observed inventory account-
ing method is based on a rational cost-benefit calculation. Second, correcting for
self-selection leads to the inference that LIFO firms would, on average, pay more
taxes as FIFO firms, and FIFO firms could have had tax savings had they been
LIFO firms. Our selectivity-adjusted calculation shows that the mean tax savings
is $282.2 million for LIFO firms and that FIFO firms would, on average, pay
$12.3 million less in taxes if they were LIFO firms. Without correction for selec-
tivity bias, the mean tax benefit foregone is $40.6 million for LIFO firms and $11.3
million for FIFO firms. Overall, the results suggest that the difference between the
LIFO and FIFO tax savings could partly be a function of firm size. LIFO firms in our
sample are, on average, larger than the FIFO firms. In addition, the difference
in tax savings may be related to specific industries. Finally, we believe that the
inventory method (LIFO or FIFO) is reflective of the various economic constraints
confronting the firm. Hence, the inventory method used by a firm is a rational
economic decision.
Our work has two potential weaknesses. Despite controlling for firm size in
the accounting choice function, one may argue that our results are affected by the
size difference between the LIFO and FIFO firms. The LIFO firms in our sample are,
on average, larger than the FIFO firms. In addition, the difference in tax savings
may also be specific to selected industries. We do not compute tax savings at the
industry level because of insufficient data. Future studies could explore this issue
further. Future work may also investigate the presence of self-selection bias on
other management accounting issues. For instance, the effect of the selection of the
depreciation method on managerial performance could be studied. An additional
area of work could focus on the effect of the selection of the pooling vs. purchase
method of accounting on the post-merger performance of merged firms.

NOTES
1. The LIFO reserve reported by LIFO firms since 1975 is also an as-if number (see
Jennings et al., 1996).

2. Martin (1992) argues that the FIFO method may be a logical tax-minimizing strategy for
some firms when sales and production grow faster than the inflation rate, idle capacity exists,
and fixed manufacturing costs are relatively large.
3. Sunder (1976a, b) develops models to estimate the differences between the net present
value of tax payments under LIFO vs. FIFO inventory valuation methods. He shows that
the expected value of net cash flows depend on the future marginal tax rates, anticipated
change in the price of inventories, cost of capital of the firm, pattern of changes in the
year-end inventories, and the number of years for which the accounting change will remain
effective. Caster and Simon (1985) and Cushing and LeClere (1992) found that tax loss
carryforwards and taxes, respectively, are significant factors in the decision to use the LIFO
method.
4. The accounting literature has yet to develop a unified theory explaining managers’
choice of an accounting method. However, an emerging body of accounting literature has
advocated the concept of rational choice as a basis of managerial decision-making (see
Watts & Zimmerman, 1986, 1990).
5. We recognize that a principal-agent problem can arise when the self-interest of managers
does not coincide with the interests of the firm's shareholders (Jensen & Meckling, 1976). We
do not incorporate the agency issue into the theoretical framework for two reasons: (1) it
is another source of self-selection in the data in addition to the self-selection caused by
maximizing shareholders’ wealth; and (2) some empirical studies have found that man-
agerial compensation and managerial ownership variables are not significant regressors in
explaining inventory valuation choice (Abdel-khalik, 1985; Hunt, 1985).
6. The Heckman-Lee method has also been used in previous accounting studies. For
instance: Abdel-khalik (1990a, b) applied the Heckman-Lee model to firms acquiring man-
agement advisory services from incumbent auditors vs. other sources, and to the endogenous
partitioning of samples into good news and bad news portfolios of quarterly earnings an-
nouncements; Shehata (1991) uses the Heckman-Lee model to examine the effect of
Statement of Financial Accounting Standards No. 2 on R&D expenditures; and Hogan (1997)
shows that the use of a Big 6 vs. non-Big 6 auditor in an initial public offering depends upon
a strategy that minimizes the total cost of under-pricing and auditor compensation.
7. The error terms in Eqs (10) and (11) are heteroskedastic and a correction must be
made in calculating the correct standard errors of the estimates. In this study, this correction
is achieved by using LIMDEP software in implementing the Heckman-Lee model. See
Greene (1990) for a description of the correction process.
8. Following Dopuch and Pincus (1988), we also estimate using the “as-if” approach
but we do not report the results in this paper.
9. Lindahl et al. (1988) characterize Lee and Hsieh’s (1985) probit model to be compre-
hensive which includes many of the variables used in previous studies.
10. The rationale for the use of the regressors in the probit function is covered extensively
in Lee and Hsieh (1985).
11. Dhaliwal et al. (1992) compute the FIXED variable by including long-term debt in
addition to the market value of equity in the denominator.
12. DeAngelo and Masulis (1980) demonstrated that a firm’s effective marginal tax
rate on interest deductions is a function of the firm’s non-debt tax shields (e.g. tax loss
carryforwards, investment tax credits).
13. Trezevant (1996) investigates the association of the debt tax shield with changes in a non-
investment tax shield (cost of goods sold) in the post-LIFO-adoption period.

14. A similar argument is made by Mackie-Mason (1990) when considering tax car-
ryforwards and debt financing. He argues that tax carryforwards have a large effect on the
expected marginal tax rate on interest expenses since each dollar of tax carryforward is
likely to reduce a dollar of interest deduction.
15. We included firms with as few as one LIFO (FIFO) year, as long as it has not been
a FIFO (LIFO) firm in any other years in this period. There are several reasons for our
choice: (1) specifying a minimum number of years for inclusion in the sample is arbitrary
and introduces potential bias in estimates; (2) the time element is unimportant in the pooled
cross-sectional analysis as each year of the data is treated as an independent observation; and
(3) since firms obtain the benefits from choosing LIFO (FIFO) in the year of adoption, firms
using LIFO (FIFO) for only one year will still provide us with as much relevant information
on the inventory method choice as those using LIFO or FIFO for more than one year.
16. Aside from differences in inventory methods and substantial economic differences,
the differences in t-values between the two groups may also be a function of possible
violation of the assumptions of the t-test.
17. Probit results with industry dummies included are similar to the results reported in
Table 4. In addition, the results indicate that the regression coefficients are positive and
significant for the textile, chemicals, petroleum and coal, rubber, leather, primary metal and
wholesale industries and are negative and significant for electronic and business services
industries.
18. We also used the logarithmic value of income taxes paid (income taxes – total [Compustat
item 16] minus deferred income taxes [Compustat item 126]) as the dependent variable.
The results are essentially similar to those reported in Table 5. In addition, we examined
the possibility of using the effective corporate tax rate as the dependent variable but decided
against its use because of the lack of consistency in the definition of the effective tax rate
measure in the literature (Omer et al., 1990).
19. The lambda term is based on the predicted value of the error term derived from the
reduced form probit. Hence, the selectivity adjustment reported in Table 5 is based on the
probit reported in Table 4.
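For reference, the lambda terms implied by a reduced-form probit with fitted index z_i \hat{\gamma} are the standard inverse Mills ratios; in the usual notation (generic symbols, not the paper's own), they can be written as

\lambda_i^{\mathrm{LIFO}} = \frac{\phi(z_i\hat{\gamma})}{\Phi(z_i\hat{\gamma})},
\qquad
\lambda_i^{\mathrm{FIFO}} = \frac{-\phi(z_i\hat{\gamma})}{1 - \Phi(z_i\hat{\gamma})},

where \phi and \Phi denote the standard normal density and distribution functions.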
20. The t-statistics shown in Table 5 are based on the correct asymptotic variance-
covariance matrix derived in Heckman (1979). Also, the OLS regression reveals that there
is no multi-collinearity among regressors (VIF values are no more than 2.0).
21. The results obtained using a different set of regressors for the tax function are
qualitatively similar. For instance, we developed a model containing the following variables:
NDTS, TLCF, FIXED, INVVAR, RELASST, and INCVAR. The corresponding coefficients
of the lambda terms are −1.815 (t = −5.292) for LIFO firms and −7.071 (t = −22.462) for
FIFO firms. The sign and significance of other regressors in the tax equations are generally
in the expected direction. Similar results are also found when we drop the INCVAR variable
from the above tax function.
22. Differentiating the dependent variable with respect to the variable FIXED, for example,
yields ∂T/(T∂F), where T is dollars of tax payments and F represents the FIXED
variable. Multiplying by the mean value for FIXED gives us F∂T/(T∂F), the elasticity of
taxes associated with fixed assets. Note that for the LATS regressor, both the dependent
and independent variables are already in natural logarithmic value. Thus, the estimated
coefficient is itself the elasticity figure.
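In generic notation (the symbols below are not the paper's own), the calculation described in this note amounts to

\frac{\partial \ln T}{\partial F} = \frac{1}{T}\frac{\partial T}{\partial F} = \hat{\beta}_F
\quad\Longrightarrow\quad
\varepsilon_{T,F} = \frac{F}{T}\frac{\partial T}{\partial F} = \hat{\beta}_F\,\bar{F},

while for a regressor that already enters in logs (such as LATS), \partial \ln T / \partial \ln X = \hat{\beta} is itself the elasticity.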
23. We tested the normality assumption for each of the variables prior to adjusting
for selectivity by using either the Kolmogorov-Smirnov or the Shapiro-Wilk statistic. The
results show that the observed distribution is not significantly different from the normal
distribution. In addition, following Pagan and Vella (1989), we perform a moment-based
test for normality in selectivity models. In this test, the predictions from the probit model are
squared and cubed, weighted by the Mills ratio. Using the two-stage least squares results,
the null hypothesis that squared and cubed terms are zero cannot be rejected.
24. The difference between actual LIFO (FIFO) taxes and predicted FIFO (LIFO) taxes
for LIFO (FIFO) firms is either tax savings or tax savings foregone. A negative difference
suggests tax savings and a positive difference indicates tax savings foregone.
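The sign convention in this note can be sketched as follows; the function and variable names are purely illustrative, and the counterfactual taxes are assumed to come from the selectivity-adjusted tax equation of the alternative inventory regime.

import numpy as np

def tax_effect(actual_taxes, predicted_counterfactual_taxes):
    # Difference between actual taxes under the chosen method and predicted
    # taxes under the alternative method: negative values indicate tax savings,
    # positive values indicate tax savings foregone.
    diff = np.asarray(actual_taxes) - np.asarray(predicted_counterfactual_taxes)
    return {"mean_difference": diff.mean(),
            "share_with_savings": float(np.mean(diff < 0))}

# e.g. tax_effect(actual_lifo_taxes, predicted_fifo_taxes) for LIFO firms.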
25. We also computed tax savings/tax forgone using the “as-if” method described in
Lindahl (1982), Morse and Richardson (1983), Dopuch and Pincus (1988), and Pincus and
Wasley (1996). Our results (not reported) show that the average "as-if" tax savings for LIFO
firms is $10.9 million. The FIFO firms’ average tax savings foregone under the “as-if”
approach is $60.0 million. Overall, our results show that the selectivity-adjusted approach
has the highest tax savings for LIFO firms. The selectivity approach takes into consideration
the joint decision of the inventory method choice and the tax effect of the decision.

ACKNOWLEDGMENTS
An earlier version of this paper was presented at the 1998 Annual Meeting of
the American Accounting Association and at the 2003 Management Accounting
Research Conference. We are thankful to C. Brown, D. Booth, J. Ohde, M.
Myring, S. Mastunaga, M. Pearson, M. Pincus, M. Qi, R. Rudesal, and S. Sunder
for comments and suggestions and to D. Lewis and J. Winchell for computer
programming assistance. The first author is thankful to the Kent State University’s
Research Council for providing partial financial support for this project. A
previous version of this paper is also available on ssrn.com.

REFERENCES
Adel-Khalik, A. R. (1985). The effect of LIFO-switching and firm ownership on executives pay.
Journal of Accounting Research (Autumn), 427–447.
Adel-Khalik, A. R. (1990a). The jointness of audit fees and demand for MAS: A self-selection
analysis. Contemporary Accounting Research (Spring), 295–322.
Adel-Khalik, A. R. (1990b). Specification problems with information content of earning: Revisions
and rationality of expectations, and self-selection bias. Contemporary Accounting Research
(Fall), 142–172.
Auerbach, A. J., & Porterba, J. M. (1987). Tax-loss carryforwards and corporate tax incentives. In: M.
Feldstein (Ed.), The Effects of Taxation and Capital Accumulation (pp. 305–338). Chicago:
University of Chicago Press.
Ball, R. (1972). Changes in accounting techniques and stock prices. Journal of Accounting Research
(Suppl.), 1–38.
Biddle, G. C. (1980). Accounting methods and management decisions: The case of inventory costing
and inventory policy. Journal of Accounting Research (Suppl.), 235–280.
Biddle, G. C., & Lindahl, R. W. (1982). Stock price reactions to LIFO adoptions: The association
between excess returns and LIFO tax savings. Journal of Accounting Research (Autumn),
551–588.
Biddle, G. C., & Martin, R. K. (1985). Inflation, taxes, and optimal inventory policy. Journal of
Accounting Research (Spring), 57–83.
Bowen, R., Ducharme, L., & Shores, D. (1995). Stakeholders’ implied claims and accounting method
choice. Journal of Accounting and Economics (December), 255–295.
Cushing, B. E., & LeClere, M. J. (1992). Evidence on the determinants of inventory accounting policy
choice. Accounting Review (April), 355–366.
DeAngelo, H., & Masulis, R. (1980). Optimal capital structure under corporate and personal taxation.
Journal of Financial Economics, 8, 3–29.
Dhaliwal, D., Trezevant, R., & Wang, S. (1992). Taxes, investment-related tax shields and capital
structure. Journal of American Taxation Association, 1–21.
Dopuch, N., & Pincus, M. (1988). Evidence on the choice of inventory accounting methods: LIFO
versus FIFO. Journal of Accounting Research (Spring), 28–59.
Duncan, G. (1986). Continuous/discrete econometric models with unspecified error distribution.
Journal of Econometrics, 32, 139–153.
Greene, W. H. (1990). Econometric analysis. Englewood, NJ: MacMillan.
Hagerman, R. L., & Zmijewski, M. R. (1979). Some economic determinants of accounting policy
choice. Journal of Accounting and Economics (January), 141–161.
Hand, J. R. M. (1993). Resolving LIFO uncertainty: A theoretical and empirical reexamination of
1974–1975 LIFO adoptions and nonadoptions. Journal of Accounting Research (Spring),
21–49.
Heckman, J. J. (1976). The common structure of statistical models of truncation, sample selection,
and limited dependent variables and a simple estimation for such models. Annals of Social
and Economic Measurement (Fall), 475–492.
Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica (January), 153–161.
Hogan, C. E. (1997). Costs and benefits of audit quality in the IPO market: A self-selection analysis.
Accounting Review (January), 67–86.
Hunt, H. G., III (1985). Potential determinants of corporate inventory decisions. Journal of Accounting
Research (Autumn), 448–467.
Jennings, R., Mest, D., & Thompson, R., II. (1992). Investor reaction to disclosures of 1974–1975
LIFO adoption decision. Accounting Review (April), 337–354.
Jennings, R., Simko, P., & Thompson, R., II (1996). Does LIFO inventory accounting improve
the income statement at the expense of the balance sheet? Journal of Accounting Research
(Spring), 85–109.
Jensen, M. C., & Meckling, W. H. (1976). Theory of the firm: Managerial behavior, agency costs and
ownership structure. Journal of Financial Economics (September), 305–360.
Kang, S. K. (1993). A conceptual framework for the stock price effects of LIFO tax benefits. Journal
of Accounting Research (Spring), 50–61.
Lee, L. F. (1978). Unionism and wage rates: A simultaneous equations model with qualitative and
limited dependent variables. International Economic Review (June), 415–433.
Lee, J. C., & Hsieh, D. (1985). Choice of inventory accounting methods: Comparative analyses of
alternative hypotheses. Journal of Accounting Research (Autumn), 468–485.
Lindahl, F., Emby, C., & Ashton, R. (1988). Empirical research on LIFO: A review and analysis.
Journal of Accounting Literature, 7, 310–333.
Mackie-Mason, J. K. (1990). Do taxes affect corporate financing decisions? Journal of Finance
(December), 1471–1493.
Maddala, G. S. (1991). A perspective on the use of limited-dependent and qualitative variables models
in accounting research. Accounting Review (October), 788–807.
Manski, C. (1989). Anatomy of the selection problem. Journal of Human Resource, 24, 343–360.
Manski, C. (1990). Nonparametric bounds on treatment effects. American Economic Review, 24,
319–323.
Martin, J. R. (1992). How the effect of company growth can reverse the LIFO/FIFO decision: A
possible explanation for why many firms continue to use LIFO. Advances in Management
Accounting (pp. 207–232). Greenwich, CT: JAI Press.
Ming, X., & Vella, F. (1994). Semi-parametric estimation via synthetic fixed effects. Working Paper,
Rice University (November).
Morse, D., & Richardson, G. (1983). The LIFO/FIFO decision. Journal of Accounting Research
(Spring), 106–127.
Omer, T. C., Molloy, K. H., & Ziebart, D. A. (1990). Measurement of effective corporate tax rates using
financial statement information. Journal of American Taxation Association (July), 57–72.
Pagan, A., & Vella, F. (1989). Diagnostic tests for models based on individual data: A survey. Journal
of Applied Econometrics, 4, S29–S59.
Pincus, M. A., & Wasley, C. (1996). Stock price behavior associated with post-1974–1975 LIFO
adoptions announced at alternative disclosure time. Journal of Accounting, Auditing and
Finance, 535–564.
Ricks, W. E. (1982). The market’s response to the 1974 LIFO adoptions. Journal of Accounting
Research (Autumn), 367–387.
Scholes, M. S., & Wolfson, M. A. (1992). Taxes and business strategy: A planning approach.
Englewood, NJ: Prentice-Hall.
Shehata, M. (1991). Self-selection bias and the economic consequences of accounting regulation: An
application of two-stage switching regression to SFAS No. 2. Accounting Review (October),
768–787.
Smith, C. W., & Warner, J. B. (1979). On financial contracting: An analysis of bond covenants.
Journal of Financial Economics (June), 117–161.
Sunder, S. (1973). Relationship between accounting changes and stock prices: Problems of
measurement and some empirical evidence. Journal of Accounting Research (Suppl.), 1–45.
Sunder, S. (1976). A note on estimating the economic impact of the LIFO method of inventory
valuation. Journal of Accounting Research (April), 287–291.
Sunder, S. (1976). Optimal choice between FIFO and LIFO. Journal of Accounting Research
(Autumn), 277–300.
Titman, S., & Wessels, R. (1988). The determinants of capital structure choice. Journal of Finance
(March), 1–19.
Trezevant, R. (1992). Debt financing and tax status: Tests of the substitution effect and tax exhaustion
hypothesis using firms’ response to the Economic Recovery Act of 1981. Journal of Finance
(September), 1557–1568.
Trezevant, R. (1996). LIFO adoption and tax shield and substitution effect. Journal of American
Taxation Association (Suppl.), 18–31.
Trost, R. P. (1981). Interpretation of error covariances with nonrandom data: An empirical illustration
of returns to college education. Atlantic Economic Journal (September), 85–90.
White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for
heteroskedasticity. Econometrica, 48, 817–838.
CORPORATE ACQUISITION
DECISIONS UNDER DIFFERENT
STRATEGIC MOTIVATIONS

Kwang-Hyun Chung

ABSTRACT
Acquisition is one of the key corporate strategic decisions for firms' growth and
competitive advantage. Firms: (1) diversify through acquisition to balance
cash flows and spread business risks; and (2) eliminate their competitors
through acquisition by acquiring new technology, new operating capabilities,
process innovations, specialized managerial expertise, and market position.
Thus, firms acquire either unrelated or related businesses based on their
strategic motivations, such as diversifying their business lines or improving
market power in the same business line. These different motivations may be
related to their assessment of market growth, firms’ competitive position,
and top management's compensation. Accordingly, it is hypothesized that firms'
acquisition decisions may be related to their industry growth potential,
post-acquisition firm growth, market share change, and the CEO's compensation
composition between cash and equity. In addition, for the two alternative
acquisition accounting methods allowed until recently, a test is made of whether
the type of acquisition is related to the choice of accounting method. This study
classifies firms’ acquisitions as related or unrelated, based on the standard
industrial classification (SIC) codes for both acquiring and target firms. The
empirical tests are, first, based on all the acquisition cases regardless of

firm membership and then deal with firms that acquired only related
businesses or only unrelated businesses.
The type of acquisition was related to industry growth opportunities,
indicating that unrelated acquisition cases are more likely to be
followed by a higher industry growth rate than related acquisition cases.
While a substantially larger number of acquisition cases used the
purchase method, the related acquisition cases used the pooling-of-interests
method more frequently than the unrelated acquisition cases. The firm-level
analysis shows that the type of acquisition decision was still related to the
acquiring firm's industry growth rate. However, the post-acquisition performance
measures, using firm growth and change in market share, could support
prior studies in that the exclusive-related acquisitions helped firms grow more
and gain more market share than the exclusive-unrelated acquisitions. The CEO's
compensation composition ratio was not related to the type of acquisition.

1. INTRODUCTION
For the last three decades, mergers and acquisitions have been important corporate
strategies involving corporate and business development as the capital markets
rapidly expanded. Increased uncertainty about the economy has made it difficult
for firms to resort only to internal growth strategies. Firms diversified through
acquisition to balance cash flows and spread the business risks. They also tried to
eliminate their competitors through acquisition by acquiring new technology, new
operating capabilities, process innovations, specialized managerial expertise, and
market position.
Mergers and acquisitions (M&A), as an increasingly important part of corporate
strategy, enable firms to grow at a considerable pace. Also, firms can quickly
restructure themselves through M&A when they find it necessary to reposition.
M&A also enables firms to create and identify competitive advantage.
Porter (1987) identified the following four concepts of corporate strategy from
the successful diversification records of 33 U.S. companies from 1950 through
1986: portfolio management, restructuring, transferring skills and sharing skills.
Those concepts of corporate strategy explain the recent corporate takeovers as
either related diversification or conglomeration. Also, Porter (1996) stressed
that the essence of corporate strategy is choosing a unique and valuable position
rooted in systems of activities that are much more difficult to match, while the
operational techniques, such as TQM, benchmarking, and re-engineering, are easy
to imitate. Thus, many companies take over the target firms with such positions
in the context of operational effectiveness competition, instead of repositioning
themselves based on products, services, and customers’ needs. Firms’ sustainable
competitive advantage through buying out rivals could be another key motivation
for corporate takeovers.
According to Young (1989), it is necessary to identify how M&A will result in
added value to the group, how quickly these benefits can be obtained and how the
overall risk profile of the group will be affected when considering M&A as part of
an overall corporate and business development strategy. The study suggests two
key criteria for successful corporate takeovers: the level of business affinity and
the business attractiveness (i.e. market size, growth, profitability, etc.) of the target
company. In reality, firms acquire either unrelated or related businesses based on
their strategic motivations: diversifying their business lines or improving market
power in the same business line. These different M&A decisions are hypothesized
to be influenced by the firms’ strategic motivations explained in the prior literature.
This paper attempts to identify the different strategic motivations for acquisition
activities of firms. More specifically, this study tests whether firms’ acquisition
decisions are related to their industry growth potential, post-acquisition firm
growth, market share change, and CEO’s stock compensation composition. It also
tests whether the type of acquisition is related to the choice of accounting method.
In light of what prior studies (e.g. Scanlon, Trifts & Pettway, 1989) have
done, this study classifies firms’ acquisitions as related or unrelated, based on the
standard industrial classification (SIC) codes for both acquiring and target firms.
This study utilizes Securities Data Company (SDC)’s Worldwide Mergers &
Acquisition Database from 1996 to 1999. This database includes all transactions
involving at least 5% of the ownership of a company where a transaction was val-
ued at $1 million or more, and each firm may have many acquisition cases over the
test period. Thus, the empirical testing is based on each case as well as on each firm.
The rest of this paper is structured as follows: Section 2 explains how the
hypotheses are developed with regard to the accounting methods, the motivations
for a firm's acquisition decision, including industry growth opportunities and CEO
compensation, as well as the consequences of the acquisition decision in terms of
improved operating performance and increased market presence. The data used
and test variables are explained in Section 3. Section 4 summarizes empirical
results in both all-cases analysis and firm-level analysis. Finally, the concluding
remarks are provided in Section 5.

2. HYPOTHESES DEVELOPMENT
2.1. Acquisition Trends and Industry Growth Opportunities

If a firm’s motivation for acquisition is to balance cash flows and spread the business
risks, it is more likely to acquire a different line of business. Also, when the industry
is faced with limited growth potential, firms are less likely to enter new markets
because it would be riskier for management to manage a new, unrelated business.
This hypothesis is supported by product-market theory, which suggests that risk
increases as a firm moves into a new unfamiliar area.
The conglomerate type of diversification was often used to fuel tremendous
corporate growth as firms purchased many unrelated businesses during the 1960s,
regardless of what good or service they sold. In the 1970s, managers began to
emphasize diversification to balance the cash flows that individual businesses produced.
Acquisition was regarded as a way to balance businesses that produced excess cash
flows against those that needed additional cash flows; this was known as portfolio
management to reduce risk. During the 1980s there was a broad-based effort to
restructure firms, shedding unrelated businesses and focusing on a narrower range
of operations. Expansion through acquisition was often limited to vertical integration.
In the 1990s, there was increasing diversification into related businesses that
focused on building dynamic
capabilities as an enduring source of competitive advantage and growth. If firms’
motivation for M&A is the use of core competence in the acquired business and/or
to increase market strength, the new business should be sufficiently similar to the
existing business, and this benefit can be augmented by acquiring the same line of
business. The use of core competence and/or the increase in market strength
should produce a competitive advantage that consequently increases market share.
Thus, the following hypothesis is stated in alternate form with regard to each
acquisition case:

H1. Firms' acquisition type is related to their industry growth potential. More
specifically, under limited growth potential, firms are more likely to seek
competitive advantage and growth through related acquisitions, while firms
with higher growth potential in their industry are more likely to enter new
markets to balance cash flows, which leads to unrelated acquisitions.

2.2. Acquisition Decisions and Firms' Post-acquisition Growth and Market Power

The hypothesis above can be rephrased using a firm's own growth rate, instead
of the industry growth rate, for the firm-level analysis. However, this study uses the
ex-post growth rate after the acquisition, and thus, the post-acquisition growth
rate would be interpreted as a firm’s post-acquisition performance instead of its
own assessment of the growth potential. In fact, many firms acquired both related
and unrelated businesses in each year. Therefore, this study selects two distinct
cases: firms that acquired exclusively the same business lines in each test
period (exclusive-related acquisition) and firms that acquired only different
lines of business in each test period (exclusive-unrelated acquisition).1 Prior M&A
studies suggested that changes in the opportunity to share resources and activities
among business units have contributed to post-acquisition performance.2 Most
studies find improved performance for the 1980s acquisitions, compared to the earlier
conglomerate acquisition wave of the 1960s, because of increased opportunities
to share resources or activities in the acquired firm (i.e. more operating synergy
effect).3 The diversifying character of unrelated acquisitions could be the reason
for poorer performance,4 especially in the short run, compared to related
acquisitions, where there is a high opportunity for shared activities or resources.
Thus, the following hypothesis is stated in alternate form with regard to a
firm's post-acquisition growth rate:
H2. Firms' acquisition decision, unrelated or related, would lead to different
post-acquisition growth. More specifically, firms with exclusive related acquisitions
are more likely to have a higher growth rate after the acquisition than
firms with exclusive unrelated acquisitions.
In addition to firms' post-acquisition growth, I hypothesize that a firm's acquisition
type affects its competitive position in its industry, measured by market share,
because one of the crucial motivations for acquisition may be to increase
market strength. Especially in a more competitive market, firms will be more
likely to acquire competing firms to increase their market power than in a less
competitive market environment. Thus, firms with exclusive related acquisitions
are more likely to be motivated by increasing market power than firms with
exclusive unrelated acquisitions. The following hypothesis is therefore stated in
alternate form with regard to the firm's change in market share:

H3. Firms that acquired only the same line of business are more likely to
experience a higher increase in market share in their industry than firms that
acquired only different lines of business, because they gain core competences
and market power.

2.3. Different Accounting Methods in Mergers and Acquisitions

Before June 2002, there were two generally accepted methods of accounting for
business combinations. One is referred to as the purchase method and the other is
known as the pooling of interests method. These two methods are not alternative
ways to account for the same business combination. The actual situation and the
attributes of the business combination determine which of the two methods is
applicable. The purchase method would be applicable in a situation where one
company is buying out another. The pooling of interests method would apply in the case
where the shareholders of one company surrender their stock for the stock of
another of the combining companies.
Under pooling, the business combination is regarded as a combining of ownership
interests rather than a purchase transaction; thus, firms can avoid recognizing goodwill.
That is why the pooling of interests method required certain criteria regarding the nature
of the consideration given and the circumstances of the exchange. However, a
problem with the method is determining the equivalent number of common shares
to be acquired from the combining companies. If the two companies in the business
combination are similar to each other, it is perhaps easier to determine the shares
to be exchanged than in heterogeneous combinations.5 Thus, the following
hypothesis is stated in alternate form in terms of the two different accounting
methods for acquisitions.

H4 . Firms with exclusive-related acquisition cases are more likely to use the
pooling of interests method than those with exclusive-unrelated acquisition
cases.

2.4. CEOs' Compensation Plan and Firm's Acquisition Decisions

Previous literature regarding management compensation (e.g. Narayanan, 1996)
suggested that the proportion of equity compensation is higher when the firm's
growth opportunities are greater. As future growth opportunities increase relative
to assets in place, the value of the stock per unit of managerial ability increases.
Therefore, management receives fewer shares for the same perceived ability; to
compensate, the proportion of the stock component must increase. If the top
management's equity compensation ratio is related to the firm's growth
opportunities, then the ratio may differ between the two types of acquisitions,
which reflect different assessments of future growth and lead to different
post-acquisition firm growth and changes in market power. Thus, the relation
between the type of acquisition decision and the management's compensation
proportion is hypothesized as follows:

H5. The CEOs' equity compensation ratio differs between exclusive-unrelated
acquisitions and exclusive-related acquisitions.
3. DATA AND TEST VARIABLES


From 1996 to 1999, there were 9,058 merger and acquisition cases identified in
the Securities Data Company's (SDC) 2001 Worldwide Acquisitions and Mergers
Database, which includes all transactions involving at least 5% of the ownership of
a company where the transaction was valued at $1 million or more. The SIC (standard
industry classification) codes of the acquirer and the target were used to ascertain
the degree of relatedness. There are 5,142 cases where the acquiring and acquired firms
had the same first two digits in their SIC codes, identified as related acquisitions,
and there are 3,916 cases identified as unrelated acquisitions. Total sample cases
are classified by the acquirers' first two-digit SIC codes and acquisition years from
1996 to 1999 in Table 1. It shows that 1998 was the most active year for mergers
and acquisitions. The all-cases analysis compares the two groups' ex-post

Table 1. Sample Distribution by Industry for Related Acquisition Cases and


Unrelated Acquisition Cases (All Cases).
Two-Digit Related Unrelated Total 1996 1997 1998 1999
SIC Acquisition Acquisition Cases

10 20 4 24 6 8 5 5
13 180 37 217 85 57 38 37
14 17 9 26 1 7 10 8
15 22 19 41 11 5 15 10
16 2 8 10 2 2 3 3
17 2 8 10 2 3 5 0
18 0 3 3 3 0 0 0
20 199 61 260 54 61 67 78
21 5 3 8 3 1 1 3
22 24 22 46 17 11 8 10
23 33 21 54 10 18 14 12
24 20 42 62 8 13 28 13
25 15 9 24 4 9 5 6
26 68 37 105 36 20 27 22
27 152 95 247 55 70 64 58
28 275 217 492 126 129 136 101
29 10 60 70 17 25 16 12
30 20 25 45 7 12 15 11
31 6 4 10 3 3 3 1
32 23 59 82 13 27 19 23
33 57 65 122 31 35 32 24
34 45 73 118 31 33 38 16
35 236 454 690 164 169 177 180
36 270 354 624 107 135 166 216
37 129 185 314 64 83 78 89
38 202 187 389 122 100 92 75


39 15 20 35 1 8 12 14
40 13 1 14 8 1 3 2
41 7 2 9 5 2 1 1
42 11 6 17 3 3 6 5
44 5 7 12 6 2 1 3
45 13 6 19 1 5 7 6
46 1 2 3 0 1 1 1
47 18 11 29 8 3 10 8
48 197 95 292 46 59 78 109
49 177 155 332 57 80 102 93
50 128 165 293 61 87 90 55
51 74 137 211 68 55 45 43
52 2 7 9 2 3 1 3
53 31 33 64 11 19 20 14
54 27 10 37 14 8 8 7
55 12 2 14 1 0 9 4
56 23 5 28 5 8 9 6
57 12 3 15 4 3 5 3
58 57 3 60 18 14 11 17
59 82 73 155 43 43 30 39
60 466 196 662 179 159 196 128
61 29 22 51 10 9 16 16
62 35 55 90 17 24 24 25
63 151 141 292 74 81 81 56
64 31 18 49 11 12 17 9
67 20 115 135 33 33 38 31
70 33 7 40 11 10 13 6
72 30 26 56 16 8 21 11
73 1010 320 1330 250 323 381 376
75 8 0 8 0 5 2 1
76 2 5 7 1 3 3 0
78 12 11 23 12 6 3 2
79 35 11 46 9 14 12 11
80 228 48 276 110 75 62 29
81 1 4 5 2 0 1 2
82 5 3 8 2 1 3 2
86 0 11 11 5 6 0 0
87 109 119 228 44 60 68 56
Total 5142 3916 9058 2130 2269 2452 2207
Related cases 5142 1239 1285 1412 1206
Unrelated cases 3916 891 984 1040 1001
Table 2. Sample Distribution by Industry for Exclusive Related Acquisition Firms and Exclusive Unrelated
Acquisition Firms (Year/Total) (Firm-Level Cases).
Two-Digit 1996 1997 1998 1999 Total Firms
SIC
Related Unrelated Related Unrelated Related Unrelated Related Unrelated Related Unrelated

10 3 1 5 0 3 0 2 0 13 1
13 29 4 22 3 17 5 20 1 88 13
14 1 0 1 2 0 2 1 1 3 5
15 1 2 3 2 4 1 1 2 9 7
16 1 1 0 0 0 1 0 2 1 4
17 0 0 0 1 0 1 0 0 0 2
18 0 2 0 0 0 0 0 0 0 2
20 13 1 20 1 16 2 15 4 64 8
21 1 0 0 1 1 0 2 1 4 2
22 3 4 5 1 4 0 3 2 15 7
23 6 2 4 5 6 2 5 2 21 11
24 2 3 1 1 1 1 2 4 6 9
25 0 1 3 1 2 1 2 0 7 3
26 10 6 8 3 7 7 3 5 28 21
27 10 5 9 9 11 5 8 6 38 25
28 31 16 26 22 33 16 25 7 115 61
29 1 8 1 7 2 8 0 5 4 28
30 1 2 4 4 3 4 1 3 9 13
31 2 1 0 1 2 1 1 0 5 3
32 4 4 4 3 2 3 1 4 11 14
33 7 10 6 9 11 4 8 6 32 29
34 6 4 4 9 4 10 3 4 17 27
35 13 18 18 28 17 27 18 28 66 101
36 15 24 32 28 25 23 25 22 97 97

37 11 8 12 15 13 18 6 14 42 55
38 23 25 19 16 16 14 20 13 78 68
39 0 1 4 2 2 3 3 4 9 10
40 7 0 1 0 1 1 1 0 10 1
41 0 0 1 0 1 0 0 1 2 1
42 1 1 2 1 2 1 3 2 8 5
44 2 3 1 1 1 0 1 2 5 6
45 1 0 3 1 3 2 2 1 9 4
46 0 0 0 1 0 1 1 0 1 2
47 1 0 1 0 2 1 1 0 5 1
48 17 3 17 9 14 3 12 5 60 20
49 12 11 20 10 24 10 19 15 75 46
50 8 11 10 11 5 14 4 8 27 44
51 10 9 7 9 4 7 3 8 24 33
52 0 2 0 2 1 0 1 2 2 6
53 4 5 6 5 5 6 4 7 19 23
54 5 1 2 0 3 1 2 1 12 3
55 0 1 4 0 2 0 4 0 10 1

56 4 0 1 3 5 0 4 1 14 4
57 1 0 0 1 3 0 3 0 7 1
58 11 0 9 0 7 1 9 1 36 2
59 7 3 5 5 5 2 4 3 21 13
60 46 13 42 8 30 12 23 12 141 45
61 4 2 3 2 5 1 2 3 14 8
62 2 3 1 8 4 5 4 2 11 18
63 17 9 19 17 17 11 14 9 67 46
64 1 2 3 2 0 1 4 0 8 5
67 1 7 3 11 1 7 0 10 5 35
70 2 1 2 2 7 0 2 0 13 3
72 2 2 5 1 3 2 5 1 15 6
73 47 17 59 7 65 16 63 10 234 50
75 0 0 2 0 2 0 1 0 5 0
76 0 1 0 0 0 0 1 0 1 1
78 3 1 2 2 1 1 0 1 6 5
79 2 1 3 2 3 1 3 3 11 7
80 23 3 17 3 21 5 12 3 73 14
81 0 1 0 0 0 1 1 1 1 3
82 1 1 1 0 0 0 2 0 4 1
86 0 1 0 1 0 0 0 0 0 2
87 6 7 8 5 7 10 6 4 27 26
Total 442 275 471 304 456 282 396 256 1765 1117

industry growth rates at least one year after the acquisition, which is the proxy for
the industry growth potential.
For further firm-level analysis, I sorted the 9,058 cases by acquiring firm and
identified two distinct firm groups to test the hypotheses developed in the
previous section. The first group consists of firms that acquired exclusively related
businesses (i.e. targets with the same two-digit SIC codes), and the comparison
group consists of firms that acquired exclusively unrelated businesses in each test
period. Table 2 summarizes the 1,765 exclusive related-acquisition firms and the
1,117 exclusive unrelated-acquisition firms by year and two-digit SIC code. Every
year saw more related-acquisition firms than unrelated-acquisition firms. The
firm-level analysis compares the following test variables: the industry growth rate,
as a measure of industry growth opportunity; the firm's growth rate, as post-acquisition
operating performance; the firm's change in market share, reflecting the motivation
of increasing market power; the accounting method used in the acquisition; and
the CEO's equity compensation ratio, as a proxy for the firm's growth potential.
Both the all-cases analysis and the firm-level analysis include a two-group parametric
difference test (univariate t-test) between the related and unrelated acquisition
decisions, and a logistic regression using the test variables as independent variables.
The all-cases analysis runs the logistic regression with the industry growth rates and
the accounting method for each acquisition as the independent variables. The industry
growth rate was measured from one year after the acquisition to fiscal year 2001, the
latest year for which data are currently available in the Compustat tape, but the logistic
regression includes only growth rates measured over at least two years. The industry
growth rate was calculated as the annual change in the average net sales of the firms
that belonged to the same two-digit SIC code. The firm's growth rate and change in
market share were also measured from one year after the acquisition to fiscal year 2001,
and the logistic regression models include them over at least a two-year period. The
CEO's compensation ratio is measured from Standard & Poor's ExecuComp by dividing
the sum of stock granted and the Black-Scholes value of options granted by total annual
compensation; the higher the variable, the more the CEO's compensation depends on
equity. The accounting method is a dichotomous variable equal to 0 for the purchase
method and 1 for the pooling-of-interests method.
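To make these definitions concrete, the following is a hedged sketch of how the main test variables could be computed from Compustat- and ExecuComp-style data; the column names (sic2, fyear, net_sales, stock_granted, option_value_bs, total_comp) are assumptions, not the databases' actual field names.

import pandas as pd

def industry_growth(compustat: pd.DataFrame, start: int, end: int) -> pd.Series:
    # Industry growth rate: change in the average net sales of the firms in a
    # two-digit SIC industry between two fiscal years, scaled by the start value.
    avg_sales = (compustat.groupby(["sic2", "fyear"])["net_sales"]
                          .mean().unstack("fyear"))
    return (avg_sales[end] - avg_sales[start]) / avg_sales[start]

def market_share(compustat: pd.DataFrame) -> pd.Series:
    # A firm's share of total net sales within its two-digit SIC industry-year;
    # the change in this share proxies for a change in market power.
    industry_total = compustat.groupby(["sic2", "fyear"])["net_sales"].transform("sum")
    return compustat["net_sales"] / industry_total

def equity_comp_ratio(execucomp: pd.DataFrame) -> pd.Series:
    # CEO equity compensation ratio: stock granted plus the Black-Scholes value
    # of options granted, divided by total annual compensation.
    return ((execucomp["stock_granted"] + execucomp["option_value_bs"])
            / execucomp["total_comp"])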

4. EMPIRICAL RESULTS
4.1. All-Cases Analysis

A total of 9,058 M&A cases were analyzed by year and divided into
related acquisition cases and unrelated acquisition cases. Table 3 shows the
Table 3. Comparison Between Related Acquisition Cases and Unrelated Acquisition Cases (All Cases Analysis).
Year 1996 1997 1998 1999
Related Unrelated t-Stat. Related Unrelated t-Stat. Related Unrelated t-Stat. Related Unrelated t-Stat.
Acquistion Acquistion Acquistion Acquistion Acquistion Acquistion Acquistion Acquistion
Cases Cases Cases Cases Cases Cases Cases Cases

Observations 1239 891 1285 984 1412 1040 1206 1001


Industry growth rate
5-years 0.5737 0.6654 −3.41a
4-years 0.58 0.6824 −4.77a 0.4085 0.4772 −3.51a
3-years 0.3623 0.4072 −2.81a 0.4053 0.4975 −6.42a 0.2943 0.3257 −2.02b
2-years 0.237 0.2461 −0.85 0.2158 0.2644 −4.55a 0.2843 0.3386 −5.21a 0.1581 0.1536 0.46
1-year 0.1245 0.113 0.38 0.0933 0.1186 −3.89a 0.1138 0.1359 −3.47a 0.1546 0.1753 −3.39a
Accounting method 0.0826 0.0619 1.55c 0.1121 0.0701 3.45a 0.1261 0.0702 4.53a 0.1119 0.0669 3.66a
Remarks: For year 1996
5-years: year 1996–2001
4-years: year 1996–2000
3-years: year 1996–1999
2-years: year 1996–1998
1-year: year 1996–1997
For year 1997
4-years: year 1997–2001
3-years: year 1997–2000
2-years: year 1997–1999
1-year: year 1997–1998
For year 1998
3-years: year 1998–2001
2-years: year 1998–2000
1-year: year 1998–1999
For year 1999
2 years: year 1999–2001
1-year: year 1999–2000
a Significance at the 1% level (two-tailed test).
b Significance at the 5% level (two-tailed test).

c Significance at the 10% level (two-tailed test).
comparison between the two groups from 1996 to 1999. Except in 1999, the growth
rates of the industry to which each acquisition case belongs show a significant
difference between the two groups in the hypothesized direction. The growth rates
measured up to 2001 were even smaller in some industries than those measured up
to 2000, which may contribute to the weaker results for the industry growth rates
through 2001, compared to those through 2000. This study cannot extend the
industry growth rate beyond 2001 because of data unavailability. Also, the industry
growth rate is an ex-post rather than an ex-ante surrogate for industry growth
potential. The accounting method results support the alternative hypothesis,
indicating that while most acquisitions are accounted for by the purchase method,
firms are more likely to adopt the pooling-of-interests method in related acquisition
cases than in unrelated acquisition cases.
The multivariate logistic regression results are provided by year in Table 4.
In 1996, all industry growth rates except the 1996–1998 rate are statistically
significant in differentiating related and unrelated acquisition cases in the
hypothesized direction. Consistent with the univariate comparison, the industry growth
rates in both 1997 and 1998 are all significant in explaining the firm's
acquisition decision. Unrelated acquisition decisions are made more often
when industry growth opportunities are foreseen. The accounting method is
statistically significant in all years.
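The two tests behind Tables 3 and 4 can be sketched as follows; related is the 0/1 indicator produced by the classification step above, accounting_method is the purchase (0) vs. pooling (1) dummy, and the remaining names are illustrative. A Welch (unequal-variance) t-test is used here as one common variant of the parametric two-group comparison.

import statsmodels.api as sm
from scipy import stats

def compare_groups(df, var):
    # Univariate two-group comparison (Table 3 style): t-test of a test
    # variable between related (1) and unrelated (0) acquisition cases.
    related = df.loc[df["related"] == 1, var].dropna()
    unrelated = df.loc[df["related"] == 0, var].dropna()
    return stats.ttest_ind(related, unrelated, equal_var=False)

def relatedness_logit(df, growth_var):
    # Logistic regression (Table 4 style): relatedness on an industry growth
    # rate and the accounting-method dummy.
    X = sm.add_constant(df[[growth_var, "accounting_method"]].astype(float))
    return sm.Logit(df["related"].astype(float), X).fit(disp=False)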

4.2. Firm-Level Analysis

Table 5 provides the comparison of the five test variables between exclusive-related
acquisition firms and exclusive-unrelated acquisition firms for the test periods. The
growth rates of the industries to which the firms belong are in general consistent with
the all-cases analysis in the previous section. In other words, firms facing higher
industry growth potential are more likely to acquire a different line of business
than those facing lower industry growth potential. However, the level of significance
is lower than in the all-cases analysis; especially when the growth potential
is measured after 1998, the differences become insignificant. The firm's growth rate
as post-acquisition performance is statistically significant only in 1997.
Also, the firm's change in market share after its acquisition is significantly different
between the two groups in 1997 and 1998. The use of different accounting methods
across the two types of acquisition was evident after 1996; firms acquiring in similar
business lines made heavy use of the pooling-of-interests method when the accounting
rule-makers started to discuss the abolishment of the method in the late 1990s. The
CEO's equity compensation ratio differs between the two groups in the expected direction,
but none of the differences in the test periods is statistically significant.
Table 4. Logistic Regression Results for Related and Unrelated Acquisition Cases (All Cases Analysis).
Year 1996 1997 1998 1999
Observations 2122 2263 2452 2207

Likelihood ratio 14.945 25.937 11.47 4.59 25.788 54.109 32.976 24.579 46.561 14.003
Intercept 0.4595 0.577 0.44 0.349 0.3661 0.5818 0.403 0.3065 0.5013 0.1185
(51.07a ) (61.8a ) (44.87a ) (30.05a ) (36.73a ) (64.61a ) (44.29a ) (31.37a ) (54.9a ) (4.77b )
Industry growth
5-years −0.2377
(10.94a )
4-years −0.4204 −0.323
(21.76a ) (12.12a )
3-years −0.332 −0.8002 −0.2001
(7.57a ) (39.36a ) (3.43c )
2-years −0.1514 −0.7437 −0.8254 −0.116
(0.7) (19.07a ) (24.36a ) (0.36)
Accounting method 0.2979 0.3008 0.3013 0.3098 0.5593 0.565 0.5383 0.6392 0.6261 0.567
(3.48c ) (3.54c ) (3.56c ) (3.78c ) (12.93a ) (13.04a ) (11.95a ) (19.27a ) (18.34a ) (13.19a )
% Concordant 45.5 52 50.5 47.2 51.1 56.4 54.6 45.6 54.5 48.8
Remarks: For year 1996
5-years: year 1996–2001
4-years: year 1996–2000
3-years: year 1996–1999
2-years: year 1996–1998
For year 1997
4-years: year 1997–2001
3-years: year 1997–2000
2-years: year 1997–1999
For year 1998
3-years: year 1998–2001
2-years: year 1998–2000
For year 1999
2 years: year 1999–2001
a Significance at the 1% level.
b Significance at the 5% level.

c Significance at the 10% level.
Table 5. Comparison Between Exclusive Related Acquisition Firms and Exclusive Unrelated Acquisition Firms
(Firm-Level Analysis).
Year 1996 1997 1998 1999
Test Variables Exclusive Exclusive t-Stat. Exclusive Exclusive t-Stat. Exclusive Exclusive t-Stat. Exclusive Exclusive t-Stat.
Related Unrelated Related Unrelated Related Unrelated Related Unrelated
Acquistion Acquistion Acquistion Acquistion Acquistion Acquistion Acquistion Acquistion
Firms Firms Firms Firms Firms Firms Firms Firms

Industry growth rate


Observations (firms) 441 266 466 297 449 268 392 252
5-years 0.6092 0.7041 −1.79a
4-years 0.5943 0.6894 −2.36b 0.4497 0.4821 −0.9
3-years 0.3666 0.4127 −1.55 0.4391 0.4959 −2.19b 0.3275 0.326 0.05
2-years 0.2263 0.2413 −0.8 0.2406 0.2582 −0.88 0.3084 0.3381 −1.57 0.1842 0.1751 0.42
1-year 0.1083 0.1096 −0.18 0.1096 0.1197 −0.83 0.1221 0.1366 −1.3 0.1691 0.1801 −0.86
Firm’s growth rate
Observations (firms) 297–413 180–250 310–427 219–276 323–389 193–229 312–335 195–214
5-years 1.8036 1.4159 0.83
4-years 1.3586 1.2122 0.44 1.0887 0.8398 1.23
3-years 0.8967 0.8083 0.45 0.9319 0.7535 1.25 0.6121 0.3941 1.92b
2-years 0.5789 0.5712 0.06 0.5449 0.4324 1.71a 0.4496 0.3938 0.68 0.2767 0.2694 0.13
1-year 0.2588 0.2866 0.47 0.2724 0.2218 1.53 0.1739 0.164 0.37 0.1893 0.2348 −1.31
Change in market share

Observations (firms) 297–413 175–243 310–426 215–272 323–388 189–225 312–335 192–211
5-years 0.788 0.6124 0.55
4-years 0.553 0.454 0.39 0.5748 0.2541 2.41b
3-years 0.4394 0.3613 0.45 0.4335 0.1805 2.56b 0.2479 0.0703 2.23b
2-years 0.3237 0.3098 0.11 0.295 0.1691 2.01b 0.1214 0.0476 1.41 0.1089 0.0865 0.5
1-year 0.1411 0.1692 −0.51 0.1715 0.1053 1.99b 0.0589 0.0345 0.95 0.0311 0.0494 −0.64
Accounting method 0.0973 0.08 0.78 0.1158 0.0757 1.9a 0.1269 0.066 2.77c 0.135 0.0703 2.8c
Observations (firms) 442 275 475 304 457 288 400 256
Equity compensation ratio 0.3652 0.3431 0.94 0.3921 0.3946 −0.1 0.4065 0.4441 −1.55 0.4393 0.4374 0.07
Observations (firms) 355 216 383 253 409 247 363 228

Remarks: For year 1996

5-years: year 1996–2001


4-years: year 1996–2000
3-years: year 1996–1999
2-years: year 1996–1998
1-year: year 1996–1997

For year 1997

4-years: year 1997–2001


3-years: year 1997–2000
2-years: year 1997–1999
1-year: year 1997–1998

For year 1998

3-years: year 1998–2001


2-years: year 1998–2000
1-year: year 1998–1999

For year 1999

2 years: year 1999–2001


1-year: year 1999–2000
a Significance at the 1% level (two-tailed test).
b Significance at the 5% level (two-tailed test).
c Significance at the 10% level (two-tailed test).

Table 6. Logistic Regression Results for Exclusive Related and Unrelated Acquisition Firms (Firm-Level Analysis).
Year 1996 1997 1998 1999

Observations 381 418 453 504 381 418 453 504 452 498 552 452 498 552 494 528 494 528 492 492

Likelihood 14.8395 14.5963 9.4373 5.0893 13.294 14.4448 10.432 5.2089 10.5679 11.6314 6.5834 14.212 14.0328 7.128 12.3096 11.9747 12.46 12.01 5.9521 6.3144
ratio
intercept 0.6431 0.7855 0.5715 0.4891 0.6579 0.7992 0.5671 0.4908 0.4043 0.729 0.4503 0.3922 0.7099 0.45 0.6168 0.7605 0.6037 0.7517 0.4099 0.4028
(8.81a ) (12.29a ) (8.37a ) (7.18) (9.39a ) (5.62a ) (8.23a ) (7.23a ) (4.61b ) (12.87a ) (6.69a ) (4.32b ) (12.11a ) (6.71a ) (10.77a ) (14.40a ) (10.38a ) (14.08) (5.07b ) (4.87b )
Industry growth
5-years −0.4779 −0.355
(7.86a ) (4.58b )
4-years −0.5957 −0.4297 −0.3052 −0.1216
(8.79a ) (4.89b ) (2.25) (0.35)
3-years −0.6316 −0.411 −0.7261 −0.581 −0.3418 −0.1626
(5.71b ) (2.47) (7.1a ) (4.45b ) (1.48) (0.35)
2-years −0.465 −0.1926 −0.229 −0.057 −0.9106 −0.7725 −0.2747 −0.1884
(1.41) (0.24) (0.44) (0.03) (5.26b ) (4.11b ) (0.61) (0.3)
Firm growth
5-years 0.174
(3.59c )
4-years 0.2199 0.1691
(4.32b ) (4.25b )
3-years 0.2422 0.1068 0.1664
(3.28c ) (2.03) (2.42)
2-years 0.3052 0.1884 0.1237 0.0677
(3.12c ) (2.25) (1.24) (0.18)
Change in 0.241 0.3358 0.3626 0.3612 0.3418 0.2399 0.242 0.2418 0.186 0.1492
market share
(3.38c) (3.94b) (3.87b) (3.19c) (6.14b) (3.56c) (2.69c) (2.53) (1.25) (0.53)
CEO 0.239 0.0596 0.2415 0.0646 0.2445 0.0683 0.2419 0.0608 −0.3481 −0.4206 −0.375 −0.3984 −0.4567 −0.388 −0.2988 −0.1485 −0.297 −0.143 0.0844 0.0777
compensation
(0.34) (0.02) (0.44) (0.04) (0.36) (0.03) (0.44) (0.03) (1.09) (1.79) (1.59) (1.4) (2.09) (1.69) (0.86) (0.23) (0.85) (0.22) (0.07) (0.06)
Accounting 0.0515 0.0271 0.0616 0.2475 0.0649 0.0245 0.0514 0.2568 0.5036 0.355 0.4724 0.4693 0.3395 0.468 0.906 0.7754 0.9136 0.7778 0.6667 0.6584
(0.01) (0.01) (0.03) (0.43) (0.02) (0.01) (0.02) (0.46) (2.3) (1.25) (2.32) (1.98) (1.13) (2.27) (6.02b ) (5.1b ) (6.12b ) (5.12b ) (4.45b ) (4.34b )
% Concordant 57 58.8 58.4 56 56.7 58.6 58.6 56.1 56.9 56.4 54.4 57.3 56.6 54.7 56.3 57 56.1 57.2 54.9 55.5

Remarks: For year 1996

5-years: year 1996–2001


4-years: year 1996–2000
3-years: year 1996–1999
2-years: year 1996–1998

For year 1997

4-years: year 1997–2001


3-years: year 1997–2000
2-years: year 1997–1999

For year 1998

3-years: year 1998–2001


2 years: year 1998–2000

For year 1999

2 years: year 1999–2001

a Significance at the 1% level.
b Significance at the 5% level.
c Significance at the 10% level.
The multivariate logistic regression models are formed to explain the firm's
acquisition decision using the industry growth potential, the motivations of increasing
market power and improving operating performance, the CEO's wealth-increasing
motivation, and the firm's motivation of avoiding goodwill by using the pooling-of-interests
method. Because of the high correlation between the firm's growth rate
and its change in market share, those variables are not included in the model
simultaneously, to avoid potential multicollinearity. The firm's industry growth rates,
as proxies for industry growth potential, are mostly significant in the hypothesized
direction in explaining the different types of acquisitions, except for 1999, possibly
because of the shorter measurement period. The firm's growth rate as a post-acquisition
performance measure was less significant than the industry growth rate;
post-acquisition performance helps explain the firms'
exclusive-related acquisitions in 1996 and 1997. Also, the change in market
share can explain the different types of acquisition in 1996 and 1997. In
contrast, the accounting method can explain the different types of acquisition
only in the later test periods. This result may be explained by the heavy use of the
pooling-of-interests method in the late 1990s because of the imminent abolishment
of the method. As found in Table 5, none of the CEO compensation ratios
can explain the firms' acquisition types in the models. Overall, the fit of the
multivariate models explaining the firm's acquisition decision is weak, especially
in the later test years (Table 6).

5. CONCLUDING REMARKS
Through M&A, firms can diversify to balance cash flows and spread business
risks, improve efficiency or effectiveness by reducing competition,
and foster their growth by creating more market power. Depending on
their corporate strategic motivations, firms can acquire unrelated businesses
and/or related businesses. A diversification motivation is more likely to lead to
unrelated acquisitions, while related acquisitions are more likely to result
from the motivation of reducing competition and/or creating market power. Thus,
this paper attempts to identify the firms’ different strategic motivations for their
M&A activities by relating the corporate acquisition decisions to their assessment
of industry growth potential, post-acquisition firm growth, market share change,
choice of accounting methods, and CEO’s stock compensation composition.
The empirical tests reveal that in all acquisition cases, the industry growth
opportunities play a key role in choosing between unrelated acquisition cases and
the related acquisition cases, regardless of firm membership. Both the univariate
comparison test and the multivariate logit regression show that we tend to find a higher
industry growth rate for unrelated acquisition cases than for related acquisition
cases, which is consistent with product-market theory indicating higher risk
under unrelated diversification acquisitions. Also, the choice of accounting method
differed between the two types of acquisition cases. Firms in the related
acquisition cases tend to favor the pooling-of-interests method more than those in
the unrelated acquisition cases, although most acquisitions were accounted for using
the purchase method. Because this analysis includes all acquisition cases regardless
of firm membership, a firm can have related and unrelated acquisitions in the same year.
In the firm-level analysis, however, I excluded firms that had both
related and unrelated acquisitions in the same year, so that only exclusive-related
acquisition firms and exclusive-unrelated acquisition firms appear in each
year. This test also shows that the type of acquisition decision was
related to the acquiring firm's industry growth rate. The post-acquisition performance
measures, using firm growth and change in market share, were consistent in
explaining the types of exclusive related and unrelated acquisitions in the earlier
test periods (i.e. 1996 and 1997). The accounting choice in the firm-based analysis
was compatible with the type of acquisition, as in the industry-wide analysis,
except in the earlier years. However, the CEO's compensation composition was not
related to the different types of acquisition, although previous studies suggested that
a firm's growth opportunities affect the CEO's stock compensation composition ratio.
This study has some limitations. First, there is potential misclassification
between related and unrelated acquisitions, because the two-digit SIC
codes of both acquiring and acquired firms were used mechanically. Second, for
the firms' assessment of industry growth potential, the ex-post industry growth
rate was used instead of an ex-ante variable; because of this, the test variables for
the more recent test periods have a short measurement time span, limited by data
availability. Last, this study used compensation data confined to S&P 1,500
companies, while the acquisition cases cover most of the public firms in the SDC's
Worldwide Acquisitions and Mergers Database.

NOTES
1. Thus, the exclusive related-acquisition firms in one test period could be classified as
exclusive unrelated-acquisition firms in another test period.
2. Dess, Ireland and Hitt (1990), Hoskisson and Hitt (1990), Davis and Thomas (1993),
and Brush (1996).
3. Walker (2000).
4. Berger and Ofek (1995).
5. In the late 1990s, the FASB indicated that the pooling-of-interests method was no longer
an appropriate accounting principle for business combinations. The impending accounting rule
change seems to have pushed many firms involved in business combinations to use the method
in the late 1990s.

ACKNOWLEDGMENTS
This research was sponsored by Lubin Summer Research Grant (2002). I appreciate
the comments and suggestions from J. Lee and M. Epstein (the editors), as well as
participants at the 2003 AIMA Conference. I also thank Ryan Shin for excellent
research assistance.

REFERENCES
Berger, P. G., & Ofek, E. (1995). Diversification’s effect on firm value. Journal of Financial Economics,
37(January), 39–65.
Brush, T. H. (1996). Predicted change in operational synergy and post-acquisition performance of
acquired businesses. Strategic Management Journal, 17(January), 1–23.
Davis, R., & Thomas, L. G. (1993). Direct estimation of synergy: A new approach to the diversity-
performance debate. Management Science, 39(November), 1334–1346.
Dess, G. G., Ireland, R. D., & Hitt, M. A. (1990). Industry effects and strategic management research.
Journal of Management, 16(March), 7–27.
Hoskisson, R. E., & Hitt, M. A. (1990). Antecedents and performance outcomes of diversification: A
review and critique of theoretical perspectives. Journal of Management, 16(June), 461–509.
Narayanan, M. P. (1996). Form of compensation and managerial decision horizon. The Journal of
Financial and Quantitative Analysis, 31(December), 467–491.
Porter, M. E. (1987). From competitive advantage to corporate strategy. Harvard Business Review,
65(May–June), 43–59.
Porter, M. E. (1996). What is strategy? Harvard Business Review, 74(November–December), 61–78.
Scanlon, K. P., Trifts, J. W., & Pettway, R. H. (1989). Impacts of relative size and industrial relatedness
on returns to shareholders of acquiring firms. The Journal of Financial Research, 12(Summer),
103–112.
Walker, M. M. (2000). Corporate takeovers, strategic objectives, and acquiring-firm shareholder
wealth. Financial Management, 78(Spring), 53–66.
Young, B. (1989). Acquisitions and corporate strategy. Financial Management, 67(September), 19–21.
THE BALANCED SCORECARD:
ADOPTION AND APPLICATION

Jeltje van der Meer-Kooistra and Ed G. J. Vosselman

ABSTRACT
Technological advances and increasing competition are forcing organ-
isations to monitor their performance ever more closely. The concept
of the balanced scorecard offers a systematic and coherent method of
performance measurement that in particular concentrates on assessing
present performance in the light of an organisation’s strategy and takes
into account the importance of the various policy aspects. In this paper we
study the extent to which the concept contributes to the desired improvement
of performance. To this end, we examine the motives for adopting the
concept and the decision-making process around this adoption. We study the
functioning of the balanced scorecard as a means to control performance,
assuming that its functioning is linked to an organisation’s problems and
is influenced by other control instruments used. This is why we have conducted
case research.

INTRODUCTION

In the management accounting discipline, performance measurement and management
have received a great deal of attention over the past few decades. Various
players in this field have developed “new” performance and control systems.
Balanced scorecards (Kaplan, 2001a, b; Kaplan & Norton, 1992, 1993, 1996a, b),

performance pyramids (Judson, 1990; Lynch & Cross, 1991), integrated perfor-
mance measurement systems (Nanni et al., 1992): these are only some examples
out of many. Some even seem to compete with one another, like the balanced
scorecard and the performance pyramid. In both theory and practice the balanced scorecard in particular has been at the centre of interest, possibly partly because its authors are well known in the consulting profession.
The balanced scorecard is an instrument with which the performance of
organisations can be measured systematically and coherently. In recent years
attention has shifted more and more from measuring performance towards
managing it. Measuring is a means to achieve eventual performance management
and control. Kaplan and Norton (1996a, b) claim that the aim is discovering
cause and effect relations between the various areas of organisational activity and
the organisational outcomes. Therefore the balanced scorecard concept defines
critical success factors and performance indicators which reflect performance.
The thinking behind this is that critical success factors determine the realisation
of the strategic aims of the organisation and that the performance indicators are a more detailed concretisation of these factors. The indicators show which activities have to be carried out now and in the near future in order to realise the aims successfully.1
This paper is about the adoption and application of the balanced scorecard concept at the level of a specific organisation. For this organisation the adoption, implementation and use of the balanced scorecard have been examined. The paper
is theoretically informed by institutional theory as concretised by Abrahamson
(1991, 1996) and Abrahamson and Rosenkopf (1993). Using institutional theory
we will examine the process of adopting, developing and using the balanced
scorecard at NedTrain, a Dutch organisation in the field of public transport.
Furthermore, the NedTrain study is informed by theoretical notions on control
concepts underlying the use of the balanced scorecard concept. Particularly the
paper will draw on control concepts distinguished by Simons (1994, 1995). By
examining the adoption as well as the implementation and use of the balanced
scorecard we hope to get an insight into highly relevant questions to professional
practitioners who are confronted with decision-making connected with perfor-
mance and control systems. Is the adoption indeed a consequence of deliberate
decision-making by professional practitioners? If so, does the balanced scorecard
live up to (either implicit or explicit) expectations in the adoption phase? How
does it affect firms’ operations? And if not, what then drives the adoption? And
what happens with the balanced scorecard after the adoption?
The issues raised in this paper are not only relevant to academics, but also
to practitioners. Our study in particular meets Lukka's (1998) criticism of much current management accounting research: that it is insufficiently aimed at the accounting and control possibilities that would enable it to intervene in a firm's operations. We
would like to add that in current management accounting research the professional controller (or management accountant) has a very low profile. In this study the controller, being the professional responsible for the economic rationality of all business processes (including those surrounding the adoption, development and application of a balanced scorecard), is given a high profile.
The paper is organised as follows. In the following section we will briefly present
the balanced scorecard’s origin and, drawing on Simons (1994, 1995), go into
choices in the design of the control system around the scorecard, with particular
attention to the role of the controller (or management accountant). In the next
section, drawing on institutional theory, we will describe the motives underlying
the adoption of the balanced scorecard, after which we will justify the case research
method and procedures chosen. In the last section but one we will extensively report
on our case research and describe the adoption and application processes of the
balanced scorecard. Finally we will make some general remarks about the major
findings of the case study.

BALANCED SCORECARD AND ORGANISATIONAL CONTEXT

Control Concepts

It is now some fifteen years since Johnson and Kaplan wrote the book Relevance Lost: The Rise and Fall of Management Accounting. It proved to be an important milestone in what is at present often called the Relevance Lost Movement. In reality, this movement had already started before 1987 with two papers by Kaplan in the leading journal The Accounting Review (1983, 1984). Kaplan asserts that systems and procedures of cost accounting and management control were originally developed for firms manufacturing mass-produced standard products. These are simple, direct cost information and responsibility accounting systems, aimed mainly at minimising production costs. In his view they are not suitable for
modern industry, which is especially characterised by client-specific production,
short life-cycles, CAD/CAM technology and much “overhead.” The solution
proposed consists of a number of elements, including refining costing and cost cal-
culation techniques, prolonging the time horizon of control techniques and a shift
away from a firm-centred orientation to a value chain approach. Another important
element is the balanced scorecard, involving a widening of the scope of accounting
reports with non-financial information from four perspectives: the customer, the
firm’s internal processes, innovation and financial aspects. From each perspective
a restricted number of critical success factors are formulated on the basis of
the organisational strategy, after which performance indicators and standards are determined.
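
To make this structure concrete, here is a minimal sketch in Python (purely illustrative; the success factors, indicators and standards below are hypothetical and are not taken from Kaplan and Norton or from the case discussed later). It models a scorecard as perspectives, each holding a restricted number of critical success factors with their performance indicators and standards.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Indicator:
    name: str
    standard: float                  # the norm agreed in advance
    actual: Optional[float] = None   # the realised value, filled in per reporting period

@dataclass
class CriticalSuccessFactor:
    description: str
    indicators: List[Indicator] = field(default_factory=list)

@dataclass
class Perspective:
    name: str                        # customer, internal processes, innovation or financial
    success_factors: List[CriticalSuccessFactor] = field(default_factory=list)

# A hypothetical scorecard, showing one critical success factor per perspective.
scorecard = [
    Perspective("customer", [CriticalSuccessFactor(
        "reliable delivery of maintained rolling stock",
        [Indicator("trains returned on schedule (%)", standard=95.0)])]),
    Perspective("financial", [CriticalSuccessFactor(
        "cost control of maintenance services",
        [Indicator("overhauls completed within budgeted cost (%)", standard=90.0)])]),
]

Modelled in this way, each perspective can be elaborated or pruned independently, which mirrors the idea of formulating a restricted number of critical success factors per perspective on the basis of the strategy.
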
Bjørnenak and Olson (1999) hold that the contribution of the above-mentioned
accounting innovations is that information can be given on more objects and that
cause and effect relations can be described in greater depth. Traditionally there
was only information available about production costs. New techniques, such
as Activity Based Costing, Life Time Costing and Target Costing, also provide
firms with information per buyer, per distribution channel and market segment, as
well as an insight into the permitted costs of future products and the costs incurred during the life of products. Another development, called Strategic Management
Accounting, aims at the firm itself also gathering systematic information on the
costs of buyers, competitors and the like, in addition to cost information on the
firm itself, thus looking beyond the firm’s boundaries. In order to acquire an
insight into cause and effect relations financial information is supplemented by
non-financial information, permitting links to be discerned between activities and
financial results.
The balanced scorecard is a performance measurement system which tries to
respond to the growing complexity of the environment and the activities of the
firms. Traditionally the performance measurement systems are finance-centred
and examine realised performance after the event. Realised performance is
compared with norms formulated in advance. An analysis of the difference may
be a reason for taking action. In a stable environment such feedback systems may
be useful, but they are not suitable when the activities of firms keep changing.
An insight into the relation between (expected) activities and (expected) financial
results provides companies with the means to manage performance. In this process, relations with participants outside the company, such as suppliers, business outlets and buyers, are also considered. The balanced scorecard aims to give this insight by translating objectives with respect to all relevant policy aspects into activities, by means of critical success factors and performance indicators. The balanced scorecard's perspective is forward-looking and long-term.
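
As a purely illustrative sketch of the feedback logic of such traditional systems (the indicator names and figures below are hypothetical, and for simplicity higher values are assumed to be better), realised performance can be compared with the norms formulated in advance and any shortfall flagged as a possible reason for taking action:

# Hypothetical indicators stored as name -> (standard, realised value).
indicators = {
    "trains returned on schedule (%)": (95.0, 91.0),
    "customer satisfaction score": (7.5, 7.8),
}

def feedback_check(indicators):
    """Compare realised performance with the norms and report any shortfall."""
    for name, (standard, actual) in indicators.items():
        if actual < standard:
            # The difference may be a reason for taking action.
            print(f"{name}: realised {actual} is below the standard {standard}; analyse the variance")

feedback_check(indicators)

A forward-looking, balanced scorecard-based system would add non-financial and external indicators to such a set and use them to discuss expected, rather than only realised, performance.
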
With respect to the design of the balanced scorecard-based control system two
views can be distilled from Kaplan and Norton's material, without it being entirely clear which view they themselves adhere to. A first view is that the control system, as is the case with "responsibility accounting," is dominated by cybernetic "feedback-control" systems. Such systems will undoubtedly result in certain behaviour patterns. Judging from the findings of much "behavioural accounting" research, dysfunctional behavioural effects and forms of information distortion should by no means be excluded (e.g. Argyris, 1990). To the extent that the balanced
scorecard is seen as an addition to the traditional systems of “responsibility
accounting” it is consequently also an instrument fitting a formulated strategy.
For, in Anthony's (1988) reasoning, "responsibility accounting" is an instrument for the promotion of an effective implementation of the chosen strategy. In our
opinion, many publications in this connection argue from a rather mechanistic
point of view. The relative importance of the various dimensions of the strategy, and hence of the various performance indicators, should then be defined beforehand in a consistent plan. In many cases this misjudges the complexity of the company's activities and the uncertainty of the environment. Frequently, the major strategic dimensions are only grasped gradually.
This takes us to the second view on the design of the balanced scorecard-based control system. The scorecard might prove to be more effective as part of an interactive control system than as part of a diagnostic control system. According to Simons
(1994, 1995) interactive systems are future-oriented. Their aim is a continuous
explication of ideas and strategically important information in a (rapidly) changing
environment. So in fact there is a “feed-forward” system. In this sense they have to
be distinguished from diagnostic control systems. The latter consist for example
of “responsibility accounting” systems, where revealing future-oriented strategic
information is not required. On the contrary, the focus is on accounting for activities a posteriori, albeit with the explicit aim of learning from them.
It is, however, quite conceivable that balanced scorecard-based control processes play a major role in the formulation and continual reassessment of strategy and a much less important role as a means to implement already determined strategies effectively. Put differently: they may be an instrument for strategy development rather than a strategic measuring instrument. Another characteristic of an interactive control system is that the strategic discussion takes place throughout the organisation. It is not the privilege of the (senior) management only; the aim is to deploy everybody's knowledge and skills in the organisation. Sharing strategic and operational knowledge and skills is considered important, ensuring that the relations between views and operational consequences are discussed and made clear to everyone. The balanced scorecard is in such a case not an instrument for the management only, but for the entire organisation. This requires designing a coherent set of cards involving everybody in the organisation.

The Role of the Controller

It is the controller’s core business to prepare and find arguments for particular
choices of control systems. It may be assumed that, depending on the controller’s
position and conception of duty, the balanced scorecard-based control concepts will
be planned differently. Jablonsky, Keating and Heian (1993) have done research
into the changing role of the “financial executive” in internationally operating
companies in the United States. In view of the job description of the position of
“financial executive” the conclusions are also relevant to the position of controller
in European companies. Interviews in six companies and a survey among over 800
companies, in which the opinions of financial as well as non-financial managers
were elicited, result in a distinction between two profiles: that of the “corporate
policeman” and that of the “business advocate.” The core values of the “corporate
policeman” profile are: “oversight and surveillance, administration of rules and
regulations, and impersonal procedures” (p. 2). The core values of the “business
advocate” profile are: “service and involvement, knowledge of the business, and
internal customer service” (p. 2).
The controller as "corporate policeman" is an extension of the senior management and will introduce an instrument like the balanced scorecard into the organisation top-down. The most important role of the balanced scorecard will then be accounting for the delegated powers, with an emphasis on realising performance standards. These standards are derived from the strategic policy drafted by the
senior management.
The controller as “business advocate” is a member of the businesses and
supports the business managements. This controller is expected to know about
the activities of the businesses, so that he/she can contribute to the discussions
about the developments and changes in the business. As a team member he/she
advances ideas about the financial organisation that most adequately fit the
changes taking place in the business. The balanced scorecard will in the first place
be an instrument to achieve a joint formulation of a strategy for the businesses and
a more detailed elaboration of strategic policy. By designing and elaborating the balanced scorecard in mutual consultation, a strategic learning process develops. This brings about communication of the strategy and enables all participants to check which contributions they are making to realising the chosen strategy (also cf. de Haas & Kleingeld, 1999). The balanced scorecard contributes to the growth of a generally shared vision of the organisation and of the way this vision can be realised. By determining the standards, everyone's contribution to this becomes clear. Such a use of the balanced scorecard is much less aimed at accountability.
The controller as an extension of the senior management fits into the traditional
concept of “responsibility accounting.” The controller in the capacity of “business
advocate” fits much more into the interactive control system discussed above.
Design and functioning of the balanced scorecard will run closely parallel with
the role of the controller.
DECISION-MAKING WITH RESPECT TO THE ADOPTION OF THE BALANCED SCORECARD

In this section we will discuss the decision-making with respect to the adoption of the balanced scorecard. We are interested in the motives leading to such a decision. In addition, we will examine to what extent there is a relation between these motives and the functioning of the balanced scorecard.
Like March (1978), Abrahamson (1991, 1996) and Abrahamson and Rosenkopf (1993) we start from the assumption that organisations, like individuals, are not isolated and hence allow themselves to be influenced by the behaviour of other
organisations. Abrahamson (1991) distinguishes four perspectives which can
explain why organisations do, or do not, decide to adopt new accounting tech-
niques. The first perspective, “the efficient choice,” is especially driven by internal
incentives. The other three perspectives, “forced-selection,” “fashion” (because
leading organisations use certain methods) and “fads,” are in particular inspired by
organisation-external incentives. The efficient-choice perspective will be discussed
in the next subsection “Internal incentives”; the other perspectives will be discussed
in the subsection “External incentives.” The last subsection will examine the influ-
ence of the various motives for adoption on how the balanced scorecard functions.

Internal Incentives

It is questionable whether decision-making processes regarding the adoption of the balanced scorecard can proceed along precisely drawn lines of technical effi-
ciency. Technically efficient decision-making in the first place presupposes that
decision-makers in organisations are relatively certain of their aims. In the case of
the balanced scorecard this might be the case when enhancing the effectiveness
and efficiency of the operations is the goal. But then the decision-maker has
to be comparatively certain of the degree of effectiveness and efficiency with
which the balanced scorecard can realise the objectives as formulated. For
the decision-maker to judge the potential effectiveness and efficiency of the instrument, he will have to have an insight into causal aims-means relations and be able to quantify these insights. Thus, he will as a matter of fact have
to know beforehand how precisely the balanced scorecard is going to be designed
and used in the context of a control system, what exactly the effects will be on the
behaviour patterns of managers and other members of the organisation and how
such effects are translated into improving the performance of the organisation.
Only when all this has become clear can a relatively reliable cost-benefit analysis
of the balanced scorecard be made. It is no wild conjecture to suggest that this
certainty cannot be found. Put differently: owing to the uncertainty and complexity
the professional decision-maker will have to sail into uncharted waters.
The impossibility of making a reliable cost-benefit analysis beforehand
does not mean that in practice there are no demonstrable reasons of technical
efficiency to consider the adoption of a balanced scorecard. One of them is that the performance measurement systems in use are deemed to be insufficiently effective. Traditional systems, for instance, are on the whole strongly financially oriented.
They assess the performance in the short term, and in doing so hardly link up with
strategic policy. The balanced scorecard does link up with long-term policies and
assesses performance in the light of these policies. By assessing the performance
from various perspectives the emphasis is not exclusively placed on financial
performance. Information about non-financial performance moreover provides an
insight into the causes of the financial results.
Moreover, an important reason for adopting the balanced scorecard may be found in changes within the organisation and in its environment. It is those changes which internally necessitate a reconsideration of existing control systems. These changes may take place in market conditions, due to which for
example existing products come under pressure. Changes may also be initiated
by technological developments, due to which existing products become obsolete
and new production methods become possible. Developments in information
technology can also bring about changes, because the data gathering and process-
ing possibilities increase and information becomes accessible at all levels and
workplaces in the organisation. The growth of an organisation, in size as well as ge-
ographical extent, can have consequences for the way activities are organised and
hence controlled.
It is therefore at least plausible that Kaplan and Norton developed the balanced
scorecard in reaction to changes in production and service organisations. It may
be assumed that, on the level of the organisation, the effectiveness criterion
completely supersedes the efficiency criterion when adoption is being considered.
Put differently: professional decision-makers will not need precise cost-benefit
analyses in connection with the adoption of the balanced scorecard. They are sim-
ply looking for effective systems. It is only gradually that the “costs” of the system
will appear.

External Incentives

In addition to internal incentives, decision-making in connection with the introduction of the balanced scorecard can especially be influenced by external
incentives. An organisation may be forced to introduce a new instrument. In
particular governmental organisations can have so much (legal) power that they
can impose the use of new instruments on other organisations.
Mimetic behaviour can also lead to the adoption of new instruments. Sometimes
organisations belong to the same organisational collective, that is, to the same
group of competing organisations with respect to performance and/or raison
d’être. Imitating this group can lead to a so-called “fad.” Then the decision
to adopt the balanced scorecard is not based on effectiveness and efficiency
considerations concerning the instrument, but on the simple fact that (many)
other organisations have already done the same. In such processes two phases
have been distinguished (DiMaggio & Powell, 1983; Tolbert & Zucker, 1983). In
the first phase efficient choice behaviour is uppermost; especially the assessment
of the technical effectiveness and efficiency is important. In a second phase the
real fad starts to take off. As more and more organisations adopt the balanced
scorecard the attractiveness for a non-adopter increases. This may be connected
with the pressure from “stakeholders” like customers, suppliers and capital
providers. A high incidence of the balanced scorecard can make this instrument
rational for stakeholders: they associate adoption of the balanced scorecard with
rational decision-making. Inversely, non-adoption can make them suspect that
the organisation’s management is incapable of rational decision-making (also cf.
Meyer & Rowan, 1977). This may result in their discontinuing their contributions
to the organisation. Therefore many decision-makers may be expected to keep
on the safe side and decide on adoption after all. For, as more organisations in a collective have chosen to adopt the balanced scorecard, adoption becomes in the interest of a specific organisation's continuity. Political factors that are difficult or impossible to calculate will then make the decision to adopt rational after the event.
Decisions to adopt within a collective of organisations for that matter do not
always have to be fad- or fashion-driven. Quite possibly, information from early
adopters may enable late adopters to decide on grounds of technical efficiency.
Such information, for this to happen, does have to be made available and actively
influence decision-making by non-adopters.
Mimetic behaviour may also create a fashion in response to actions by trend-
setting organisations or networks. The latter include researchers at universities
and business schools, and also organisation consultants. They disseminate their
ideas by means of various media, like professional journals, books, seminars and
personal communication and may be considered to be suppliers in a market for
balanced scorecards (and other administrative innovations). They are economi-
cally active actors, whose self-interest is paramount. Processes of fashion setting,
accompanied by standardisation of products, assist the suppliers. This improves
the marketability of the products. The demand in the market is from professionals
from the business world and governmental organisations. They do wish to take
their decisions to adopt (purchase decisions) on the grounds of efficiency and effectiveness motives, but have to move into uncharted waters because there is uncertainty about aims, aims-means relations and future conditions of the environment.
In short, there is ambiguity (March & Olson, 1976). Fashion setting processes
can help the buyer, because they may give the decision to adopt a semblance of
rationality. According to Abrahamson (1996) fashions and fads only have a chance
when there arises not only the collective belief that adoption of the balanced score-
card is rational, but also when there is progress. This means that, in the view of the
stakeholders, a clear improvement vis-à-vis the original situation must take place.

Adoption Motives and the Functioning of the Balanced Scorecard

Fashions and fads are not necessarily good or bad for organisations. The adoption of
the balanced scorecard can for example enhance a company’s political image. The
attractiveness for (potential) stakeholders increases in such cases, with inherent
positive economic consequences for the organisation. Adoption is then mainly a
legitimising decision for the stakeholders. However, adoption can be more than an
act of legitimising. It can kick-start a learning process, the fruits of which may be
reaped by the decision-maker.
Decisions to adopt which are mainly externally inspired, and which the participants hardly deem to contribute to increasing the effectiveness and efficiency of the activities, will lack broad internal support. As long as external legitimisation has hardly any consequences for what is being done internally there will be "loose coupling" (DiMaggio & Powell, 1983; Meyer & Rowan, 1977). The balanced scorecard will then have little internal significance, because the concept will be elaborated superficially and the information it generates will play no significant role in decision-making and in influencing behaviour.
If the internal participants are of the opinion that performance has to be improved
and are convinced that the balanced scorecard can make an important contribution,
adoption may be expected to be followed by the actual implementation and use of
the concept. The internal participants will be prepared to invest time in designing
scorecards. They will also use information from the cards when taking decisions.

RESEARCH METHOD AND APPROACH


The aim of the study is to acquire an insight into the decision-making process in
connection with the adoption of the balanced scorecard and the way it functions.
We also aim to look at the role of the controller in the adoption process and the
functioning of this instrument. Such an insight can only be gained to a sufficient
degree if in-depth research is done into a real-life situation. Hence, the choice was
made for a single case-study. In the last subsection we indicated that, when external
legitimisation is the predominant adoption motive, the instrument will only have
a symbolic function. We emphatically want to study the internal functioning of
the instrument and want to know if applying it makes the expected contribution to
improving performance. Therefore, we opted for a situation from practice where internal incentives also played a role in the decision to adopt. Moreover, we were only interested in a practical case in which the balanced scorecard had been tested for some years.
This means that the decision to adopt the balanced scorecard took place
some years ago and that we, for information on this ex-ante phase, will have to
rely on the memories of people closely involved with the decision. Hence, an
interpretation of the decision a posteriori cannot be entirely excluded. Such a distortion of information can be minimised by interviewing several persons about this phase and by consulting written documents.
People play various roles in organisations and will therefore interpret processes,
activities and information differently. The balanced scorecard is a management
accounting instrument. Such instruments are regarded as being among the tools
of the controller. Nonetheless, the instrument is expected to be a means with
which members of an organisation can control performance. The position of
controller is a supportive one and differs from the line functions. Because
of these different roles acquiring an insight into how the balanced scorecard
functions requires the perspective of the controller as well as the perspective from
the line.
We did the case study at the Dutch Railways. This company had a long tradition as a state-owned company. In the early nineties the government decided to change the Dutch Railways into a private company. Within the Dutch Railways we did
research in the service unit NedTrain, which in the context of the privatisation
introduced the concept of the balanced scorecard. Here we conducted interviews
with the central controller and line managers and controllers of the separate units.
These interviews were conducted from October 1998 to February 2000. Moreover
written material has been studied.
In the following section we will report on the NedTrain case-study. We will start
by describing the activities of NedTrain. Further, we will discuss the changing
positioning of NedTrain due to the privatisation process Dutch Railways has
been undergoing since 1995. Then we will describe the changes NedTrain has made in order to function as a business unit responsible for its own results. In this change process the balanced scorecard has played an important role.
ADOPTION AND APPLICATION OF THE BALANCED SCORECARD AT NEDTRAIN

NedTrain Activities

NedTrain is an independent operating unit within the Dutch Railways. NedTrain is an internal service unit, whose purpose is developing and designing, maintaining,
cleaning, refurbishing and overhauling rolling stock. Its most important products
are technical advice during procurement and construction, inspections and
repairs, cleaning, refurbishment and overhaul. NedTrain has an engineering unit,
NedTrain Consulting, two overhaul companies incorporated in the Refurbishment
and Overhaul business unit, and five maintenance companies and forty service lo-
cations incorporated in the Maintenance and Services business unit. The overhaul
companies provide long-term maintenance including overhaul and refurbishment
of existing stock and equipment. The maintenance companies and the service
locations provide the short-term maintenance. NedTrain mainly offers service to
the Public Transport business unit, which is directed at transporting people. Until
the end of 1999 NedTrain also offered internal services to Goods Traffic, the unit
providing goods transport in the Netherlands. This service is still being provided,
even though this unit left the Dutch Railways by the end of 1999 and has since
been part of the Railion company. NedTrain's staff comprises about 3,800 FTEs.

From State-Run Organisation to Profit Organisation

The Dutch Railways as a state-run organisation was bureaucratic, centralist and functionally organised. Customer-supplier relationships were absent or almost
non-existent and marketing was given little attention. Attention was focussed
entirely on technology and safety. The Dutch Railways were a budget-driven
organisation and had to make sure that the costs remained within the budgets
agreed with the government. For the Dutch Railways the government was the most
important stakeholder. The government played a regulating and financing role.
They determined the prices, the extent of the services and the budget. The Dutch
Railways were the country’s only providers of rail transport. The Dutch Railways
units, including the service unit NedTrain, were considered to be cost-centres. They
too had to ensure that their costs did not exceed the annually agreed budgets. If,
contrary to expectations, this did not work out, adjustments were negotiable. Consequently, there was little or no control by means of financial information.
In the early nineties a commission of experts advised the government on the
place of the Dutch Railways. In the framework of the tendency started in the
eighties towards hiving off and privatising numerous state-run activities this
commission's remit was to study a hive-off of the Dutch Railways and to find out how the Dutch Railways' monopoly could be transformed into a market position in which competition plays a role. According to its recommendations the Dutch
Railways should develop into an independent, enterprising and customer-oriented
company. It would eventually be listed on the stock exchange in order to cut the fi-
nancial links with the government as well. This recommendation had far-reaching
consequences for the position of the Dutch Railways and its internal functioning.
It implied a radical change with respect to its protected position in the past. The
proposal was to cut up the Dutch Railways into an operating part, to be privatised,
and an infrastructural part, which was to remain state-owned. This split, which was
also required by European regulations, has by now been put into effect. Afterwards,
additional agreements were made in order to curb Dutch Railways’ monopoly
position. The Dutch Railways were to concentrate on the core network. Regional
lines were opened up to independent transport companies. After the departure of
Goods Traffic by the end of 1999 the Dutch Railways has further restricted itself
to passenger transport only.
The Dutch Railways operating units were also confronted with these radical
changes. The service unit NedTrain became an independently operating unit,
which had to pay its own way. It had to become result-responsible, enterprising
and customer-oriented. It was not only to receive internal customers, but would
also quite clearly have to behave as a market party and try to attract customers
in competition with others. Before this, NedTrain had owned all rolling stock
and decided what maintenance had to be carried out. Public Transport and Goods
Traffic, the users of stock and equipment, had hardly any influence on all this. This
situation now changed. Public Transport now became the owner and was hence-
forth to be buyer of maintenance services. Thus, a customer-supplier relationship
arose which was unthinkable before. The owners of the stock and equipment can
now freely decide to buy external services. NedTrain now clearly has to take into
account the wishes of the customers and be able to offer its services in conformity
with the market. This turnaround had to be completed in a couple of years.

Changes in the Organisational Structure

In 1993 NedTrain, on the eve of the hiving-off of the Dutch Railways, started
to reconnoitre the changes to take place. To this end a business plan had been
drafted, defining new markets. However, quantified objectives were all but absent. The process of creating a result-responsible unit was described and the necessity of developing and implementing a market orientation was discussed. Result-responsibility
requires knowledge of costs and benefits, the various buyers and their wishes
and an insight into inter alia one’s market-position and competitors. The starting
point was that NedTrain did not possess the accounts with which the financial
results could be related to operational activities. For this, the necessary basic
accounts were lacking. There were no assets and debtors accounts, nor was there
a profit and loss account. Nor was information available about buyers, markets
and competitors. In the past, the focus had been exclusively on technical aspects. This was also reflected in the composition of the company's management,
where financial and commercial expertise did not feature.
NedTrain’s management realised that without external assistance the changes
would not be successful. Therefore, an external consultant was hired, who made
an important contribution to the design of the changeover process. In addition, in
the course of time, new kinds of expertise have been taken on board. NedTrain’s
management was supplemented with a financial and commercial manager. In
NedTrain’s business units, too, a number of new people were appointed on the
managerial level.
The structure of the organisation was adjusted. NedTrain used to be an internal
service unit within the Dutch Railways. It had its own budget and was responsible
for the costs, which should stay within the budget. Now NedTrain became a
business unit with responsibility for its own results. Before the privatisation
NedTrain delivered its services only internally and, as owner of the rolling
stock, determined itself the quality and level of services. After the privatisation
NedTrain was no longer the owner of the rolling stock. Now it delivered its
services to both internal and external customers, who decided about the quality
and level of services. In order to make all the participants aware of the changing
position of NedTrain its management introduced own-result responsible units.
This was a radical change, as NedTrain used to have a functional structure with
many operating units. Thus, the business unit Maintenance and Services was created for short-term maintenance, the business unit Refurbishment and Overhaul for long-term maintenance and the business unit NedTrain Consulting for advice on the purchase of new, and the conversion of existing, stock and equipment. Each
unit was given its own management, including a controller. Since a withdrawal to
the core network, and hence a decrease in service to the internal buyers, had been
anticipated, the new policy planned for an increase in service to third parties.
Moreover, by the end of 1999 Goods Traffic left the Dutch Railways and became an
external customer. Figure 1 gives an overview of the new organisational structure.

Adoption of the Balanced Scorecard

Fig. 1. Organisational Structure NedTrain.

NedTrain had to become a profit oriented unit with its own customers. In the past performance was expressed in technical terms and, as the unit determined the quality and level of services itself, it did not need to be aware of customers'
wishes. The new orientation required information about costs and revenues of
the various services and per customer, and information about customers’ wishes
and satisfaction. Because the level of internal services was expected to decrease
NedTrain was also expected to focus on external customers. Therefore, information
about market developments and competitors had to be collected.
The external consultant suggested the use of the balanced scorecard as one of
the vehicles of change for NedTrain. As discussed before, the entire complex of
changes should bring about a decrease in technical domination and place financial and commercial considerations more at the centre. The aim was to make the engineers realise as quickly as possible that NedTrain's operations had to generate a profit and contribute to Dutch Railways' profitability. To achieve this,
it is no longer enough to only look at technical aspects and strive for technical
perfection. The wishes of customers and the financial consequences of the
technical proposals must also be taken into consideration.
In addition to this an important objective was to force the internal orientation
towards a more external one: customers, competitors and market developments
should play an important role in decision-making. The balanced scorecard fits these
objectives excellently. This concept is based on the thinking that performance is to
be examined from various perspectives, in which process not only internal perspec-
tives are relevant but also external ones as mentioned above. Further, the concept
recognises the importance of learning and adjusting to new developments. This
was an important feature as NedTrain’s management knew that the change pro-
cesses would create an unstable situation for a longer period in which continuous
adaptations to new insights and environmental changes would be at the fore.
The eventual decision to adopt the balanced scorecard was taken in common
consultation by the NedTrain management team, which includes the central
management and the management of the business units, the central controller
and the consultant. At the moment of adoption there was no clear vision on the
design of the control system connected with the balanced scorecard. Nor had
the management team developed a clear strategic vision on NedTrain’s future
position in the market and within the Dutch Railways. It was thought necessary
for the organisation to become as quickly as possible fully aware of the fact that
technology is also expensive and that there should be an external orientation. The
balanced scorecard was considered a good instrument for putting these subjects
on the agenda. Design, implementation and usage costs of the instrument played
no role in the decision-making. According to the central controller: “the feeling
was to start tentatively; then you cannot go wrong over that.”

Implementation

The implementation of the balanced scorecard proceeded as follows. By the end of 1993 a "kick-off meeting" was organised, with the management
team and all managers and controllers of the various units being present. In the
spring of 1994 the concept of the balanced scorecard was presented unit by unit to
all managers in workshops. Per unit so-called tandems were used: a local controller
and an external consultant acted as pioneers of the balanced scorecard concept.
Next, working groups were formed in each of the several companies to formulate
critical success factors, which were drafted per perspective. So there was no inte-
grated approach for drafting the critical success factors for the various perspectives.
Occasionally, the sheets with critical success factors per perspective were simply
stapled together. Dozens of critical success factors and their indicators were not
unusual. The local management including the local controllers made a selection
from all these factors. NedTrain’s managing director and the central controller, in
cooperation with the managers of the various units, determined the critical success
factors and their indicators per business unit, there being about 15 such indicators
per business unit and between two and three applying across the board. In these discussions some important indicators were removed, namely those providing an insight into the capacity utilisation of staff, the amount of service and logistic
performance. NedTrain’s central controlling staff further elaborated the balanced
scorecards, that is, the layout of reports and systems for drawing them up.
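
The following minimal sketch suggests how such a central report layout could combine the across-the-board indicators with a unit's own set (the unit names come from the case, but the indicators, standards and figures below are hypothetical illustrations of ours, not NedTrain's actual reports or systems):

# Hypothetical monthly scorecard layout: indicator name -> (standard, realised value).
across_the_board = {
    "operating result (index)": (100, 97),
    "absenteeism (%)": (5.0, 5.4),
}

unit_indicators = {
    "Maintenance and Services": {
        "trains returned on schedule (%)": (95, 93),
        "average turnaround time (hours)": (36, 40),
    },
    "NedTrain Consulting": {
        "advisory projects delivered on time (%)": (90, 92),
    },
}

def monthly_report(unit):
    """Combine the across-the-board indicators with the unit's own indicators."""
    rows = {**across_the_board, **unit_indicators.get(unit, {})}
    lines = [f"Scorecard report - {unit}"]
    for name, (standard, actual) in rows.items():
        lines.append(f"  {name}: standard {standard}, realised {actual}")
    return "\n".join(lines)

print(monthly_report("NedTrain Consulting"))
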
The implementation process took place without any major problems. People
knew that the structure, working methods, management and control of NedTrain
would have to change. It was also known that the changes would have to take
place over a relatively short period. It was realised that henceforth financial aspects would have to be paramount and that an external orientation was necessary.
Moreover, the analytical character of the balanced scorecard strongly appealed to technicians: "they understand the cockpit metaphor which stresses the importance
of looking at various aspects" (central controller). In addition, the non-financial assessment moments matched established (technical) practices. This does not alter the fact that the management of some units had some trouble accepting the bal-
anced scorecard. This resistance was especially due to the organisation becoming
more transparent, which created a perception of one’s independence possibly
diminishing and one’s own actions becoming the target of unwelcome attention.

Use of the Balanced Scorecard

Because the introduction process was rather rapid it turned out afterwards that the
design of the cards had not been clear on all points. Thus, there were problems
with the nature of some of the performance indicators, with the uniformity of the
concepts used and the accessibility of the reports. Some important performance in-
dicators turned out to be absent: the ones referring to customer satisfaction, logistic
performance and innovative capability. These problems have been addressed in the
course of time, starting by defining the concepts unambiguously and improving
the reports.
It was also rather tricky to define performance standards. In fact, the discussion
about NedTrain’s future and the objectives derived from this had not even started
yet, nor were people used to working with clear performance standards. Therefore,
initially attention was exclusively focussed on performance measurement, without
feasibility and desirability being considered. The scorecards only comprised
information about the existing situation without any linkages to targets, because such targets did not yet exist. The balanced scorecard was, at first, mainly used to acquire an insight into the relation between technical and financial aspects and to make people in the organisation ponder the critical success factors of their own units. This also supported the strategic discussion about the internal and external positioning of NedTrain as a whole, as well as the strategic discussion within the business units themselves. Following this discussion the units' management
developed targets, which over time were included in the scorecards.
After the appointment of the commercial manager the market perspective has
received much more attention. With the aid of so-called customer dashboards for
the various markets the relationship with the customers is systematically charted.
This process is still going on. An insight into market position and characteristics of customers provides relevant information for the strategic discussion. In this
discussion the positioning of NedTrain and the development of strategic alliances
are addressed. Issues were discussed such as which activities NedTrain should carry out itself, which activities can be outsourced to external parties, and what the positioning of companies building rolling stock is. The last-mentioned
companies are in the process of acquiring rolling stock maintenance companies,
so that they not only can sell the rolling stock but are also able to take care of its
maintenance. This enables them to conclude contracts with railway companies
during the lifetime of the rolling stock. NedTrain's key asset is that it possesses knowledge about the relation between the rolling stock and the rail infrastructure, i.e. rail tracks and energy. The tendency is towards concluding long-term contracts with customers through which NedTrain guarantees the functioning of the customer's rolling stock. This leads to another way of bidding and pricing, and consequently to different cost information. The role of the controllers is to support their
line managers in developing these types of contracts and to deliver the relevant
cost information. Further, the NedTrain management discusses the deployment
of subcontractors for specific activities in a more structured way.
Meanwhile, the internal process perspective has also been picked up. The
various units have taken over the concept of the balanced scorecard and have
started their own interpretation. The units argue that the central scorecard is too
financially oriented and hence less suitable for their own management. Moreover,
for a report to the top management underlying information is required. Starting
from their own critical success factors the units scrutinise the operational activities.
In doing so, some units go down to the shop floor. Thus, the NedTrain Consulting business unit has developed cards on two levels within its own organisation, with which all processes and activities are assessed. The controller of this business
unit has played a prominent role in developing these cards. He involved all the
organisational units in the development of appropriate performance indicators and
targets. All echelons contributed to the strategic discussion and the thinking about
consequences for processes and activities. In this way everybody knows what is
going on and responsibilities are shared. A benchmark and customer satisfaction
study have revealed more about their own functioning and helped in formulating
the objectives and targets. There are discussions about a further elaboration of the innovation perspective, but people find it hard to concretise this. Besides the concept of the balanced scorecard the EFQM quality model is being used. The advantage
of the EFQM model is that it is more complete and has development stages,
allowing one to see where one stands at this moment and which steps have to be
taken. The information derived from the scorecards and the quality model lays the
foundation for the central scorecards.
The central scorecards are the business unit management’s monthly means
of reporting to the central management and largely determine the topics of the
central management team’s meetings. Within the units the cards are viewed as an
important management tool. They make action-oriented management possible,
with possible mutual monitoring of agreements. The information in the cards determines the agendas. Within the NedTrain Consulting business unit this
management method has been generally accepted. In other units the balanced
scorecard is used less generally; especially lower down in the organisation its
influence on operational decisions is more restricted. There the concept functions
more as a report and less as a management tool.

DISCUSSION AND CONCLUSIONS


In the theoretical sections we discussed the adoption of new accounting practices
as an answer to the growing complexity of the environment companies face. In ad-
dition to this efficiency motive, institutional theory points to motives of fashion and
fad which could also lead to the adoption of new accounting practices. Institutional
theory claims that the motives for adopting accounting practices will influence their
use. We argued that the balanced scorecard is an accounting practice which can
handle complexity by giving both financial and non-financial information about
the relevant aspects of companies’ performance. As this information is closely
related to companies’ strategy, this concept encourages strategic discussions and
fits within an interactive control concept. We argued that such a control concept
requires controllers playing the role of business advocates. Subsequently we de-
scribed the adoption and use of the balanced scorecard at NedTrain. In this section
we intend to analyse these processes in the light of the theoretical claims made.
NedTrain has used the balanced scorecard as a vehicle of change. It is one of
the instruments deployed in the changeover process. Forced by the government
NedTrain had to transform itself from a governmental organisation into a profit
organisation: a huge turnabout with consequences for all aspects of business man-
agement. This reversal had to take place in a relatively short time. This decision
of the government created a great deal of uncertainty. NedTrain had to change its
technical orientation into a profit orientation, which required an insight into costs
and revenues, customers’ wishes, competitors and market developments. As the
company could not provide this information it needed new accounting practices
which were capable of dealing with its new positioning in a changing environment.
Moreover, changing an existing orientation can only be realised by making the
participants throughout the organisation aware of the consequences of this change
for their activities and decision-making. This is a complex process, which is
why NedTrain recruited an external party for support in the changeover process.
Like most consulting agencies this party was very experienced in introducing the
concept of the balanced scorecard. The central controller defines the role of the
consultant as follows:
At that time the balanced scorecard concept was a hype. We needed a new instrument in order to
emphasise the financial consequences of our activities in the first place. As the consultant advised
the use of the balanced scorecard, we accepted this advice without deliberate discussions.

Thus the adoption motive "fashion" as mentioned by Abrahamson (1991) certainly played its role here. But, as indicated by the central controller, the perspective
of an efficient choice has emphatically played a role as well. Internally people
were convinced that effectiveness and efficiency of performance would have to
be scrutinised once again, from the point of view of a profit organisation dealing
with customers and competitors. They were also convinced that the existing
instruments were not able to provide the required information and that they needed
new instruments for giving this insight. When the decision to adopt was taken it was not known whether the concept of the balanced scorecard would be the most suitable one here. For this, the organisation relied on the external consultant's advice, but a
cost-benefit analysis was not made. More important was the expected contribution
in the short term to realising a change in behaviour on the part of the technicians.
In retrospect, it is admitted that another tool of performance assessment and
control would also have been possible. The NedTrain management team agreed that it was necessary to make the engineers financially aware and that the
internal orientation would have to be transformed into an external one.
There was, in advance, no clear idea of the consequences of the concept of the
balanced scorecard for the activities and the way they were managed. A start was
made in the expectation that gradually everything would become clearer. Vague
ideas and expectations may be expected when an organisation does not know
how its market position will develop, what the demands of the buyers are, which
competitors it will have to fear and what internal position it will assume. Gradually,
a clearer picture has emerged in connection with all this, with the concept of the
balanced scorecard playing a role. The central controller commented:
We introduced the concept in an evolutionary way. We asked the business units to describe
the information they needed for managing their businesses. They proposed the performance
indicators. At the beginning we stressed the importance of financial indicators. Over time we
realised that the internal operations are the drivers of financial results. The units had to indicate
what the most important operational processes were. We started from the actual situation
because we did not know our strategy. We were not used to talking about strategy and respond-
ing to competitors’ actions and customers’ wishes. The balanced scorecard has helped us in
making the actual situation more clear. We needed this information before starting the strategic
discussion.

The design processes have encouraged people to ponder the critical success factors
and the contributions to this from the internal units. The adoption of the concept
by the several units and the drill-down processes to the shop floor have sparked
off across-the-board discussions in these units. Thanks to these discussions the
operational processes have been charted more satisfactorily, providing a clearer insight into the mutual relationships. These discussions have also contributed
to more clearly delineating NedTrain’s future plans and the roles of the several
business units. Here, the information generated by the cards and the ensuing
discussions within the organisation’s various echelons also make a contribution.
The role of the central and decentral controllers is a supportive one here, in the
sense of ensuring the availability of information and joining the discussions about
the significance of the information for the decisions to be taken. The balanced scorecard is not employed as an accounting mechanism, but as an opportunity to hold each other to account and to think about planned actions.
Commenting on the use of the balanced scorecard concept the controller of the
business unit NedTrain Consulting reflected:

In consultation with the business units the central management has determined the central
scorecard. Each month we report about a brief set of indicators to the central management.
Quarterly we report about the whole set of indicators followed by an in-depth discussion with
the central management. Using this information about the current situation we are able to
discuss the environmental developments and their implications for the business units’ activities.
I have advised using the concept also within our business unit. We were convinced that people
throughout the unit should be informed about the operational processes and their financial
consequences. Our scorecards are much more focussed on operational processes, in particular
the scorecards used within our units. The scorecards have helped us to discuss our internal posi-
tioning, both within NedTrain and the Dutch Railways, and our external positioning. NedTrain
Consulting used to be more externally focussed due to its advisory role about introducing new
technological developments and determining the technical requirements of new rolling stock.
As the demand of internal customers is decreasing we are widening our focus on the external
market. Therefore, we have paid a lot of attention to customers’ satisfaction and its influence
on the internal operations and processes. We have developed service specifications for each
of the processes, which can be measured. Further, we have put much effort in measuring
the performance of our Research & Development unit as a means to manage their activities.
I have regular meetings with people of this unit in order to discuss the most appropriate
performance indicators.
We use the scorecards for discussing the current situation and whether we are on the right
track to realise our strategic goals. We do not have a culture of “settling accounts” but of
“talking to.” What is very important is that all the people are informed about what is going on
and that there is a feeling of shared responsibility.

The central and decentral management are satisfied with the concept. It is claimed
to have put the financial perspective on the agenda and to have contributed to
a more external orientation, the two objectives deemed urgent at the beginning of
the change process. At present the concept functions as an important management
instrument: it determines the topics of discussion centrally as well as within the
various units. The success of the implementation is ascribed to a number of factors.
In the first place, the problems were widely acknowledged, as was the necessity
of carrying through changes. Moreover, the concept had – and still has – the
backing of the entire management, and it matched the technical orientation of
NedTrain’s personnel very well, which made it easily accessible and widely
supported. A great deal of energy has also been put into the introduction and
implementation of the concept, and attempts have been made to involve as many
people as possible in organising it. In this process the central controller and
the business unit controllers played a pivotal role: they functioned as intermediaries
between the management and the people within the units, supporting the latter in
developing their scorecards. They also paid attention to the linkages between the
various performance perspectives and did not focus solely on the financial aspects.
They participated in the monthly and quarterly discussions about the scorecards
within the various management teams. We can conclude that the role the controllers
play is that of a business advocate.
At the beginning the concept played a central role in changing the content of the
agenda: it introduced financial, customer and market aspects. The information from
the scorecards was very helpful in conducting the strategic discussion throughout
the organisation. At present the concept plays a central role in managing perfor-
mance and in the ongoing strategic discussion, because the information from the
cards determines to a large extent the agendas of the meetings of the central and
decentral management teams. Nevertheless, now that the strategy has become much
clearer, the concept is turning into a means of reporting on and discussing actions
for the coming period. Although the scorecards are regularly discussed, there are
certainly differences between the various units with respect to the significance of
the concept for the way things are done. In the NedTrain Consulting business unit
the balanced scorecard has been accepted by the entire organisation. In the
Refurbishment and Overhaul business unit this is less so. Here scorecards have been
drafted at various levels, but they carry less meaning the lower one descends in the
organisation. The difference in acceptance is ascribed to differences in knowledge;
perhaps the larger number of personnel also plays a role.
We asked ourselves whether using the concept of the balanced scorecard
produces the effects on performance anticipated during the adoption phase. Some
remarks are called for here. In the specific case of NedTrain we are observing a
radical change. It turns out that initially the management did not have any clear
ideas about all the consequences of such a change for the organisation. There were
ideas about the direction of the changes and there were more clearly defined thoughts
about the changes that would at any rate have to take place as quickly as possible.
Steps were taken without it being possible to survey their consequences beforehand.
Gradually, things became clearer, although the picture never became complete.
We can also conclude that in a radical change process it is not one
instrument that is deployed but a whole complex of them. It is the interlinked
forces within this complex that contribute to a change process, and this makes it
impossible to isolate the contribution of any single instrument in such a complex.

NOTE
1. Some of the basic assumptions underlying the balanced scorecard concept have been
criticized by Nørreklit (2000). Part of her criticism concerns the causality concept of the
balanced scorecard. She concludes that there is not a causal but rather a logical relationship
among the areas analysed (p. 82). Rather than viewing the relationship between non-financial
measures as causal, the focus should be on coherence between measurements. Coherence
focuses “on whether the relevant phenomena match or complement each other” (p. 83).

REFERENCES
Abrahamson, E. (1991). Managerial fads and fashions: The diffusion and rejection of innovations.
Academy of Management Review, 16, 586–612.
Abrahamson, E. (1996). Management fashion. Academy of Management Review, 21, 254–285.
Abrahamson, E., & Rosenkopf, L. (1993). Institutional and competitive bandwagons: Using mathe-
matical modelling as a tool to explore innovation diffusion. Academy of Management Review,
18, 487–517.
Anthony, R. N. (1989). The management control function. Boston: Harvard Business School Press.
Argyris, C. (1990). The dilemma of implementing controls: The case of managerial accounting.
Accounting, Organizations and Society, 15, 503–511.
Bjørnenak, T., & Olson, O. (1999). Unbundling management accounting innovations. Management
Accounting Research, 10, 325–338.
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and
collective rationality in organizational fields. American Sociological Review, 48, 147–160.
de Haas, M., & Kleingeld, A. (1999). Multilevel design of performance measurement systems:
Enhancing strategic dialogue throughout the organization. Management Accounting Research,
10, 233–261.
Jablonsky, S. F., Keating, P. J., & Heian, J. B. (1993). Business advocate or corporate policeman?
Morristown, NJ: Financial Executives Research Foundation (FERF).
Judson, A. S. (1990). Making strategy happen: Transforming plans into reality. London: Basil
Blackwell.
Kaplan, R. S. (1983). Measuring manufacturing performance: A new challenge for managerial
accounting research. The Accounting Review, 58, 686–705.
Kaplan, R. S. (1984). The evolution of management accounting. The Accounting Review, 59(3), 390–418.
Kaplan, R. S. (2001a). Transforming the balanced scorecard from performance measurement to
strategic management. Part 1. Accounting Horizons, 15(1), 87–105.
Kaplan, R. S. (2001b). Transforming the balanced scorecard from performance measurement to
strategic management. Part 2. Accounting Horizons, 15(2), 147–161.
Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard – Measures that drive performance.
Harvard Business Review (January–February), 71–79.
Kaplan, R. S., & Norton, D. P. (1993). Putting the balanced scorecard to work. Harvard Business
Review (September–October), 134–147.
Kaplan, R. S., & Norton, D. P. (1996a). Using the balanced scorecard as a strategic management
system. Harvard Business Review (January–February), 75–85.
Kaplan, R. S., & Norton, D. P. (1996b). Strategic learning & the balanced scorecard. Strategy &
Leadership (September–October), 18–24.
Lukka, K. (1998). Total accounting in action: Reflections on Sten Jönsson’s Accounting for
Improvement. Accounting, Organizations and Society, 23, 333–342.
Lynch, R. L., & Cross, K. F. (1991). Measure up! Yardsticks for continuous improvement. London:
Blackwell.
March, J. G., & Olsen, J. P. (1976). Ambiguity and choice in organizations. Bergen, Norway:
Universitetsforlaget.
March, J. G. (1978). Bounded rationality, ambiguity and the engineering of choice. Bell Journal of
Economics, 9, 587–608.
Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and
ceremony. American Journal of Sociology, 83, 340–363.
Nanni, A. J., Dixon, J. R., & Vollmann, T. E. (1992). Integrated performance measurement: Man-
agement accounting to support the new manufacturing realities. Journal of Management
Accounting Research (Fall), 1–19.
Nørreklit, H. (2000). The balance on the balanced scorecard – A critical analysis of some of its
assumptions. Management Accounting Research, 11, 65–88.
Simons, R. (1994). Control in an age of empowerment. Harvard Business Review (March–April),
80–88.
Simons, R. (1995). Levers of control: How managers use innovative control systems to drive strategic
renewal. Boston: Harvard Business School Press.
Tolbert, P. S., & Zucker, L. G. (1983). Institutional sources of change in the formal structure of
organizations: The diffusion of civil service reform, 1880–1935. Administrative Science
Quarterly, 28, 22–39.
