
Is Self-Regulated Peer Review Effective at Signaling Audit Quality?

Jeffrey R. Casterella
Colorado State University

Kevan L. Jensen
University of Oklahoma

W. Robert Knechel
University of Florida

September 2006

Helpful comments were received from Clive Lennox, Eddy Vaassen, Barry Lewis, and participants at the
2006 International Symposium on Audit Research. Special thanks also to the insurance company that
provided the data for the study.

Is Self-Regulated Peer Review Effective at Signaling Audit Quality?

Abstract

This paper examines whether peer review conducted under the AICPA’s self-regulatory regime

has been effective at signaling audit quality. In spite of the long-standing debate about self-

regulated peer review in the auditing profession, there is a surprising lack of research evidence as

to whether such reviews are effective at signaling or improving audit quality. Prior research has

examined whether the information contained in peer-review reports is associated with perceived

audit quality (Hilary and Lennox 2005). We examine whether the information contained in peer-

review reports is associated with actual audit quality. Our results suggest that self-regulated peer

review does appear to provide effective signals regarding audit-firm quality. Specifically, we

find that the number of weaknesses identified in peer-review reports is associated with other

potential indicators of weak quality control or risky practices within accounting firms such as

selling tax shelters, overworking staff, and taking on risky clients—even after controlling for

changes in the peer-review environment over time. We also find that the number of weaknesses

identified in peer-review reports is useful in predicting audit failure (i.e., malpractice claims

alleging auditor negligence), and that certain types of peer-review findings (engagement-

performance weaknesses, personnel-management weaknesses) are particularly useful in this

regard.

1. Introduction

This paper examines whether self-regulated peer review is an effective mechanism for

differentiating quality among audit firms. Peer review has long been a part of the AICPA’s

program for enhancing audit quality in the accounting profession. Since its beginnings in the

1970s, peer review has sought primarily to improve audit quality by identifying significant audit-

firm weaknesses, and by communicating those weaknesses to the reviewed firms who can then

take corrective actions (White et al. 1988; AICPA 2004). The AICPA has also recognized that

the general public (including regulators) uses peer review reports for their own decision-making

purposes (AICPA Peer Review Board 2004). Assuming audit clients (and regulators) value audit

quality, this provides additional market pressure on audit firms to maintain adequate quality-

control systems. This has also led to a renewed emphasis on peer-review transparency, and to

much debate regarding the information content of peer review reports and the disclosure of audit-

firm weaknesses to the public (e.g., Bunting 2004; Snyder 2004).

In this paper, we focus on the information content of the peer-review report itself. In

order for peer review to have any impact on audit quality, it must effectively identify weaknesses

in lower-quality audit firms and communicate this information in the report. Without this,

corrective action cannot be taken, and related market pressure cannot be brought to bear. Recent

decisions by regulators have implied that self-regulated peer review is no longer viewed as an

effective mechanism in this regard. For example, the Sarbanes-Oxley Act of 2002 now requires

audit firms with public clients to have PCAOB inspections rather than traditional peer reviews

(US House of Representatives, 2002). This change was partly a reaction to the observation that

most audit failures involved audit firms who had received unmodified peer-review reports.

Unfortunately, few empirical studies of peer-review effectiveness were available at the time of

the change to shed light on the issue. In fact, little research to date has examined whether peer-

review reports credibly capture audit-firm quality.1 This is unfortunate since many US audit

firms, and most audit firms in other countries, continue to rely on peer review as part of their

quality-control programs (Hilary and Lennox 2005).2 Moreover, understanding the effectiveness

of peer review is imperative to the debate about whether self-regulation is the appropriate

approach for the auditing profession.

Using audit-firm data from an insurance company, we examine directly the link between

peer review reports and audit-firm quality in two ways. We first test whether the information

contained in peer-review reports is calibrated with other potential firm-specific indicators of

lower audit quality across a wide range of audit firms. Our results show that the number of

weaknesses identified in peer review reports is indeed associated with the existence of these

firm-specific attributes--even when controlling for an evolving peer-review environment over

time. We then test whether the detailed information communicated in peer-review reports is

helpful in predicting actual audit quality. Using malpractice claims as evidence of poor audit

quality (on average), we find that the number of weaknesses identified in peer-review reports

appears to be helpful in predicting audit quality. We also find that some types of weaknesses

identified in peer-review reports appear to be helpful in predicting audit quality (e.g., personnel

management, engagement performance) while others do not (e.g., independence, client

acceptance, and monitoring). We interpret our results to be supportive of self-regulated peer

review being an effective mechanism for differentiating audit quality among firms. Moreover,

1 It is too early to determine if the PCAOB inspection process will reduce the number of audit failures of public
companies in the US. However, it would be unrealistic to expect the incidence of such failures to ever drop to zero.
A difficulty that will arise when it comes time to evaluate the efficacy of the PCAOB’s approach is the lack of data
on base rates in the pre-PCAOB regime. This paper provides some basis for benchmarking current efforts at
regulation against the self-regulatory regime in existence prior to 2002.
2 Peer review continues to be a requirement for membership in the AICPA. Many state regulators also require peer
review for CPA licensure.

since every firm in our study received an unmodified peer-review report, the ability to

differentiate audit quality appears to hold even among the vast majority of firms deemed

“acceptable” under the self-regulatory model.

The remainder of this paper is organized as follows. In the next section we discuss the

background and objectives of peer review, and develop hypotheses regarding peer review’s

ability to signal audit-firm quality. In the third section, we discuss our research method and data.

The fourth section presents the results of our hypothesis tests, followed by a summary and

discussion of our results.

2. Peer review and audit quality

The AICPA has for many years incorporated peer review as one of its primary methods

of controlling quality among CPA firms. Even before mandatory peer review was adopted by

the AICPA, there was a system of voluntary peer review that started in the US in the 1970s.

This was implemented primarily as part of the profession’s response to a wave of audit failures

that caused the public to question the effectiveness of audits. The voluntary phase of peer review

eventually gave way to a form of mandatory, yet self-regulated, peer review that was instituted

by the AICPA's membership in the late 1980s at the prodding of the SEC (Berton 1987; White

et al. 1988). Self-regulated peer review remained basically intact from that time until the

creation of the PCAOB in 2002. Although the creation of the PCAOB implies that self-

regulation failed, the AICPA recently reasserted its faith in and commitment to peer review—

albeit a more transparent form of peer review—for its membership (AICPA 2004).

The AICPA’s peer-review program has always been open to controversy. While

commentators have been enthusiastic in their support of the program (e.g., Mautz 1984; Kaiser

1989; Felix and Prawitt 1993), critics have made reasonable arguments over the years as to why

self-regulated peer review cannot work. Some point to the anecdotal evidence that peer reviews

identify relatively few weaknesses in reviewed firms (e.g., Wallace and Cravens 1994), that

almost all peer-review engagements result in unmodified reports (e.g., Hilary and Lennox 2005),

and that most audit failures involve peer-reviewed firms (e.g., Fogarty 1996). Others argue that

peer review cannot be effective because of the general lack of independence among reviewers

and reviewees (Grumet 2005), and because the formality of the process allows firms to develop

explicit compliance plans based on charts and checklists (Atherton 1989) that have little impact

on the conduct of audits (Austin and Lanston 1981). Fogarty (1996) argues that the AICPA’s

peer review program may be nothing more than “ceremonial logic” because, among other things,

(1) the program was created by a trade organization focused more on maintaining the

profession’s image than on improving actual audit quality, (2) reviews focus on the quality-

control process rather than actual audit quality, and (3) reviews focus on documentation of the

process rather than the nature or appropriateness of audit decisions.

The key to determining whether peer review is effective is to examine whether it

successfully identifies weaknesses in lower-quality audit firms. In other words, do peer-review

reports credibly reflect audit quality? Empirical studies shed some light on this question, but

most of this evidence is indirect. For example, three studies examining ex-post assessments of

audit quality find that audit firms required to undergo peer review are associated with higher-

quality audits (Deis and Giroux 1992; Giroux et al. 1995; Krishnan and Schauer 2000). On the

other hand, studies using audit fees as a proxy for audit quality are less clear on this question.

Francis et al. (1990) find no evidence that auditors subject to peer review are able to charge

higher fees. However, Giroux et al. (1995) find that such firms may indeed charge higher fees,

but not on a per-hour basis. Similarly, a survey by Schneider and Ramsay (2000) suggests that

while loan officers claim to have more confidence in clients audited by peer-reviewed firms, they

are not more likely to approve loans or offer lower interest rates for those borrowers.

Wallace (1991) was the first to examine the actual information provided by peer-review

reports—including the accompanying Letters of Comments (LOCs). She finds that 90 percent of

the reports filed during the 1980-86 period were unmodified, with an average of 3.47 weaknesses

per engagement. She also finds that this relatively high number of weaknesses was invariant to

the type of reviewer, type of reviewee, or year of review. She interprets these findings as

supporting the contention that the peer review process is effective, in that it is not subject to

moral hazard problems surrounding the choice of the reviewing firm. Hilary and Lennox (2005)

provide the most thorough analysis to date of the information content of peer-review reports by

examining the audit-market reactions to reports filed during the 1997-2003 period. Their

research design makes two assumptions: (1) that audit-clients perceive peer-review reports

to reflect actual audit quality, and (2) that any market reaction to peer-review reports is due to

audit-clients’ demand for high-quality audits. The authors find that over 95 percent of the

reviews during this period resulted in an unmodified report, with an average of 1.12 weaknesses

identified per engagement. They also provide strong evidence that the audit market does react to

the information signaled by peer-review reports. Specifically, firms receiving unmodified

reports without an LOC tend to gain clients following the review, while firms receiving modified

or adverse reports tend to lose clients after the review. Shifts in the audit market also appear to

be related to the number of weaknesses identified in the LOC.

The results in Hilary and Lennox (2005) are consistent with peer-review reports being

associated with perceived audit quality. Indeed, it has been suggested that peer review’s greatest

impact may lie in the realm of perception rather than reality (Fogarty 1996). Nevertheless, if

market participants are correct in their perceptions in this regard, then it seems likely that the

peer-review signal should be associated with other indicators of audit-firm quality as well. In

other words, firms having observable attributes suggestive of weak or risky quality-control

practices at the time of the review should receive reports indicating lower quality, all other things

equal. Given that virtually all firms receive unmodified peer-review reports (Hilary and Lennox

2005), this differentiation can only come from specific comments in the accompanying LOC.

This leads us to our first hypothesis:

Hypothesis 1: Peer-review findings are associated with the presence of audit-firm


attributes indicative of lower audit quality.

Ultimately, whether peer review provides an effective signal of audit quality depends on

whether its results are well calibrated with actual audit quality. While audit quality is generally

unobservable for specific audits (O’Keefe et al. 1994), poor quality is observable with hindsight

if an engagement results in litigation or a claim of malpractice against an audit firm (Palmrose

1988). That is, a firm experiencing a legitimate malpractice claim (i.e., audit failure) should

generally have received weaker peer-review reports in the past. A large literature exists

concerning the precedent conditions underlying audit failure (see Latham and Linville 1998) and

the ability of auditors to identify management fraud (see Nieschwietz et al. 2000). However

while research has established an empirical link between the presence of peer review and

perceived audit quality, virtually no research exists directly linking detailed peer-review findings

and actual audit quality.3 This leads us to our second hypothesis:

Hypothesis 2: The likelihood of audit failure (i.e., poor audit quality) is associated with
peer-review findings.

3 Hilary and Lennox (2005) provide a sensitivity test showing an association between peer-review findings and the
existence of Accounting and Auditing Enforcement Releases (AAERs). However, only 3.5 percent of the firms they
examine had clients subject to AAERs. In most of these cases, reviews were conducted after the SEC investigations
had begun, which may have placed unusual pressure on peer reviewers to identify weaknesses in those cases.

3. Research design and data collection

The data for this study are drawn from the proprietary files of an insurance company

specializing in professional liability coverage for local and regional accounting firms. The

insurance company is a subsidiary of a broader professional services organization and is not

publicly owned.4 As part of the underwriting process, accounting firms applying for coverage are

required to have had a peer review and must have received an unmodified report.5 The insurer’s

underwriting files contain copies of the peer review reports—including the accompanying LOCs

(if any)—for each audit firm. These LOCs describe weaknesses or deficiencies identified during

the peer-review process. Our peer-review variables are thus extracted directly from the peer

review reports and LOCs.

Since we use audit failure as an indicator of low audit quality, we first identified all the

audit-related claims involving accounting firms covered by the insurer during the period 1987

through 2000. Each observation represents a unique claim for deficient audit services for which

the insurance company made a nontrivial settlement (greater than $5,000). This process yielded

79 separate audit malpractice claims. A control group of 79 non-claim observations was

constructed by matching each claim firm with a similar accounting firm having no audit-related

malpractice claims.6 Each non-claim firm was covered by the insurance company during the

same period as the claim firm, experienced no audit-related claims for the five years preceding

the year of the observed claim, and was similar in size to the claim firm based on total fees. The

resulting sample included 158 observations.

4 The company is subject to state regulation, reporting requirements, and inspection. It has been in existence for
over 20 years and sells direct to its clients. Its clients range in size from small local firms with a single office to
large regional firms with several offices. The largest accounting firms are generally self insured.
5 This requirement results in all firms in our sample having unmodified reports. It also suggests that the insurance
company perceives peer-review reports as being useful in assessing risk.
6 Previous studies of audit failure have also employed matched-pair designs (e.g., Stice 1991; Lys and Watts 1994).
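
The matched-pair construction described above lends itself to a short sketch. The following Python fragment illustrates the selection logic under stated assumptions only: the DataFrame layout and column names (firm_id, year, total_fees) are hypothetical, and this is not the insurer's file structure or the code actually used for the study.

import pandas as pd

# claims: one row per audit malpractice claim (settlement > $5,000)
# firms:  one row per insured firm-year drawn from the underwriting files
# Column names (firm_id, year, total_fees) are hypothetical.
def build_matched_sample(claims: pd.DataFrame, firms: pd.DataFrame) -> pd.DataFrame:
    rows, used = [], set()
    for _, c in claims.iterrows():
        # exclude firms with any audit claim in the five years up to the claim year
        recent_claimants = set(
            claims.loc[claims["year"].between(c["year"] - 5, c["year"]), "firm_id"])
        candidates = firms[
            (firms["year"] == c["year"])
            & ~firms["firm_id"].isin(recent_claimants | used | {c["firm_id"]})]
        if candidates.empty:
            continue  # no suitable control available for this claim firm
        # the control firm is the candidate closest in size (total fees) to the claim firm
        control = candidates.loc[
            (candidates["total_fees"] - c["total_fees"]).abs().idxmin()]
        used.add(control["firm_id"])
        rows.append({**c.to_dict(), "CLAIM": 1})
        rows.append({**control.to_dict(), "CLAIM": 0})
    return pd.DataFrame(rows)

Because the matching is one-to-one, a procedure of this kind yields exactly two observations per claim, consistent with the 158-observation sample described above.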

Underwriting applications are updated once a year. They contain information regarding

the structure of the accounting firms, the services they provide, the nature of their clienteles, the

professional activities of their owners, and some details regarding their recent histories.

However, they contain no information about specific audit clients—including those involved in

the audit failures. Our data comes from the underwriting files for the two years prior to the

claim. Data for the nonclaim firms are drawn from the same calendar periods. Most of the data

was hand collected by a research assistant working directly under the supervision of one of the

authors. The entire data set was then reviewed by a different author who had not been directly

involved with initial data collection. Discrepancies were resolved by re-examination of the

documents in the appropriate files.

4. Peer review results and firm-quality attributes

4.1. Model development

Our first hypothesis predicts that peer-review outcomes are associated with observable

audit-firm attributes indicative of low audit quality. We test this hypothesis using a model

wherein we regress the number of peer-review weaknesses identified in the LOC against a series

of firm-specific variables associated with likely audit-firm quality. We extract these variables

from the applications used by the insurer to make risk assessments about each audit firm. This

same information is used specifically to determine insurability, policy limits, premiums, and

deductibles—in other words, to assess the risk of a claim being filed against a client. Since the

likelihood of a claim being filed against a client is essentially the same as the likelihood of

alleged audit failure, these variables appear to be reasonable predictors of audit-firm quality—

particularly given the insurance company’s care and expertise in the underwriting process.

For organizational purposes, we categorize these variables using the five quality-control

elements defined by the AICPA: (1) independence, integrity, and objectivity; (2) personnel

management; (3) client acceptance and continuation; (4) engagement performance; and (5)

monitoring (AICPA 1996). We define our dependent variable (FINDINGS) as the total number

of weaknesses identified in the LOC. We then estimate the following model using an ordered

logit regression:

FINDINGS = b0 + b1TAXSHLT + b2CPAS + b3WKLOAD + b4CLNTRSK
           + b5HISTORY + b6LNFEES + {control variables}                    (1)

with explanatory variables as follows.

TAXSHLT: (Independence) Dummy variable indicating whether a firm’s owners are actively
involved in organizing, managing, receiving compensation from, or otherwise
promoting tax shelters to their clients. Regulators have recognized the detrimental
effects on independence if tax shelters are marketed by audit firms to their clients
(PCAOB 2005). Hence, auditors participating in such activities may have more lax
attitudes in general about independence.

CPAS: (Personnel Management) The percentage of professional staff at an audit firm who
are CPAs. Such a measure reflects the fact that some firms are more successful
than others in hiring high-quality personnel, in providing the necessary training and
experience for certification, or both. Lys and Watts (1994) describe audit structure
in similar terms and find evidence of a negative relation between structure and
litigation risk.

WKLOAD: (Personnel Management) The natural log of the ratio of total fees to the number of
professional staff in the firm. Personnel management relates not only to hiring and
training practices, but also to the assignment of staff to engagements. We include
this second measure to address this fact. Its intent is to represent the workload
imposed on professional staff and should reflect how strained they are to fulfill their
professional obligations.

CLNTRSK: (Client Acceptance) The percentage of a firm’s clients who are financial institutions
or are in the entertainment industry. The insurer identifies firms having clients in
these specific industries as their experience suggests that such clients tend to be
more complex and more risky than other types of clients (see Palmrose, 1988).
Having such clients may thus indicate less stringent screening of clients in general.

HISTORY: (Engagement Performance) A dummy variable indicating whether a firm has been
investigated by state/professional boards for violations of professional regulations

or practice standards. A history of regulatory problems is a good indicator of lower
standards of engagement performance. Hilary and Lennox (2005) use a similar
variable representing pending litigation against reviewed firms and find a negative
link with peer-review outcomes.

LNFEES: (Monitoring) The natural log of the audit firm’s total fees. Larger firms generally
have more resources to invest in quality control, including monitoring of their
quality control programs. They also have more to lose by providing poor quality
services to clients (DeAngelo, 1981).7

Two variables are included in the model to control for other circumstances that might

affect the outcome of the peer review process. SECPS is a dummy variable indicating whether

the firm was a member of the AICPA’s SEC Practice Section at the time of the review. This

variable controls for the possibility that peer reviews for SECPS members may be more rigorous

than other forms of peer review (e.g., PCPS), and for the possibility that results in previous

studies might be peculiar to SECPS reviews.8 Also, DISTANC is a dummy variable indicating

whether the offices of the reviewing and reviewed firms are more than 100 miles apart. This

variable controls for the possibility that reviewing firms that are near the firms they review may

be competitors, influencing the objectivity of the reviewer and giving them an incentive to

understate the quality of the reviewed firm (Hilary and Lennox, 2005).
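
Before turning to the results, a minimal sketch may help make the estimation concrete. The Python fragment below fits an ordered (proportional-odds) logit corresponding to Equation (1), assuming the firm-level variables have already been assembled into a hypothetical DataFrame whose columns mirror the labels above (raw dollar amounts in total_fees, a staff count in professional_staff). It is illustrative only and is not the estimation code used for the study; the review-year term used later in the analysis and the Poisson robustness check appear only as comments.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

def estimate_findings_model(df: pd.DataFrame):
    """Ordered logit of the number of LOC weaknesses on firm attributes (Equation 1 sketch)."""
    df = df.copy()
    # natural-log transforms of the raw dollar amounts, as described above
    df["WKLOAD"] = np.log(df["total_fees"] / df["professional_staff"])
    df["LNFEES"] = np.log(df["total_fees"])
    exog = ["TAXSHLT", "CPAS", "WKLOAD", "CLNTRSK", "HISTORY",
            "LNFEES", "SECPS", "DISTANC"]   # a review-year column can be appended for the later specification
    # proportional-odds (ordered) logit; the estimated thresholds play the role of intercepts
    result = OrderedModel(df["FINDINGS"], df[exog], distr="logit").fit(
        method="bfgs", disp=False)
    # count-model robustness check in the spirit of footnote 10 (Poisson):
    # sm.GLM(df["FINDINGS"], sm.add_constant(df[exog]),
    #        family=sm.families.Poisson()).fit()
    return result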

4.2 Results

Descriptive data for the peer reviews in our sample are presented in Table 1. Reviews are

well-distributed over the period 1986-1999. The mean number of weaknesses identified in the

reports for the 158 firms in our sample is 1.44. The majority of these weaknesses are related to

engagement performance, with 87 firms receiving such findings (mean of .99 per firm), followed

7 We note that the five categories are neither independent nor mutually exclusive. Any observable proxies are likely
to cross over to multiple categories. Nevertheless, we test the validity of our measures by examining the simple
correlations between them and the number of peer-review weaknesses identified in each respective category. The
only instance where the predicted correlation is not significant at conventional levels is between LNFEES and the
number of monitoring weaknesses.
8 For example, Hilary and Lennox (2005) examine only SECPS members.

by monitoring (29 firms, mean of .23 per firm) and personnel management (17 firms, mean of

.13 per firm). Few firms in our sample have independence or client-acceptance findings. Table

2 presents descriptive results for the firms in our sample. Total fees for firms in the sample

range from $62 thousand to almost $16 million, and there appears to be considerable variation in

most of the variables. Table 3 reveals no evidence of collinearity problems in the data.

----------------------------------------
Insert Tables 1-3 here
----------------------------------------

Results of the ordered logistic estimations used to test H1 are presented in Table 4,

column A. The model is significant at p<.001 and control variables are significant in their

predicted directions. The coefficient on SECPS is significantly positive (p=.016), suggesting

that SECPS reviews may be more rigorous than other types of reviews. Also, consistent with

Hilary and Lennox (2005), the coefficient on DISTANC is significantly negative (p=.017)

suggesting that reviews provided by non-competitors (i.e., firms located far apart) may be less

rigorous than reviews performed by competitors.9

The results in column A are also consistent with peer review findings being associated

with other indicators of low-quality audit firms (H1). Reviews of firms whose owners are

actively involved with tax shelters tend to identify more weaknesses (TAXSHLT, p=.025), as do

reviews of firms who accept clients in risky industries (CLNTRSK, p=.002), firms with more

intense workloads (WKLOAD, p=.083), and firms with a history of being investigated for failure

to follow industry standards (HISTORY, p=.033). In addition, we observe fewer weaknesses in

firms that employ a high percentage of CPAs (CPAS, p=.006) and firms that have more

resources available for quality control (i.e., larger firms) (LNFEES, p=.019).

9 Hilary and Lennox (2005) report a positive coefficient using two measures of distance: 50 miles and 150 miles.
They define their dummy variable as equal to one if the reviewer is a competitor firm, meaning nearby, so their
positive coefficient is consistent with our negative coefficient.

----------------------------------------
Insert Table 4 here
----------------------------------------

A comparison of Hilary and Lennox (2005) and Wallace (1991) suggests that the peer

review environment has evolved over time such that fewer modified reports are issued and fewer

weaknesses are identified with each review. This finding is consistent with Bremser and

Gramling (1988) and Colbert and Murray (1998), who find that the number of weaknesses

identified during peer review has decreased over time as firms have had additional reviews.

These studies both assume that peer review has remained consistently effective over time and

thus interpret these results to mean that audit quality is improving. However, these results would

also be expected if peer-review’s effectiveness has deteriorated over time, perhaps for the

reasons described earlier. For this reason, we also estimate Equation 1 after inserting a

continuous variable representing the year the related peer reviews were performed.

Results of this estimation are shown in Table 4, column B. As expected, the coefficient

on the year variable is negative and highly significant (p<.001). However, while including the

year variable in the model appears to result in smaller coefficients on most of the explanatory

variables, each of the coefficients remains significant in the predicted direction, excepting

WKLOAD which remains positive but is no longer significant. Together, we interpret the results

in Table 4 to indicate that peer-review outcomes are indeed associated with other firm-specific

attributes suggestive of firm quality (supporting H1), and to indicate that this association obtains,

even after accounting for the evolution of the peer-review environment over time. We also

believe this to be consistent with peer review’s effectiveness not having deteriorated over time.10

5. Peer review results and auditor malpractice claims (audit failure)

5.1. Model development


10 Results using Poisson regression are similar to those in Table 4.

Our second hypothesis asserts that peer-review findings are useful in predicting actual

audit quality. To test this hypothesis, we estimate two logistic regression models to predict which

accounting firms are associated with a malpractice claim alleging negligent or low-quality audit

work. The dependent variable in these models (CLAIM) takes a value of one if the firm is

subject to a malpractice claim; zero otherwise. The test variables in these models reflect the

findings described in the respective LOCs. Since peer-review findings apply to different aspects

of an organization’s activities, we examine them both in total (model 2) and separated into the

five quality-control categories (model 3) used by the AICPA (AICPA 1996):

CLAIM = b0 + b1TOTFIND + {control variables} (2)

CLAIM = b0 + b1INDEP + b2ACCEPT + b3PERSNL + b4ENGAGE
        + b5MONITOR + {control variables}                                  (3)

where:

TOTFIND: Total number of weaknesses identified in the peer-review report,


INDEP: Dummy variable with a value of 1 if the peer-review report identifies at least one
weakness related to independence policies; 0 otherwise,
ACCEPT: Dummy variable with a value of 1 if the peer-review report identifies at least one
weakness related to client acceptance and continuation practices; 0 otherwise,
PERSNL: Dummy variable with a value of 1 if the peer-review report identifies at least one
weakness related to personnel management; 0 otherwise,
ENGAGE: Dummy variable with a value of 1 if the peer-review report identifies at least one
weakness related to engagement performance; 0 otherwise,
MONITOR: Dummy variable with a value of 1 if the peer-review report identifies at least one
weakness related to monitoring of professional practices; 0 otherwise.

Models 2 and 3 include several additional variables to control for factors not related to

audit quality that may be associated with the likelihood of a claim being filed with the insurance

company. For example, while a link has been established between the size of a CPA firm and

audit quality (Stice 1991), larger firms may experience more claims simply because they have

more clients. To control for this possibility, we include the natural log of total fees for the firm

in the year prior to the claim incident (LNFEES), as well as the percentage change in total audit

firm staff in the year of the claim incident (GROWTH). We include the percentage of total firm

fees from SEC clients (SECCLNT) to control for the possibility that public audit clients may be

more litigious than nonpublic audit clients. We also include a dummy variable indicating

whether a CPA firm is located in either Arizona or Texas (JURIS) to control for the possibility

that, ceteris paribus, claims are more likely in plaintiff-friendly jurisdictions (Esho et al., 2004).11

Finally, to control for the possibility that CPA firms may be more likely to file insurance claims

if their deductibles are relatively low, we include a variable measuring the policy deductible

divided by the number of firm owners (DEDUCT).
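
A brief sketch of how Equations (2) and (3) might be estimated is given below. It assumes a hypothetical matched-sample DataFrame whose columns carry the variable names defined above and uses an ordinary logit from statsmodels; it is an illustration under those assumptions, not the estimation code used for the study.

import pandas as pd
import statsmodels.api as sm

CONTROLS = ["LNFEES", "GROWTH", "SECCLNT", "JURIS", "DEDUCT"]

def estimate_claim_models(df: pd.DataFrame):
    """Logit models of audit failure (CLAIM) on peer-review findings (Equations 2 and 3 sketch)."""
    # Equation (2): total number of LOC weaknesses as the test variable
    X2 = sm.add_constant(df[["TOTFIND"] + CONTROLS])
    eq2 = sm.Logit(df["CLAIM"], X2).fit(disp=False)

    # Equation (3): an indicator for each AICPA quality-control category
    categories = ["INDEP", "ACCEPT", "PERSNL", "ENGAGE", "MONITOR"]
    X3 = sm.add_constant(df[categories + CONTROLS])
    eq3 = sm.Logit(df["CLAIM"], X3).fit(disp=False)

    # eq2.prsquared and eq3.prsquared give McFadden's pseudo-R-squared, one common
    # variant of the statistic reported with Table 7 (the exact variant used there is not stated)
    return eq2, eq3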

5.2 Results

Descriptive results for the 140 observations used to estimate Equations 2 and 3 are

included in Table 5.12 As expected, claim firms tend to have more total weaknesses (TOTFIND)

than non-claim firms. They also tend to be larger (LNFEES), have more SEC clients

(SECCLNT), and are more likely to be found in plaintiff-friendly jurisdictions (JURIS). Once

again, correlations among the independent variables in the models (see Table 6) do not suggest

collinearity problems in the data.

----------------------------------------
Insert Tables 5 and 6 here
----------------------------------------

Equation 2 contains a continuous test variable representing the total number of

weaknesses identified in the LOC (TOTFIND). Results from this estimation are shown in Table

7, column A. The model is reasonably well specified, with a pseudo-R² of 18.9 percent.

Coefficients on the control variables are all significant as predicted: Claims are more likely for

11 When asked which states were the most plaintiff friendly, the insurance firm identified these two as being, in their
experience, the most plaintiff friendly states in the US. In these states, legal precedent and court rules make it
relatively easy to bring litigation against accountants.
12 Eighteen of the 158 original observations were dropped because of missing data items for either the claim firm or
the matched non-claim firm.

larger firms (LNFEES, p<.001), firms that are growing rapidly (GROWTH, p=.044), firms with

a higher percentage of SEC-related fees (SECCLNT, p=.090), and firms operating in Texas or

Arizona (JURIS, p=.004). We also find that claims are less likely for firms that carry larger

deductibles per firm owner (DEDUCT, p=.067). Consistent with H2, the likelihood of audit

failure does appear to be associated with the number of weaknesses identified in the peer-review

report (TOTFIND, p=.039).
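
To put the column A estimate into more concrete terms: under the usual logit interpretation, the TOTFIND coefficient of 0.27 reported in Table 7 implies that each additional weakness noted in the LOC multiplies the odds of a subsequent malpractice claim by roughly exp(0.27) ≈ 1.3, holding the control variables fixed. This back-of-the-envelope reading is offered only as an illustration of magnitude, not as an additional result.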

Equation 3 examines whether the type of weakness identified in the peer review report is

informative with regard to audit quality. Dummy variables are used to indicate whether a

particular type of weakness is identified in the report. Results are shown in Table 7, Column B.

The model has a pseudo-R² of 22.3 percent. Control variables continue to be significant in the

predicted direction. Consistent with H2, the likelihood of audit failure also appears to be

associated with certain types of weaknesses being identified in the peer-review report. Firms

having weaknesses related to personnel management (PERSNL) and/or engagement

performance (ENGAGE) are both more likely to experience an audit failure (p=.049 and p=.014,

respectively) while firms having weaknesses related to independence (INDEP), client acceptance

(ACCEPT), and monitoring (MONITOR) are not.13 Taken together, these results provide support

for the assertion that peer review findings are informative as to actual audit quality displayed by

accounting firms.14

----------------------------------------
Insert Table 7 here
----------------------------------------

13 It is important to note that independence and client-acceptance weaknesses are very rare (seven and six
observations, respectively). Given this fact, we hesitate to make inferences regarding their significance.
14 Several additional analyses were performed for sensitivity purposes. First, Equations 2 and 3 were estimated
using only the test variables (omitting control variables) and including size only along with the test variable. Results
were similar to those in Table 7. Second, Equation 3 was estimated using continuous variables for each type of
weakness (instead of dummy variables). Results were similar to those in Table 7. Finally, although collinearity is
not indicated, Equation 3 was estimated using one test variable at a time. Coefficients remained significant for
PERSNL and ENGAGE and insignificant for the other three test variables.

6. Summary and Conclusions

The purpose of this paper was to examine the effectiveness of the AICPA’s voluntary

peer review regime for accounting firms performing audits. While current PCAOB rules in the

US require all auditors of public registrants to be inspected by the PCAOB, many firms and

many countries still operate in a voluntary or self-regulated system. Furthermore, the rush to

impose mandatory inspections following the accounting scandals in the US may not have

adequately considered the actual effectiveness of self-regulated peer review in the wake of

demands for reform.

We contribute to this debate by examining whether peer reviews in a self-regulatory

regime are informative regarding audit-firm quality. We tested the effectiveness of voluntary

peer review in two ways. First, we examined whether the information in peer-review reports in

the form of reviewer comments is associated with other observable indications of low quality in

an audit firm. We found that there does appear to be a link between the number of weaknesses

identified in the peer review report and firm-quality attributes such as participation in tax

shelters, professional certifications, the riskiness of the firm’s clientele, historical regulatory

problems, and size. There may be a similar link with staff workload, but these results are not as

strong. Second, we examined the relationship between the information in peer-review reports

and actual audit quality as measured by audit failure. We found that firms having weaknesses

related to personnel-management and engagement-performance are more likely to experience an

audit failure in terms of having a malpractice claim filed against them. We also found that audit

firms having more weaknesses in general identified in their peer-review reports are more likely

to experience audit failure.

Taken together, we interpret our findings as supporting the hypothesis that voluntary

peer-review reports provide reliable signals as to the actual quality of an audit firm. These

results complement previous studies showing a link between peer review and perceived audit-

firm quality (Hilary and Lennox 2005). These results are also encouraging and supportive of the

effectiveness of the self-regulatory peer-review model. However, we make no assertions as to

whether a voluntary regime is more effective than a mandatory regime. Indeed, there are many

benefits to a mandatory regime such as universal application, greater independence in the

inspection process, and the potential for a more in-depth examination. We also note that a

mandatory regime does involve significant costs to society and markets (Stigler 1971), and that

the benefits of a voluntary regime may be underestimated—especially in the wake of the

notorious audit failures in recent years. Our results suggest that the benefits of the voluntary

regime were more extensive than recently believed, and that this might be taken into account

when future regulations are adopted or modified. Our results also suggest that additional

research into the effectiveness of the self-regulatory model is needed to cast light upon, and aid

in, the continued scrutiny of the auditing profession.

References

American Institute of Certified Public Accountants (AICPA), 1996. System of quality control
for a CPA firm's accounting and auditing practice, New York, NY.

American Institute of Certified Public Accountants (AICPA), 2004. AICPA standards for
performing and reporting on peer reviews, New York, NY.

AICPA Peer Review Board, 2004. White paper on AICPA standards for performing and
reporting on peer reviews. AICPA, New York, NY.

Atherton, D.R., 1989. Quality and peer review: An update. Ohio CPA Journal (Sept/Oct): 49-
51.

Austin, K.R., and D.C. Lanston, 1981. Peer review: Its impact on quality control. Journal of
Accountancy (July): 78-82.

Berton, L., 1987. SEC to rule on peer review for accountants. Wall Street Journal (Jan 8): 1.

Bremser, W.G., and L.J. Gramling, 1988. CPA firm peer reviews: Do they improve audit
quality? The CPA Journal (May): 75-77.

Bunting, R.L., 2004. Transparency: The new peer review watchword. The CPA Journal
(October): 2-3.

Colbert, G., and M. Murray, 1998. The association between auditor quality and auditor size: An
analysis of small CPA firms. Journal of Accounting, Auditing, and Finance (Spring):
135-150.

DeAngelo, L.E., 1981. Auditor size and audit quality. Journal of Accounting and Economics
(December): 183-199.

Deis, D.R. Jr., and G.A. Giroux, 1992. Determinants of audit quality in the public sector. The
Accounting Review (July): 462-479.

Esho, N., A. Kirievsky, D. Ward, and R. Zurbruegg, 2004. Law and the determinants of
property-casualty insurance. Journal of Risk and Insurance 71: 265-283.

Felix, W.F., and D.F. Prawitt, 1993. Self-regulation: An assessment by SECPS members.
Journal of Accountancy (July): 20-21.

Fogarty, T.J., 1996. The imagery and reality of peer review in the U.S.: Insights from
institutional theory. Accounting, Organizations and Society (Feb/Apr): 243-267.

Francis, J., 2004. What do we know about audit quality? The British Accounting Review
(December), Vol. 34, No. 4: 345-368.

Francis, J.R., W.T. Andrews, Jr., and D.T. Simon, 1990. Voluntary peer reviews, audit quality
and proposals for mandatory peer reviews. Journal of Accounting, Auditing and Finance
(Winter): 369-377.

Giroux, G.A., D.R. Deis, Jr., and B. Bryan, 1995. The effect of peer review on audit economies.
Research in Accounting Regulation 9: 63-82.

Grumet, L., 2005. Rethinking the ‘peer’ in peer review. Accounting Today (September 26): 6.

Hilary, G., and C. Lennox, 2005. The credibility of self-regulation: Evidence from the
accounting profession’s peer review program. Journal of Accounting and Economics
(December): 211-229.

Kaiser, C. Jr., 1989. The mandatory SECPS membership vote. Journal of Accountancy
(August): 40-44.

Krishnan, J., and P.C. Schauer, 2000. The differentiation of quality among auditors: Evidence
from the not-for-profit sector. Auditing: A Journal of Practice & Theory (Fall): 9-25.

Latham, C.K., and M. Linville, 1998. A review of the literature in audit litigation. Journal of
Accounting Literature 17: 175-213.

Lys, T., and R.L. Watts, 1994. Lawsuits against auditors. Journal of Accounting Research
(Supplement): 65-93.

Mautz, R., 1984. Self-regulation: Criticism and a response. Journal of Accountancy (April): 56-
66.

Nieschwietz, R.J., J.J. Schultz, and M.F. Zimbelman, 2000. Empirical research on external
auditors’ detection of financial statement fraud. Journal of Accounting Literature 19:
190-246.

O’Keefe, T.B., D.A. Simunic, and M.T. Stein, 1994. The production of audit services: Evidence
from a major public accounting firm. Journal of Accounting Research 32: 241-261.

Palmrose, Z., 1988. An analysis of auditor litigation and audit service quality. The Accounting
Review (January): 55-73.

Public Company Accounting Oversight Board (PCAOB), 2005. Ethics and Independence Rules
Concerning Independence, Tax Services, and Contingent Fees, Washington DC.

Schneider, A., and R.J. Ramsay, 2000. Assessing the value added of peer and quality reviews of
CPA firms. Research in Accounting Regulation 14: 23-38.

Snyder, A., 2004. Increasing transparency in peer review: Members speak out. Journal of
Accountancy (December): 22-23.

Stice, J.D., 1991. Using financial and market information to identify pre-engagement factors
associated with lawsuits against auditors. The Accounting Review 66: 516-534.

Stigler, G., 1971. The theory of economic regulation. Bell Journal of Economics and
Management Science (Spring): 3-21.

U.S. House of Representatives, 2002. The Sarbanes-Oxley Act of 2002, Washington DC.

Wallace, W.A., 1991. Peer review filings and their implications for evaluating self-regulation.
Auditing: A Journal of Practice & Theory (Spring): 53-68.

----------, and K.S. Cravens, 1994. An exploratory content analysis of terminology in public
accounting firms’ responses to AICPA peer reviews. Research in Accounting Regulation
8: 3-32.

White, G.T., J.C. Wyer, and E.C. Janson, 1988. Peer review: Proposed regulation and current
compliance. Accounting Horizons (June): 27-30.

Table 1
Descriptive Peer Review Information (n=158)

Panel A: Observations by Peer-Review Year:

Year Observations a

1986 13
1987 7
1988 9
1989 8
1990 17
1991 10
1992 18
1993 12
1994 8
1995 22
1996 13
1997 11
1998 8
1999 2

Total 158

Panel B: Peer Review Findings:


Type of Finding                      Mean     Median     Range of findings     Observations with comments

Total number of findings 1.437 1 0-9 100

Independence .051 0 0-2 7


Client acceptance/continuance .038 0 0-1 6
Personnel management .127 0 0-3 17
Engagement performance .987 1 0-5 87
Monitoring .234 0 0-3 29
a Observation year represents the year of the peer review. The uneven distribution across years occurs because firms are
matched on claim year rather than the year of their most recent peer review.

Table 2
Descriptive Data for Independent Variables in Equation 1 (n = 158)

Variable                 Mean/%        Median       Std. Deviation       Min          Max

Continuous:

CPAS .50 .50 .125 .125 .833


WKLOAD a $140,582 $130,403 $45,063 $20,970 $300,136
CLNTRSK 1.49 0 3.167 0 28
LNFEES (000) a $2,872 $2,007 $2,521 $62 $15,793

Discrete:

TAXSHLT 15.2%
HISTORY 9.5%
SECPS 24.1%
DISTANC 31.6%
a Dollar amounts are shown here. These are transformed by taking the natural log for analysis purposes.

Variable definitions:

TAXSHLT (+): Dummy variable with a value of 1 if the firm owners are actively involved in
organizing, managing, receiving compensation from, or otherwise promoting tax
shelters to their clients; 0 otherwise.
CPAS (-): Percentage of professional staff at the firm who are CPAs.
WKLOAD (+): Natural log of the ratio of firm fees divided by the number of professional staff
in the firm.
CLNTRSK (+): Percentage of firm clients that are financial institutions or entertainment
companies.
HISTORY (+): Dummy variable with a value of 1 if the firm has been investigated by a state or
professional board for violations of professional regulations or practice
standards; 0 otherwise.
LNFEES (-): Natural log of total firm fees.
SECPS (+): Dummy variable with a value of 1 if the firm audits companies subject to SEC
regulation; 0 otherwise.
DISTANC (-): Dummy variable with a value of one if the reviewing firm is more than 100
miles away from the reviewed firm; 0 otherwise.

Table 3
Correlation Matrix for Independent Variables in Equation 1 (n=158) a

TAXSHLT CPAS WKLOAD CLNTRSK HISTORY LNFEES SECPS

CPAS b -0.010
WKLOAD 0.084 -0.169
CLNTRSK 0.017 -0.014 0.101
HISTORY -0.017 -0.020 0.012 0.013
LNFEES 0.157 -0.112 0.435 0.204 0.070
SECPS 0.133 -0.010 0.136 0.178 0.070 0.229
DISTANC -0.098 -0.029 -0.126 -0.057 -0.081 -0.235 -0.000
a Correlations greater than .15 are significantly different from zero at the .05 level.
b See Table 2 for variable definitions.

Table 4
Analysis of the Relation Between Peer-Review Findings and Audit-Firm Attributes Indicative of
Lower Audit Quality Using Ordered Logit (n=158)

Model: FINDINGS b = b1TAXSHLT + b2CPAS + b3WKLOAD + b4CLNTRSK + b5HISTORY
                    + b6LNFEES + {control variables}

                 Predicted Sign a          A: Estimate   Wald χ²          B: Estimate   Wald χ²

Test Variables:

TAXSHLT + 0.80 3.86** 0.72 2.95**


CPAS - -3.08 6.21*** -2.77 5.04**
WKLOAD + 0.69 1.92* 0.26 0.28
CLNTRSK + 0.13 8.05*** 0.08 3.02**
HISTORY + 0.90 3.36** 0.91 3.38**
LNFEES - -0.40 4.30** -0.25 1.66*

Control Variables:

SECPS + 0.76 4.61** 0.65 3.34**


DISTANC - -0.71 4.49** -0.57 2.83**
Year - -0.16 12.63***
Model χ²                              31.34***                 41.01***

a All p-values are one-tail where signs are predicted.
b Dependent variable (FINDINGS) equals total number of peer review comments. See Table 2 for additional variable
definitions.
***, **, * indicates significance at p < .01, .05, and .10 respectively.

Table 5
Descriptive Data for Claim Firms (CLAIM=1) and Non-Claim Firms (CLAIM=0)
(n=140)

                      Non-claim firms (n=70)              Claim firms (n=70)
Variable a       Mean     SD      Min      Max        Mean       SD      Min      Max

TOTFIND          1.00     1.19    0        4          1.47**     1.63    0        8
INDEP            0.01     0.12    0        1          0.04       0.20    0        1
ACCEPT           0.03     0.17    0        1          0.06       0.23    0        1
PERSNL           0.03     0.17    0        1          0.14**     0.35    0        1
ENGAGE           0.43     0.50    0        1          0.61**     0.49    0        1
MONITOR          0.17     0.38    0        1          0.11       0.32    0        1
LNFEES           14.18    0.92    11.05    16.18      14.85***   0.80    13.039   16.58
GROWTH           0.06     0.10    0.00     0.57       0.07       0.14    0.00     0.67
SECCLNT          0.41     1.51    0.00     9.00       0.80*      1.51    0.00     5.00
JURIS            0.09     0.28    0        1          0.23**     0.42    0        1
DEDUCT           2951     2025    714      10000      2633       1655    833      833
a See variable definitions in Table 5 (continued) below.
***, **, * indicates that the claim-firm mean is greater than the non-claim mean at p < .01, .05, and .10, respectively.

Table 5 (continued)

Variable Definitions:

TOTFIND (+): Total number of weaknesses identified in the peer review report/LOC.
INDEP (+): Dummy variable valued as 1 if the peer review report identified at least one weakness
related to independence; 0 otherwise
ACCEPT (+): Dummy variable valued as 1 if the peer review report identified at least one weakness
related to client acceptance and continuance; 0 otherwise
PERSNL (+): Dummy variable valued as 1 if the peer review report identified at least one weakness
related to personnel management; 0 otherwise
ENGAGE (+): Dummy variable valued as 1 if the peer review report identified at least one weakness
related to engagement performance; 0 otherwise
MONITOR (+): Dummy variable valued as 1 if the peer review report identified at least one weakness
related to monitoring; 0 otherwise
LNFEES (+): Natural log of firm fees
GROWTH (+): Percentage change in total audit firm staff in the year of the claim incident
SECCLNT (+): Percentage of total fees from SEC clients
JURIS (+): Dummy variable valued as 1 if the firm practiced in either AZ or TX; 0 otherwise
DEDUCT (-): Deductible included in insurance policy divided by number of firm owners

Table 6
Correlation Matrix for Independent Variables in Equations 2 and 3 (n=140) a

TOTFIND INDEP ACCEPT PERSNL ENGAGE MONITOR LNFEES GROWTH SECCLNT JURIS
INDEP 0.539
ACCEPT 0.529 0.387
PERSNL 0.405 0.407 0.061
ENGAGE 0.833 0.078 0.203 0.038
MONITOR 0.524 0.175 0.115 -0.052 0.187
LNFEES 0.029 0.089 0.080 0.039 0.028 -0.110
GROWTH 0.032 0.154 -0.058 0.162 -0.012 0.047 -0.179
SECCLNT 0.058 0.016 0.055 -0.106 0.033 0.120 0.124 -0.040
JURIS -0.057 -0.074 -0.091 0.148 -0.058 -0.176 0.014 0.002 -0.105
DEDUCT -0.045 0.083 -0.057 -0.064 -0.066 -0.052 -0.038 0.224 0.022 0.079
a Correlations with absolute values greater than or equal to .16 are significantly different from zero at the .05 level.
b See Table 5 for variable definitions.

Table 7
Analysis of the Relation Between Peer Review Findings and the Likelihood
of Audit Failure Using Logistic Models (n=140)

Model A: CLAIM b = b0 + b1TOTFIND + {control variables}


Model B: CLAIM = b0 + b1INDEP + b2ACCEPT + b3PERSNL + b4ENGAGE + b5MONITOR +
{control variables}

                 Predicted Sign a          A: Estimate   Wald χ²          B: Estimate   Wald χ²

Test Variables:

TOTFIND + 0.27 3.12**


INDEP + -0.46 0.08
ACCEPT + 0.29 0.08
PERSNL + 1.82 2.74**
ENGAGE + 0.91 4.78**
MONITOR + -0.53 0.73

Control Variables:

INTERCEPT -15.37 17.13 -15.94 16.25


LNFEES + 1.03 16.63*** 1.05 15.48***
GROWTH + 2.81 2.91** 2.62 1.95*
SECCLNT + 0.18 1.79* 0.23 2.75**
JURIS + 1.55 6.96*** 1.41 5.32**
DEDUCT - -0.01 2.26* -0.01 1.71*

Pseudo-R² 18.9% 22.3%


Log Likelihood 78.69*** 75.45***

a All p-values are one-tail where signs are predicted.
b Dependent variable (CLAIM) equals 1 if the firm had an audit-related claim filed against it; zero otherwise. See Table 5 for
additional variable definitions.
***, **, * indicates significance at p < .01, .05, and .10 respectively.

