
Running Head: PERFORMANCE MEASURES IN ECONOMIC DEVELOPMENT

Performance Measures in Economic Development

Sean M. Maguire

The Rockefeller College of Public Affairs & Policy, University at Albany


Abstract

This paper examines the use of performance measures in economic development by state and local governments within the United States. To frame the discussion, a brief review of the history and use of general performance measures in government is provided, focusing on the early work by C.E. Ridley for the International City Managers Association in 1943. Next, the paper discusses how and where performance measures are used in economic development and presents some of the challenges that exist in applying them to the field. While there is broad agreement on the type of information needed to evaluate program performance, across the field of economic development there is a lack of consensus on the data that should be used in developing outcomes. Generally, two types of measures are used in economic development: outputs and outcomes. Sources differentiate between the two, with outputs defined as what was done (e.g., money spent, hours worked, calls received) and outcomes defined as the consequence (e.g., jobs created, investment made). Outputs are best used to measure efficiency against the inputs dedicated to the activity (e.g., jobs created for each dollar expended). Outcomes can be a challenge to collect, and their accuracy requires a significant level of due diligence, but they typically provide the best measure of how well a program is achieving its goals (e.g., the role an economic development program plays in decisions to locate or expand a business). Finally, this paper reviews attempts by New York State to evaluate two of its primary economic development programs available to local governments: Empire Zones and Industrial Development Agencies.


Performance Measures in Economic Development

Introduction of Performance Measurement

A century ago, government was considered commendable if it was honest with the people (Ridley, 1943, p. 1). However, more than 70 years ago, the International City Managers Association identified a demand that government become more accountable. Accountability is a fusion of the honesty demands of a century ago and the efficiency demands of an industrial and post-industrial society. The post-industrial society was well accustomed to a private sector focused on good performance measured by profits. This experience was translated into the public sector at the turn of the century with the establishment of the ICMA and its focused attention on performance.

Government, in contrast to the private sector, does not evaluate or measure performance by profits or growth. Its performance is typically evaluated on how efficiently essential services are provided. Because of this, public sector professionals were faced with identifying appropriate metrics for how well municipal services are being provided. The need for and importance of local government performance measurement is evident from the body of information that exists on the subject; even a rudimentary search returns any number of resources, from academic to professional and from public opinion to public policy.

At present, governments face limited fiscal resources at the federal, state and local levels. Unlike the federal government, which can carry budget deficits, most other levels of government cannot, or their constituents will not tolerate it. In making difficult decisions, officials are increasingly looking to performance measurement to evaluate services and programs. Performance measurement allows officials to make regular measurements of the results (outcomes) and efficiency of services or programs (Hatry, 2006). Doing so allows officials to make recommendations and decisions based on objective data collected on programs.

Building on Hatry's assumption, Czohara and Melkers offer their own view of how that information will be used. They indicate that "the rationale behind the adoption of performance measures is that better management, program and budgeting decisions will be made" (2004, p. 6). Of course, this assumes that adequate steps are taken to appropriately evaluate programs and policies. Those results, or outcomes and outputs, are used by public officials to identify where potential opportunities exist to improve efficiencies and service delivery. These efficiencies often have the goal of reducing the cost to users or improving overall service delivery to the end user or consumer. An example is the number of jobs created by an economic development program: the evaluator may combine this with other inputs to develop a per-job measurement, such as staff hours per job created or municipal cost per job created (a minimal computational sketch of such a measure appears at the end of this section).

Governments typically collect a great deal of information on programs, though this information is not necessarily collected for the purposes of performance measurement. In fact, methods to ensure honesty in the administration of government and delivery of services have long been devised and are similar to techniques used in performance measurement. Techniques such as audits, legal checks and balances, and the decentralization of authority have been used to assure the public that government is honest in how it operates and delivers services. However, these techniques alone do not address how efficient and effective government is in the delivery of necessary public services (Ridley, 1943). Generally, traditional audits may uncover vulnerabilities to efficiency but do not reveal performance measures unless an audit was specifically targeted to do so in its scope.
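To show how such a per-job efficiency ratio is assembled, the following is a minimal sketch in Python. All figures and function names are hypothetical illustrations; they are not drawn from Hatry or from any of the programs discussed later in this paper.

```python
# Illustrative sketch: computing simple per-job efficiency measures
# from hypothetical program figures.

def per_job_measures(program_cost_dollars: float,
                     staff_hours: float,
                     jobs_created: int) -> dict:
    """Return cost-per-job and staff-hours-per-job for one program."""
    if jobs_created <= 0:
        raise ValueError("Efficiency ratios are undefined without jobs created.")
    return {
        "cost_per_job": program_cost_dollars / jobs_created,
        "staff_hours_per_job": staff_hours / jobs_created,
    }

# Hypothetical example: a $250,000 program using 1,800 staff hours
# that reports 40 new jobs.
print(per_job_measures(250_000, 1_800, 40))
# {'cost_per_job': 6250.0, 'staff_hours_per_job': 45.0}
```

The point of the sketch is only that an efficiency measure divides an output or outcome by the inputs dedicated to it; which inputs and which outcomes are appropriate is exactly the question the rest of this paper takes up.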


History of Performance Measurement

Academics and professionals alike typically agree that the landmark monograph on municipal performance measures was issued by the International City Managers Association (ICMA) in 1938, authored by C.E. Ridley and entitled Measuring Municipal Activities. In the second edition, the author affirms the continued validity of the first edition, noting that in the literature that had appeared since 1938 there had been no developments that would necessitate any substantive changes to the document. Rather, the second edition was an update incorporating new developments since 1938.

Since the second edition is simply an expansion, it is still important to note that Ridley continued to acknowledge, from the outset, the challenge in measuring the activities of our governments. In the foreword to the first edition he states that governmental activities "cannot be measured in terms of simple units or indices which automatically evaluate results" (1943, p. iii). For example, a government cannot simply run a sales report to determine how well it is performing, as some in the private sector can do. This statement is part of the foundation on which the field of performance measurement has established itself. As noted above, the statements from Hatry and from Czohara and Melkers are in line with Ridley's position. Ridley's landmark work continues to be a source of information more than 70 years later.

Municipal governments can collect many different types of data, through processes such as audits, as previously noted, and in their day-to-day activities. For example, a budget department may track income and expenses; a social service department may track clients moving in and out of its care; or a fire department may track how many calls it has responded to. This information can be used in appraising municipal government (Ridley, 1943, p. 1).


Ridley, in focusing on data useful for performance measurement, notes that one of the oldest and, at the time, most common measures was a municipality's tax rate. He notes, however, that there are two significant challenges to using such information to measure the performance of municipal government. The first is that it was, and today still is, common practice to assess property at less than its full or market value. Assessed property values do not reflect current market conditions and do not respond to things such as seasonal fluctuations. Ridley acknowledges that in order to understand the significance of the tax rate as a measure, "we would have to know the basis of assessment" (1943, p. 2).

The second challenge Ridley presents is that municipal income comes from sources other than the property tax (1943, p. 2). For example, municipalities typically receive other income from state and federal aid. Municipalities may also receive revenue from fines, the sale of assets, and interest earned on short-term investments when a public body has enough liquidity to place funds in such accounts. To better understand the number of different sources of municipal income, one need only review a municipal budget. In New York State, local municipal budgets are filed with the Office of the State Comptroller (OSC). A basic review of Level One municipal financial data from the OSC reveals multiple revenue categories beyond real property taxes, including:

- Real Property Taxes and Assessments
- Other Real Property Items
- Sales and Use Tax
- Other Non-Property Taxes
- Charges for Services
- Charges to Other Governments
- Use and Sale of Property
- Other Local Revenues
- State Aid
- Federal Aid
- Proceeds of Debt
- Other Sources (2008)
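Ridley's second challenge can be made concrete with a minimal sketch. The category names below mirror the OSC Level One list above, but the dollar figures are entirely invented for illustration.

```python
# Illustrative sketch: how small a share of total revenue the property
# tax may actually be, echoing Ridley's second challenge. Category
# names follow the OSC Level One list; all amounts are hypothetical.

revenues = {
    "Real Property Taxes and Assessments": 4_200_000,
    "Sales and Use Tax": 2_900_000,
    "Charges for Services": 850_000,
    "State Aid": 1_600_000,
    "Federal Aid": 700_000,
    "Other Local Revenues": 350_000,
}

total = sum(revenues.values())
property_tax_share = revenues["Real Property Taxes and Assessments"] / total

print(f"Total revenue: ${total:,}")
print(f"Property tax share: {property_tax_share:.1%}")
# With these invented figures, property taxes are well under half of
# total revenue, so the tax rate alone says little about performance.
```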

Ridley continues by arguing against other common attempts to measure local government activities. Arguing against the use of a municipality's total expenditures, Ridley compares this to assuming that keeping spending low is a measure of good performance; in reality, this method of measurement only demonstrates restraint in spending. Limiting municipal expenses is not an indicator of efficiency because it does not address things like quality of life or level of service within a community. If two communities expended the same amount of money each year on recreation, additional research may find that "one city may have supervised playgrounds, another may not" (Ridley, 1943, p. 2). In other words, there is no simple formula that can be applied across all of the public services we demand and provide that yields an accurate measure of effectiveness and efficiency. The scope and breadth of municipal services warrant individualized attention and evaluation for each type of program. Ridley supports this position by stating that "the term measurement should be construed to mean any technique which seeks objectively to appraise the results of a program of action or to compare the results of alternative programs" (Ridley, 1943, p. iii).


Ridley identifies a basic framework for developing a records system for any municipal function, which includes:

1. Determination of the information desired;
2. Definition of units;
3. Construction of summary report forms;
4. Specifications of the records procedure;
5. Installation and adaptation of records in demonstration cities, and;
6. Observation of the uses to which records are put and the benefits accruing therefrom. (Ridley, 1943)

In addition to the ICMA, on which this paper has focused primarily, it is important to note that the Governmental Accounting Standards Board (GASB) is perhaps the second major source of information regarding performance measurement. Established in 1984, GASB is perhaps best known for the accounting guidelines which municipalities adopt in their management practices, known as generally accepted accounting principles (GAAP). Local municipalities often look to GASB for accounting and fiscal reporting standards. Building on this data-driven experience, GASB has been involved with its project on service efforts and accomplishments (SEA), better known as performance measurement, since its inception (GASB, n.d.). It has developed voluntary guidance to assist local governments in evaluating their performance.

Performance Measures in Economic Development

One of the most significant challenges in measuring the performance of programs and efforts in the realm of economic development is devising objective, verifiable and accountable criteria on which to build a framework for such a metric. Economic development, which is frequently paired with functions such as planning and finance, suffers from the same challenges. Ridley indicates that the finance and planning functions of municipal government "remain the most difficult of measurement and little progress can be reported" (1943, p. xiii). This is perhaps because these functions are not directly related to services consumed by the public. Finance is important as an internal control, and planning is important in its role as a regulating function. These types of services are not necessarily consumer driven. Economic development is similar: its programs have consumers, but they tend to be businesses, which vary in composition, size and purpose. Because of the difficulty in sizing up the consumer, it is then somewhat difficult to make appropriate measurements.

Czohara and Melkers, citing Clarke and Gaile, note that efforts to evaluate economic development policy have become "a quagmire of good intentions and bad measures" (2004, p. 3). While there are vast resources on performance measures and service efforts and accomplishments reporting for areas of general government, the literature on this topic in economic development appears to be quite limited. The best sources number a handful at best and are primarily centered on the work of Hatry. That is then complicated by the fact that, as a central resource to the field, Hatry is typically cited by other authors writing on the topic. Another problem, according to Czohara and Melkers, is that there is no agreed-upon methodology for measuring performance in public economic development programs (2004). From personal experience in economic development, methodology is often driven by what the service provider is attempting to show success in. For example, a program attempting to show its progress in creating or retaining jobs in a community will focus on only those figures.


It is common, based on experience, for reports to fail to account for the net impact on jobs created, retained or lost. That finding appears later in this paper with the New York State Empire Zone program: it was only after recommendations from the State Comptroller were received that the program began tracking the net change in jobs as a result of the program.

In Warren's piece about the role of performance measurement in economic development, he credits Hatry as an expert in the field and with the concept that performance measures should be aimed at achieving clearly identified outcomes (2005). Taking the example of the Empire Zone program, where program goals are job creation and investment, the outcomes would be measures such as the net change in jobs, the amount of money invested in the local economy through construction or other avenues, and the overall change in the economic climate of the subject community. Policies need to identify the outcomes they seek to achieve before undertaking the activity, to ensure an objective analysis of program performance.

Finally, in his work on local economic development performance measurement, Lindblad set out to identify why some municipalities use performance measurement in economic development and some do not (2006). Using data from the International City Managers Association's (ICMA) 1999 economic development survey of U.S. municipalities with populations of more than 10,000 persons, Lindblad examined two hypotheses:

1. Structure and agency will explain performance measurement in economic development, and;
2. Compared to structure, agency will have more impact on performance measurement in economic development (Lindblad, 2006).


In his findings, he determined that the variables with the strongest relationship to the likelihood of using performance measurement in economic development were population size and city expenditures.

Economic Development Performance Measures in Practice

In their work on performance measurement in economic development, Hatry et al. compiled 12 key characteristics as criteria for the development of measurement procedures. According to the authors, a performance monitoring system should focus on service outcomes and quality and on helping program managers improve their operations. As for conducting the analysis, the authors indicate that procedures should provide frequent and timely performance information, and programs should focus on the outcomes realized by the end user or client. Performance indicators should not be limited to one or two measures; rather, they should include multiple indicators to assess service quality and outcomes (Hatry et al., 1990, p. 4).

Hatry et al. continue their list of recommended criteria for performance measurement by discussing data. The authors believe that measures should utilize nontraditional data sources, such as client surveys and unemployment insurance data, to help assess service quality and outcomes (Hatry et al., 1990, p. 5). This data supplements traditional sources used by government such as the decennial U.S. Census, the quinquennial Economic Census, or community surveys. As for outcomes, Hatry et al. believe that measurement procedures should evaluate both intermediate and end outcomes. They should also include some measurement of how much the contribution made by the agency or program influenced the client's outcome.


Continuing the outcome focus, the measurement should also identify and report service quality and outcome indicators by client characteristics (Hatry et al., 1990, p. 5). The authors indicate that measures should provide the ability to compare previous-year performance across different categories with explanatory factors (Hatry et al., 1990, p. 5). This allows users of the information to make appropriate comparisons of the program across different years. Doing so allows for informed conclusions about changes in program performance and empirical support for policy decisions made in direct response to such changes. When the data collected change from year to year, it is difficult to use them without manipulation, if comparison is even possible. Consistency provides an appropriate benchmark for performance data. Finally, Hatry et al. state that the design of data collection and management procedures should be as inexpensive as possible and should keep demands on personnel time to a minimum (1990, p. 6). After all of this discussion of efficiency and performance, this final point is a natural step in developing a performance measure for economic development, or for any activity.

Consistent with how performance measurement is used in various fields, Warren identifies and then defines the interrelated elements organizations use to measure the performance of an operation (2005). The elements include:

- Inputs, which are resources such as money, staff time and other items used to produce outputs and outcomes. Inputs indicate the amount of a particular resource that is actually used to produce a result;

- Activities, which are the actions a program takes to achieve a particular result;


- Outputs, which are the amounts of products created and services delivered in a reporting period, such as the number of training programs conducted, the number of classes taught, or the number of clients served;

- Outcomes, which are changes in knowledge, skills, attitudes, values, behavior, or condition that indicate progress toward achieving the program's mission and objectives. Outcomes can be short term, intermediate, or long term, and are linked to a program's overall mission (2005).

Warren also provides a model for developing performance measures based on work by the United Way of America (2005, p. 2).

In his model, he includes examples of what may be considered for each measure element. For instance, he suggests that inputs may include resources such as money or staff time; outputs may include what was created or how much training was delivered; and outcomes should reflect changes in skills, values or behavior related to the program's goals and objectives. Taking Warren's interpretation of the United Way model, the performance measurement framework is best summarized by defining each element with a simple question. For inputs, the question is what resource was used. For activities, the question is what was done. For outputs, the evaluator asks what happened. Finally, for outcomes, the question is what the consequence was.

The most comprehensive compiled listing of recommended economic development performance measures comes from the National Center for Public Productivity at Rutgers, the State University of New Jersey. It references the work of Hatry and identifies inputs, outputs, outcomes and measures of efficiency.


The information generally agrees with Warren's work on the role of performance measurement, but it does not include the activities element and it adds efficiency as an element. Below are the measures from its Recommended Service Efforts and Accomplishments (SEA) Reporting Indicators for Economic Development (a brief computational sketch of the efficiency indicators follows the list).

Inputs
- Dollars spent on the program's activities (current and constant dollars).
- Number of staff-hours expended by the program.

Outputs
- Number and percentage of business prospects identified that may be interested in locating.
- Number of businesses from target industries that are interested in locating.
- Number of contacts made with firms interested in locating.
- Number of firms that received assistance from the program (by type of assistance).
- Percentage of leveraged (non-governmental) funds used to finance the project.

Outcomes
- Intermediate Outcomes
  - Number of visits by interested businesses that received assistance.
  - Number and percentage of responses to advertising or direct mail solicitations.
- Longer-term Outcomes
  - Number and percentage of firms that received assistance and located elsewhere.
  - Number and percentage of firms receiving assistance that located in the jurisdiction and that felt the assistance contributed to their location decision.
  - Number of actual jobs created by assistance 12 months/24 months after the initial contact with the program (and comparison with the projected number of jobs created).
  - Average wage of jobs created by locating firms that received assistance.
  - Dollars of capital investment made by locating firms receiving assistance 12 months/24 months after the announcement of their location decision.
  - Amount of added tax revenue relating to assisted firms that located in the jurisdiction.
  - Percentage of clients rating the timeliness of each service they received as excellent, good, fair, or poor.
  - Percentage of clients rating the helpfulness of each service they received as excellent, good, fair, or poor.
  - Percentage of clients locating elsewhere for reasons over which the agency had some influence.
  - Estimated number of workers displaced by assisted firms that located.

Efficiency
- Program expenditures per actual job created at 12 months/24 months after receiving assistance.
- Program expenditures per estimated tax dollar generated by client firms (The National Center for Public Productivity, 2004).
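The two efficiency indicators at the end of the list are simple ratios. The following is a minimal sketch of how a program might compute them; the record layout and all dollar and job figures are hypothetical, not taken from the NCPP document.

```python
# Illustrative sketch of the two SEA efficiency indicators listed above.
# The record layout and figures are hypothetical; an actual program
# would draw them from its own expenditure and client records.

from dataclasses import dataclass

@dataclass
class ProgramYear:
    expenditures: float          # program expenditures for the year
    jobs_created_12mo: int       # actual jobs created 12 months after assistance
    est_tax_revenue: float       # estimated tax dollars generated by client firms

def efficiency_indicators(year: ProgramYear) -> dict:
    return {
        "expenditures_per_job": year.expenditures / max(year.jobs_created_12mo, 1),
        "expenditures_per_tax_dollar": year.expenditures / year.est_tax_revenue,
    }

# Hypothetical year: $500,000 in expenditures, 60 jobs created at 12 months,
# and $1.2 million in estimated tax revenue generated by assisted firms.
print(efficiency_indicators(ProgramYear(500_000, 60, 1_200_000)))
# expenditures_per_job is roughly $8,333; expenditures_per_tax_dollar is roughly 0.42
```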


Performance Measurement in New York State Economic Development Programs

For at least the past five years, economic development programs in New York State have fallen under increased scrutiny. This section focuses on two programs that have been the subject of such scrutiny: Empire Zones and Industrial Development Agencies. While Industrial Development Agencies have existed since 1969, when New York State enacted legislation providing for the creation of such agencies as public benefit corporations (Hevesi, 2006), it was the Empire Zone program that was subject to the first statewide assessment by the Office of the State Comptroller in 2004.

Empire Zones

In 1986, the New York State Legislature enacted legislation implementing an Economic Development Zones program, similar to the Enterprise Zone program created in the dock districts of London to spur economic development in economically challenged areas. According to the Office of the State Comptroller, the Empire Zone program in New York State was intended to bring jobs and economic opportunity to areas characterized by "pervasive poverty, high unemployment, and overall economic distress" (Hevesi, 2004, p. 9). According to Hevesi, between 1986, when the program was established in New York State (then called the Economic Development Zones, or EDZ, program), and 1988, this type of zone-based economic development incentive program spread from 37 states to every state in the nation (2004, p. 9). This development pitted the states against one another in competing for new economic development opportunities from new and expanding businesses nationwide. Realistically, the competition was regionally based, with New York State competing against programs such as Pennsylvania's Keystone Opportunity Zone program.


The Empire Zone program was initially developed with a limited number of tax incentives for business, including a wage tax credit for new jobs created and an investment tax credit for monies invested in capital expenditures related to the growth of a business. By 2004, zones provided "essentially tax-free locations" for businesses to locate or expand (Hevesi, 2004, p. 1). However, tax-incentive-based economic development programs are often difficult to measure because the cost to a state or municipality is not money given outright, as a grant would be, but rather forgone tax revenue. In 2004, the Office of the State Comptroller estimated the Empire Zone program would cost the State $291 million in forgone tax revenue (Hevesi, 2004, p. 1). Measuring what $291 million in forgone revenue meant to the state economy in 2004, and in every year the program has existed, has been the challenge (a brief computational sketch of a per-job forgone-revenue measure appears below).

Despite a program that had grown and evolved, the Office of the State Comptroller identified one fundamental flaw: the EDZ program lacked reliable data and evidence of the program's success (Hevesi, 2004, p. 1). The problem of measuring the performance of the Empire Zone program had been previously highlighted in 1995 through a formal audit of the Department of Economic Development by the State Comptroller. The audit led the Department's Commissioner, Charles Gargano, to acknowledge the need to develop and accumulate measurable indicators of the program's success (Hevesi, 2004, p. 1). However, according to Hevesi, during a second audit through 2004, the Department of Economic Development noted that the need to implement programmatic changes had delayed the implementation of evaluation systems that the DED agrees are necessary (2004, p. 1).
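The following is a minimal sketch of the kind of per-job cost measure at issue here, combining the forgone-revenue framing above with the net job change the Comptroller's reports recommend tracking. All figures are hypothetical and are not drawn from the Comptroller's data.

```python
# Illustrative sketch (hypothetical figures): net job change for a
# zone-style incentive program and the forgone tax revenue per net job.

def net_job_change(created: int, lost: int) -> int:
    """Net employment change over a reporting period: jobs created minus jobs lost."""
    return created - lost

def forgone_revenue_per_net_job(forgone_tax_revenue: float, net_jobs: int) -> float:
    if net_jobs <= 0:
        raise ValueError("Per-job cost is undefined when net job change is zero or negative.")
    return forgone_tax_revenue / net_jobs

# Hypothetical zone: 500 jobs created, 250 lost, and $12 million in
# forgone state and local tax revenue.
jobs = net_job_change(created=500, lost=250)
print(jobs)                                                    # 250
print(f"${forgone_revenue_per_net_job(12_000_000, jobs):,.0f} per net job")  # $48,000 per net job
```

The contrast between gross jobs created (500) and net change (250) is exactly the gap the Comptroller's recommended reporting reforms, discussed next, were meant to close.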


The conclusion of the 2004 report by the Office of the State Comptroller included a reform proposal for the Empire Zone program. The proposal suggested nearly 30 reforms in six categories. Of those, the following reforms can be considered directly related to performance measurement within the program:

- Develop and implement standards of performance for recipients of zone benefits;
- Rely on Department of Economic Development (DED) staff to gather the data necessary to evaluate the performance of each Zone and the certified businesses it contains;
- The [Empire Zone Oversight and Designation] Board should seek ways to apply corporate governance and public accountability standards to its functions; responsibility for evaluating the accuracy of reported data should clearly lie with the DED;
- Local Zone Administrative Boards and Zone Coordinators would be required to formally review participating businesses every two years to ensure that they are meeting stated goals;
- Zone Coordinators would be required to identify, for DED staff and ultimately for the Empire Zone Oversight and Designation Board, certified businesses with outstanding performance records, as well as those that might not be meeting performance standards;
- DED staff would be responsible for training Zone Coordinators and Zone Administrative Boards to create uniform reporting standards and methods of data collection. Staff would provide ongoing assessments of the quality of information submitted;
- DED staff would be responsible for compiling a Zone Annual Report (ZAR) for each zone based on BARs [Business Annual Reports], and would present the Zone Administrative Board and Zone Coordinator with zone-specific findings contained in the reports (Hevesi, 2004, pp. 5-8).


While the report identified a number of performance measurement-related reforms, perhaps most valuable was the identification of consistent inputs to be used in evaluating the effectiveness of the program through outputs and outcomes. The recommended reforms identified a need to change the Business Annual Report to capture more relevant data, including the average wage for new jobs created, the number of targeted (low-income) employees hired, and the net change in employment from one reporting period to the next (Hevesi, 2004, p. 7). The reforms also sought to collect the actual amount a business claimed in tax credits in the prior year, in addition to projected credits for the future year.

In a follow-up report, the Office of the State Comptroller revisited the Empire Zones program in 2007. This report came approximately two years after legislative changes were put into place in 2005. Those changes instituted requirements for a business development plan, a cost-benefit analysis of businesses prior to certification, approval of all applications for certification by local Zone Administrative Boards, and a requirement that businesses submit certified annual reports to local Empire Zone offices (DiNapoli, 2007, p. 5). The follow-up report concluded that local Empire Zone officials had made only limited progress in correcting the problems identified in the initial report: of the 12 recommendations, only four were fully implemented by all Zones, and Zones had either fully or partially implemented three other recommendations (DiNapoli, 2007). As a result, the program still lacked the ability to adequately develop and present performance measures.

Industrial Development Agencies


In 1969, the New York State Legislature enabled the creation of Industrial Development Agencies (IDAs) as public benefit corporations intended to improve the economic conditions within their communities. Since that time, 177 IDAs have been established, and at the time of the Office of the State Comptroller's report, 115 were considered active (Hevesi, 2006, p. 6). In many communities, the IDA is one of the best-known economic development programs. It is also the one program that has a presence in each county across New York State.

To understand the attempts to measure the performance of Industrial Development Agencies in New York State, it is important to first understand the structure of an IDA and what it is empowered to do. Industrial Development Agencies can be structured and operated in various ways. The governing structure of an IDA is a board of directors with three to seven members. Industrial Development Agencies do not have taxation powers and thus typically maintain their operations by charging various fees to the businesses that participate in their projects (Hevesi, 2006, p. 7). Industrial Development Agencies may have staff assigned to assist in their work or may contract with an outside firm for assistance, including legal counsel.

A common misconception among the general public is that Industrial Development Agencies provide loans to businesses under the auspices of economic development. Because IDAs are often part of an economic development package that may include financing for a business, this is understandable. However, IDAs do not lend money directly. Instead, the role of the IDA is to issue bonds, which are purchased by financial institutions, generating the capital that an IDA can use to provide financing to a business or project. It is through this process that IDAs are able to directly issue debt for the benefit of economic development (Hevesi, 2006, p. 7).


Industrial Development Agencies are also able to acquire, own or dispose of property, with property owned by an IDA being exempt from property taxes and mortgage recording taxes (Hevesi, 2006, p. 7). This ability allows IDAs to offer lease-back arrangements to economic development projects that can reduce or eliminate property taxes and the mortgage recording tax. The benefit is utilized by having the IDA purchase and own a property for the benefit of an economic development project or business; the IDA then leases the property back to the project or business involved.

Finally, IDAs are able to make purchases in support of approved projects that are exempt from state and local sales tax (Hevesi, 2006, p. 7). This can translate into a savings of at least 4% (the state sales tax) plus the applicable local sales tax, depending on the locality where the purchase was made or delivered. This benefit is usually limited to capital improvements and equipment purchased through an umbrella sales tax exemption the IDA extends to a project. For example, a company purchasing a machine used in its manufacturing process will be exempt from state and local sales tax on that equipment by working through the IDA (a brief arithmetic sketch of this savings appears below).

Industrial Development Agencies have been subject to increased accountability following reforms enacted in the late 1980s and early 1990s, according to the Office of the State Comptroller. The changes enacted in 1989 included a requirement that an annual financial statement be filed with the State Comptroller, the Commissioner of Economic Development, and the governing body of the municipality for whose benefit the IDA was created. The requirements directly related to performance measurement included that these statements contain data concerning the assistance provided and the jobs created or retained for each project (Hevesi, 2006, p. 11).
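To make the sales tax benefit concrete, the following is a minimal arithmetic sketch. The 4% state rate comes from the discussion above; the local rate and purchase price are hypothetical.

```python
# Sketch of the sales tax savings on an IDA-exempt equipment purchase.
# The 4% state rate is from the text above; the local rate and the
# purchase price are hypothetical.

def sales_tax_savings(purchase_price: float,
                      state_rate: float = 0.04,
                      local_rate: float = 0.04) -> float:
    """Tax that would have been owed without the IDA exemption."""
    return purchase_price * (state_rate + local_rate)

# Hypothetical $1,000,000 manufacturing machine in a locality with a 4% local rate.
print(f"${sales_tax_savings(1_000_000):,.0f} in combined sales tax avoided")  # $80,000
```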


In 1993, additional reforms were intended to make Industrial Development Agencies more accountable to their benefiting municipalities and the public at large. These changes included the adoption of a uniform tax exemption policy and a provision allowing the State Comptroller to determine whether an IDA has filed a substantially complete financial statement (Hevesi, 2006, p. 11).

Arguably the most significant development in performance measurement for Industrial Development Agencies in New York State came in 2006, when Governor George Pataki signed the Public Authorities Accountability Act, which led to the launch of the Public Authority Reporting and Information System (PARIS) in 2007. PARIS became the primary reporting application for various public authorities across New York State, including IDAs. In its 2009 report, the Office of the State Comptroller reported that the quality of data reported by IDAs for the 2007 year improved due to "enhanced oversight [and the implementation of] PARIS, which was launched in November 2007" (DiNapoli, 2009, p. 2).

The challenge that existed with Industrial Development Agencies, as identified by the Office of the State Comptroller, was determining the actual number of jobs created and retained and the cost of those jobs to taxpayers in the form of forgone local taxes (DiNapoli, 2009, p. 2). Since these two items, along with capital investment, are potentially the most significant performance measures for IDAs, the accuracy and reliability of this information is critical.

Conclusion

While some attempts have been made in larger government settings to measure performance in economic development programs, such as those documented by Lindblad and the audits by New York's Office of the State Comptroller, there is still a great deal of room to improve the practice.


First and foremost is to standardize measurement, as Hatry provides for, which would allow for better program comparison. As Czohara and Melkers highlighted, it is imperative that leaders in the field of economic development agree on a basic set of performance measurement criteria. Performance measurement in economic development must also become a standard, voluntary practice across all levels of government, not just in the settings predicted by Lindblad. Forced program evaluations by outside agencies do uncover vulnerabilities, as proved by the work of Hevesi and DiNapoli in their roles as State Comptroller. However, those weaknesses and threats should have been discovered long ago through a regular performance measurement process. Effective performance measures in economic development will assist officials and professionals in making responsible public policy decisions moving forward.


References

Ammons, D.N. (2001). Municipal Benchmarks: Assessing local performance and establishing community standards. Thousand Oaks, CA: Sage Publications.

Czohara, L., & Melkers, J. (2004). Performance Measurement in State Economic Development Agencies: Lessons and Next Steps for GDITT. Retrieved from Georgia State University website: http://aysps.gsu.edu/report92.pdf

DiNapoli, T.P. (2007). The Effectiveness of Empire Zones: Follow-up Report (Report 2007-MS-2). Retrieved from the New York State Office of the State Comptroller: http://www.osc.state.ny.us

DiNapoli, T.P. (2009). Annual Performance Report on New York State's Industrial Development Agencies: Fiscal Year Ending 2007. Retrieved from the New York State Office of the State Comptroller: http://www.osc.state.ny.us

Hatry, H.P. (2006). Performance Measurement: Getting Results (2nd ed.). Washington, D.C.: The Urban Institute Press.

Hatry, H.P., Fall, M., Singer, T.O., & Liner, E.B. (1990). Monitoring the Outcomes of Economic Development Programs. Washington, D.C.: The Urban Institute Press.

Hevesi, A.G. (2004). Assessing the Empire Zones Program: Reforms Needed to Improve Program Evaluation and Effectiveness (Report 3-2005). Retrieved from the New York State Office of the State Comptroller: http://www.osc.state.ny.us

Hevesi, A.G. (2006). Industrial Development Agencies in New York State: Background, Issues and Recommendations. Retrieved from the New York State Office of the State Comptroller: http://www.osc.state.ny.us


Ingraham, P.W., Joyce, P.G., & Kneedler Donahue, A. (2003). Government Performance. Baltimore: The Johns Hopkins University Press.

Lindblad, M.R. (2006). Performance Measurement in Local Economic Development. Urban Affairs Review, 41(5), 646-672. doi:10.1177/1078087406286462

The National Center for Public Productivity. (2004). Recommended Service Efforts and Accomplishments (SEA) Reporting Indicators for Economic Development. Retrieved from the National Center for Public Productivity website: http://andromeda.rutgers.edu/~nccp/cdgp/econdev.htm

The New York State Office of the State Comptroller. (2008). Financial Data for Local Governments (Level One). Retrieved from http://www.osc.state.ny.us/localgov/datanstat/findata/index_choice.htm

Ridley, C.E. (1943). Measuring Municipal Activities. Chicago: The International City Managers Association.

Warren, J. (2005). The Role of Performance Measurement in Economic Development. Retrieved December 1, 2009, from Angelou Economics website: http://www.angeloueconomics.com/measuring_ed.html
