
Digital Technology and the Productivity Paradox: After Ten Years, What Has Been Learned?

by

Paul A. David Stanford University & All Souls College, Oxford

Draft: 20 May, 1999

This paper has been prepared for presentation to the Conference Understanding the Digital Economy: Data, Tools and Research, held at the U.S. Department of Commerce, Washington, D.C., 25-26 May 1999.

Please do not reproduce without the author's permission.

Contact Author: Professor Paul A. David, All Souls College, Oxford OX1 4AL, UK Tele: 44+(0)1865+279313 (direct); +279281 (secy: M. Holme); Fax:44+(0)1865+279299; E-mail:<paul.david@economics.ox.ac.uk>

Digital Technology and the Productivity Paradox: After Ten Years, What Has Been Learned?

1. The Computer Revolution and the Economy

Over the past forty years, computers have evolved from a specialised and limited role in the information processing and communication processes of modern organisations to become a general-purpose tool that can be found in use virtually everywhere, although not everywhere to the same extent. Whereas once "electronic computers" were large machines surrounded by peripheral equipment and tended by specialised technical staff working in specially constructed and air-conditioned centres, today computing equipment is to be found on the desktops and work areas of secretaries, factory workers and shipping clerks, often side by side with the telecommunication equipment linking organisations to their suppliers and customers. In the process, computers and networks of computers have become an integral part of the research and design operations of most enterprises and, increasingly, an essential tool supporting control and decision-making at both middle and top management levels. In the latter half of this forty-year revolution, microprocessors allowed computers to escape from their 'boxes', embedding information processing in a growing array of artefacts as diverse as greeting cards and automobiles and thus extending the 'reach' of this technology well beyond its traditional boundaries.

Changes attributed to this technology include new patterns of work organisation and worker productivity, job creation and loss, profit and loss of companies, and, ultimately, prospects for economic growth, national security and the quality of life. (Genetic engineering may one day inherit this mantle.) Not since the opening of the atomic age, with its promises of power too cheap to meter and threats of nuclear incineration, has a technology so deeply captured the imagination of the public. It is not surprising, therefore, that perceptions about this technology are shaping public policies in areas as diverse as education and macroeconomic management.

One indication of the wider importance of the issues motivating the research presented in this volume can be read in their connection with the rhetoric, and, arguably, the substance of U.S. monetary policy responses to the remarkable economic expansion that has marked the 1990's. For a number of years in mid-decade the Federal Reserve Board Chairman, Alan Greenspan, subscribed publicly to a strongly optimistic reading of the American economy's prospects for sustaining rapid expansion and rising real incomes without generating unhealthy inflationary pressures. His conviction that monetary tightening was not called for by the signs of accelerating output growth and new lows in the unemployment rate appears to have been based in large part upon the anticipation of big productivity payoffs from the investments that have been made in information technology. Like many other observers, the Chairman of the Fed viewed the rising volume of

expenditures by corporations for electronic office and telecommunications equipment since the late 1980s as part of a far-reaching technological and economic transformation in which the U.S. economy is taking the lead:1

We are living through one of those rare, perhaps once-in-a-century events....The advent of the transistor and the integrated circuit and, as a consequence, the emergence of modern computer, telecommunication and satellite technologies have fundamentally changed the structure of the American economy.

During 1996, when he was resisting calls within the Fed to raise interest rates in order to check the pace of expansion and damp down incipient inflation, Chairman Greenspan stressed just those structural reasons for believing that the relationship between the rate of real output expansion, the unemployment rate and the underlying rate of price inflation had been altered, and that the persisting buoyancy of the economy did not call for the imposition of monetary restraints:2

The rapid acceleration of computer and telecommunication technologies can reasonably be expected to appreciably raise our productivity and standards of living in the 21st century, and quite possibly in some of the remaining years of this century.

1 Testimony of Alan Greenspan before the U.S. House of Representatives Committee on Banking and Financial Services, July 23, 1996.
2 Alan Greenspan, Remarks before the National Governors Association, Feb. 5, 1996, p. 1, quoted by B. Davis and D. Wessel, Prosperity: The Coming 20-Year Boom and What It Means to You, (forthcoming in 1998), Ch. 10.

Yet, there were other members of the Fed's Board of Governors who demurred -- albeit ineffectually -- from the Chairman's relaxed monetary policy prescriptions, and there has been no lack of dissent from the forecast of an imminent productivity bonanza. Soon after Professor Alan Blinder of Princeton ended his term as Vice Chairman of the Fed, he and his colleague Richard Quandt co-authored a paper3 debunking the notion that the IT revolution already had greatly boosted productivity. According to Blinder and Quandt (1997: pp. 14-15), even if the information technology revolution has the potential to significantly raise the rate of growth of productivity in the long run, the long run is uncomfortably vague as a time-scale in matters of macroeconomic management. Going further, they put forward a thoroughly skeptical appraisal of the prospects for a coming IT-based productivity surge -- likening the optimists on that question to the characters in Beckett's play Waiting for Godot.4 In their view we may be condemned to an extended period of transition in which the growing pains change in nature, but don't go away. Some diminution of skepticism of this variety has accompanied the quickening of labor productivity growth since 1997, and especially the very recent return of the rate of increase in real GDP per manhour to the neighborhood of 2 percent per annum. But it remains premature to try reading structural causes into what may well be transient, or cyclical movements which, in any case, have yet to materially alter the U.S. economy's productivity trend over the course of the 1990's.

2. The Making and Unmaking of a Paradox -- And the Persisting Puzzle

Disagreements among economists about the underlying empirical question are hardly a new thing. More than a decade has now passed since concerns about the relationship between the progress in the field of information technology and the improvement of productivity performance in the economy at large crystallized around the perception that the U.S., along with other advanced industrial economies, was confronted by a disturbing productivity paradox. The precipitating event in the formation of this view was the rather offhand observation made in the summer of 1987 by Robert Solow, Institute Professor at M.I.T. and Economics Nobel Laureate. In the course of a book review Solow remarked: "You can see the computer age everywhere but in the productivity statistics."5 This characteristically pithy comment soon was being quoted by the business press, repeated in government policy memos, and quickly became the touchstone for proposals to convene academic conferences to assess the state of knowledge on the subject of technology and productivity. In no time at all the question of why the information technology revolution had not sparked a surge in productivity improvements and a consequent supply-driven wave of economic growth was universally referred to as "the Productivity Paradox." Almost overnight it reached the status of being treated as the leading economic puzzle of the late twentieth century.

3 Alan S. Blinder and Richard E. Quandt, Waiting for Godot: Information Technology and the Productivity Miracle?, Princeton University Department of Economics Working Paper, May 1997.
4 Samuel Beckett, Waiting for Godot, A Tragicomedy in Two Acts, 2nd ed., London: Faber and Faber, 1965.
5 Robert M. Solow, "We'd Better Watch Out," New York Times Book Review, July 12, 1987, p. 36.

All this commotion was not undeserved. Professor Solow's remark had served to direct public attention to what he and some other observers took to be a perplexing, and rather ironic conjunction of two conflicting impressions about the rate of technological advance in the industrialised West.

On the one hand, excitement in the business press had persisted since Time Magazine declared the computer to be its 'Machine of the Year' for 1982. With the continuing rapid pace of innovation and rapid growth of investment in computer technologies, industry and academic economists were preoccupied not only with the immediate effects of the computer but with the potentially even larger implications of the accelerating convergence of telecommunication and computer technologies.

On the other hand, in the U.S. Private Domestic Economy the average annual pace of increase of labor productivity had been cut by well more than half between the periods 1948-1966 and 1966-1989, tumbling from 3.11 to 1.23 per cent per annum.6 This was sufficient to pull the secular trend rate for the entire post-WW II era (1948-1989) back down to 2.05 per cent per annum, which was about the same pace that had been maintained throughout the preceding half-century. In other words, during 1966-1989 labor productivity grew at an anomalously sluggish pace, only a bit more than half the 2 percentage points per annum rate that has been characteristic of the epoch of modern economic growth, and barely half the (2.52 percent per annum) average rate that was achieved in the U.S. private economy over the 1929-1966 interval.

But, even more disturbing was the evidence that the slowed rise in labor productivity reflected a collapse in the growth rate of total factor productivity (TFP): a measure of the crude TFP residual, that is, the portion of the output per manhour growth rate that is not attributable to the contribution of increasing capital per manhour, fell to 0.66 percentage points per annum during 1966-89. That slow a total productivity growth rate had not been experienced in the US Private Domestic Economy for a comparably extended period since the middle of the nineteenth century. Recent calculations by Abramovitz and David (1998) show that during 1966-1989 the refined TFP growth rate (adjusting for composition-related quality changes in labor and capital inputs) plummeted to 0.04 percent per year. Contrast that with the average annual rate of 1.4 percent that had been maintained over the whole of the 1890-1966 period. The slowdown was so pronounced that it brought the TFP growth rate all the way back down to the low historical levels that statistical reconstructions of the mid-nineteenth-century economy's record reveal.

Superior estimates of real gross output for the private non-farm business economy have become available from the BLS (May 6, 1998: USDL News Release 98-187). These provide a more accurate picture, by deflating the current value of output using price indexes that re-weight the prices of the component goods and services in accord with the changing composition of the aggregate. These chain-weighted measures lead to productivity growth measures which reveal two further points about the slowdown. The first is that rather than
6 The figures cited are based on estimates of real gross product per manhour, assembled by M. Abramovitz and P. A. David in American Macroeconomic Growth in the Era of Knowledge-Driven Progress: A Long-Run Interpretation, unpublished memorandum, December 1998.

becoming less marked as the oil shock and inflationary disturbances of the 1970's and the recession of the early 1980's passed into history, the productivity growth rate's deviation below the trend that had prevailed during the 1950-1972 interval had grown even more pronounced during the late 1980s and early 1990's. Labor productivity increased during 1988-1996 at 0.83 percent per annum, fully 0.5 percentage points less quickly than the average rate during 1972-1988. Correspondingly, the refined TFP growth rate estimate (without allowance for vintage effects) sank to 0.11 percent per annum, which represented a further drop of 0.24 percentage points from the 1972-88 pace.

Perceptions of the productivity paradox as a failure of new technology to accelerate the growth of the task efficiency of resource use emerged as much from "the bottom up" view of the economy as from a "top down" view of highly aggregated categories provided by national product accountants. Indeed, the notion that there is something anomalous about the prevailing state of affairs drew much of its initial appeal from the apparent failure of the wave of innovations based on the microprocessor and the memory chip to elicit surges of growth in labour productivity and multifactor productivity from those very branches of the U.S. service sector that were investing so heavily in computers and office equipment. Worse still, despite the hopes that many continued to hold that the "information technology revolution" would soon spark a revival, little support for such optimism could be found in the first wave of empirical studies that looked into this question systematically. Some saw no association between greater use of computers and higher productivity, whereas others reported that heavy outlays on equipment embodying new information technologies appeared to be associated with decreases in productivity or subnormal rates of return on investment (see, e.g., Roach (1987, 1988, 1991), Loveman (1988), Morrison and Berndt (1990)).

The economics profession's reactions to these anomalous and perplexing developments have tended to be expressed in one of three characteristic positions: either the Solow paradox is an artefact of inadequate statistical measures of the economy's true performance, or there has been a vast over-selling of the productivity enhancing potential of IT, or the promise of a profound impact upon productivity has not been mere hype, but the transition to the techno-economic system in which this will be fulfilled should be understood to be a more arduous, costly, and drawn out affair than was initially supposed. We shall first consider these positions more fully, before turning in Part II to assess the light that each has been able to shed on recent experience, and therefore on future prospects.

2.1 Doubts About the Data -- Distortions in the Mirror of Economic Reality

First to be considered are the numbers skeptics, who see the problem to reside mainly in the productivity statistics themselves. They suggest that Professor Solow's formulation of the supposed paradox contains its own resolution. The economy could well be experiencing substantial real gains in "productivity" from succeeding waves of ICT innovations, but their impact is masked by defects and deficiencies in the available measures of economic performance. By this account, faulty measures have seriously understated the growth of real output, thereby creating the appearance of spuriously low rates of growth in both labour productivity and total factor productivity.

Within the camp of economists who pointed to "measurement errors" as the resolution of the productivity paradox there was no clear consensus as to when these errors became dominant. For example, should the same explanation be applied to account for the slowdown observed in the pace of productivity growth since the mid-1970's? By the same token, comparatively little attention has been given to drawing a tighter connection between the suspected severity of the measurement error problem in recent years, and the nature of developments characterising the information revolution itself which would support the proposed direction of causation. The productivity slowdown emerged by the early 1980s, and thus before the personal computer's contribution to the rapid advance of the information revolution during the 1980s. This raises the possibility that mechanisms responsible for the slowdown also could have a major role in the creation of a computer productivity paradox, inasmuch as it was the proliferation of personal computers to which Professor Solow was referring in 1987. During the past twenty years, several significant structural changes have occurred in the U.S. economy, including the continued growth of service activities' share of national output, the proliferation of new products and the enlargement of the role of knowledge-based industries in the economy. Each of these structural changes, as well as the accompanying organisational changes, is also linked to information and communication technologies. It is therefore possible that the productivity slowdown and the computer productivity paradox are different and mutually reinforcing features of the same set of phenomena.

2.2 Disillusionment with IT -- Maybe Godot Doesn't Exist and Won't Be Coming

In a second explanatory category, as our previous reference to the views of Professor Blinder indicates, there are "computer revolution sceptics" who maintain that there really is nothing paradoxical about the flagging post-1973 pace of productivity advance, because the notion of a "computer revolution" has been based more upon "hype" than concrete evidence promising enormous efficiency gains in the branches of the economy where most productive resources are employed. Quite the contrary, they catalogue the many perverse consequences that have followed the introduction of computers into the workplace, especially in the service sector. Those who, like Robert Gordon (1997, 1998), have swung round to a scepticism about the actual impact, certainly can justly point to the existence of essentially transient factors which had contributed to the high measured rates of total factor productivity growth that characterised the performance of the post-WWII U.S. economy (and other industrial economies to an even more marked degree). Therefore, they maintain, a slowdown was not only inevitable, but an economy-wide resurgence in the size of "the residual" due to the information technology revolution remains highly implausible for the foreseeable future. This is the case, they contend, because the old technologies have been exhausted as a source of further efficiency improvement in many lines of tangible commodity production, whereas the application of ICT in other, "service" industries has fallen far short of expectations. Furthermore, other highly promoted sources of commercial innovation, such as advances in biotechnology, high temperature superconductivity and new materials, remain at best rather far off in the future as major generators of conventional productivity improvement.

2.3 From the Regime Transition Hypothesis to Cautious Optimism

Third, there are those who approach the problem of accounting for what has been happening with what might be labelled a spirit of cautious optimism. Explanations of this kind have seized upon the idea that we are involved in the extended and complex process of transition to a new, information-intensive techno-economic regime, a transition that involves the abandonment or extensive transformation of some, and the renewal of other, aspects of the former, mature "Fordist" technological regime. This technological regime, based upon techniques of mass production and mass-marketing of standardised goods and services produced by capital-intensive methods requiring dedicated facilities, high rates of throughput, and hierarchical management organisations, had assumed its full-blown form first in the U.S. during the second quarter of the present century. In its mature stage it underlay the prosperity and rapid growth of the post-WWII era, not only in the U.S. but in other industrial countries of western Europe, and Japan, where its full elaboration had been delayed by the economic and political dislocations of the 1920's and 1930's, as well as by WWII itself.

Regime transitions of this kind, it is argued, involve profound changes, whose revolutionary nature is better revealed by their eventual breadth and depth than by the pace at which they achieve their influence. Exactly because of their breadth and depth, they require the development and coordination of a vast array of complementary tangible and intangible elements: new physical plant and equipment, new kinds of workforce skills, new organisational forms, new forms of legal property, new regulatory frameworks, new habits of mind and patterns of taste. For these changes to be set in place requires decades, rather than years, and while they are underway there is no guarantee that their dominant effects upon macroeconomic performance will be positive ones. The emergence of positive effects is neither assured nor free from the exploration of blind alleys (trajectories that proved economically nonviable and were abandoned). Moreover, the rise of a new techno-economic paradigm can have dislocating, backwash effects upon the performance of surviving elements of the previous economic order.

In short, the "productivity paradox" may be a real phenomenon, paradoxical only to those who suppose that the progress of technology is autonomous, continuous and, being "hitchless and glitchless", translates immediately into cost-savings and economic welfare improvements. Those in the cautious optimist camp suggest that the persistence of slow rates of productivity growth should be viewed as a slip between cup and lip which might well be corrected with the appropriate attention and co-ordination. If so, the future may well bring a resurgence in the measured total factor productivity residual that could be clearly attributed to advances in ICT. They are therefore intent on divining the early harbingers of a more widespread recovery in productivity growth. But they acknowledge that such a renaissance is not guaranteed by any automatic market mechanism and argue that it is foolish simply to wait for its arrival. 
They point, instead, to the need to concurrently address many of the technological drawbacks that already have been exposed, as well as the difficult non-technological, organisational and institutional problems that must be overcome in both the economy's public and private spheres in order to realise the full potential of the "information revolution".

The foregoing caricatures of the three main explanatory camps into which the economists seem to have organized themselves are sufficiently overdrawn that it would be inappropriate to firmly attach names to the various positions. Yet, together they do convey the spirit of much of the recent discussion in the scholarly journals, government studies, and the business press. Rather than approaching the task of interpreting the evidence as one of evaluating these as alternative and conflicting hypotheses among which it is necessary to choose a prime resolution for the productivity paradox, an effort at synthesis appears far more appropriate. From the evidence that has been accumulating during the past decade it seems increasingly clear, first, that there is less that is really paradoxical in the economic record, but considerable basis remains for puzzlement regarding the sources of the total factor productivity slowdown. Secondly, while each of the proposed lines of explanation has something to contribute towards explaining the latter development, greater weight can be placed upon a combination of adverse productivity concomitants of the regime transition, and several measurement problems that have been exacerbated by the digital technology revolution itself.

3. Unmaking the Paradox -- and Leaving a Persisting Puzzle

That there is a quite simple and positive linkage between new technological artefacts and economic growth remains a view widely held among economists today. Improved technological artefacts are likened to better tools, and better tools are supposed to make workers more productive. Moreover, the prospect of higher productivity for the users of tools embodying the new technologies ought to encourage investment in those assets, and thereby stimulate demand. If the relative price of these tools is falling, should we not be observing growth in both the volume of sales of the tools and the productivity achieved by using them?

There is nothing particularly complicated about such a process. New organizations or production techniques may simply permit adaptation to changing relative input prices; they could allow the substitution of inputs whose relative prices are declining for those whose relative prices are rising, thereby averting upward pressure on real costs per unit of output -- without necessarily increasing multifactor productivity. If relatively rapid technological progress is taking place in the sector that is providing these better tools, the consequent fall in their price vis-a-vis other inputs will drive the process of substituting them for those factors of production in a variety of applications, and their contribution to the growth of output will come in the form of the yields on the necessary investments required to put them in place.

It is plain that the first part of this causal chain is strikingly substantiated by the data for the case of electronic computers, and by the larger category of equipment to which they belong: office, computing and accounting machinery, abbreviated as OCAM.7 According to the
7 The historical continuity of the U.S. NIPA (National Income and Product Accounts) has been preserved during the computer and telecommunication revolution by the assignment of computers to the increasingly heterogeneous standard industrial classification computer and office equipment. At this 3-digit level of aggregation, general purpose digital computers are counted along with paper punches and pencil sharpeners. Telecommunication equipment at the three-digit level is no less discriminating as a category, mingling cellular

estimates developed by Dale Jorgenson and Kevin Stiroh (1995: Table 1), over the period 1958-1992 the volume of OCAM investment in the U.S. was increasing at an average yearly rate of 20.2 percent, while the associated (constant quality) index of prices was falling at an average rate of 10.1 percent per annum; the corresponding rates for computers alone were still more spectacular, with investment quantities sky-rocketing at an average rate of 44.3 percent per annum while prices were plunging 19.1 percent per year on average.8 Nothing foreordained that once the expansion of computer power had begun it should proceed at such an explosive pace, but the still-small size of the computerized sector in the economy during this phase in a sense limited the scope for its breath-taking growth to set up complicating "back-wash" effects at the microeconomic and macroeconomic level, thereby jeopardizing its rapid continuation. But, by the same token, rather than supposing that the resulting 20 and 47 percent per annum trend rates of growth found for the real OCAM stock and the computer equipment stock, respectively,9 would translate into commensurately rapid productivity increases, it is necessary in that connection also to take into account the comparatively small weight that these forms of capital carry in the total flow of capital services. Furthermore, the productive contribution made by the growth of capital services to the rate of output growth depends upon the magnitude of the "elasticity" of the latter with respect to the former, and that too lies well below unity. Both considerations should lead one to expect that the contribution made by the rise of computer capital services to the growth of output would be tightly circumscribed.

Following this familiar line of growth accounting analysis, two path-breaking empirical assessments have been made of the contribution to the growth of aggregate output in the U.S. by investments embodying computer technology. Oliner and Sichel (1994), and Sichel (1997: Ch. 4), have produced one such set of measurements for the private non-farm business sector from "computer hardware services" over the long interval 1970-1992, adding estimates for "computer software services" in the years 1987-1993. Jorgenson and Stiroh (1995: sect. 4, especially) have implemented essentially the same conceptual approach to gauge the contribution of computer capital services (hardware only) within the framework of a more elaborate set of growth accounting estimates for the domestic economy as a whole in the years between 1958 and 1992.
radio telephones with radio and television broadcasting equipment, grouping private branch exchange equipment with telephone answering machines, and throwing in police sirens for completeness. The seeming anachronism of these product classifications may yet be reversed by penetration of digital technology into simple artefacts: answering machines have already acquired intelligence; perhaps, with the insertion of microprocessor chips, we will soon see paper punches, pencil sharpeners and police sirens smarten up as well.
8 These movements reflect the corrections made by the substitution of hedonic price indexes which in effect measure the change in the nominal price-performance ratio of computer equipment. Given the fact that the estimates available for the other components of producers' durable equipment do not reflect such quality corrections, comparisons of the relative growth of the constant dollar value of computers and OCAM within that larger investment category are distorted, as are the resulting figures tracing the relative growth of the computer capital stock vis-a-vis the rest of the capital stock. For further discussion of the introduction of hedonic prices for computer equipment, and its consequences for statistical comparisons, one should consult Wykoff (1995) as well as Jorgenson and Stiroh (1995).
9 See the estimates presented by Jorgenson and Stiroh (1995: Table 2).

Not surprisingly, the results of these studies concur in showing a secularly rising contribution being made to the output growth rate from this source. Starting from negligibly low initial levels, these rise to reach approximately similar significant dimensions by the late 1980's and early 1990's. But "significant" in this context can still mean a contribution amounting to only a fraction of a percentage point in the annual output growth rate: Sichel (1997: Table 4-2) gives a computer equipment service contribution for 1987-93 of 0.15 percentage points, and a combined hardware plus software contribution of 0.26 percentage points, whereas Jorgenson and Stiroh's calculations put the hardware capital contribution during 1985-92 at the considerably higher level of 0.38 percentage points per annum. The relative "contributions" estimated for hardware alone thus lie in the modestly low range representing 7.5 to 15 percent of the corresponding output growth rates. But, the implied range for the relative contributions of hardware and software capital services combined is more substantial, representing between 13 and 24 percent of the output growth rate.10

The circumscribed focus of these studies upon investment in computers does risk leaving out the broader significance of the microprocessor's advent. In recognition of this, Kenneth Flamm (1997) presents a variant story, one that undertakes to quantify the semi-conductor revolution rather than the computer revolution. Considering the rapid rate of fall in the price-performance ratio of micro-processors, and their rapid and extensive penetration of many areas of application in the economy, Flamm's calculations nevertheless point to consumer surpluses on this account that represent only 8 percent of annual GDP growth. In other words, the measured direct contribution to GDP growth, aside from what the vendors of micro-processors appropriate through revenues, is found to be about 0.2 percentage point per annum.
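The relative contributions just cited follow from elementary growth-accounting arithmetic: an input's contribution to output growth is its income share times its own growth rate, and its relative contribution is that figure divided by the output growth rate. The minimal sketch below (in Python) reproduces the quoted ranges from the published point estimates; the roughly 2 percent per annum output growth rate used for Sichel's 1987-93 period is an implied figure inferred from the percentages quoted in the text, not one reported directly, while the 2.49 and 2.77 percent figures for Jorgenson and Stiroh are those given in footnote 10.

```python
# Illustrative reconstruction of the relative-contribution arithmetic cited above.
# Contributions (percentage points per annum) are those reported by Sichel (1997,
# Table 4-2) and Jorgenson & Stiroh (1995); output growth for Sichel's period is
# an inferred (assumed) value of about 2.0 percent per annum.

def relative_contribution(contribution_pp, output_growth_pp):
    """Percent of output growth attributable to the input in question."""
    return 100.0 * contribution_pp / output_growth_pp

# Sichel (1997), private non-farm business, 1987-93
print(relative_contribution(0.15, 2.0))                  # hardware only: ~7.5 percent
print(relative_contribution(0.26, 2.0))                  # hardware + software: ~13 percent

# Jorgenson & Stiroh (1995), domestic economy, 1985-92 (see footnote 10)
print(relative_contribution(0.38, 2.49))                 # hardware only: ~15 percent
print(relative_contribution(0.38 + 0.28, 2.49 + 0.28))   # hardware + software: ~24 percent
```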

10 Jorgenson and Stiroh's higher estimate for computer hardware capital services implies that a considerably more generous proportional contribution to output growth would have been made by hardware and software services combined. Using the relationship between the two components reported by Sichel (1997: Table 4.2) for approximately the same years, the implied contribution from software services could be put at 0.28 percentage points. The latter must be added to the 2.49 percentage point rate of output growth given by Jorgenson and Stiroh (1995: Table 4) for 1985-92, as service flows from software capital were not included on either the output or input sides of their growth accounting. The combined relative contribution from hardware and software on this basis leads to the figure reported as 24 percent in the text: (.38 + .28)/(2.77) x 100 = 23.8 percent.

There are several ways in which these results may be read. The fact that by the early 1990's the growth of computer capital services was contributing possibly as much as a fifth of the U.S. economy's yearly output growth certainly could be taken as a strong, quantitative refutation of the argument that the investments poured into this sort of capital formation had been squandered, and that no commensurate payoff in terms of output and labor productivity was forthcoming.11

At the same time, these results can be welcomed as dispelling the wildly optimistic belief that the spectacular pace at which computer power has been growing must have induced a productivity bonanza. The latter confusion stems from over-looking the fact that no matter how rapidly a productive asset may be accumulating, what also enters into the determination of its impact upon output is the size of the flow of productive services it yields, in relation to the other inputs required for production. In the case of computer services this share has been found still to be rather small; even in relationship to just the total flow of capital services (gross of depreciation) in the economy, the computer stock's share had only reached the 1.0 percent mark by 1985, before the personal computer investment boom enlarged it to 4.5 percent in 1992.12

It is worth noting that the modern conjuncture of high rates of innovation and slow measured growth of total factor productivity is not a new, anomalous development in the history of the US economy. Indeed, most of the labor productivity growth rate during the interval extending from the 1830's through the 1880's was accounted for by the increasing capital-labor input ratio, leaving residual rates of total factor productivity growth that were quite small by the standards of the early twentieth century, and a fortiori, by those of the post-WW II era. During the nineteenth century the emergence of technological changes that were biased strongly in the tangible-capital-using direction, involving the substitution of new forms of productive plant and equipment entailing heavy fixed costs and commensurately expanded scales of production, induced a high rate of capital accumulation. The capital-output ratio rose without forcing down the rate of return very markedly, and the substitution of increasing volumes of the services of reproducible tangible capital for those of other inputs, dispensing increasingly with the sweat and the old craft skills of workers in the fields and workshops, along with the brute force of horses and mules, worked to increase real output per manhour.13
11 The counter-argument by technological pessimists would point out that the growth accounting calculations of Oliner and Sichel (1994), and Jorgenson and Stiroh (1995), merely assume that the computer equipment investments have been earning the same net rate of return as the other elements of the capital stock (on an after-tax basis, in the case of the capital accounting framework developed by Jorgenson (1990) and implemented in his work with Stiroh). In view of the continuing investments, that does seem a plausible minimum assumption, but it is nonetheless a supposition, rather than a point that the growth accounting has proved. As will be seen from the contributions by Brynjolfsson and Hitt (1995) below, efforts to estimate the rates of return earned on computer capital econometrically, from data at levels of aggregation below that of the economy, do refute the suggestion that the private rates of return have been sub-normal. [This is reinforced by subsequent findings reported by Brynjolfsson and Hitt (1997, 1998).]
12 These "shares" are calculated from the estimates in Jorgenson and Stiroh (1995 [Chapter 10]: Table 3). The corresponding shares of computer equipment services in the flow of services from non-residential producer durable capital are, of course, considerably larger, reaching 18 percent by 1992.
13 See Abramovitz and David 1973, David 1977, Abramovitz 1989. Thus, most of the labor productivity growth rate was accounted for by the increasing capital-labor input ratio, leaving residual rates of total factor productivity growth that were quite small by the standards of the early twentieth century, and a fortiori, by those of the

Seen from this historical perspective, therefore, the recent developments hardly appear unprecedented and paradoxical. It could be maintained that there is little that is really novel or surprising in the way in which the rise of computer capital, and OCAM (office, computing and accounting machinery) capital more generally, has been contributing to economic growth in the closing quarter of the twentieth century, except for the fact that this particular category of capital equipment still bulks small in the economy's total stock of reproducible capital. As has been seen, Daniel Sichel (1997) proposed a resolution of the productivity paradox in just such terms, arguing that the imputed gross earnings on hardware and software stocks amount to such a small fraction of GDP that the rapid growth of real computer assets can hardly be expected to be making a very significant contribution to the real GDP growth rate.14

post-WW II era. See Abramovitz and David (1997, 1998) for further discussion.
14 See Daniel E. Sichel, The Computer Revolution: An Economic Perspective, Washington, D.C.: The Brookings Institution Press, 1997, esp. Ch. 4, Table 4-2. The 1987-1993 growth rates of inputs of computer hardware and software (allowing for quality improvements) are put at approximately 17 and 15 percent per annum, respectively, by Sichel; with the gross returns to these inputs estimated at 0.9 and 0.7 percentage points on the assumption that the assets earn a normal net rate of return, the combined contribution to the annual real output growth is about 0.26 percentage points.

But, viewed from another angle, the spirit of the Solow computer paradox lingers on, like the Cheshire Cat's smile; if not a real paradox, there remains a puzzle occasioned by the surprisingly small TFP residual. It should be noted that the explanations which stress the small contribution made by the rise of computer capital do so under the dual assumption that the investments embodying the new information technologies have been earning only a normal 12 percent net rate of return, like the other forms of capital, and that those private returns to the investing firms fully capture their contributions to production throughout the economy. A different set of calculations made for the period 1987-1993, however, which allow for the possibility that computer assets might be generating net rates of return very much above the normal (economy-wide) average, finds that the biggest plausible effect would add another 0.3-0.4 percentage points per annum to the growth rate of labor productivity.15

15 Sichel (1997: p. 81) makes an allowance that increases the net rate of return about four-fold, raising the gross share of computer (hardware and software) earnings to .034 from .016. This raises the contribution to output growth from 0.26 to 0.56 percentage points per annum. This 0.3 percent per annum increment would be the magnitude of the rise in labor productivity growth resulting from allowance for the gap between privately appropriated normal earnings and the hypothesized above-normal rate of return. An alternative calculation would consider the contribution from computer capital-deepening, defined as the growth of computer capital (hardware and software) in relation to the growth of real output in the economy. For 1987-1993 the figures presented by Sichel (1997, Table 4.2) indicate that the real ratio of computer capital inputs to real GDP was rising at about 14 percentage points per annum. The contribution to the aggregate labor productivity growth rate is found by multiplying that rate by the ratio of the computer inputs' share to the total labor input share in GDP. Calculated under the normal rate of return assumption, that ratio is 0.025, whereas under the Sichel super-normal returns assumption it is 0.052. This implies that the effect of increasing computer input intensity in the economy could be said to be raising labor productivity by 0.34 percentage points per annum under the assumption of normal rates of return, whereas the contribution to the labor productivity growth rate on the higher rate of return assumption might be as much as 0.73 percentage points, or almost 0.4 percentage points higher. Thus, the text gives 0.3 to 0.4 as the range if a generous allowance is made for computer yields above the normal rate of return.
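The two variants of this sensitivity calculation can be reproduced with simple arithmetic. A minimal sketch follows; the weighted growth rate of computer inputs in part (a) is an implied value backed out from the 0.26-point contribution and the .016 income share rather than a figure reported by Sichel, so the results match the cited numbers only up to rounding.

```python
# Hedged sketch of the rate-of-return sensitivity described in footnote 15.
# Income shares (.016 normal, .034 super-normal), the 0.26-point baseline
# contribution, the 14-point capital-deepening rate, and the 0.025/0.052 share
# ratios are taken from the footnote; the input growth rate is inferred.

# (a) Contribution = gross income share of computer capital x growth of computer inputs
implied_input_growth = 0.26 / 0.016              # ~16.3 percent per annum (inferred)
contrib_normal = 0.016 * implied_input_growth    # 0.26 points, by construction
contrib_super = 0.034 * implied_input_growth     # ~0.55 points, vs. 0.56 cited

# (b) Capital-deepening variant: share ratio x growth of computer capital relative to GDP
deepening = 14.0                                 # percentage points per annum
lp_normal = 0.025 * deepening                    # ~0.35, vs. 0.34 cited
lp_super = 0.052 * deepening                     # ~0.73, as cited

print(round(contrib_super, 2), round(lp_normal, 2), round(lp_super, 2))
```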

Yet, might one not have supposed that there were spillovers in the form of induced learning effects, benefitting the employers of workers trained on the equipment installed at the expense of other firms? Similarly, the use of computer equipment in generating new knowledge that found its way into the productive capabilities of those who did not purchase or rent those R&D investment goods would constitute another plausible form of externality. These spill-overs, representing economic value that for one reason or another could not be appropriated by (internalized within) the entities that bought and operated the IT capital, imply that a gap would exist between the social and the private rates of return on information capital. Nevertheless, to the extent that they contributed to the productive capabilities of the firms to whom they accrued, these benefits would have to turn up on the real output side of the growth accountants' ledgers for the economy as a whole. Where, then, are the externalities that one might have expected from the accumulation of this new class of productive assets? Is the story of the computer revolution simply that every individual private investor has received, and can only hope to go on receiving, exactly what they paid for, and nothing more from the collective accumulation effort?

These are intriguing questions, indeed, particularly when set against the expectations that have been stirred by discussions of the emergence of distributed intelligence, the networked economy and the national and global information infrastructure. Understandably, the puzzle continues to compel the attention of economists interested in the long term prospects of growth at the macro-level, and those who seek to understand the microeconomics of the phases of the information technology revolution through which we have been passing, and their relationship to those that may lie ahead.

4. Measurement Problems

Those who would contend that the slowdown puzzle and computer productivity paradox are consequences of a mis-measurement problem must produce a consistent account of the timing and extent of the suspected errors in measurement. The estimation of productivity growth requires a consistent method for estimating growth rates in the quantities of inputs and outputs. With a few notable exceptions (e.g. electricity generation), the lack of homogeneity in industry output frustrates direct measures of physical output and makes it necessary to estimate physical output using a price deflator. Similar challenges arise, of course, in measuring the heterogeneous bundles of labor and capital services, but, for the moment, attention is being directed to the problems that are suspected to persist in the measures of real product growth.

Systematic overstatement of prices will introduce a persistent downward bias in estimated output growth and, therefore, an understatement of both partial and total factor productivity improvements. Such overstatement may arise in several distinct ways. There are some industries, especially services, for which the concept of a unit of output itself is not well defined, and, consequently, where it is difficult if not impossible to obtain meaningful price indices. In other cases, such as the construction industry, the output is so heterogeneous that it requires special efforts to obtain price quotations for comparable products both at one point in time

and over time. The introduction of new commodities again raises the problem of comparability in forming the price deflators for an industry whose output mix is changing radically, and the techniques that statistical agencies have adopted to cope with the temporal replacement of old staples with new items in the consumers' shopping basket have been found to introduce systematic biases. These are only the simpler and more straightforward potential worries about mismeasurement, but we shall begin with a consideration of their bearing upon the puzzle of the slowdown and the computer productivity paradox, before proceeding to some less tractable conceptual questions.
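The mechanical link between deflator bias and measured productivity is worth stating explicitly: because real output growth is (approximately) nominal growth minus the growth of the deflator, any overstatement of price increases passes one-for-one into an understatement of real output growth and, with inputs measured correctly, into equal understatements of labor productivity and TFP growth. A minimal sketch with purely hypothetical numbers:

```python
# Hypothetical illustration of how an upward bias in an output deflator
# understates measured real output, labor productivity, and TFP growth.

nominal_growth = 6.0        # percent per annum (hypothetical)
true_deflator_growth = 3.0  # percent per annum (hypothetical)
deflator_bias = 1.0         # percentage points of overstatement (hypothetical)

true_real_growth = nominal_growth - true_deflator_growth                        # 3.0
measured_real_growth = nominal_growth - (true_deflator_growth + deflator_bias)  # 2.0

# With labor and capital inputs measured correctly, both labor productivity
# growth and the TFP residual are understated by the same amount as real output.
print(true_real_growth - measured_real_growth)  # 1.0, i.e., the full bias
```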

4.1 Over-Deflation of Output: Can This Explain the Slowdown Puzzle?

The simplest formulation of a mismeasurement explanation for the slowdown falls quantitatively short of the mark. This does not mean that there is no underestimation of the labor productivity or the total factor productivity growth rate. Perhaps the paradox of the conjunction of unprecedentedly sluggish productivity growth with an explosive pace of technological innovation can be resolved in those terms. There is indeed something to be said for taking that more limited approach a step further.

The Boskin Commission Report (1997) on the accuracy of the official Consumer Price Index (CPI) prepared by the U.S. Bureau of Labor Statistics concluded that the statistical procedures in use were leading consumer prices to overstate the annual rate of increase in "the real cost of living" by an average of 1.1 percentage points. This might well be twice the magnitude of the error introduced by mismeasurement of the price deflators applied in estimating the real gross output of the private domestic economy over the period 1966-89.16 Allowing for this, by making an upward correction of the real output growth rate of as much as 0.6 to 1.1 percentage points, would lift the level of the Abramovitz-David (1998) estimate for the "super-refined" multifactor productivity growth rate back up to the positive range, putting it at 0.35 - 0.85 percentage points per annum for the 1966-89 period. Even so, that correction -- which entertains the extremely dubious assumption that the conjectured measurement biases in the output price deflators existed only after 1966 and not before -- would still have us believe that between 1929-66 and 1966-89 there had been a very appreciable
16 The Boskin Commission (1997) was charged with examining the CPI, rather than the GNP and GDP deflators prepared by the U.S. Commerce Department Bureau of Economic Analysis, and some substantial part (perhaps 0.7 percentage points) of the estimated upward bias of the former is ascribed to the use of a fixed-weight scheme for aggregating the constituent price series to create the overall (so-called Laspeyres type of) price index. This criticism does not apply in the case of the national product deflators, and consequently the 1.1 percent per annum figure could be regarded as a generous allowance for the bias due to other, technical problems affecting those deflators. On the other hand, the Boskin Commission's findings have met with some criticism from BLS staff, who have pointed out that the published CPI reflects corrections that already are regularly made to counteract some of the most prominent among the suspected procedural sources of overstatement -- the methods of "splicing in" price series for new goods and services. See Madrick (1997); it is claimed that, on this account, the magnitude of the continuing upward bias projected by the Boskin Commission may be too large. The latter controversy does not affect our present illustrative use of the 1.1 percentage point per annum estimate, because we are applying it as a correction of the past trend in measured real output, and, in the nature of the argument, an upper-bound figure for the measurement bias is what is wanted.

slowdown in multifactor productivity growth -- a fall-off in the average annual pace of some 0.3 - 0.8 percentage points. Moreover, there is nothing in the findings of the Boskin Commission (1997) to indicate that the causes of the putative current upward bias in the price changes registered by the CPI have been operating only since the end of the 1960's. Plainly, what is needed to give the mismeasurement thesis greater bearing on the puzzle of the productivity slowdown, and thereby help us to resolve the information technology paradox, would be some quantitative indication that the suspected upward bias in the aggregate output deflators has been getting proportionally larger over time. Martin Baily and Robert Gordon (1988) initially looked into this and came away without any conclusive answer, and although Gordon (1996, 1998) has moved towards a somewhat less skeptically dismissive position on the slowdown being attributable to the worsening of price index mismeasurement errors, some further efforts at quantification are required before the possibility can be dismissed.

4.2 Has the Growth of Hard-to-Measure Activities Led to a Growing Bias?

It is reasonable to consider whether the nature of the on-going structural changes in the U.S. economy, and elsewhere, is such as would exacerbate the problem of output underestimation, and thereby contribute to the appearance of a productivity slowdown. In this regard, Zvi Griliches' (1994) observation that there has been relative growth of output and employment in the "hard-to-measure" sectors of the economy is indisputable. The bloc of the U.S. private domestic economy comprising Construction, Trade, Finance, Insurance and Real Estate (FIRE), and miscellaneous other Services has indeed been growing in relative importance, and this trend in recent decades has been especially pronounced. Gross product originating in Griliches' hard-to-measure bloc averaged 49.6 percent of the total over the years 1947-1969, but its average share was 59.7 percent in the years 1969-1990.17

Nevertheless, is the impact of the economy's structural drift towards unmeasurability big enough to account for the appearance of a productivity slowdown between the pre- and post-1969 periods? Given the observed trend difference (over the whole period 1947-1990) between the labor productivity growth rates of the "hard-to-measure" and the "measurable" sectors that are identified by Griliches (1994, 1995), the hypothetical effect on the weighted average rate of labor productivity growth of shifting the output shares in the way that they actually moved can be calculated readily. There is certainly a gap in the manhour productivity growth rates favoring the better-measured, commodity-producing sectors, but the shrinkage in that sector's relative size was too small to exert substantial downward leverage on the aggregate productivity growth rate. The simple re-weighting of the trend growth rates lowers the aggregate labor productivity growth rate by 0.13 percentage points between 1947-1969 and 1969-1990, but that represents less than 12 percent of the actual slowdown that Griliches was seeking to explain.18
17 See Griliches (1995 [Chapter 10]: Table 2) for the underlying NIPA figures from which the shares in the total private (non-government) product were obtained. The averages in the text were calculated as geometric means of the terminal-year values in each of the two intervals.
18 The gap between the measured and the hard-to-measure sectors' long-term average rates of growth of real output per manhour amounted to about 1.40 percentage points per annum, but it was smaller than that during the 1947-1969 period and widened thereafter. The more pronounced post-1969 retardation of the average labor productivity growth rate found for the hard-to-measure sector as a whole was thus responsible, in a statistical sense, for a large part of the retarded growth of aggregate labor productivity. But, as will be noted below, it would be quite misleading to suggest that every branch of activity within the major sectors labelled hard-to-measure by Griliches participated in the slowdown, and that the industries comprising the measured sectors did not.
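The 0.13-point figure cited above can be approximated directly from the sectoral shares and the long-term growth-rate gap just given. A rough sketch, which treats the 1.40-point gap as constant over the whole period (a simplification, so the result only comes close to the published number):

```python
# Rough reconstruction of the share-shift ("re-weighting") calculation in the text.
# Shares are the hard-to-measure bloc's averages quoted above; the 1.40-point
# sectoral growth-rate gap is from footnote 18. Holding sectoral growth rates
# fixed and shifting the weights isolates the pure composition effect.

share_hard_early = 0.496   # 1947-1969 average share of the hard-to-measure bloc
share_hard_late = 0.597    # 1969-1990 average share
gap = 1.40                 # measured minus hard-to-measure LP growth, pp per annum

composition_effect = (share_hard_late - share_hard_early) * gap
print(round(composition_effect, 2))  # ~0.14, close to the 0.13 points cited
```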

A somewhat different illustrative calculation supporting the same conclusion has been carried out by Abramovitz and David (1998). For that purpose they begin by accepting the finding of the recent report of the Advisory Commission to Study the Consumer Price Index, the so-called Boskin Commission (1997), that the measurement biases affecting that index now tend to overstate current and prospective annual rates of price increase by 1.1 percentage points on average, and that a plausible range around this upward bias extends from 0.8 to 1.6 percentage points. For the sake of argument, it can then be supposed that in the past the overall bias stood at the high end of this range, at 1.6 points a year, and that an equal overstatement was present in the price deflator for the U.S. gross private domestic product. Abramovitz and David then suppose, further, that the upward bias arose entirely from deficiencies in the price deflators used to derive real gross product originating within the group of hard-to-measure sectors identified by Griliches, which would imply that the over-statement of the growth of prices was much higher than 1.1 percentage points a year in that sector. They then add a further assumption, namely that this large over-statement had existed since the early post-WW II era in the case of the hard-to-measure sector, whereas prices and real output growth were properly measured for the rest of the economy.

What is the upshot? Under the pre-suppositions just described, a simple calculation of the weighted bias -- taking account of the increasing relative weight of the hard-to-measure sector in the value of current gross product for the private domestic economy, as before -- must find that the measurement bias for the whole economy grew more pronounced between the rapid-growth period, 1948-66, and the slowdown period, 1966-89. But, once again, it is possible to put a measure to the extent of the hypothesized mismeasurement, and the answer found by Abramovitz and David (1998) is that only 12 percent of the slowdown in the labor productivity growth rate that actually was observed would be accounted for in this way. Although quite consistent with the preceding alternative quantitative assessment of the mismeasurement hypothesis based upon the structural shift away from commodity production, the assumptions underlying this version are extreme; this implies that the comparatively minor impact found on this score represents an upper-bound estimate.

Were the foregoing not enough to warrant turning to examine other formulations of the approach to understanding the productivity slowdown as an artefact of mismeasurement, Robert J. Gordon (1998a) recently has weighed in on the same point with the findings of some new calculations that use far more finely disaggregated data from the BLS. His findings establish that the slowdown in average real output per manhour has been a quite pervasive phenomenon, affecting many industries within the commodity producing groups as well as the services. Moreover, comparing pre-1973 with post-1973 rates of growth in output per manhour at this more detailed industry level shows that the experience of a slowdown was not statistically significantly more frequent among the industries that fall into the hard-to-measure category than it was among those classified as being well-measured.

4.3 The Role of New Goods in Unmeasured Quality Change

A rather different mechanism affecting the accuracy of measured productivity is more

ubiquitous across industries, and may well have become more pronounced in its effect during the past two decades: the turnover of the product line arising from the introduction of new goods and services, and the discontinuation of old staples, which has been experienced widely among the advanced industrial economies. This is different from the shift of the product mix between sectors whose outputs are intrinsically easier or harder to measure. It is a measurement difficulty arising from the underestimation of the growth in new products of improved quality, vis-a-vis the older part of the product line, and consequently an understatement of the rate of growth of the aggregate output of each branch of production in which this is taking place.

A change in product mix of this sort not only would heighten the problem of unmeasured quality improvement, but might be in some part traceable to the effects of the emerging ICT regime. In that way it would be seen that the information revolution itself could be contributing to the appearance of a productivity slowdown, and hence to the creation of its own paradoxically weak impact upon macroeconomic growth. This is an intriguing line of argument that bears closer consideration.

New information technologies, and improved access to marketing data, are enabling faster, less costly product innovation, manufacturing process redesign, and shorter product life cycles. Diewert and Fox (1997) present some evidence from Nakamura (1997) on the four-fold acceleration of the rate of introduction of new products in U.S. supermarkets during 1975-92 compared with 1964-75. By combining this with data from Baily and Gordon (1988) on the rising number of products stocked by the average U.S. grocery supermarket, it is possible roughly to gauge the movement in the ratio between these flow and stock measures, thereby getting a better idea of the movement in the new product turnover rate. This indicates that, in contrast with the essential stability prevailing between the mid 1960s and the mid 1970s, there was a marked rise in the new product fraction of the stock between 1975 and 1992: if only half of new products were stocked by the average supermarket, the share they represented in the stock would have increased from about .09 to .46. With such an elevation of the relative entry rate of new products, the potential is raised for underestimation of average output growth due to the delayed linkage of new to old price series.

There is a further implication. The broadening of the product line by competitors may be likened to a common-pool/over-fishing problem, causing crowding of the product space, with the result that even the reduced fixed costs of research and product development must be spread over fewer units of sales. Moreover, to the extent that congestion in the product space raises the expected failure rate in new product launches, this reinforces the implication that initial margins are likely to be high when these products first appear and will be found to have fallen rapidly in the cases of the fortunate few that succeed in becoming a standard item.
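The flow-to-stock arithmetic behind that turnover estimate can be sketched as follows. The annual introduction counts and stock sizes below are invented for illustration -- they are not Nakamura's or Baily and Gordon's figures -- and only the "half of new products stocked" assumption is taken from the text.

```python
# Rough sketch of the new-product turnover calculation described above.
# The introduction rates and stock sizes are illustrative assumptions,
# NOT the Nakamura (1997) or Baily and Gordon (1988) data.

periods = {
    "mid-1970s": {"new_products_per_year": 1_500, "items_stocked": 8_000},
    "early 1990s": {"new_products_per_year": 15_000, "items_stocked": 16_500},
}
FRACTION_STOCKED = 0.5  # assumption from the text: only half of new products get shelf space

for label, p in periods.items():
    new_in_stock = FRACTION_STOCKED * p["new_products_per_year"]
    share_of_stock = new_in_stock / p["items_stocked"]
    print(f"{label}: new products represent roughly {share_of_stock:.2f} of the items stocked")
```

The point of the exercise is simply that a sharp acceleration of introductions against a slowly growing stock drives the new-product fraction of the shelf up by an order of magnitude, which is what widens the scope for delayed linking of new price series.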

During the 1970s, the Federal Trade Commission was actively interested in whether such product proliferation was a new form of anti-competitive behaviour, and investigated the ready-to-eat breakfast cereal industry; see Schmalensee (1978).

The mechanism of product proliferation involves innovations in both marketing and the utilisation of the distribution networks. Traditionally, new product introduction involved the high fixed costs of major marketing campaigns and thereby required high unit sales. These costs have been reduced by marketing strategies in which the existing mass market distribution system is being configured under umbrellas of 'brand name' recognition for particular classes of products (e.g. designer labels, 'lifestyle' brands, and products related to films or other cultural icons) or the reputation of particular retailers or service providers (e.g. prominent department store brands or financial services provided by large retail banks). Although, in the U.S., the mass market distribution system was well-established early in this century, utilising it for product and brand proliferation was frustrated by the high costs of tracking and appropriately distributing (and re-distributing) inventory. These costs have fallen due to the use of information and communication technologies.

It is not surprising that a statistical system designed to record productivity in mass production and distribution should be challenged when the 'business models' of this system change as the result of marketing innovation and the use of information and communication technologies.

Of course, some progress has been made in resolving the computer productivity paradox by virtue of the introduction of so-called hedonic price indexes for the output of the computer and electronic business equipment industries themselves. These indexes reflect the spectacularly rapid decline in the price-performance ratios of such forms of capital, but, by the same token, they have increased the measured growth of computer capital services, which are intensively used as inputs in a number of sectors, including the services sectors. The implied rise in computer-capital intensity contributes to supporting the growth rate of labor productivity in those sectors, but in itself the substitution of this input for others does nothing for the multifactor productivity growth rate. Thus, correcting the upward bias of the information technology price deflators has done wonders for the growth rate of multifactor productivity in the computer equipment industry, and its effect has contributed to the revival of the manufacturing sector's productivity simply due to the growing weight carried by that industry; but the persistently sluggish multifactor productivity growth rates elsewhere in the economy, including within other branches of manufacturing where computer equipment has been diffusing, continue to belie the expectations and frustrate the hopes that the information revolution would profoundly enhance the productive capacity of the U.S. economy.
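A minimal growth-accounting sketch may help fix the distinction drawn in the last paragraph between capital deepening and multifactor productivity. The income shares and growth rates below are assumptions chosen for illustration, not estimates for any actual sector.

```python
# Growth-accounting sketch of the point above: faster measured growth of computer
# capital services (a by-product of hedonic deflation) raises labor productivity in
# computer-using sectors through capital deepening, but contributes nothing to their
# multifactor productivity. All figures are illustrative assumptions.

def labor_productivity_growth(mfp_growth, shares, input_deepening):
    """Standard decomposition: dln(Y/L) = MFP growth + sum_i s_i * dln(X_i/L)."""
    return mfp_growth + sum(s * d for s, d in zip(shares, input_deepening))

mfp = 0.002                     # assumed MFP growth in a computer-using sector, 0.2%/yr
shares = [0.03, 0.27]           # assumed income shares: IT capital, other capital
slow_it, fast_it = 0.05, 0.25   # IT capital deepening under old vs hedonic measurement
other = 0.01                    # other-capital deepening, 1%/yr

lp_old = labor_productivity_growth(mfp, shares, [slow_it, other])
lp_new = labor_productivity_growth(mfp, shares, [fast_it, other])

print(f"Labor productivity growth, old deflators : {lp_old:.3%}")
print(f"Labor productivity growth, hedonic       : {lp_new:.3%}")
print(f"MFP growth in the using sector is unchanged at {mfp:.3%} in both cases; "
      "the extra measured growth is pure capital deepening.")
```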

5. Conceptual Challenges: What Are We Really Supposed to be Measuring?

Beyond the technical problems of the way that the national income accountants are coping with accelerating product innovation and quality change lie several deeper conceptual issues. These have been always with us, in a sense, but the nature of the changes in the organization and conduct of production activities, and particularly the heightened role of information -- and changes in the information state -- in modern economic life, may be bringing these problematical questions to the surface in a way that forces reconsideration of what our

measures are intended to measure, and how they actually relate to those goals. In this section we look at three issues of this sort.

First, some surprises appeared when economists sought to illuminate the macro-level puzzle through statistical studies of the impact of IT at the microeconomic level, using observations on individual enterprises' performance. This highlights the conceptual gap between task productivity measures, on the one hand, and profitability and revenue productivity measurements on the other. It will be seen that not only is there an important difference, but the relationship between the two conceptual measures may itself be undergoing a transformation, as a result of the way IT is being applied in businesses. Next, we must consider the way we wish to treat investments in learning to utilize a new technology, and the implications this has for national income accounting. A major technological discontinuity, which is what the advent of a new general purpose engine often has been seen to be, may induce more than the usual incremental learning processes, which are not counted as production of intangible assets and are not recorded in the national income and product accounts. If so, this has a potential to occasion a more serious distortion of our macroeconomic picture of the way resources are being used, and what is being produced.

5.1 The Microeconomic Evidence about Productivity Impacts of IT -- Excess or Subnormal?

Further light on the puzzle has been gained through the path-breaking research into the productivity impacts of IT at levels below that of the macroeconomy pursued by Brynjolfsson and Hitt (see Ch. ?8, here, based on 1995, 1996; followed by Brynjolfsson 1997) and Lichtenberg (see Ch. ?7, based on 1995, 1997). These studies make use of company level cross-section data. An initially puzzling feature of their findings was that the elasticity of output with respect to the use of IT capital by the company was so high that it implied supernormally high, or "excess," returns to computer investments by the large firms represented in the cross-section samples. This was a surprise, as it flew in the face of those who suggested that corporate America was wastefully pouring too much investment into the purchase of electronic office equipment, and implied that still more intensive use of such equipment would be warranted. But it also was anomalous when viewed against the apparent sluggish growth of productivity in some of the very same industries and sectors that were represented in these cross-sections.

These puzzles were soon resolved by the recognition that what the cross-section production function analyses were measuring was the impact of computers not on task productivity, but, instead, on revenue productivity. The only way to render the outputs of the firms drawn from different industries comparable for purposes of statistical analysis of that kind was to measure them all in monetary units. The gap between big (cross-section) impacts gauged in revenue productivity terms, and weak (time series) effects gauged in terms of task productivity, means that there could be good private profit-making reasons for this investment that do not translate into equivalently powerful social rates of return, the latter being what is called for in assessing the contribution of computer capital to national product growth.
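To see why a sizeable estimated elasticity translates into apparently "excess" returns, consider the following back-of-the-envelope sketch. The elasticity, revenue, capital stock, and hurdle-rate figures are illustrative assumptions rather than estimates reported by Brynjolfsson and Hitt or Lichtenberg, and the depreciation adjustment it incorporates is the one elaborated in the next paragraph.

```python
# Sketch of why a high estimated output elasticity of IT capital implies supernormal
# gross returns in firm-level cross sections. All figures are illustrative assumptions.

estimated_elasticity = 0.12   # assumed elasticity of (revenue) output w.r.t. IT capital
revenue = 1_000.0             # firm revenue, $ millions (assumed)
it_capital_stock = 120.0      # IT capital stock, $ millions (assumed)

# Under a Cobb-Douglas specification the gross marginal product of IT capital is
#   dY/dK_IT = elasticity * (Y / K_IT)
implied_gross_return = estimated_elasticity * revenue / it_capital_stock

# A rough hurdle for "normal" gross returns: the opportunity cost of funds plus
# economic depreciation, the latter dominated by the falling price-performance ratio.
opportunity_cost = 0.12   # assumed required rate of return
depreciation = 0.20       # assumed annual decline in the price-performance ratio

hurdle = opportunity_cost + depreciation
print(f"Implied gross return on IT capital : {implied_gross_return:.0%}")
print(f"Hurdle rate (r + depreciation)     : {hurdle:.0%}")
print(f"Apparent 'excess' return           : {implied_gross_return - hurdle:.0%}")
```

Because IT capital is a small fraction of the typical firm's capital, even a modest elasticity implies a very large gross marginal product; the question taken up next is what hurdle that gross return should properly be compared against.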
In other words, the standard for gauging what represents "excess returns" on computer and office equipment investments must be adjusted for the rapid rate of anticipated depreciation

of capital value due to the falling price-performance ratio. At the equilibrium point where there will be indifference between installing new equipment and waiting to do so, the firm's expected net operating benefits per dollar of reproduction costs for these durable assets must equal the opportunity cost rate of return on the firm's outlay, plus the expected annual rate of asset depreciation. If prices of equivalent-performance computer equipment are declining at rates approaching 20 per cent per annum, then we should be thinking of excess pre-tax returns as starting somewhere above 30 percentage points. It also is the case that subsequent investigations along the same lines have found that there were additional correlates of IT intensity among firms, such as higher-quality workers and company reorganization programs. Taking those into account statistically led to a substantial elimination of the apparent excess of the estimated returns on IT capital vis-a-vis the returns on capital of other kinds. The significance of the latter findings will be considered more fully below.

5.2 Leaving Out Learning Investment: Is the Scope of the NIPA Overly Narrow?

The computer revolution involves a transformation of the tools used in productive work and thereby a dramatic change in the skills needed within the workforce. A very important component of the adjustments made to inputs in accounting for the total factor productivity residual was a more adequate account of the changing quality of the workforce. In particular, educational attainment has been taken as the main indicator of the increasing quality of the workforce. While it is undeniable that educational attainment has improved, it is also possible that this variable overstates the change in worker skills relevant to employment. Skills that must be acquired 'on the job' represent an investment in intangible capital to organisations. These investments are not capitalised and therefore appear as current labour costs in the national income accounts. For these investments in intangible capital to be relevant to the productivity slowdown and the computer productivity paradox, it would be necessary to demonstrate that the 'match' between educational attainment and relevant employment skills was falling over time. If it is true that the real costs of these workers are higher to companies due to the extent of 'on the job' training necessary for their effective performance in increasingly information technology-intensive work environments, the skills attributed to educational attainment may be discontinuous, overstating the 'quality' of labour. To the extent that the relevant skills are non-cumulative (e.g., 'specific' to a given generation of information systems), the intangible investments of companies will remain high and will also be difficult to address through reforms in the educational system.

By comparison, for earlier eras, there is evidence that the match between education and skills was better. During the 1890-1919 period, secondary school expansion was able to keep ahead of the rising requirements for more educated workers, and the skills conveyed in secondary school may have been more relevant to those needed in the workplace. Correspondingly, evidence recently developed by Goldin and Katz (1996) suggests that changes in the educational system during the 1920s supported the growth of the new high-skill

industries on the eve of the Great Depression. The rapid expansion of higher education following World War II has been identified by Nelson (1990) as a source of U.S. leadership in technology and productivity during the 1950s and 60s. If the link between educational attainment and labour skills deteriorated during the 1970s and 1980s, this deterioration could be linked to both the productivity slowdown and the computer productivity paradox.

The significance of this deterioration for 'mismeasurement' would, however, depend very much on the potential for its alleviation. If the mismatch is a persistent result stemming from rapid technological change and the need to 're-create' the complementary skills needed for employment in an information technology-intensive workplace, shortcomings in productivity growth are likely to persist indefinitely. This is, of course, consistent with the computer revolution sceptics' point of view. On the other hand, if the investments that companies are making in information technology-related skills are cumulative and complemented by the growing extent to which computer literacy is being incorporated in secondary and post-secondary education, the size of intangible investments made by companies should begin to decline, with the surplus recorded as productivity improvement and, eventually, wages. If this interpretation is accurate, the 'cautious optimist' viewpoint would be vindicated.

5.3 Can Conventional Measurement Scales Be Trusted in a "Weightless Economy"?

Biases in measurement of output growth within the conventional NIPA framework may have been contributing both to the puzzling slowdown and to the appearance of paradoxically weak impacts from computerization, but the case remains to be made that this is the form of 'magic bullet' that would conclusively dispatch both puzzles. Existing candidates proposed for this role are deficient on grounds of either timing or magnitude. While it is possible to expand the list of measurement problems in a way that would conjoin the two sides of the puzzle, and thus contribute further to their resolution, the most promising approach along those lines involves a head-on challenge to the existing productivity measurement framework; it argues that the scope of what is considered by the official accounts to be the economy's "gross output" is overly narrow, and that a properly consistent realignment of the definitions of outputs and inputs would raise the growth rate of the former in relationship to the latter. Moreover, it suggests that such a realignment is needed especially in view of the way that the information revolution has been affecting economic activity.

This challenge to the official statistics is worth examining more closely, for if their usefulness in tracking macroeconomic performance is deteriorating, they will become increasingly misleading as a guide in fiscal and monetary management; increased scope for variant readings of signals whose accuracy has been impeached by experience is bound to introduce a greater degree of subjectivity in policy formation -- so that public trust in the personal judgements of those in positions of responsibility for policy-making, rather than in the economic logic and empirical reproducibility of the bases for their judgements, must come increasingly to the fore. At present, those, like Chairman Greenspan, who are 'cautious optimists' about the future returns on information technology investment may be hostages to fortune.
A resurgence of inflation, perhaps induced by mechanisms unrelated to those discussed so far, would support the advice, if not necessarily the reasoning, of the computer

revolution sceptics. What does this line of approach have to add concerning the relation between productivity measurement and economic performance?

Economists generally agree that a fundamental standard for assessing economic performance is the improvement of social welfare. Conventional measures of productivity growth have the virtue of being unambiguously linked to welfare improvement, at least if one is prepared to ignore or otherwise account for issues such as environmental externalities. This link may be summarised by the maxim that social welfare is increased by expanding output and that this welfare will be further increased if more "utility" can be obtained from the goods and services created by "transforming" inputs that represent the same or a smaller sacrifice of "utility." That formulation parallels the traditional notion of productivity, or efficiency, being gauged by the ratio of some physical measures of outputs to inputs. The shift to a "utility" metric, however, implicitly opens the door to a far wider scope for what should be counted as output, and correspondingly as inputs.

Indeed, it might be thought begging the question to presuppose that the use of information technology necessarily carries very substantial conventional productivity gains at all, inasmuch as some of the central issues in the productivity paradox are acknowledged to revolve around the conceptual ambiguities and measurability of the economic impacts of the new computer and telecommunications technologies. The latter are historically unprecedented in the readiness with which they are finding applications in the information-intensive service sectors of the economy, where the concept of "output of constant quality" itself is a slippery construct that continues to resist direct translation into simple, scalar measures of the task efficiency achieved in firms and industries. As has been pointed out already, a substantial degree of uniqueness and context-dependence inheres in the very nature of information products. The intuitive and conventionalized association of the notion of "productivity" with the engineering concept of standardized-task-efficiency thus is being seriously challenged by the increasing intangibility and mutability of "outputs" and "inputs." Rather than being easily "pinned down" for quantification, the new so-called "weightless economy" should be seen to be floating loose in statistical space.

A range of concrete examples may help convey a better appreciation of the connection between the expansion and increasing sophistication of "the information economy" and the suspicion that outmoded national accounting concepts may be more and more seriously distorting our perception of the economy's productive performance. Were two lawyers each to have drafted identical legal codicils, each to be attached to the will of a different client, the task efficiency concept of productivity would have us ask whether the effort involved -- in terms of the lawyers' time and office capital used -- was the same in the two instances. Were that found to be the case, the levels of productivity in the two situations would be judged identical. On the other hand, the circumstances of the two clients might have been different, and the value of having an expert draft the codicil might correspondingly differ between them, which could reflect itself in a difference in the fees the clients were willing to pay.
Ought this to be taken into account, thereby measuring the amount of "constant quality legal services" as being different in the two instances? Other things being the same, the latter procedure would

register a difference in the associated productivity measures. The source of the conundrum just posed is not simply that the "meaning," or the economic utility, of the product in question is dependent on characteristics of the consumers (which include their circumstances and subjective states); in some degree that is true in general. The national income accountant's troubles stem from the possibility that each such transaction in what, from the service-provider's viewpoint, might be the same commodity item can be separately provided and differently priced according to the legal needs of the client. If this is a complication at one point in time when consumers are heterogeneous, it is equally a problem when the circumstances of consumers change across time.

New forms of insurance, and new financial derivatives that facilitate portfolio hedging, may reduce the financial risks to which people are exposed. The increased security is worth something, and its value is reflected in the customers' willingness to pay the commissions and premiums charged by the vendors of these assets. With changes in external circumstances, the willingness to pay of a given customer may be subject to change. Should we say that the providers of such insurance instruments have become "less productive" when the premium rates paid by the purchaser(s) are lowered -- say, because the volatility of financial markets has been reduced by closer integration and lower arbitrage costs, or because people with greater wealth have become more psychologically tolerant of a given degree of downside risk?

In the same vein one might view the "quantity" of medical malpractice insurance "service" received by doctors (and hospital administrators) to be well-defined in economic welfare terms, measurable in principle by asking what real income increment would compensate an individual practitioner for the withdrawal of a specified incremental amount of "cover." Do we then want to say that the volume of service represented by a given policy -- and hence the "productivity" of the agent providing it -- has increased, or view it as remaining the same, when the insurer boosts the premium rate in response to an upward trend in jury-awarded damages to plaintiffs in suits alleging malpractice? Or should that change be disregarded by being treated as a pure price increase rather than a "service-quality" increase?20

20 Note that if the change in the premium is viewed as exactly corresponding to the value of the increased actuarial risk assumed by the insurer, one might say that more "insurance service" had been provided by the policy in question. In making a productivity calculation, if at the margin the increased value to the customer of being relieved of incremental risk was matched by the increased cost to the insurer of assuming that risk (i.e., no profit occurring at the margin), then inclusion of the costs of risk-bearing among the inputs should just offset the increased output and there would be no recorded gain in measured total input efficiency (multifactor productivity).


Like the preceding examples, this exhibits the conceptual distance between the engineer's and the economist's intuitive notions of what an economy's productivity is measuring. The engineering conceptualization of productivity is that it measures the efficiency with which the provision of a standardized, producer-defined good or service is carried out. By contrast, the economist is more disposed to think that an economy's "productivity" should be gauged by the ratio between the economic welfare ("utility", or "final user satisfaction") that is yielded by the goods and services produced, and a corresponding valuation of the inputs used up by the production process. Although the latter conceptual framework is flexible enough to permit its implementation for measurement purposes despite the presence of heterogeneous, and increasingly intangible, outputs and inputs, the problem posed by the growth in the importance of "information-goods" on both the input and output sides of the production process is that the "transformations" that the latter involves are more and more of the (immaterial) sort that go on in people's heads. Hence, they are becoming less subject to the familiar class of constraints that material transformations are expected to obey.

At least two implications may be drawn from the latter perception of the nature of the digital technology revolution. First, the proliferation of products that we have considered in Section 4.3 would seem to be at odds with a traditional understanding of the link between productivity and welfare. More is being produced, but the 'more' in this case is not necessarily the same 'more' that would be measured by conventional productivity measures. What is being produced is greater diversity in the available array of goods and services, i.e., increased variety of choice for the consumer. Now it has long been recognised that the social welfare value of variety is ambiguous. Product differentiation may either increase or reduce social welfare, depending upon how consumers value the choices it provides, and the extent to which producers use the market power it provides to raise prices (incurring deadweight losses in consumer surplus in addition to transfers from consumer to producer surplus). Thus, even if the hypothesised effects of marketing innovation and information technology-related efficiencies in distribution are established, their influence on social welfare cannot be established without further assumptions and evidence.

Secondly, another, similarly ambiguous conclusion results from the discussion of whether we should expand measured output to include investments in "learning." In the situation at hand, such sacrifices of current conventional goods and services production are made in the course of upgrading organizational and individual labor skills, so that in the future it will be possible to exploit the full potential of the latest vintages of computer hardware and software. If the information technology skills acquired by those who entered the labor force after 1970 are cumulative, social welfare is likely to improve as the average competence of employees is enhanced. This would be so even if private enterprises might not be able individually to capture all the benefits of upgrading the skills of their workers and managers.
On the other hand, if the decision to introduce new equipment, and new computer programs, is seen as having de-skilled individuals and organizations, by rendering their technology-specific competence obsolete, the matter becomes more problematic. If human capital and organizational competence were to go on being rendered obsolete at something close to the same pace at which information technology-relevant capabilities are being acquired, it would be fair to conclude that both the productivity and welfare promises of

the information technology revolution are likely to be seriously overstated.

Neither of these ambiguities can be resolved through theoretical introspection. Instead, they call for far more detailed empirical studies than could be drawn upon only a decade ago. It is therefore encouraging that investigations heading in this direction have been undertaken. In this regard, the work of Greenan and Mairesse (1997) is especially pertinent methodologically, and illuminating empirically. Their study utilises the results from a French survey of whether and how modern technologies are being used in a large number of service and manufacturing enterprises. The survey is conducted at the level of the individual employee and asks both about their own experience and their understanding of IT use in the enterprise. The substantial size of their survey (over 3,000 enterprises and about 5,000 employees for each of three years: 1987, 1991 and 1993) is of the scale needed to address the ambiguities of the influence of information technology. Unfortunately, however, these surveys are three successive cross-sections rather than panel data. Thus, while they do find significant positive contributions of computer use to value added per worker (a measure of the net revenue productivity of labor) and to wages, they are unable to conclude whether skills are cumulative. Nor does their finding of a statistically significant and positive contribution to total factor productivity survive adjustment for labour quality. While such results are tantalising and suggestive, the scale of the measurement activity will have to be matched with an appropriate focus, an achievement that will require an imaginative statistical office in some national government.

6. The Regime Transition Hypothesis: A Darker Journey Towards the Brighter Future?

The so-called regime transition hypothesis owes much in its general conception to the insight of Freeman and Perez (1986) that many incremental technological, institutional and social adjustments are required to realize the potential of a radical technological departure, and that those adaptations are neither instantaneous nor costless. David (1990, 1991a, 1991b) took up this idea, which fitted preconceptions derived from studies of the economic history of previous developments involving the introduction of "general purpose engines" -- the vertical watermill, steam engines, the electrical dynamo, internal combustion engines -- and argued that it was quite plausible that an extended phase of "transition" would be required to fully accommodate, and hence elaborate, a technological and organizational regime built around a general purpose digital computing engine -- the computer. Recent work in the spirit of the new growth theory has sought to generalize on the idea, formulated by Bresnahan and Trajtenberg (1995), of general purpose technologies that transform an economy by finding many new lines of application, and fusing with existing technologies to rejuvenate other, pre-existing sectors of the economy. While the positive, long-run growth-igniting ramifications of a fundamental technological breakthrough of that kind are stressed in the formalization of this vision by the new growth theory literature, the possible down-side of the process has not gone unrecognized. Mathematical models of such a multi-sector learning and technology diffusion process indicate that the resources absorbed in the increasing roundaboutness of the transition phase may result in the slowed growth of

productivity and real wages.21



The earlier formulation of the regime transition argument by David (1990, 1991) was less ambitious, but also focused specifically upon the economic aspects of the initial phases of the transition dynamics that might contribute to slowing the measured growth of industrial productivity. There are two distinct facets of the "extended transition" explanation of the productivity paradox. One is concerned to show that lags in the diffusion process involving a general purpose technology can result in long delays in the acceleration of productivity growth in the economy at large; whereas the other emphasizes that in the transition process itself resources are devoted to purposes that escape being properly measured among the outputs of the economy. Each may bear some further elaboration here, briefly, taking them in turn.

6.1 Diffusion, Dynamos and Computers

The first idea is that productivity advances stem from the substitution of new, ICT-intensive production methods for older ones, and from improvements and enhancements of the new technologies themselves, but that, because those improvements and the diffusion of ICT are mutually interdependent processes, it is possible for this dynamic process to be a quite long-drawn affair. By drawing the analogy between the computer and the dynamo, an earlier instance of a general-purpose engine, David (1991) sought to use the US historical experience of electrification to give a measure of concreteness to this general observation. Although central generating stations for electric lighting systems were introduced first by Edison in 1881, electric motors still constituted well under one-half of one percent of the mechanical horsepower capacity of the US manufacturing sector at the end of that decade. Electrification was proceeding rapidly at this time, especially in the substitution of dynamos for other prime movers such as waterpower and steam engines, so that between 1899 and 1904 the electrified portion of total mechanical drive for manufacturing rose from roughly 5 per cent to 11 per cent (see David (1991a: Table 3)). Yet, it was not until the decade of the 1920's that this measure of diffusion, and the more significant measure of the penetration of secondary electric motors in manufacturing, both moved above the 50 percent mark. It was the transition to the use of secondary electric motors (the unit drive system) in industry that my analysis found to be strongly associated with the surge of total factor productivity in manufacturing during the decade 1919-1929.

21 See, e.g., chapters by Helpman and Trajtenberg (1998), Aghion and Howitt (1998), and other contributions in Helpman (1998).


Recent estimates of the growth of computer stocks and the flow of services therefrom are consistent with the view that when the "productivity paradox" debate began to attract attention, the US economy could be said to have still been in the early phase of the deployment of ICT. Dale Jorgenson and Kevin Stiroh (1995: Table 3) provide estimates for the flows of real services from the "computer" and "non-computer" components of the US stock of producers' durable equipment (PDE).22 Computer equipment, for this purpose, is taken just to include central processors as well as direct storage drives, displays and printers, and these estimates take into account the changing vintage structure of the computer stock and base the real reproduction cost valuations on the US BEA's "hedonic" or constant-quality price indexes. Corresponding estimates are made for the larger (more heterogeneous) category of OCAM capital, but it has not been possible to extend the coverage more broadly to subsume digital switching equipment and other microprocessors used in telecommunications capital, let alone the microprocessors in household electronic equipment and the rapidly proliferating array of "smart" objects.

One thing revealed by the available figures is that back in 1979, when computers had not yet evolved so far beyond their limited role in information processing machinery, the computer equipment and OCAM components were providing only 0.56 percent and 1.5 percent, respectively, of the total flow of real services from the (non-residential) producer durable equipment stock. But the first of these restricted measures of "computerization" rose to 4.9 per cent in 1985, had ballooned to 13.8 percent by 1990, and reached 18.4 percent two years after that. If we extrapolate from the (slowed) rate at which it was rising during 1990-1992, the value of this index for 1997 would stand somewhere below the 38 percent level.

The preceding numerical exercise supports the view that the extent of "computerization" that had been achieved in the whole economy by the late 1980s was roughly comparable with the degree to which the American manufacturing sector had become electrified at the beginning of the twentieth century. Does the parallel carry over also as regards the pace of the transition in its early stages? An affirmative answer can be given to this question, but the route to it is a bit tortuous, as may be seen from the following. If we consider just the overall index of industrial electrification that has been referred to above, the pace of diffusion appears to have been rather slower during the "dynamo revolution" than that which has been experienced during the 1979-1997 phase of "the computer revolution"; it took 25 years for the electrified percent of mechanical drive in manufacturing to rise from roughly 0.5 percent to 38 percent, whereas, according to the index just presented, the
22 The capital service flows in question are measured gross of depreciation, corresponding to the gross output concept used in the measurement of labor and multifactor productivity. Some economists who have voiced skepticism about the ability of computer capital formation to make a substantial contribution to raising output growth in the economy point to the rapid technological obsolescence in this kind of producer durables equipment, and argue that the consequent high depreciation rate prevents the stock from growing rapidly in relationship to the overall stock of producer capital in the economy. The latter argument would be relevant were one focussing on the impact on real net output, whereas the entire discussion of the productivity slowdown has been concerned with gross output measures. See Sichel (1997: pp. 101-103) for a useful comparison of alternative estimates of net and gross basis computer service "contributions to growth".

same quantitative transition has been compressed into a span of 18 years in the case of the computer. But that really is not quite the right comparison to make in this connection. The index of the computerization of capital services that can be derived from the work of Jorgenson and Stiroh (1995) rises in part because it takes into account the changing quality of the computer stock, whereas the electrification diffusion index simply compares the horsepower rating of the stock of electric motors with total mechanical power sources in manufacturing. The latter neglects the significance for industrial productivity of the growth of secondary electric motors, as contrasted with prime movers; secondary motors, being those used to drive tools and machinery on the factory floor (and mechanical hoists between floors), are found to have had far more important impacts upon measured productivity in manufacturing (see David (1991a: Table 5)). Between 1899 and 1914 the ratio of secondary motor horsepower to the horsepower rating of all mechanical drive in US manufacturing was rising at an average compound rate of 26.2 percent per year. It is therefore striking to observe that, over the period from 1979 to 1997, the average rate of growth estimated here for the ratio of computer equipment services to all producers' durable equipment services in the US turns out to be virtually the same, at 26.4 percent per annum.

Such considerations should, at the very least, serve as a constructive answer to those who would casually dismiss efforts to draw some insights into the dynamics of the information revolution by examining the economic history of the dynamo revolution in the half century up to 1929. One argument advanced for the irrelevance of previous experience is that the pace at which the price-performance ratio of computer equipment has been plummeting so far exceeds the rate of fall in the real unit costs of electric energy that there is little if anything to be inferred from the time scale of the transition to the application of the unit-drive system in manufacturing. Yet, Sichel (1997: Table 5-2) estimates the rate of change in real prices of computer services for 1987-1993 to have been -7.9 percent per annum, and compares that to the -7.0 percent per annum trend rate of decline in the real price of electric power over the period 1899-1948. Comparing the rates of change for two such disparate time spans seems rather peculiar, so one might seek to put the entry for electricity in this comparison on a more appropriate footing by locating a comparably brief 6-10 year interval, and one that is positioned equivalently early in the evolution of the dynamo technology. Taking electrification to have commenced with the introduction of the filament lamp and the central power station in 1876-81, and noting that the 1987-1993 interval was situated 16 to 22 years after the introduction of the micro-processor and magnetic memory, the corresponding temporal interval would be 1892-1903.23
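As an aside on the arithmetic: the "below 38 percent" extrapolation and the 26.4 percent growth rate cited above can be reconstructed roughly as follows. The 1979-92 share figures are those quoted in the text (from Jorgenson and Stiroh 1995); the extrapolation method itself is an assumption about how those summary figures were obtained, not a procedure reported by the sources.

```python
# Sketch of the two calculations referred to above: (i) extrapolating the computer
# share of producers' durable equipment (PDE) services from its 1990-92 pace, and
# (ii) the average compound growth rate of that share over 1979-1997.

share = {1979: 0.56, 1985: 4.9, 1990: 13.8, 1992: 18.4}   # percent of PDE services (from the text)

# (i) annual growth rate of the share over 1990-92, extrapolated five years to 1997
rate_90_92 = (share[1992] / share[1990]) ** (1 / 2) - 1
share_1997 = share[1992] * (1 + rate_90_92) ** 5
print(f"1990-92 growth of the share: {rate_90_92:.1%}/yr -> extrapolated 1997 share: {share_1997:.0f}%")

# (ii) implied average compound growth rate of the share over the 18 years 1979-1997
rate_79_97 = (share_1997 / share[1979]) ** (1 / 18) - 1
print(f"Average compound growth of the share, 1979-1997: {rate_79_97:.1%}/yr")
```

Under these assumptions the extrapolation lands just under 38 percent, and the implied 1979-1997 compound growth rate of the share comes out at roughly 26 percent per annum, which is the figure compared in the text with the 26.2 percent growth of the secondary-motor ratio during 1899-1914.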

23 Fortuitously, these dates bound the period in which the possibility of a universal electrical supply system emerged in the US as a practical reality, based upon polyphase AC generators, AC motors, rotary converters, electric (DC) trams, and the principle of factory electrification based upon the unit drive system. See David (1991) for further discussion.


But the early evolution of electrical technology proceeded somewhat more slowly than that of computer technology, and it was only at the close of that period that the technical components for implementation of universal electrical supply systems -- based upon central generating stations and extended distribution networks bringing power to factories and transport systems -- could be said to have been put in place. The real prices of electric power declined at a rate of 1.3 percent per year during 1892-1902, but over the next decade the rate of decline accelerated to 6.8 percent per year, and from 1912 to 1920 it fell at an average rate of 10 percent per year. This would seem sufficient to make a very limited, negative point: comparing the movements of the prices of electricity and quality-adjusted computer services hardly warrants dismissing the relevance of seeking some insights into the dynamics of the transition to a new general purpose technology by looking back at the dynamo revolution.

In arguing the opposite view (against the possible relevance of the historical experience of industrial electrification), however, Jack Triplett (1998) suggests that Sichel's (1997) findings -- and, by implication, those just reported -- may be misleading. He contends that the hedonic price indexes for computers that come bundled with software actually would have fallen faster than the (unbundled) price-performance ratios that have been used as deflators for investment in computer hardware, so that the price indexes of quality-adjusted (hardware and software) computer services presented by Sichel (1997) seriously underestimate the relevant rate of decline. But that argument seems to suppose that operationally relevant computer speed is appropriately indexed by CPU-speed, whereas McCarthy (1997) and others have pointed out that the bundled computer operating system has grown so large that more processing power does not translate into more effective operating power. In other words, one should be thinking about the movements in the ratio TEL/WIN, instead of their product: WINxTEL. Further, in the same vein, it may be noticed that the slower rate of fall in computer services prices as estimated by Sichel (1997) is more in accord with the observation that applications software packages also have ballooned in size, through the addition of many features that typically remain unutilized; that CPU speed may be too heavily weighted by the hedonic indexes for hardware, inasmuch as the utility of (net) computer power remains constrained by the slower speed of input-output functions; and that over much of the period since the 1960s the stock of legacy software running on mainframes continued to grow, without being re-written to optimally exploit the capacity available on the new and faster hardware.

A deeper, but equally deserved, comment would point out that the attempt to casually dismiss the regime transition hypothesis on the grounds that the analogy between computer and dynamo is flawed by the (putative) discrepancy between the rates at which the prices of electricity and of computer services have fallen is itself an example of the casual mis-use of analogies. An understanding of the way in which the transmission of power in the form of electricity came to revolutionize industrial production processes tells us that far more was involved than the simple substitution of one, new form of productive input for an older alternative.
The pace of the transformation must be seen to be governed, in both the past and the current regime transitions, by the ease or difficulty of altering many other, technologically and organizationally related features of the production systems that are involved.


6.2 The Proper Limits of Historical Analogy

While there still seems to be considerable heuristic value in the historical analogy that has been drawn between "the computer and the dynamo," a cautious, even skeptical attitude is warranted regarding the predictions for the future that some commentators have sought to extract from the existence of the foregoing, and still other, close points of quantitative resemblance between the two transition experiences. For one thing, statistical coincidences in economic performance are more likely than not to be just that -- coincidences -- rather than indications that the underlying causal mechanisms are really one and the same.

It is true that one can show, merely as a matter of algebra, that only after the 50 percent mark in diffusion of a cost-saving technology will the latter have its maximum impact upon the rate of total factor productivity growth (see David (1991: Technical Appendix)). Pointing out that in the case of factory electrification in the U.S. the associated surge in multifactor productivity growth rates that occurred throughout the manufacturing sector during the 1920s followed the attainment of the 50+ percent stage in that diffusion process is then pertinent, because it underscores the empirical relevance of resisting the casual supposition that the biggest productivity payoffs should come when the pace of diffusion is fastest. One can use this historical evidence quite legitimately when suggesting that it may still be too soon to be disappointed that the computer revolution has not unleashed a sustained surge of readily discernible productivity growth throughout the economy.

But that is not the same thing as predicting that the continuing relative growth of computerized equipment vis-a-vis the rest of the capital stock eventually must cause a surge of productivity growth to materialize; nor does it say anything whatsoever about the future temporal pace of the digital computer's diffusion. Least of all does it tell us that the detailed shape of the diffusion path that lies ahead will mirror the curve that had been traced out by the electrical dynamo during the early decades of the twentieth century. There is nothing foreordained about the dynamic process through which a new, general purpose technology permeates and transforms the organization of production in many branches of an economy. One cannot simply infer the detailed future shape of the diffusion path in the case of the ICT revolution from the experience of previous analogous episodes, because the very nature of the underlying process renders that path contingent upon events flowing from private actions and public policy decisions, as well as upon the expectations that are thereby engendered -- all of which still lie before us in time. Moreover, the mixture of quality-enhancing and cost-saving effects is neither a pre-determined constant, nor does it always alter in the same way over the course of a general purpose technology's deployment. To the extent that quality-enhancing product innovations are particularly favored by the ICT applications, as has been suggested above, the productivity measurement problems that presently plague the national income statistics could grow relatively more serious as microprocessors and digital technologies, and "information goods"

become more pervasive among the outputs and inputs involved in economic activities.
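A minimal sketch of the algebraic point invoked above -- under the assumption of a logistic diffusion path and a simple proportional cost saving, neither of which is taken from David (1991) -- illustrates why the contribution of a diffusing cost-saving technology to measured TFP growth peaks only around the time its diffusion level passes the 50 percent mark.

```python
# Sketch: the annual TFP contribution of a cost-saving technology is (roughly)
# proportional to the rate of change of its diffusion level, which for a logistic
# path peaks when diffusion crosses 50 percent. The diffusion rate, midpoint, and
# cost saving below are illustrative assumptions.

import math

def logistic_share(t, rate=0.25, t_mid=20.0):
    """Diffusion level D(t) on a logistic path with midpoint year t_mid."""
    return 1.0 / (1.0 + math.exp(-rate * (t - t_mid)))

cost_saving = 0.30   # assumed proportional unit-cost reduction where the technology is adopted

for t in range(0, 45, 5):
    d_now, d_next = logistic_share(t), logistic_share(t + 1)
    tfp_contribution = cost_saving * (d_next - d_now)   # approximate annual TFP boost
    print(f"year {t:2d}: diffusion {d_now:5.1%}  TFP contribution {tfp_contribution:6.3%}/yr")

# The printed contribution is largest around the year in which diffusion crosses 50%,
# even though the technology has been spreading (in proportional terms) fastest much earlier.
```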

7. Beauties and Beasts: Computers, Microprocessors and Revolutionary Production Systems

Putting the whole burden of explanation on the notion that existing methods are inadequate in accounting for the effects of the computer revolution is, however, not entirely satisfactory. Even if a large share of these effects vanish into territory inadequately mapped using existing statistical measurement approaches, it is puzzling why more conventional indices of productivity in branches of industry that previously were not regarded as "unmeasurable" have not been more positively affected by the advent of new information technologies. Here, we believe, there is a case to be made that the customary link between innovation in the development of technological artefacts and improvements in productivity for the users of those tools has indeed frayed; that there are real problems in delivering on the productivity promises of the computer revolution.

7.1 Component Performance and System Performance

A common focus of attention in the computer revolution is the performance of microelectronic components. The widespread acceptance of Moore's Law shapes user expectations and technological planning, not only in the integrated circuit industry, but in all of the information and communication technology industries. For software designers, Moore's Law promises that new computational resources will continue to grow, and encourages the development of products embodying more features so that the diverse needs of an ever-growing user community can be fulfilled. It need not follow that any particular user will experience performance improvement as the result of component improvement. Even if the user adopts the new technology, the learning time in mastering new software, the greater number of choices that may need to be made to navigate a growing array of options, and the longer times required for the more complex software to execute will offset part or all of the gains from increasing component performance.

It is now widely recognised that the costs of personal computer ownership to the business organisation may be tenfold the acquisition costs of the computer itself. Some of these costs are recorded directly, while others are part of the learning investments being made by firms in formal and informal 'on the job' knowledge acquisition about information technology. Many of these costs are unrelated to the performance of microprocessor components and, for many applications, the use of personal computers is therefore relatively unaffected by microprocessor performance improvements. Identifying which applications are subject to relatively constant costs, and how great their extent is in the overall use of personal computers, is an important research opportunity, but one that will obviously be very costly to conduct.

From a productivity measurement viewpoint, the relatively constant costs associated with personal computer ownership are further complicated by the continuing spread of these technologies throughout the organisation. In many cases, employees are being given general purpose tools which may be, and often are, useful for devising new ways to perform their work. At the same time, it is apparent to most sophisticated users of computers that the extension of these capabilities also creates a vast new array of problems that must be solved to achieve desired aims. Most organisations believe that learning to solve these problems will eventually create a greater range of organisational and individual capabilities that will improve profitability. In any case, it is now expected that a modern organisation will provide reasonably sophisticated information technology as part of the office equipment to which every employee is entitled.

From a business process or activity accounting viewpoint, the spread of information and communication technologies has enormously complicated the functionality of the modern organisation. A task such as the creation of a business letter involves a considerable range of choices, and efforts to define an efficient means of performing this task are seldom confined to the individual performing it. Company formats and style sheets, equipment maintenance and troubleshooting, file server support, and standards for archiving and backup of electronic copies of documents all now enter into the task of producing a business letter. The existence of new capabilities suggests the potential for creating greater order and precision, while the reality of deploying these capabilities may substantially raise the costs of performing the task. For some, these observations may suggest a call for a return to the 'bad old days' of smudged typescripts and hand-addressed envelopes. The point, however, is that most organisations have neither the capability nor the interest in performing detailed activity accounting with regard to the new business processes arising from the use of information and communication technologies. Without attention to these issues, it is not surprising that they may often follow a version of Parkinson's law -- that work expands to fill the time allotted to it -- in which the ancillary complications of preparing to perform a task may fill the time allotted to completing it. Surely, this is not the average experience, but much more careful attention to the management of information and communication resources would be warranted if their costs were fully recognised.

This line of analysis suggests that much greater attention should be devoted to the 'task productivity' of information technology use. Tasks that are repetitively performed using information and communication technologies are likely worth the same amount of analysis that is devoted to approval paths, logistics, or other features of the organisation that are the focus of quality assurance and business process re-engineering activities. From the productivity measurement viewpoint, it will not be possible to gather meaningful statistics about these aspects of productivity until organisations are performing these sorts of measurements themselves. Only when work processes are monitored and recorded are they likely to find their way into the adjunct studies that are performed to test the accuracy of more abstract productivity measurement systems.
Thus, in the past, the number of cheques processed by a bank teller was used as a means of validating the price indexes developed for banking services. Today, it is as important that measures of task productivity be devised that can either validate or contest the price indexes accompanying computer use. Component performance is not a

satisfactory means of measuring task productivity. These difficulties do not appear (as yet) responsive to increases in the performance of the underlying components, but instead are the consequences of human and organisational factors. Organisations are still learning how to manage these technologies, and 'best practice' has been a moving target precisely because of the rapid pace of change in information and communication technologies. This pace seems unlikely to slacken in the foreseeable future. Thus, the process of realising gains from this technology depends upon the extent to which the experience of the past decade has been cumulative or not, and the degree to which further advances in digital technologies will render these information tools more task-specific and thus, by moving away from general purpose configurations, permit capital savings to materialize.
